| id | dataset | question | options | answer | explanation |
|---|---|---|---|---|---|
mmlu_aux_3185 | mmlu_auxiliary | <ctx>
ecological succession is the process of change in the species that make up an ecological community over time. the process of succession occurs either after the initial colonization of a newly created habitat, or after a disturbance substantially alters a pre - existing habitat. succession that begins in new habitats, uninfluenced by pre - existing communities, is called primary succession, whereas succession that follows disruption of a pre - existing community is called secondary succession. primary succession may happen after a lava flow or the emergence of a new island from the ocean. surtsey, a volcanic island off the southern coast of iceland, is an important example of a place where primary succession has been observed. on the other hand, secondary succession happens after disturbance of a community, such as from a fire, severe windthrow, or logging. succession was among the first theories advanced in ecology. ecological succession was first documented in the indiana dunes of northwest indiana and remains an important ecological topic of study. over time, the understanding of succession has changed from a linear progression to a stable climax state, to a more complex, cyclical model that de - emphasizes the idea of organisms having fixed roles or relationships. history precursors of the idea of ecological succession go back to the beginning of the 19th century. as early as 1742 french naturalist buffon noted that poplars precede oaks and beeches in the natural evolution of a forest. buffon was later forced by the theological committee at the university of paris to recant many of his ideas because they contradicted the biblical narrative of creation. swiss geologist jean - andre deluc and the later french naturalist adolphe dureau de la malle were the first to make use of the word succession concerning the vegetation development after forest clear - cutting. in 1859 henry david thoreau wrote an address called " the succession of forest trees " in which he described succession in an oak - pine forest. " it has long been known to observers that squirrels bury nuts in the ground, but i am not aware that any one has thus accounted for the regular succession of forests. " the austrian botanist anton kerner published a study about the succession of plants in the danube river basin in 1863. ragnar hult's 1885 study on the stages of
tied to natural disturbance events such as fire, flooding, and windstorm. as an example, many shade - intolerant plant species rely on disturbances for successful establishment and to limit competition. without this perpetual thinning, diversity of forest flora can decline, affecting animals dependent on those plants as well. a good example of this role of disturbance is in ponderosa pine ( pinus ponderosa ) forests in the western united states, where surface fires frequently thin existing vegetation allowing for new growth. if fire is suppressed, douglas fir ( pseudotsuga menziesii ), a shade tolerant species, eventually replaces the pines. douglas firs, having dense crowns, severely limit the amount of sunlight reaching the forest floor. without sufficient light new growth is severely limited. as the diversity of surface plants decreases, animal species that rely on them diminish as well. fire, in this case, is important not only to the species directly affected but also to many other organisms whose survival depends on those key plants. diversity is low in harsh environments because of the intolerance of all but opportunistic and highly resistant species to such conditions. the interplay between disturbance and these biological processes seems to account for a major portion of the organization and spatial patterning of natural communities. disturbance variability and species diversity are heavily linked, and as a result require adaptations that help increase plant fitness necessary for survival. relationship to climate change adaptation disturbance in ecosystems can form a way of modeling future ability of ecosystems to adapt to climate change. likewise, adaptation of a species to disturbance may be a predictor of its future ability to survive the current biodiversity crisis. see also environmental disaster ecological succession forest dynamics forest pathology habitat destruction human–wildlife conflict intermediate disturbance hypothesis patch dynamics stressor references external links microdocs : disturbance landscape ecology ecological succession ecology terminology environmental terminology habitat
and water, an increase of 119 million ha since 1990. thus, forest ecological studies are sometimes closely aligned with meteorological and hydrological studies in regional ecosystem or resource planning studies. perhaps more importantly the duff or leaf litter can form a major repository of water storage. when this litter is removed or compacted ( through grazing or human overuse ), erosion and flooding are exacerbated as well as deprivation of dry season water for forest organisms. death and regeneration woody material, often referred to as coarse woody debris, decays relatively slowly in many forests in comparison to most other organic materials, due to a combination of environmental factors and wood chemistry ( see lignin ). trees growing in arid and / or cold environments do so especially slowly. thus, tree trunks and branches can remain on the forest floor for long periods, affecting such things as wildlife habitat, fire behaviour, and tree regeneration processes. some trees leave behind eerie skeletons after death. in reality these deaths are actually very few compared to the amount of tree deaths that go unnoticed. thousands of seedlings can be produced from a single tree but only a few can actually grow to maturity. most of those deaths are caused from competition for light, water, or soil nutrients, this is called natural thinning. singular deaths caused by natural thinning go unnoticed, but many deaths can help form forest ecosystems. there are four stages to forest regrowth after a disturbance, the establishment phase which is rapid increase in seedlings, the thinning phase which happens after a canopy is formed and the seedlings covered by it die, the transition phase which occurs when one tree from the canopy dies and creates a pocket of light giving new seedlings opportunity to grow, and lastly the steady - state phase which happens when the forest has different sizes and ages of trees. see also clear cutting close to nature forestry deforestation and climate change forest ecology and management ( journal ) forest principles intact forest landscapes mountain ecology old - growth forest plant ecology regeneration ( ecology ) references bibliography philip joseph burton. 2003. towards sustainable management of the boreal forest 1039 pages robert w. christopherson. 1996. geosystems : an introduction to physical geography. prentice hall inc. c. michael hogan. 2008. wild turkey : mel
fire ecology is a scientific discipline concerned with the effects of fire on natural ecosystems. many ecosystems, particularly prairie, savanna, chaparral and coniferous forests, have evolved with fire as an essential contributor to habitat vitality and renewal. many plant species in fire - affected environments use fire to germinate, establish, or to reproduce. wildfire suppression not only endangers these species, but also the animals that depend upon them. wildfire suppression campaigns in the united states have historically molded public opinion to believe that wildfires are harmful to nature. ecological research has shown, however, that fire is an integral component in the function and biodiversity of many natural habitats, and that the organisms within these communities have adapted to withstand, and even to exploit, natural wildfire. more generally, fire is now regarded as a'natural disturbance ', similar to flooding, windstorms, and landslides, that has driven the evolution of species and controls the characteristics of ecosystems. fire suppression, in combination with other human - caused environmental changes, may have resulted in unforeseen consequences for natural ecosystems. some large wildfires in the united states have been blamed on years of fire suppression and the continuing expansion of people into fire - adapted ecosystems as well as climate change. land managers are faced with tough questions regarding how to restore a natural fire regime, but allowing wildfires to burn is likely the least expensive and most effective method in many situations. history fire has played a major role in shaping the world's vegetation. the biological process of photosynthesis began to concentrate the atmospheric oxygen needed for combustion during the devonian approximately 350 million years ago. then, approximately 125 million years ago, fire began to influence the habitat of land plants. in the 20th century ecologist charles cooper made a plea for fire as an ecosystem process. fire components a fire regime describes the characteristics of fire and how it interacts with a particular ecosystem. its " severity " is a term that ecologists use to refer to the impact that a fire has on an ecosystem. it is usually studied using tools such as remote sensing which can detect burned area estimates, severity and fire risk associated with an area. ecologists can define this in many ways, but one way is through
</ctx>
The following are multiple choice questions (with answers) about STEM.
Over a period of time, many habitats change with respect to the types of plants and animals that live there. This change is known as succession. Succession occurs because plants and animals cause a change in the environment in which they live. The first weeds and grasses that appear on a bare field, for example, change the environment by shielding the soil from direct sunlight. As these plants spread, the ground becomes cooler and more moist than it was originally. Thus, the environment at the ground surface has been changed. The new surface conditions favor the sprouting of shrubs. As shrubs grow, they kill the grasses by preventing light from reaching them and also enhance the soil. Pine seedlings soon take hold and as they grow, they in turn shade out the shrubs. They are not able to shade out oak and hickory seedlings, however, that have found the forest floor suitable. These seedlings grew into large trees that eventually shade out the pines. It can be inferred from the passage that _ .
A. oak and hickory trees grow taller than pines
B. weeds and grasses prefer cold climate
C. pines and grasses can exist together
D. birds encourage the growth of shrubs
Answer: | [
"oak and hickory trees grow taller than pines",
"weeds and grasses prefer cold climate",
"pines and grasses can exist together",
"birds encourage the growth of shrubs"
] | A | null |
aquarat_38697 | aquarat | <ctx>
change convention ; move xia fa " 1 " two position right. estimate 3rd digit of shang to be 4. multiply new digit of shang 4 with xia fa 1, combined with fang fa to make 964. subtract successively 4 * 9 = 36, 4 * 6 = 24, 4 * 4 = 16 from the shi, leaving 311. double the last digit 4 of fang fa into 8 and merge with fang fa result. north song dynasty mathematician jia xian developed an additive multiplicative algorithm for square root extraction, in which he replaced the traditional " doubling " of " fang fa " by adding shang digit to fang fa digit, with same effect. extraction of cubic root jiuzhang suanshu vol iv " shaoguang " provided algorithm for extraction of cubic root. problem 19 : we have a volume of 1860867 cubic chi, what is the length of a side? answer : 123 chi. north song dynasty mathematician jia xian invented a method similar to simplified form of horner scheme for extraction of cubic root. the animation at right shows jia xian's algorithm for solving problem 19 in jiuzhang suanshu vol 4. polynomial equation north song dynasty mathematician jia xian invented horner scheme for solving simple 4th order equation of the form. south song dynasty mathematician qin jiushao improved jia xian's horner method to solve polynomial equation up to 10th order. the following is algorithm for solving in his mathematical treatise in nine sections vol 6 problem 2. this equation was arranged bottom up with counting rods on counting board in tabular form algorithm : arrange the coefficients in tabular form, constant at shi, coefficient of x at shang lian, the coefficient of at yi yu ; align the numbers at unit rank. advance shang lian two ranks advance yi yu three ranks estimate shang = 20 let xia lian = shang * yi yu let fu lian = shang * yi yu merge fu lian with shang lian let fang = shang * shang lian subtract shang * fang from shi add shang * yi yu to xia lian retract xia lian 3 ranks, retract yi yu 4 ranks the second digit of shang is 0
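The cube-root extraction described above can be checked with a modern digit-by-digit routine in the same spirit as Jia Xian's Horner-like scheme; the rod-arithmetic bookkeeping (shang, fang fa, and so on) is compressed into a single invariant. A minimal sketch, with our own function name `icbrt`, not the source's notation:

```python
def icbrt(n: int) -> int:
    """Digit-by-digit integer cube root: consume n in groups of three
    decimal digits, extending the partial root (the 'shang') by one digit
    per group while keeping root**3 <= acc."""
    s = str(n)
    s = "0" * (-len(s) % 3) + s              # pad to whole 3-digit groups
    root, acc = 0, 0
    for i in range(0, len(s), 3):
        acc = acc * 1000 + int(s[i:i + 3])   # bring down the next group
        root *= 10
        while (root + 1) ** 3 <= acc:        # largest next digit that fits
            root += 1
    return root

# Jiuzhang Suanshu vol. IV, problem 19: a volume of 1,860,867 cubic chi
assert icbrt(1_860_867) == 123               # side length 123 chi
```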
40, y = 24. computations of the aharganas and the numbers of revolutions ahargana is the number of days elapsed since the beginning of the yuga. saturn let u be the value of the ahargana corresponding the residue 24 for saturn. during u days, saturn would have completed ( 36, 641 / 394, 479, 375 ) × u number of revolutions. since there is a residue of 24, this number would include the fractional number 24 / 394, 479, 375 of revolutions also. hence during the ahargana u, the number of revolutions completed would be ( 36, 641 × u - 24 ) / 394, 479, 375, which would be an integer. denoting this integer by v, the problem reduces to solving the following linear diophantine equation : 36, 641 × u - 24 = 394, 479, 375 × v. kuttaka may be applied to solve this equation. the smallest solution is u = 346, 688, 814 and v = 32, 202. mars let u be the value of the ahargana corresponding the residue 40 for mars. during u days, mars would have completed ( 190, 412 / 131, 493, 125 ) × u number of revolutions. since there is a residue of 40, this number would include the fractional number 40 / 131, 493, 125 of revolutions also. hence during the ahargana u, the number of revolutions completed would be which would be an integer. denoting this integer by v, the problem reduces to solving the following linear diophantine equation. kuttaka may be applied to solve this equation. the smallest solution is u = 118, 076, 020 and v = 171, 872. references further reading for a comparison of indian and chinese methods for solving linear diophantine equations : for a comparison of the complexity of the aryabhata algorithm with the complexities of euclidean algorithm, chinese remainder theorem and garner's algorithm : for a popular readable account of the kuttaka : for an application of kuttaka in computing full moon days : for a discussion of the computational aspects of aryabhata algorithm : for the interpretation of aryabhata's original formulation of algorithm : for a detailed exposition of the kuttaka algorithm
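For the Saturn case, the stripped formula can be read as the congruence 36,641·u ≡ 24 (mod 394,479,375); that reading is our assumption, but it reproduces the quoted solution exactly. The kuttaka is, in modern terms, the extended Euclidean algorithm; a minimal sketch (`egcd` is our name):

```python
def egcd(a: int, b: int):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    The chain of divisions is essentially the kuttaka ('pulverizer')."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

a, m, r = 36_641, 394_479_375, 24            # Saturn: a*u - r = m*v (assumed reading)
g, x, _ = egcd(a, m)
assert g == 1                                # coprime, so u is unique mod m
u = (r * x) % m                              # smallest non-negative solution
v = (a * u - r) // m
assert (u, v) == (346_688_814, 32_202)       # matches the values in the text
```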
to the same real number, and there are no other duplicate images. in the decimal system, for example, there is 0.999... = 1.000... = 1 ; in the balanced ternary system there is 0.111... = 1.TTT... = 1/2 ( writing T for the digit -1 ). a rational number has an indefinitely repeating sequence of finite length, if the reduced fraction's denominator contains a prime factor that is not a factor of the base. if d is the maximal factor of the reduced denominator which is coprime to the base b, the period is the smallest exponent n such that d divides b^n - 1. it is the multiplicative order of the residue class of b modulo d, which is a divisor of the carmichael function λ(d), which in turn is smaller than d. the repeating sequence is preceded by a transient of finite length if the reduced fraction also shares a prime factor with the base. a repeating sequence of n digits represents the fraction whose numerator is the repetend and whose denominator is b^n - 1. an irrational number has a representation of infinite length that is not, from any point, an indefinitely repeating sequence of finite length. for example, in duodecimal, 1/2 = 0.6, 1/3 = 0.4, 1/4 = 0.3 and 1/6 = 0.2 all terminate ; 1/5 = 0.2497... repeats with period length 4, in contrast with the equivalent decimal expansion of 0.2 ; 1/7 = 0.186a35... has period 6 in duodecimal, just as it does in decimal. if b is an integer base and k is an integer with 0 < k < 2b, then 1/k = 1/b + ( b - k )/b^2 + ( b - k )^2/b^3 + ..., for example 1/7 in duodecimal : 1/7 = 1/12 + 5/12^2 + 5^2/12^3 + ..., which is 0.186a35... ( base 12 ). 10 ( base 12 ) is 12 ( base 10 ), 10^2 ( base 12 ) is 144 ( base 10 ), 21 ( base 12 ) is 25 ( base 10 ), a5 ( base 12 ) is 125 ( base 10 ). algorithm for positive bases for a rational p/q ( and base b ) there is the following algorithm producing the repetend together with its length : function b _ adic ( b, p, q ) / / b ≥ 2 ; 0 < p < q static digits = " 0123... " ; / / up to the digit with value b - 1 begin s = " " ; / / the string of digits pos = 0 ; / / all places are right to the radix point while not defined ( occurs [ p ] ) do occurs [ p ] = pos ; / / the position of the
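The `b_adic` listing above breaks off mid-routine. A complete version can be reconstructed from the visible long-division scheme (record the position of every remainder; the first remainder seen twice closes the period). This completion is our reading, not the original listing:

```python
def b_adic(b: int, p: int, q: int) -> str:
    """Digits of p/q in base b (b >= 2, 0 < p < q), with the repetend in
    parentheses. occurs[p] records the position of each remainder; the
    first remainder that recurs marks the start of the repeating block."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"  # up to base 36 here
    occurs, s, pos = {}, "", 0
    while p not in occurs:
        occurs[p] = pos          # the position of the place with remainder p
        p *= b
        s += digits[p // q]      # next digit of the expansion
        p %= q                   # next remainder
        pos += 1
    start = occurs[p]            # transient s[:start], repetend s[start:]
    return s[:start] + "(" + s[start:] + ")"

print(b_adic(12, 1, 7))          # (186a35): period 6 in duodecimal, as stated
print(b_adic(10, 1, 6))          # 1(6): transient '1', then repeating '6'
```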
; thence comes the english word sine. indeterminate equations a problem of great interest to indian mathematicians since ancient times has been to find integer solutions to diophantine equations that have the form ax + by = c. ( this problem was also studied in ancient chinese mathematics, and its solution is usually referred to as the chinese remainder theorem. ) this is an example from bhaskara's commentary on aryabhatiya : find the number which gives 5 as the remainder when divided by 8, 4 as the remainder when divided by 9, and 1 as the remainder when divided by 7 that is, find n = 8x + 5 = 9y + 4 = 7z + 1. it turns out that the smallest value for n is 85. in general, diophantine equations, such as this, can be notoriously difficult. they were discussed extensively in ancient vedic text sulba sutras, whose more ancient parts might date to 800 bce. aryabhata's method of solving such problems, elaborated by bhaskara in 621 ce, is called the kuttaka ( कुट्टक ) method. kuttaka means " pulverizing " or " breaking into small pieces ", and the method involves a recursive algorithm for writing the original factors in smaller numbers. this algorithm became the standard method for solving first - order diophantine equations in indian mathematics, and initially the whole subject of algebra was called kuttaka - ganita or simply kuttaka. algebra in aryabhatiya, aryabhata provided elegant results for the summation of series of squares and cubes : 1^2 + 2^2 + ... + n^2 = n ( n + 1 ) ( 2n + 1 ) / 6 and 1^3 + 2^3 + ... + n^3 = ( 1 + 2 + ... + n )^2 ( see squared triangular number ) astronomy aryabhata's system of astronomy was called the audayaka system, in which days are reckoned from uday, dawn at lanka or " equator ". some of his later writings on astronomy, which apparently proposed a second model ( or ardha - ratrika, midnight ) are lost but can be partly reconstructed from the discussion in brahmagupta's khandakhadyaka. in some texts, he seems to ascribe the apparent motions of the heavens to the earth's rotation. he may have believed that the planet's orbits
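Bhaskara's example is a simultaneous-congruence problem, so the claimed smallest value is easy to confirm by exhaustive search over one full cycle of the moduli (the kuttaka reaches it without search); a quick check:

```python
# n = 8x + 5 = 9y + 4 = 7z + 1, i.e. n ≡ 5 (mod 8), n ≡ 4 (mod 9), n ≡ 1 (mod 7)
n = next(k for k in range(1, 8 * 9 * 7 + 1)
         if k % 8 == 5 and k % 9 == 4 and k % 7 == 1)
assert n == 85   # the smallest value, as stated
```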
</ctx>
The following are multiple choice questions (with answers) about STEM.
Present ages of Sameer and Anand are in the ratio of 5 : 4 respectively. Three years hence, the ratio of their ages will become 11 : 9 respectively. What is Anand's present age in years?
A. A)12 years
B. B)19 years
C. C)26 years
D. D)24 years
Answer: | [
"A)12 years",
"B)19 years",
"C)26 years",
"D)24 years"
] | D | Explanation:
Let the present ages of Sameer and Anand be 5x years and 4x years respectively.
Then, (5x + 3)/(4x + 3) = 11/9
9(5x + 3) = 11(4x + 3)
45x + 27 = 44x + 33
45x - 44x = 33 - 27
x = 6.
Anand's present age = 4x = 24 years.
Answer: D |
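The worked solution above can be double-checked mechanically; a minimal sketch, with x = 6 taken from the steps shown:

```python
from fractions import Fraction

x = 6                                                      # from 45x - 44x = 33 - 27
assert Fraction(5 * x + 3, 4 * x + 3) == Fraction(11, 9)   # ratio three years hence
assert 4 * x == 24                                         # Anand's present age
```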
arc_easy_1596 | arc_easy | <ctx>
this article considers the history of zoology since the theory of evolution by natural selection proposed by charles darwin in 1859. charles darwin gave new direction to morphology and physiology, by uniting them in a common biological theory : the theory of organic evolution. the result was a reconstruction of the classification of animals upon a genealogical basis, fresh investigation of the development of animals, and early attempts to determine their genetic relationships. the end of the 19th century saw the fall of spontaneous generation and the rise of the germ theory of disease, though the mechanism of inheritance remained a mystery. in the early 20th century, the rediscovery of mendel's work led to the rapid development of genetics by thomas hunt morgan and his students, and by the 1930s the combination of population genetics and natural selection in the " neo - darwinian synthesis ". second half of nineteenth century darwin and the theory of evolution the 1859 publication of darwin's theory in on the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life is often considered the central event in the history of modern zoology. darwin's established credibility as a naturalist, the sober tone of the work, and most of all the sheer strength and volume of evidence presented, allowed origin to succeed where previous evolutionary works such as the anonymous vestiges of creation had failed. most scientists were convinced of evolution and common descent by the end of the 19th century. however, natural selection would not be accepted as the primary mechanism of evolution until well into the 20th century, as most contemporary theories of heredity seemed incompatible with the inheritance of random variation. alfred russel wallace, following on earlier work by de candolle, humboldt and darwin, made major contributions to zoogeography. because of his interest in the transmutation hypothesis, he paid particular attention to the geographical distribution of closely allied species during his field work first in south america and then in the malay archipelago. while in the archipelago he identified the wallace line, which runs through the spice islands dividing the fauna of the archipelago between an asian zone and a new guinea / australian zone. his key question, as to why the fauna of islands with such similar climates should be so different, could only be answered by considering their origin
human evolution and darwin's ideas applied to human evolution. evolution and infectious agents β the origin and evolution of aids and how bacteria acquire pathogenic features creationism and intelligent design β containing the spread of creationism and intelligent design, while improving the public β s understanding of evolution throughout the americas and elsewhere. location the world summit on evolution takes place at galapagos academic institute for the arts and sciences ( gaias ), part of the universidad san francisco de quito. gaias was established in 2002 at the capital town of the galapagos province, puerto baquerizo moreno, on the island of san cristobal, one of the largest of the galapagos islands. its 4. 5 hectare campus is the only one located on the historically significant galapagos islands. gaias was founded on the principle that would become a first - rate institution for international students and researchers. the galapagos islands inspired charles darwin to define his evolutionary theory, which revolutionized human understanding in relation to the diversity of species, including humans. his ideas were presented in on the origin of species. the galapagos islands, are important for the scientific studies that have been developed over the centuries after his visit. past and future summits 9 β 12 june 2005 - first world summit on evolution 22 β 26 august 2009 - second world summit on evolution the second world summit on evolution was launched to celebrate charles darwin's 200th birthday. the 2009 summit included the first meeting of the sociedad iberoamericana de biologia evolutiva ( sibe ). sibe led to the establishment of academic and intellectual bonds between the spanish - and portuguese - speaking specialists in evolutionary biology. 1 β 5 june 2013 - third world summit on evolution the summit adopted the theme β why does evolution matter β. 200 - attendees met, to listen to 12 keynote speakers, 20 oral presentations and 31 posters by faculty, postdocs and graduate and undergraduate students. the summit encompassed five sessions : evolution and society, pre - cellular evolution and the rna world, behavior and environment, genome, and microbes and diseases. usfq and gaias launched officially the lynn margulis center for evolutionary biology and showcased the galapagos science center
of reptiles'that had preceded even the prehistoric mammals. these discoveries captured the public imagination and focused attention on the history of life on earth. most of these geologists held to catastrophism, but charles lyell's influential principles of geology ( 1830 ) popularised hutton's uniformitarianism, a theory that explained the geological past and present on equal terms. evolution and biogeography the most significant evolutionary theory before darwin's was that of jean - baptiste lamarck ; based on the inheritance of acquired characteristics ( an inheritance mechanism that was widely accepted until the 20th century ), it described a chain of development stretching from the lowliest microbe to humans. the british naturalist charles darwin, combining the biogeographical approach of humboldt, the uniformitarian geology of lyell, thomas malthus's writings on population growth, and his own morphological expertise, created a more successful evolutionary theory based on natural selection ; similar evidence led alfred russel wallace to independently reach the same conclusions. the 1859 publication of darwin's theory in on the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life is often considered the central event in the history of modern biology. darwin's established credibility as a naturalist, the sober tone of the work, and most of all the sheer strength and volume of evidence presented, allowed origin to succeed where previous evolutionary works such as the anonymous vestiges of creation had failed. most scientists were convinced of evolution and common descent by the end of the 19th century. however, natural selection would not be accepted as the primary mechanism of evolution until well into the 20th century, as most contemporary theories of heredity seemed incompatible with the inheritance of random variation. wallace, following on earlier work by de candolle, humboldt and darwin, made major contributions to zoogeography. because of his interest in the transmutation hypothesis, he paid particular attention to the geographical distribution of closely allied species during his field work first in south america and then in the malay archipelago. while in the archipelago he identified the wallace line, which runs through the spice islands dividing the fauna of the archipelago between an asian zone and a new guinea / australian zone. his key question, as
##dow and others. the galapagos finches were especially influential in the development of charles darwin's theory of evolution. his contemporary alfred russel wallace also noted these variations and the geographical separations between different forms leading to the study of biogeography. wallace was influenced by the work of philip lutley sclater on the distribution patterns of birds. for darwin, the problem was how species arose from a common ancestor, but he did not attempt to find rules for delineation of species. the species problem was tackled by the ornithologist ernst mayr, who was able to demonstrate that geographical isolation and the accumulation of genetic differences led to the splitting of species. early ornithologists were preoccupied with matters of species identification. only systematics counted as true science and field studies were considered inferior through much of the 19th century. in 1901, robert ridgway wrote in the introduction to the birds of north and middle america that : this early idea that the study of living birds was merely recreation held sway until ecological theories became the predominant focus of ornithological studies. the study of birds in their habitats was particularly advanced in germany with bird ringing stations established as early as 1903. by the 1920s, the journal fur ornithologie included many papers on the behaviour, ecology, anatomy, and physiology, many written by erwin stresemann. stresemann changed the editorial policy of the journal, leading both to a unification of field and laboratory studies and a shift of research from museums to universities. ornithology in the united states continued to be dominated by museum studies of morphological variations, species identities, and geographic distributions, until it was influenced by stresemann's student ernst mayr. in britain, some of the earliest ornithological works that used the word ecology appeared in 1915. the ibis, however, resisted the introduction of these new methods of study, and no paper on ecology appeared until 1943. the work of david lack on population ecology was pioneering. newer quantitative approaches were introduced for the study of ecology and behaviour, and this was not readily accepted. for instance, claud ticehurst wrote : david lack's studies on population ecology sought to find the processes involved in
</ctx>
The following are multiple choice questions (with answers) about STEM.
Darwin's theory that animal species can change over time was inspired by his research on which set of islands?
A. the Philippine Islands
B. the Virgin Islands
C. the Hawaiian Islands
D. the Galapagos Islands
Answer: | [
"the Philippine Islands",
"the Virgin Islands",
"the Hawaiian Islands",
"the Galapagos Islands"
] | D | |
medmcqa_6750 | medmcqa | <ctx>
- 5 ken wiley, laura findley, madison goldrich, tejinder k rakhra - burris, ana stevens, pamela williams, carol j bult, rex chisholm, patricia deverka, geoffrey s ginsburg, eric d green, gail jarvik, george a mensah, erin ramos, mary v relling, dan m roden, robb rowley, gil alterovitz, samuel aronson, lisa bastarache, james j cimino, erin l crowgey, guilherme del fiol, robert r freimuth, mark a hoffman, janina jeff, kevin johnson, kensaku kawamoto, subha madhavan, eneida a mendonca, lucila ohno - machado, siddharth pratap, casey overby taylor, marylyn d ritchie, nephi walton, chunhua weng, teresa zayas - caban, teri a manolio, marc s williams, a research agenda to support the development and implementation of genomics - based clinical informatics tools and resources, journal of the american medical informatics association, volume 29, issue 8, august 2022, pages 1342 β 1349, https : / / doi. org / 10. 1093 / jamia / ocac057 most cited publications locke ae, kahali b, berndt si, justice ae, pers th, day fr, powell c, vedantam s, buchkovich ml, yang j, croteau - chonka dc. genetic studies of body mass index yield new insights for obesity biology. nature. 2015 feb ; 518 ( 7538 ) : 197 - 206. according to google scholar, this article has been cited 2914 times wood ar, esko t, yang j, vedantam s, pers th, gustafsson s, chu ay, estrada k, kutalik z, amin n, buchkovich ml. defining the role of common variation in the genomic and biological architecture of adult human height. nature genetics. 2014 nov ; 46 ( 11 ) : 1173 - 86. according to google scholar, it has been cited
the measures used in the 2014 america's health rankings are in two subgroups : core measures and supplemental measures. core measures consist of health determinants ( risk factors ), and health outcomes. determinants are actions that affect the population's future health, while outcomes represent what has already occurred through death, disease, or missed days due to illness. there are four groups of determinants : behaviors, community and environment, policy, and clinical care. the measures in these four groups influence a state's health outcomes, and improving the determinants over time will improve outcomes. supplemental measures provide additional perspective on the health in a state. supplemental measures do not factor into a state's overall score and ranking, but they are useful in forming a fuller understanding of the health of a state. core measures behaviors smoking : percentage of adults who are smokers ( smoked at least 100 cigarettes in their lifetime and currently smoke ). data from cdc behavioral risk factor surveillance system ( brfss ) binge drinking : percentage of adults who had four or more ( women ) or five or more ( men ) alcoholic beverages on at least one occasion in the past 30 days. data from cdc brfss. drug deaths : number of deaths due to drug injury of any intent ( unintentional, suicide, homicide, or undetermined ) per 100, 000 people. data from the national vital statistics system. obesity : percentage of adults who are obese with a body mass index ( bmi ) of 30. 0 or higher. data from cdc brfss. physical inactivity : percentage of adults who report doing no physical activity or exercise ( such as running, calisthenics, golf, gardening, or walking ) other than their regular job in the last 30 days. data from cdc brfss. high school graduation : percentage of incoming ninth graders who graduate in four years from a high school with a regular degree. data from the national center for education statistics. community and environment violent crime : the number of murders, rapes, robberies, and aggravated assaults per 100, 000 population. data from fbi. occupational fatalities : number of fatal occupational injuries in construction, manufacturing, trade, transportation, utilities,
the areas of health and medical statistics and statistical methodology, including categorical response models, goodness of fit tests, survival analysis, and geometric probability. in addition, he has given over 120 technical talks / seminars all over the u. s. and in 10 other countries. khamis - roche stature prediction model collaborating with a. f. roche, khamis developed the khamis - roche stature prediction model used in predicting adult stature in white american children without using skeletal age. it was found that the method can predict adult stature with only a slight decrease in accuracy and reliability compared to methods using skeletal age. relatedly, his research validated the variations of the rwt prediction model to estimate adult stature in caucasian americans, recommending the multivariate cubic spline smoothing [ mcs2 ( 1 ) ] method for improved accuracy and reliability. collaborative medical research in collaborative research on bmi and obesity screening in 1996, it was discovered that bmi is an uncertain indicator of obesity, and specific cut - off values of 25 kg / m2 for men and 23 kg / m2 for women were recommended to enhance obesity screening accuracy by considering body composition. as another example, in collaboration with ophthalmologist john bullock et al. in 2011, the cause of the fusarium keratitis epidemic of 2004 - 6 was discovered ; it was also determined that the epidemic could have been declared several months sooner than the actual declaration. two - stage delta - corrected kolmogorov - smirnov test khamis'research has contributed to the increased statistical power of the classic kolmogorov - smirnov test by introducing a delta in the empirical distribution function. the new test maintained test size and increased power by up to ten percentage points. he then determined that the two - stage delta - corrected test was uniformly more powerful than the classical test. multigraph representation of hierarchical loglinear models in collaboration with graph theorist terry mckee, khamis developed a methodology for analyzing and interpreting loglinear models using the generator multigraph. this led to a more facile way of analyzing and interpreting loglinear models. in particular, it enables faster and easier ways of identifying decomposable loglinear models,
based measure of risk of atherosclerosis ) in adulthood. however, in children or adolescents with elevated bmi who reduce their bmi to normal levels, these risks are decreased to a similar level as those with normal bmi in childhood and adulthood. one study showed that children who became obese as early as age 2 were more likely to be obese as adults. according to an article in the new york times all of these health effects are contributing to a shorter lifespan of five years for these obese children. it is the first time in two centuries that the current generation of children in america may have a shorter lifespan than their parents. causes childhood obesity can be brought on by a range of factors which often act in combination. " obesogenic environment " refers to a mixture of environmental factors that are permissive of obesity, especially for those who are genetically predisposed. the greatest risk factor for child obesity is the obesity of both parents. this may be reflected by the family's environment and genetics. other reasons may also be due to psychological factors and the child's body type. a 2010 review stated that childhood obesity likely is the result of the interaction of natural selection favouring those with more parsimonious energy metabolism and today's consumerist society with easy access to energy dense cheap foods and less energy requirements in daily life. factors include the increase in use of technology, increase in snacks and portion size of meals, and the decrease in the physical activity of children. a study found kids that use electronic devices 3 or more hours a day had between a 17 - 44 % increased risk of being overweight, or a 10 - 61 % increased risk of obese ( cespedes 2011 ). childhood obesity is common among children from, low - income, african american and hispanic communities. this is mainly because minority children spend less time playing outside the house and staying active. some contributors to childhood obesity is that parents would rather have their children stay inside the home because they fear that gang, drug violence, and other dangers might harm them. genetics childhood obesity is often the result of an interplay between many genetic and environmental factors. polymorphisms in various genes controlling appetite and metabolism predispose individuals to obesity when
</ctx>
The following are multiple choice questions (with answers) about STEM.
Obesity indices are -
A. Broca's index
B. Ponderal index
C. Quetelet index
D. Corpulence index
Answer: | [
"Broca's index",
"Ponderal index",
"Quetelet index",
"Corpulence index"
] | A | Body mass index (Quetelet's index), Ponderal index, Broca index, Lorentz's formula, Corpulence index. Among these, BMI and Broca index are widely used. BMI is used as an index for obesity. Ref: Park's Textbook of Preventive and Social Medicine, 24th edition. |
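For reference, the two named indices with unambiguous formulas can be computed directly; definitions of the Ponderal and Corpulence indices vary between textbooks, so only Quetelet's index (BMI) and the Broca index are sketched here, under our own function names:

```python
def quetelet_index(weight_kg: float, height_m: float) -> float:
    """Quetelet's index (BMI) = weight / height^2, in kg/m^2."""
    return weight_kg / height_m ** 2

def broca_ideal_weight(height_cm: float) -> float:
    """Broca index: ideal body weight (kg) approximated as height (cm) - 100."""
    return height_cm - 100

print(round(quetelet_index(70, 1.75), 1))   # 22.9; obesity is usually BMI >= 30
print(broca_ideal_weight(175))              # 75.0
```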
aquarat_2177 | aquarat | <ctx>
this sum by the number of data values, which is 10. the result is :
then 4, and so on, ending with - 5, using counting for loops?
now equal this new number ( which is the next prime ), and repeat from step 3. when the algorithm terminates, the numbers remaining not marked in the list are all the primes below. the main idea here is that every value given to will be prime, because if it were composite it would be marked as a multiple of some other, smaller prime. note that some of the numbers may be marked more than once ( e. g., 15 will be marked both for 3 and 5 ). as a refinement, it is sufficient to mark the numbers in step 3 starting from, as all the smaller multiples of will have already been marked at that point. this means that the algorithm is allowed to terminate in step 4 when is greater than. another refinement is to initially list odd numbers only,, and count in increments of in step 3, thus marking only odd multiples of. this actually appears in the original algorithm. this can be generalized with wheel factorization, forming the initial list only from numbers coprime with the first few primes and not just from odds ( i. e., numbers coprime with 2 ), and counting in the correspondingly adjusted increments so that only such multiples of are generated that are coprime with those small primes, in the first place. example to find all the prime numbers less than or equal to 30, proceed as follows. first, generate a list of integers from 2 to 30 : 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 the first number in the list is 2 ; cross out every 2nd number in the list after 2 by counting up from 2 in increments of 2 ( these will be all the multiples of 2 in the list ) : 2 3 5 7 9 11 13 15 17 19 21 23 25 27 29 the next number in the list after 2 is 3 ; cross out every 3rd number in the list after 3 by counting up from 3 in increments of 3 ( these will be all the multiples of 3 in the list ) : 2 3 5 7 11 13 17 19 23 25 29 the
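The sieve walk-through above translates almost line for line into code. A short sketch (our function name `primes_upto`), incorporating the stated refinements of starting each crossing-out at the prime's square and stopping once that square exceeds the limit:

```python
def primes_upto(n: int) -> list[int]:
    """Sieve of Eratosthenes for all primes <= n. For each prime p the
    crossing-out starts at p*p, since smaller multiples are already marked."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= n:
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
        p += 1
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_upto(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29], as in the text
```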
. math. pi * r * 2 7. which expression returns the integer 27? a. 3 * * 3 b.
</ctx>
The following are multiple choice questions (with answers) about STEM.
1+2=10
2+3=26
3+4=50 then
4+5=?
A. A)80
B. B)81
C. C)82
D. D)83
Answer: | [
"A)80",
"B)81",
"C)82",
"D)83"
] | C | 1+2 = 10 since (1+2)^2+1 = 10
2+3 = 26 since (2+3)^2+1 = 26
3+4 = 50 since (3+4)^2+1 = 50
4+5 = (4+5)^2+1 = 82
ANSWER:C |
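The pattern in that explanation can be verified against all four given rows in one line:

```python
rule = lambda a, b: (a + b) ** 2 + 1   # the (a+b)^2 + 1 pattern from above
assert [rule(1, 2), rule(2, 3), rule(3, 4), rule(4, 5)] == [10, 26, 50, 82]
```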
mmlu_aux_2687 | mmlu_auxiliary | <ctx>
dna profiling ( also called dna fingerprinting and genetic fingerprinting ) is the process of determining an individual's deoxyribonucleic acid ( dna ) characteristics. dna analysis intended to identify a species, rather than an individual, is called dna barcoding. dna profiling is a forensic technique in criminal investigations, comparing criminal suspects'profiles to dna evidence so as to assess the likelihood of their involvement in the crime. it is also used in paternity testing, to establish immigration eligibility, and in genealogical and medical research. dna profiling has also been used in the study of animal and plant populations in the fields of zoology, botany, and agriculture. background starting in the 1980s, scientific advances allowed the use of dna as a material for the identification of an individual. the first patent covering the direct use of dna variation for forensics ( us5593832a ) was filed by jeffrey glassberg in 1983, based upon work he had done while at rockefeller university in the united states in 1981. british geneticist sir alec jeffreys independently developed a process for dna profiling in 1985 while working in the department of genetics at the university of leicester. jefferys discovered that a dna examiner could establish patterns in unknown dna. these patterns were a part of inherited traits that could be used to advance the field of relationship analysis. these discoveries lead to the first use of dna profiling in a criminal case. the process, developed by jeffreys in conjunction with peter gill and dave werrett of the forensic science service ( fss ), was first used forensically in the solving of the murder of two teenagers who had been raped and murdered in narborough, leicestershire in 1983 and 1986. in the murder inquiry, led by detective david baker, the dna contained within blood samples obtained voluntarily from around 5, 000 local men who willingly assisted leicestershire constabulary with the investigation, resulted in the exoneration of richard buckland, an initial suspect who had confessed to one of the crimes, and the subsequent conviction of colin pitchfork on january 2, 1988. pitchfork, a local bakery employee, had coerced his coworker ian kelly to stand in for him when providing a blood sample β kelly then used
##sjointed, and unfocused, and raises issues of privacy and consent that may warrant additional legal protections to be established. privacy issues surrounding dna databases not only means privacy is threatened in collecting and analyzing dna samples, it also exists in protecting and storing this important personal information. as the dna profiles can be stored indefinitely in dna database, it has raised concerns that these dna samples can be used for new and unidentified purposes. with the increase of the users who access the dna database, people are worried about their information being let out or shared inappropriately, for example, their dna profile may be shared with others such as law enforcement agencies or countries without individual consent. the application of dna databases have been expanded into two controversial areas : arrestees and familial searching. an arrestee is a person arrested for a crime and who has not yet been convicted for that offense. currently, 21 states in the united states have passed legislation that allows law enforcement to take dna from an arrestee and enter it into the state's codis dna database to see if that person has a criminal record or can be linked to any unsolved crimes. in familial searching, the dna database is used to look for partial matches that would be expected between close family members. this technology can be used to link crimes to the family members of suspects and thereby help identify a suspect when the perpetrator has no dna sample in the database. furthermore, dna databases could fall into the wrong hands due to data breaches or data sharing. dna collection and human rights in a judgement in december 2008, the european court of human rights ruled that two british men should not have had their dna and fingerprints retained by police saying that retention " could not be regarded as necessary in a democratic society ". the dna fingerprinting pioneer professor sir alec jeffreys condemned uk government plans to keep the genetic details of hundreds of thousands of innocent people in england and wales for up to 12 years. jeffreys said he was " disappointed " with the proposals, which came after a european court ruled that the current policy breaches people's right to privacy. jefferys said " it seems to be as about as minimal a response to the european court of human
##d such that statistically at least two matches ( a 1 in a trillion chance, under ideal conditions ) should have arisen by chance. however, depending on factors such as the number of incomplete profiles and the presence of related individuals, the chance matches might actually be higher. however the official position was that no chance matches have occurred, a position backed up by the fact that the majority of the searches would have been repeated, and that there are not 1 trillion unique dna profiles on file. in july 2009, a lawyer, lorraine elliot, was arrested on accusations of forgery which were quickly proven to be false. a dna sample was taken from her and logged. she was cleared of the accusations a day later and exonerated. however, mrs elliot subsequently lost her job ( even though she was completely innocent of any crime ) when the fact that her dna profile was stored on the national database was discovered during a subsequent work - related security check. in 2010 she was finally able to have her details removed from the database. racial demographics and controversy census data and home office statistics indicated that by 2007 almost 40 % of black men had their dna profile on the database compared to 13 % of asian men and 9 % of white men. in july 2006, the black police association called for an inquiry into why the database held details of 37 % of black men but fewer than 10 % of white men. in november 2006, similar concerns were raised by the sunday telegraph which claimed that three in four young black men were on the dna database. according to the sunday telegraph, an estimated 135, 000 black males aged 15 to 34 would have been added to the dna database by april 2007, equivalent to 77 per cent of the young black male population in england and wales. by contrast, only 22 per cent of young white males, and six per cent of the general population, would be on the database. this figure was confirmed by the british government's own human genetics commission 2009 report on the topic, titled nothing to hide, nothing to fear? balancing individual rights and the public interest in the governance and use of the national dna database, which said that " the profiles of over three quarters of young black men between the ages of 18 and 35 are recorded. "
a threat to individual civil liberties. personal information included in genetic material, such as markers that identify various genetic diseases, physical and behavioral traits, could be used for discriminatory profiling and its collection may constitute an invasion of privacy. also, dna can be used to establish paternity and whether or not a child is adopted. nowadays, the privacy and security issues of dna database has caused huge attention. some people are afraid that their personal dna information will be let out easily, others may define their dna profiles recording in the databases as a sense of " criminal ", and being falsely accused in a crime can lead to having a " criminal " record for the rest of their lives. uk laws in 2001 and 2003 allowed dna profiles to be taken immediately after a person was arrested and kept in a database even if the suspect was later acquitted. in response to public unease at these provisions, the uk later changed this by passing the protection of freedoms act 2012 which required that those suspects not charged or found not guilty would have their dna data deleted from the database. in european countries which have established a dna database, there are some measures which are being used to protect the privacy of individuals, more specifically, some criteria to help removing the dna profiles from the databases. among the 22 european countries which have been analyzed, most of the countries will record the dna profiles of suspects or those who have committed serious crimes. for some countries ( like belgium and france ) may remove the criminal's profile after 30 β 40 years, because these β criminal investigation β database are no longer needed. most of the countries will delete the suspect's profile after they are acquitted... etc. all the countries have a completed legislation to largely avoid the privacy issues which may occur during the use of dna database. public discussion around the introduction of advanced forensic techniques ( such as genetic genealogy using public genealogy databases and dna phenotyping approaches ) has been limited, disjointed, and unfocused, and raises issues of privacy and consent that may warrant additional legal protections to be established. privacy issues surrounding dna databases not only means privacy is threatened in collecting and analyzing dna samples, it also exists in protecting and storing this important personal information. as the
</ctx>
The following are multiple choice questions (with answers) about STEM.
Your cell phone holds secrets about you. Besides the names and numbers that you've programmed into it, traces of your DNA remain on it, according to a new study. DNA is genetic material that appears in every cell. Like your fingerprint, your DNA is unique to you --- unless you have an identical twin. Scientists today usually analyze DNA in blood,saliva , or hair left behind at the scene of a crime. The results often help detectives identify criminals and their victims. Meghan J. McFadden, a biologist at McMaster University in Hamilton, Ontario, heard about a crime in which the suspect bled onto a cell phone and later dropped the device. This made her wonder whether traces of DNA remained on cell phones --- even when no blood was involved. To find out, she and a colleague collectedflip-style phones from 10 volunteers. They collected invisible traces of the users from two parts of the phone: the outside, where the user holds it, and the speaker, which is placed at the user's ear. The scientists scrubbed(meaning "cleaned") the phones using a liquid mixture made mostly of alcohol. The aim of washing was to remove all detectable traces of DNA. The owners got their phones back for another week. Then the researchers collected the phones and repeated collecting traces on each phone once more. They discovered DNA that belonged to the phone's owner on each of the phones. Surprisingly, DNA showed up even in swabs that were taken immediately after the phones were cleaned. That suggests that washing won't remove all traces of evidence from a criminal's cell phone. So cell phones can now be added to the list of clues that can help a crime-scene investigation. Which of the following is NOT mentioned in the passage?
A. The reason for collecting the phones from volunteers.
B. The technique of collecting DNA on the phones.
C. The method of removing traces of DNA on the phones.
D. The purpose of washing the cell phones.
Answer: | [
"The reason for collecting the phones from volunteers.",
"The technique of collecting DNA on the phones.",
"The method of removing traces of DNA on the phones.",
"The purpose of washing the cell phones."
] | B | null |
m1_4974 | m1 | <ctx>
and a blue card. then : the card is correct. refers to the second card, while : the red card is correct. refers to the first card. noun phrases within if - then sentences, universally quantified sentences, negations, modality, and subordinated sentences cannot be referred to anaphorically from subsequent sentences, i. e. such noun phrases are not " accessible " from the following text. thus for each of the sentences : if a customer owns a card then he enters it. every customer enters a card. a customer does not enter a card. a customer can enter a card. a clerk believes that a customer enters a card. we cannot refer to a card with : the card is correct. anaphoric references are also possible via personal pronouns : a customer enters a card and a code. if it is valid then simplemat accepts the card. or via variables : a customer enters a card x and a code y. if y is valid then simplemat accepts x. anaphoric references via definite articles and variables can be combined : a customer enters a card x and a code y. if the code y is valid then simplemat accepts the card x. note that proper names like simplemat always refer to the same object. see also gellish natural language processing natural language programming structured english cleartalk, another machine - readable knowledge representation language inform 7, a programming language with english syntax references external links project attempto controlled english knowledge representation natural language parsing natural language processing proof assistants knowledge representation languages
then asked to act out the sentence. adult l2 speakers had fewer act - out errors than native speaking children when the garden - path sentence was presented with referential information, similarly to the adult native speakers that present less act - out errors than both the adult l2 learners and native speaking children. adult l2 speakers and native adult speakers were able to use discourse and referential information to aid in their processing of the garden - path sentences. this ability could be due to the adults β developed executive functioning allowing them more cognitive resources, discourse and referential information, to aid in parsing and revision. additionally, the use of discourse and referential information could be due to l1 - transfer because italian and english share the same sentence structure. however, when the garden - path sentences are disambiguated and then presented, the adult l2 speakers had the highest act - out error rate followed by native speaking children and then by adult native speakers. the results of this study indicate that difficulties in parsing revision are more common than originally thought and are not just confined to children or individuals with reduced executive functioning. adults, both native speakers and l2 learners, use discourse and referential information in parsing and sentence processing. but adult l2 learners and native speaking children had similar error rates for garden - path sentences with no reference information, indicating systematic revision failure. see also similar phenomena antanaclasis, a literary trope in which a single word or phrase is repeated, but in two different senses. buffalo buffalo buffalo buffalo buffalo buffalo buffalo buffalo, complexity and linguistic ambiguity in a grammatical sentence comparative illusion, ungrammatical sentences which often are perceived to be acceptable dangling else, a similarly ambiguous parsing issue in computer - programming. dangling modifier donkey sentence, a sentence that contains a pronoun whose reference is clear to the reader ( it is bound semantically ) but which is much more complex to technically classify. paraprosdokian syntactic ambiguity other backtracking branch predictor list of linguistic example sentences natural language processing transderivational search winograd schema challenge explanatory notes references further reading ferreira, fernanda ; kiel christianson, andrew hollingworth ( january 2001 ).
, phrase or sentence, taken out of context, has more than one interpretation. in " we saw her duck " ( example due to richard nordquist ), the words " her duck " can refer either to the person's bird ( the noun " duck ", modified by the possessive pronoun " her " ), or to a motion she made ( the verb " duck ", the subject of which is the objective pronoun " her ", object of the verb " saw " ). syntactic ambiguity arises when a sentence can have two ( or more ) different meanings because of the structure of the sentence β its syntax. this is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. " he ate the cookies on the couch ", for example, could mean that he ate those cookies that were on the couch ( as opposed to those that were on the table ), or it could mean that he was sitting on the couch when he ate the cookies. " to get in, you will need an entrance fee of $ 10 or your voucher and your drivers'license. " this could mean that you need either ten dollars or both your voucher and your license. or it could mean that you need your license and you need either ten dollars or a voucher. only rewriting the sentence, or placing appropriate punctuation can resolve a syntactic ambiguity. for the notion of, and theoretic results about, syntactic ambiguity in artificial, formal languages ( such as computer programming languages ), see ambiguous grammar. usually, semantic and syntactic ambiguity go hand in hand. the sentence " we saw her duck " is also syntactically ambiguous. conversely, a sentence like " he ate the cookies on the couch " is also semantically ambiguous. rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. for example, the command " cook, cook! " can be parsed as " cook ( noun used as vocative ), cook ( imperative verb form )! ", but also as " cook ( imperative verb form ), cook ( noun used as vo
how should it be done, so that it will be done right? " meuu hei zok ras? you go to / toward where? " where are you heading? " na qiras buuen? he when come? " when ( will ) he come? " zaux hloeiras zuen uucok? have how many classifier sick " how many ( people ) are sick? " b. using interrogative accent words these questions require an answer : " yes or no ". ghwaix na bas? is not him accent " it is not him, right? " neix guu meuu hos? this belong to you accent " does this belong to you? " meuu bhaeis lax tax hixhos? you already eat rice accent " did you already eat ( rice )? " ( there are several more words used in the interrogative mood, so please see the section on accented words. ) c. using negation words ( " da " ) meuu kweis hei da? you will go neg? " won't you go? " meuu kweis da? you be willing to neg? " aren't you willing? " meuu kweis laeis hisdhop da? you want see movie neg? " don't you want to see a movie? " gong neix hleny da? stuff this good neg? " isn't this stuff good? " the conjunction word " cuus / cas " can be added before the negative words " da " to express a question, e. g., meuu qieng dheuu cuus da? you want take or not? " do you want to take ( it ) or not? " meuu bhaeis lax cuus da? you already eat or not?
</ctx>
The following are multiple choice questions (with answers) about STEM.
For each of the sub-questions of this question (next page), tick/check the corresponding box if the presented sentence is correct
at the corresponding level (for a human). There will be a penalty for wrong boxes ticked/checked. The duke were also presented with a book
commemorated his visit's mother.
A. lexical
B. syntactic
C. semantic
D. pragmatic
E. none of the above is correct
Answer: | [
"lexical",
"syntactic",
"semantic",
"pragmatic",
"none of the above is correct"
] | A | To determine the correct answer, we need to analyze the sentence: "The duke were also presented with a book commemorated his visit's mother."
1. **Lexical**: This refers to the choice of words and their meanings. In this case, "the duke were" is incorrect because "duke" is singular and should be paired with "was" instead of "were." Additionally, "commemorated" should be "commemorating" to correctly modify the noun "book." Thus, there are clear lexical errors in word choice.
2. **Syntactic**: This pertains to sentence structure and grammar. While there are syntactical issues (such as the incorrect verb form), the primary issue here is related to word choice.
3. **Semantic**: This deals with meaning. The sentence does not convey a coherent meaning as it stands, but the main error is not in the meaning of individual words.
4. **Pragmatic**: This involves the context and implications of language use. The sentence does not present a pragmatic issue.
5. **None of the above is correct**: This option is incorrect since we have identified a lexical issue.
In summary, the best choice is 'lexical' because the errors primarily stem from incorrect word forms. A corrected version of the sentence would be: "The duke was also presented with a book commemorating his visit's mother." This illustrates the necessary adjustments clearly. |
m1_5887 | m1 | <ctx>
. otherwise if the expression is a single symbol, replace it with true. otherwise if the expression is a symbol followed by an arbitrary expression e. 2 followed by the same symbol, replace it with the expression < pal e. 2 >. ( in other words, throw away the two identical symbols at the beginning and the end and recurse ). otherwise replace the expression with false. ( the pattern e. 1 always matches ). " the following are three step - by - step execution traces annotated with the sentence numbers applied at each step to produce the next < pal'noon'> ( # 3 ) < pal'oo'> ( # 3 ) < pal > ( # 1 ) true < pal'wow'> ( # 3 ) < pal'o'> ( # 2 ) true < pal'revolver'> ( # 3 ) < pal'evolve'> ( # 3 ) < pal'volv'> ( # 3 ) < pal'ol'> ( # 4 ) false we can now see that the hello world example in fact executes as the sequence of the following expression transformations : seed the machine with the initial expression marked by $ entry : < go > ( apply the sentence in go ) < hello > ( apply the sentence in hello ) < prout'hello world'> ( prout is a built - in that prints and expands to nothing ) ( nothing to apply ; stop ) other examples factorial fact { 0 = 1 ; s. n = < * s. n < fact < - s. n 1 > > > ; } here 0 matches 0 the number and produces 1. on any other symbol which is a number, multiply it with the result of ( fact ( - s. n 1 ) ) note the prefix style of operators. factorial with loops fact { s. n = < loop s. n 1 > ; } ; loop { 0 s. f = s. f ; s. n s. f = < loop < - s. n 1 > < * s. n s. f > > ; } as can be seen s. n acts as the loop counter. equality equal { ( e. 1 ) ( e. 1 )
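For readers unfamiliar with Refal, a minimal Python transcription of the four palindrome sentences above may help; this is a sketch of the same rewrite logic, not Refal itself:

```python
def pal(s: str) -> bool:
    # Sentences 1-2: the empty expression and a single symbol are palindromes.
    if len(s) <= 1:
        return True
    # Sentence 3: throw away two identical symbols at both ends and recurse.
    if s[0] == s[-1]:
        return pal(s[1:-1])
    # Sentence 4: the catch-all pattern (e.1 always matches) yields false.
    return False

for word in ("noon", "wow", "revolver"):
    print(word, pal(word))  # True, True, False - matching the traces above
```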
also be used to prove total correctness, i. e. termination as well as partial correctness. commonly, square brackets are used here instead of curly braces to indicate the different notion of program correctness. in this rule, in addition to maintaining the loop invariant, one also proves termination by way of an expression, called the loop variant, whose value strictly decreases with respect to a well - founded relation on some domain set during each iteration. since is well - founded, a strictly decreasing chain of members of can have only finite length, so cannot keep decreasing forever. ( for example, the usual order is well - founded on positive integers, but neither on the integers nor on positive real numbers ; all these sets are meant in the mathematical, not in the computing sense, they are all infinite in particular. ) given the loop invariant, the condition must imply that is not a minimal element of, for otherwise the body could not decrease any further, i. e. the premise of the rule would be false. ( this is one of various notations for total correctness. ) resuming the first example of the previous section, for a total - correctness proof of the while rule for total correctness can be applied with e. g. being the non - negative integers with the usual order, and the expression being, which then in turn requires to prove informally speaking, we have to prove that the distance decreases in every loop cycle, while it always remains non - negative ; this process can go on only for a finite number of cycles. the previous proof goal can be simplified to, which can be proven as follows : is obtained by the assignment rule, and can be strengthened to by the consequence rule. for the second example of the previous section, of course no expression can be found that is decreased by the empty loop body, hence termination cannot be proved. see also notes references further reading robert d. tennent. specifying software ( a textbook that includes an introduction to hoare logic, written in 2002 ) external links key - hoare is a semi - automatic verification system built on top of the key theorem prover. it features a hoare calculus for a simple while language. j - algo - modul hoare
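The variant argument sketched above can also be checked at run time. The following is a small invented example, not Hoare-logic tooling: the asserts encode exactly the total-correctness obligations, namely that the chosen variant expression stays non-negative and strictly decreases on every iteration:

```python
def multiply_by_addition(a: int, b: int) -> int:
    """Compute a * b for b >= 0; the loop variant is b itself."""
    assert b >= 0
    result = 0
    while b > 0:
        old_variant = b        # snapshot the variant before the body runs
        result += a
        b -= 1
        # Total-correctness obligations: the variant remains non-negative
        # and is strictly smaller than before, so the loop must terminate.
        assert 0 <= b < old_variant
    return result

print(multiply_by_addition(6, 7))  # 42
```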
:'('expr.')'* *'+'shift, and go to state 9 *'-'shift, and go to state 10 *')'shift, and go to state 11 * / private def state7 ( in : stream [ char ], arg1 : int ) : result = in match { case cur # : : tail = > { decrement ( cur match { case'+'= > state9 ( tail, arg1 ) case'-'= > state10 ( tail, arg1 ) case')'= > state11 ( tail, arg1 ) case c = > throw new parseexception ( c ) } ) } case stream ( ) = > throw new parseexception } / * * 0 $ accept : expr $ end. * * $ default accept * / private def state8 ( arg1 : int ) = ( ntexpr ( arg1, stream ( ) ), 1 ) / * * 1 expr : expr'+ '. term * *'('shift, and go to state 1 *'0'shift, and go to state 2 *'1'shift, and go to state 3 * * term go to state 12 * num go to state 6 * / private def state9 ( in : stream [ char ], arg1 : int ) = in match { case cur # : : tail = > { def loop ( tuple : result ) : result = { val ( res, goto ) = tuple if ( goto = = 0 ) { loop ( res match { case ntterm ( v, in ) = > state12 ( in, arg1, v ) case ntnum ( v, in ) = > state6 ( in, v ) case _ = > throw new assertionerror } ) } else ( res, goto - 1 ) } loop ( cur match { case'('= > state1 ( tail ) case'0'= > state2 ( tail ) case'1'= > state3 ( tail ) case c = > throw new parseex
] : we write false as! [ ] ( negation applied to an empty array ). (! [ ] + [ ] ) [ 1 ] : 1 is a number, we can write it as + true. (! [ ] + [ ] ) [ + true ] : since false is! [ ], true is!! [ ]. (! [ ] + [ ] ) [ +!! [ ] ] β which evaluates to " a ". proof : in javascript, alert ( (! [ ] + [ ] ) [ +!! [ ] ] ) does the same as alert ( " a " ). other constructs the function constructor can be used to trigger execution of javascript code contained in a string as if it were native javascript. so, for example, the statement alert ( 1 ) is equivalent to function ( " alert ( 1 ) " ) ( ). the function constructor can be retrieved in jsfuck by accessing the constructor property of a well known function, such as [ ] [ " filter " ] ( array. prototype. filter ) or [ ] [ " flat " ] ( array. prototype. flat ) in modern browsers. and then alert ( 1 ) becomes [ ] [ " flat " ] [ " constructor " ] ( " alert ( 1 ) " ) ( ). character table the characters with the shortest jsfuck expansions are listed below. other utf - 8 characters can be expressed as well but will generate considerably longer code. { | class = " wikitable "! character | | jsfuck | - | style = " text - align : center ; " | + | | ( + ( +! + [ ] + (! + [ ] + [ ] ) [! + [ ] +! + [ ] +! + [ ] ] + [ +! + [ ] ] + [ + [ ] ] + [ + [ ] ] ) + [ ] ) [! + [ ] +! + [ ] ] | - | style = " text - align : center ; " |. | | ( + ( +! + [ ] + [ +! + [ ] ] + (!!
</ctx>
The following are multiple choice questions (with answers) about STEM.
Tick the \emph{correct} assertion.
A. MD5 is using a compression function based on the Davies-Meyer scheme.
B. The Keccak hash function is based on the Merkle-Damg{\aa}rd construction.
C. Plain CBCMAC is resistant to forgery attacks.
D. GCM is an efficient MAC based on the CBC mode.
Answer: | [
"MD5 is using a compression function based on the Davies-Meyer scheme.",
"The Keccak hash function is based on the Merkle-Damg{\\aa}rd construction.",
"Plain CBCMAC is resistant to forgery attacks.",
"GCM is an efficient MAC based on the CBC mode."
] | A | The correct assertion is: **MD5 is using a compression function based on the Davies-Meyer scheme.**
### Logical Chain of Reasoning:
1. **MD5 and Compression Function**:
- MD5 is a widely used hash function that processes input data in fixed-size blocks. Its compression function is indeed based on the Davies-Meyer scheme, which turns a block cipher into a compression function via a feed-forward and has been foundational in many hash function designs (a sketch follows this list).
2. **Keccak and Merkle-Damgård Construction**:
- Keccak is the algorithm behind the SHA-3 standard and does not use the Merkle-Damgård construction. Instead, Keccak employs a sponge construction (sketched after this list), which is different from the traditional Merkle-Damgård approach used by earlier hash functions like MD5 and SHA-1. Therefore, this option is incorrect.
3. **Plain CBC-MAC and Forgery Resistance**:
- The CBC-MAC (Cipher Block Chaining Message Authentication Code) is not inherently resistant to all forms of forgery attacks, especially when the same key is used for messages of different lengths: an attacker who sees one message-tag pair can build a longer message that verifies under the same tag (demonstrated after this list). This makes the assertion incorrect.
4. **GCM and CBC Mode**:
- GCM (Galois/Counter Mode) is a mode of operation for cryptographic block ciphers that provides both confidentiality and authentication. It is not based on CBC (Cipher Block Chaining) mode; rather, it combines counter-mode encryption with authentication based on Galois-field multiplication (see the usage sketch after this list). Thus, this option is also incorrect.
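To illustrate option 1, here is a minimal sketch of the Davies-Meyer feed-forward. The "block cipher" is a SHA-256-based stand-in chosen only to keep the sketch runnable; a real construction uses an actual block cipher:

```python
import hashlib

BLOCK = 16  # toy state size in bytes

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for a block cipher's forward direction (illustration only).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def davies_meyer(message_blocks, iv: bytes) -> bytes:
    h = iv
    for m in message_blocks:
        # Davies-Meyer: H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}
        e = toy_cipher(m, h)
        h = bytes(x ^ y for x, y in zip(e, h))
    return h

print(davies_meyer([b"A" * BLOCK, b"B" * BLOCK], iv=b"\x00" * BLOCK).hex())
```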
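For contrast with option 2, a toy sponge in the style Keccak uses: absorb input into the rate portion of a wide state, permute, then squeeze output. The sizes and the SHA-256-based stand-in permutation are invented so the sketch runs; real Keccak uses the Keccak-f permutation on a 1600-bit state:

```python
import hashlib

RATE, CAPACITY = 8, 24           # toy sizes in bytes
WIDTH = RATE + CAPACITY

def permute(state: bytes) -> bytes:
    # Stand-in for the Keccak-f permutation (illustration only).
    return hashlib.sha256(state).digest()[:WIDTH]

def sponge_hash(data: bytes, out_len: int = 16) -> bytes:
    data += b"\x01" + b"\x00" * (-(len(data) + 1) % RATE)  # simple padding
    state = bytes(WIDTH)
    for i in range(0, len(data), RATE):                    # absorbing phase
        block = data[i:i + RATE] + bytes(CAPACITY)
        state = permute(bytes(x ^ y for x, y in zip(state, block)))
    out = b""
    while len(out) < out_len:                              # squeezing phase
        out += state[:RATE]
        state = permute(state)
    return out[:out_len]

print(sponge_hash(b"hello world").hex())
```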
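The forgery mentioned for option 3 takes only a few lines to demonstrate. Given the tag t of a one-block message x, the two-block message x || (x XOR t) verifies under the same tag, with no knowledge of the key. The SHA-256-based stand-in cipher is an assumption to keep the demo self-contained; the attack works for any block cipher:

```python
import hashlib

BLOCK = 16

def E(key: bytes, block: bytes) -> bytes:
    # Stand-in for a block cipher's forward direction (illustration only).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(key: bytes, blocks) -> bytes:
    state = bytes(BLOCK)                 # zero IV
    for blk in blocks:
        state = E(key, xor(state, blk))
    return state

key = b"attacker never sees this"
x = b"transfer $100---"                  # one 16-byte block
t = cbc_mac(key, [x])                    # tag observed by the attacker

forged = [x, xor(x, t)]                  # built without the key:
assert cbc_mac(key, forged) == t         # the old tag still verifies
print("forgery verified")
```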
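Finally, for option 4, GCM in practice is used through an AEAD interface rather than as a CBC-style MAC. A minimal usage sketch, assuming the third-party pyca/cryptography package is installed:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)                   # 96-bit nonce; never reuse per key
aesgcm = AESGCM(key)

# Encrypts with CTR mode and authenticates ciphertext + associated data.
ct = aesgcm.encrypt(nonce, b"wire transfer", b"header-as-aad")
pt = aesgcm.decrypt(nonce, ct, b"header-as-aad")  # raises InvalidTag if tampered
print(pt)
```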
### Conclusion:
Given these analyses, the assertion that "MD5 is using a compression function based on the Davies-Meyer scheme" is the only correct statement among the provided options. |
arc_challenge_1083 | arc_challenge | <ctx>
photorespiration ( also known as the oxidative photosynthetic carbon cycle or c2 cycle ) refers to a process in plant metabolism where the enzyme rubisco oxygenates rubp, wasting some of the energy produced by photosynthesis. the desired reaction is the addition of carbon dioxide to rubp ( carboxylation ), a key step in the calvin β benson cycle, but approximately 25 % of reactions by rubisco instead add oxygen to rubp ( oxygenation ), creating a product that cannot be used within the calvin β benson cycle. this process lowers the efficiency of photosynthesis, potentially lowering photosynthetic output by 25 % in plants. photorespiration involves a complex network of enzyme reactions that exchange metabolites between chloroplasts, leaf peroxisomes and mitochondria. the oxygenation reaction of rubisco is a wasteful process because 3 - phosphoglycerate is created at a lower rate and higher metabolic cost compared with rubp carboxylase activity. while photorespiratory carbon cycling results in the formation of g3p eventually, around 25 % of carbon fixed by photorespiration is re - released as and nitrogen, as ammonia. ammonia must then be detoxified at a substantial cost to the cell. photorespiration also incurs a direct cost of one atp and one nad ( p ) h. while it is common to refer to the entire process as photorespiration, technically the term refers only to the metabolic network which acts to rescue the products of the oxygenation reaction ( phosphoglycolate ). photorespiratory reactions addition of molecular oxygen to ribulose - 1, 5 - bisphosphate produces 3 - phosphoglycerate ( pga ) and 2 - phosphoglycolate ( 2pg, or pg ). pga is the normal product of carboxylation, and productively enters the calvin cycle. phosphoglycolate, however, inhibits certain enzymes involved in photosynthetic carbon fixation ( hence is often said to be an'inhibitor of photosynthesis'). it is also relatively difficult
of carbon transfer between the two, until the second year where the douglas fir received carbon from b. payrifera. detection of the isotopes was found in receiver plant shoots, expressing carbon transfer from fungus to plant tissues. the direction carbon resources flow through the mycorrhizal network has been observed to shift seasonally, with carbon flowing toward the parts of the network that need it the most. for example, in a network that includes acer saccharinum ( sugar maple ) and erythronium americanum ( trout lily ), carbon moves to young sugar maple saplings in spring when leaves are unfurling, and shifts to move to the trout lilies in fall when the lilies are developing their roots. a further study with paper birch and douglas fir demonstrated that the flow of carbon shifts direction more than once per season : in spring, newly budding birch receives carbon from green douglas fir, in summer, stressed douglas fir in the forest understory receives carbon from birch in full leaf, and in fall, birch again receives carbon from douglas fir as birch trees shed their leaves and evergreen douglas firs continue photosynthesizing. when the ectomycorrhizal fungus - receiving end of the plant has limited sunlight availability, there was an increase in carbon transfer, indicating a source β sink gradient of carbon among plants and shade surface area regulates carbon transfer. plants sense carbon through a receptor in their guard cells that measure carbon dioxide concentrations in the leaf and environment. carbon information is integrated using proteins known as carbonic anhydrases, in which the plant then responds by utilizing or disregarding the carbon resources from the mycorrhizal networks. one case study follows a cmn shared by a paper birch and douglas fir tree. by using radioactively - labeled carbon - 13 and carbon - 14, researchers found that both tree species were trading carbon β that is to say, carbon was moving from tree to tree in both directions. the rate of carbon transfer varied based on the physiological factors such as total biomass, age, nutrient status, and photosynthetic rate. at the end of the experiment, the douglas fir was found to have a 2 % to 3 % net gain in carbon.
theoretical production ecology tries to quantitatively study the growth of crops. the plant is treated as a kind of biological factory, which processes light, carbon dioxide, water, and nutrients into harvestable parts. main parameters kept into consideration are temperature, sunlight, standing crop biomass, plant production distribution, nutrient and water supply. modelling modelling is essential in theoretical production ecology. unit of modelling usually is the crop, the assembly of plants per standard surface unit. analysis results for an individual plant are generalised to the standard surface, e. g. the leaf area index is the projected surface area of all crop leaves above a unit area of ground. processes the usual system of describing plant production divides the plant production process into at least five separate processes, which are influenced by several external parameters. two cycles of biochemical reactions constitute the basis of plant production, the light reaction and the dark reaction. in the light reaction, sunlight photons are absorbed by chloroplasts which split water into an electron, proton and oxygen radical which is recombined with another radical and released as molecular oxygen. the recombination of the electron with the proton yields the energy carriers nadh and atp. the rate of this reaction often depends on sunlight intensity, leaf area index, leaf angle and amount of chloroplasts per leaf surface unit. the maximum theoretical gross production rate under optimum growth conditions is approximately 250 kg per hectare per day. the dark reaction or calvin cycle ties atmospheric carbon dioxide and uses nadh and atp to convert it into sucrose. the available nadh and atp, as well as temperature and carbon dioxide levels determine the rate of this reaction. together those two reactions are termed photosynthesis. the rate of photosynthesis is determined by the interaction of a number of factors including temperature, light intensity and carbon dioxide. the produced carbohydrates are transported to other plant parts, such as storage organs and converted into secondary products, such as amino acids, lipids, cellulose and other chemicals needed by the plant or used for respiration. lipids, sugars, cellulose and starch can be produced without extra elements. the conversion of carbohydrates into amino acids and nucleic
the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient
</ctx>
The following are multiple choice questions (with answers) about STEM.
Cindy learned that trees release gases into to the air during photosynthesis. What gas does a tree release?
A. carbon dioxide
B. hydrogen
C. nitrogen
D. oxygen
Answer: | [
"carbon dioxide",
"hydrogen",
"nitrogen",
"oxygen"
] | D | |
medmcqa_5294 | medmcqa | <ctx>
et al., biphasic oxidation of oxyhemoglobin in bloodstains. plos one 2011. james, stuart h. ; eckert, william g. interpretation of bloodstain evidence at crime scenes, 2nd edition, crc press 1999. neitzel, g. paul ; smith, marc. the fluid dynamics of droplet impact on inclined surfaces with application to forensic blood spatter analysis. washington, dc : office of justice programs, 2017. solomon, berg, martin, & villee. biology, 3rd edition. saunders college publishing, fort worth, 1993. sutton, paulette t., bloodstain pattern interpretation, short course manual, university of tennessee, memphis tn 1998 vennard, john king. elementary fluid mechanics. john wiley & sons, new york, 1982. external links international association of bloodstain pattern analysts ( iabpa ) association for crime scene reconstruction bloodstain pattern analysis terminology β swgstain terminology with examples bloodstain pattern analysis research β database of bpa - related research articles blood forensic disciplines forensic techniques
layouts of clinical laboratories in health institutions vary greatly from one facility to another. for instance, some health facilities have a single laboratory for the microbiology section, while others have a separate lab for each specialty area. the following is an example of a typical breakdown of the responsibilities of each area : microbiology includes culturing of the bacteria in clinical specimens, such as feces, urine, blood, sputum, cerebrospinal fluid, and synovial fluid, as well as possible infected tissue. the work here is mainly concerned with cultures, to look for suspected pathogens which, if found, are further identified based on biochemical tests. also, sensitivity testing is carried out to determine whether the pathogen is sensitive or resistant to a suggested medicine. results are reported with the identified organism ( s ) and the type and amount of drug ( s ) that should be prescribed for the patient. parasitology is where specimens are examined for parasites. for example, fecal samples may be examined for evidence of intestinal parasites such as tapeworms or hookworms. virology is concerned with identification of viruses in specimens such as blood, urine, and cerebrospinal fluid. hematology analyzes whole blood specimens to perform full blood counts, and includes the examination of blood films. other specialized tests include cell counts on various bodily fluids. coagulation testing determines various blood clotting times, coagulation factors, and platelet function. clinical biochemistry commonly performs dozens of different tests on serum or plasma. these tests, mostly automated, includes quantitative testing for a wide array of substances, such as lipids, blood sugar, enzymes, and hormones. toxicology is mainly focused on testing for pharmaceutical and recreational drugs. urine and blood samples are the common specimens. immunology / serology uses the process of antigen - antibody interaction as a diagnostic tool. compatibility of transplanted organs may also be determined with these methods. immunohematology, or blood bank determines blood groups, and performs compatibility testing on donor blood and recipients. it also prepares blood components, derivatives, and products for transfusion. this area determines a patient's blood type and rh status, checks for antibodies to common
antibodies bind to red blood cells that express the corresponding antigen, they cause red blood cells to clump together ( agglutinate ), which can be identified visually. the person's blood group antibodies can also be identified by adding plasma to cells that express the corresponding antigen and observing the agglutination reactions. other serologic methods used in transfusion medicine include crossmatching and the direct and indirect antiglobulin tests. crossmatching is performed before a blood transfusion to ensure that the donor blood is compatible. it involves adding the recipient's plasma to the donor blood cells and observing for agglutination reactions. the direct antiglobulin test is performed to detect if antibodies are bound to red blood cells inside the person's body, which is abnormal and can occur in conditions like autoimmune hemolytic anemia, hemolytic disease of the newborn and transfusion reactions. the indirect antiglobulin test is used to screen for antibodies that could cause transfusion reactions and identify certain blood group antigens. immunology serologic tests can help to diagnose autoimmune disorders by identifying abnormal antibodies directed against a person's own tissues ( autoantibodies ). all people have different immunology graphs. serological surveys a 2016 research paper by metcalf et al., amongst whom were neil ferguson and jeremy farrar, stated that serological surveys are often used by epidemiologists to determine the prevalence of a disease in a population. such surveys are sometimes performed by random, anonymous sampling from samples taken for other medical tests or to assess the prevalence of antibodies of a specific organism or protective titre of antibodies in a population. serological surveys are usually used to quantify the proportion of people or animals in a population positive for a specific antibody or the titre or concentrations of an antibody. these surveys are potentially the most direct and informative technique available to infer the dynamics of a population's susceptibility and level of immunity. the authors proposed a world serology bank ( or serum bank ) and foresaw " associated major methodological developments in serological testing, study design, and quantitative analysis, which could drive a
that resulted in delayed test results and closed testing stations. testing system genetics color β s physician - ordered test can be initiated by individuals online, and a sample collection kit is sent in the mail. individuals provide a saliva sample and return the kit in a pre - paid package. color's clia - certified and cap - approved lab analyzes for variants in the breast cancer genes brca1 and brca2, as well as 28 other genes associated with breast, prostate, colon, uterine, stomach, melanoma, pancreatic, and ovarian cancers. the test also identifies variants in 30 genes related to hereditary heart conditions as well as genes that may impact medication response. genetic counseling with board - certified genetic counselors is available for free to all individuals who use color. covid - 19 color β s fda emergency use authorization ( eua ) covid - 19 test can be accessed as a part of testing programs initiated by a public health entity, university, employer or other organization. the test is a dry anterior nasal swab, approved for use either in an on - site or at - home setting without the need for a healthcare provider to monitor sample collection, which eases the burden on the healthcare system and reduces testing costs. the company has received an fda eua for the testing assay, which is a nucleic acid amplification method called lamp, or loop - mediated isothermal amplification. this enables processing test results 50 % faster than rt - pcr, the amplification method used at most other labs. lamp relies on a different set of chemical reagents than standard pcr tests, which helps the process avoids supply chain scarcity. research in 2018, color was selected, alongside the broad institute of mit and harvard, and the laboratory for molecular medicine ( lmm ) at partners healthcare, to establish one of three genome centers around the country for the national institutes of health β s historic all of us research program. all of us will sequence one million or more people across the us, with the goal of accelerating health research and enabling individualized prevention, treatment, and care. the program has a focus on recruitment from populations that have been historically underrepresented in clinical science
</ctx>
The following are multiple choice questions (with answers) about STEM.
Following are tests to confirm blood stains except
A. Phenolphthalein test
B. Takayama test
C. Florence test
D. Teichmann's test
Answer: | [
"Phenolphthalein test",
"Takayamma test",
"Florence test",
"Teichmann's test"
] | C | To find out whether a stain is blood or not:
1. Catalytic color tests: Phenolphthalein test, Ortho-toluidine test, Benzidine test
2. Crystal tests: Hemin crystal test (Teichmann's test), Hemochromogen crystal test (Takayama test)
The Florence test detects choline and is used to identify semen, not blood.
Ref: FORENSIC MEDICINE AND TOXICOLOGY, Dr PC IGNATIUS, THIRD EDITION, PAGE 317 |
mmlu_aux_4156 | mmlu_auxiliary | <ctx>
the annual cycle. route of east asian migratory flyaway olango wildlife sanctuary as a refuelling station of migratory birds migration ecology group, lund university, sweden migrate. ou. edu β migration interest group : research applied toward education, usa canadian migration monitoring network ( co - ordinates bird migration monitoring stations across canada ) bird research by science daily - includes several articles on bird migration the nature conservancy's migratory bird program the compasses of birds β a review from the science creative quarterly bbc supergoose β satellite tagging of light - bellied brent geese soaring with fidel β follow the annual migration of ospreys from cape cod to cuba to venezuela bat predation on migrating birds global register of migratory species β features not only birds, but other migratory vertebrates such as fishes ebird. com occurrence maps β occurrence maps of migrations of various species in the united states smithsonian migratory bird center β " fostering greater understanding, appreciation, and protection of the grand phenomenon of bird migration. " the secrets of bird migration : the how, why, and where of flying across the world online databases trektellen. org β live bird migration counts and ringing records from all over the world hawkcount. org β count data and site profiles for over 300 north american hawkwatch sites migraction. net β interactive database with real - time information on bird migration ( france ) bird migration ornithology bird flight birds
##ment, quebec, presses de l'universite laval, 2022, 268 p. book chapters rochefort, l., desrochers, a., graf, m., lavoie, c., poulin, m., price, j., strack, m. et waddington, m. ( 2011 ) northern peatlands. ( chap. 9 ) in wetland habitats of north america : ecology and conservation concerns. ( batzer, d. p. et baldwin, a. h., eds. ) berkeley, ca, etats - unis, university of california press desrochers, a. ( 2009 ) amenagement des habitats de la faune. ( chap. 18 ) in manuel de foresterie, 2e ed. ( ordre des ingenieurs forestiers du quebec, eds. ) quebec, editions multimondes, p. 771 - 798 desrochers, a. et belisle, m. ( 2007 ) edge, patch and landscape effects on chickadee movements. ( chap. 15 ) in ecology and behavior of chickadees and titmice : an integrated approach. ( otter, k. a., eds. ) new york, us, oxford university press, pages 243 - 261 desrochers, a., otter, k. a., belisle, m. et olson, j. r. ( 2007 ) landscape ecology, behavior, and conservation issues. ( chap. synopsis iv ) in ecology and behavior of chickadees and titmice : an integrated approach. ( otter, k. a., eds. ) new york, us, oxford university press, pages 293 - 298 desrochers, a. et van duinen, g. a. ( 2006 ) peatland fauna. ( chap. 5 ) in boreal peatland ecosystems. ( wieder, r. k. et vitt, d. h., eds. ) berlin, germany, springer - verlag, pages 67 β 100 desrochers, a. ( 2003 ) bridging the gap : linking individual bird movement and territory establishment rules with patterns of distribution in fragmented
##bard rock ptarmigan β that are dependent on the scattered plants and fauna as their only food source during the winter. with more frequent rain events, these plants are now spending more time under a thick layer of newly frozen ice, which the herbivores cannot access. a species that benefits from the rising temperatures on svalbard is the migratory pink - footed goose ( anser brachyrhynchus ). the reduction of terrestrial ice cover in spring means that the birds can start nesting earlier and there are more breeding pairs, which results in a higher rate of breeding success. this however has some consequences for the terrestrial vegetation sustainability and the ecosystem balance. with an increasing population of herbivores like the pink - footed goose the competition for primary produced food will increase and affect other herbivorous species and their related predators. terrestrial invasive species being an isolated group of islands, migrating to svalbard is difficult for terrestrial species. the main threat of introducing alien species relates to human activities. people can introduce species intentionally but unintentional introduction is probably more common. a study focusing on the shoes worn by people arriving at svalbard revealed a great number of different species of plants. seeds and bryophytes were analyzed and categorized as belonging to 18 different families and 41 different species. introduced alien species pose a threat of becoming invasive if they are fortunate enough in adapting to the new environment. marine ecology phytoplankton and zooplankton the warming temperature and the resulting effects on the ice cover in the arctic could affect the organisms involved in primary productivity β the phytoplankton and zooplankton. as a result of reduced sea - ice and consequent longer growing season, the yield of phytoplankton may be higher in some areas. primary production has increased in the arctic by around 20 % from 1998 to 2009. however, different trends are observed in different areas as the productivity in some zones has remained unchanged or even decreased. the timing of the bloom as well as species composition has also changed over these years. in the long term this could affect the species that are directly or indirectly dependent on primary productivity, but these affects are difficult to predict. according to some ice cover
guillemard, cruise of the marchesa, 2v. 1 plate in each. harvie - brown, j. a. a vertebrate fauna of sutherland. henderson, george, and hume, allan o ( ctavian ). lahore to yarkand. incidents of the route and natural history of the countries traversed by the expedition of 1870, under t. d. forsyth, esq., c. b. 1873. james, frank linsly. the unknown horn of africa. 1888. keulemans, john gerrard. onze vogels in huis en tuin. keulemans, john gerrard. natural history of the cage - birds. 1871. koslow, piotr kuz'mich, and bianchi v ( alentin ). mongoliia i kam : trudy ekspeditsii imperatorskago russkago geograficheskago obshchestva, sovershennoi v. volume v, 1907. layard, edgar leopold. birds of south africa, revised edn. legge, w ( illiam ) v ( incent ), history of the birds of ceylon. 1880. lilford, thomas littleton powys ( 1833 - 1896 ). coloured figures of the birds of the british islands. london : r. h. porter, 1885 - 1897 macpherson, h. a. birds of cumberland. marshall, c ( harles ) h ( enry ) t ( illson ), and marshall, g ( eorge ) f ( rederick ) l ( eycester ). monograph of the capitonidae. 1870 β 1871. mathews, gregory m ( acalister ). birds of australia ( vols. 1 β 4 ). melliss, john charles. st. helena : a physical, historical, and topographical description of the island, including its geology, fauna, flora, and meteorology. 1875. milne - edwards, alphonse. mission scientifique du cap horn ; mammiferes. 1891. mitchell, f ( rederick ) s ( haw ). the birds of lancashire. 1885
</ctx>
The following are multiple choice questions (with answers) about STEM.
Experts believe that storms and severe weather in North America and Asia have disrupted bird flight paths across the world and swept huge numbers of bird species towards the British Isles. Birds flying to the other side of the Atlantic or to the Pacific to lay their eggs have been trapped in Britain and Ireland, adding their numbers to native species, and causing great excitement in the bird-watching community. Local birdwatchers have already observed a total of 442 species in the British Isles this year. The highest number ever seen in one year is 445, in 2008. "We only need four more to break our record," said Lee Evans, who runs the British Birding Association. "With three months to go I'm sure we'll do it." Last month an extremely rare Siberian Rubythroat bird was seen in Scotland, sending hundreds of birdwatchers north in the hope of catching a glimpse of this colourful Asian beauty. Another very uncommon bird, the bufflehead duck, was sighted in Cornwall and caused similar excitement. "I couldn't believe it," said Evans. "The poor thing was completely _ . It must have been blown to England by the storms while trying to fly from Canada to the southern United States for the winter. That's 3000 miles!" Evans said that global warming over the past decade was playing a key role in transforming bird movements across the world. In addition, melting Arctic sea ice may also be opening up bird flight paths over the North Pole, making it easier for birds from the Pacific -- such as the slaty-backed gull and tufted puffin, both of which appeared in London earlier this year -- to reach Britain. Bird-watching is becoming an increasingly popular hobby among all age groups, added Evans. "A fifth of our members are under 18. This is a round-the-year hobby that you can enjoy from the kitchen window or from a car. More and more people are bird-watching, and as a result more and more unexpected species are being spotted in the British Isles." How many more bird species are needed to equal the British bird-watching record?
A. 3.
B. 4.
C. 442.
D. 445.
Answer: | [
"3.",
"4.",
"442.",
"445."
] | A | null |
medmcqa_6934 | medmcqa | <ctx>
the lower extremities ) is a well - established congenital anomaly associated with maternal diabetes mellitus. other causes are presumably involved, as demonstrated by the rare overall incidence of caudal regression syndrome ( 1 : 60, 000 ) compared to diabetes ; however, the condition does have a greatly increased incidence among infants born to mothers with diabetes, estimated at 1 in 350 newborns of mothers with diabetes. the dominant inherited sacral agenesis ( also referred to as currarino syndrome ) is very often correlated with a mutation in the hb9 ( also called hlxb9 ) gene ( shown by sally ann lynch, 1995, nature genetics ). it was previously thought to be related to sirenomelia ( " mermaid syndrome " ) but has now been determined not to be related to this condition. prognosis there are four levels ( or types ) of malformation. the least severe indicates partial deformation ( unilateral ) of the sacrum. the second level indicates a bilateral ( uniform ) deformation. the most severe types involve a total absence of the sacrum. depending on the type of sacral agenesis, bowel or urinary bladder deficiencies may be present. a permanent colostomy may be necessary in the case of imperforate anus. incontinence may also require some type of continence control system ( e. g., self - catheterization ) to be utilized. the condition often impacts the formation of the knees, legs or feet that is sometimes addressed through surgery. for some with tightly webbed, bent knees or knees that are fused straight, disarticulation surgery at the knee may be a viable option to maximize mobility options. before more comprehensive medical treatment was available, full amputation of the legs at the hip was often performed. more recently, the amputation ( actually a disarticulation because no cutting of the bone is involved ) is done at the knee for those who have bent knee positions and webbing between thigh and calf to enable more ease of mobility and better seating. some children with knee disarticulation use prosthetic legs to walk. prosthetics for children without substantial hip and trunk
been put forth. studies have shown an increased incidence in siblings, suggesting a genetic component. at least one study has also shown a predilection in the white population. it has also been reported to be more common in males. however, there is no concrete evidence to suggest a genetic trait and instead, the most favored theory currently supports that fai ( the cam type in particular ) is due to repetitive movements involving the hip ( e. g. squatting ) in young athletes. aggravating activities that are commonly reported include repetitive or prolonged squatting, twisting movements of the hip, like pivoting during athletics, getting in and out of cars, and even sitting for prolonged periods. a combination of these factors may also predispose to a form of fai ; predominantly, a marginal developmental hip abnormality together with environmental factors such as recurrent motion of the legs within a supraphysiologic range. three types of fai are recognized ( see title image ). the first involves an excess of bone along the upper surface of the femoral head, known as a cam deformity ( abbreviation for camshaft, which the shape of the femoral head and neck resembles ). the second is due to an excess of growth of the upper lip of the acetabular cup and is known as a'pincer'deformity. the third is a combination of the two, generally referred to as'mixed.'the most common type seen, approximately 70 % of the time, is the mixed type. a complicating issue is that some of the radiographic findings of fai have also been described in asymptomatic subjects. current literature suggests that the cam type of impingement is associated with the development of hip osteoarthritis. thus far, no correlation has been seen between the pincer type and development of hip osteoarthritis. anatomy the hip joint is classified as a ball and socket joint. this type of synovial joint allows for multidirectional movement and rotation. there are two bones that make up the hip joint and create an articulation between the femur and pelvis. this articulation connects the axial skeleton with the lower
the knee joint during the deep - knee bend. medicine and science in sports. 6 : 73 1974. comparative conditioning analysis. fresno : universal athletic corp, 1974. biomechanical analysis of the knee joint during deep knee bend with heavy load. xxth world congress in sports medicine, congress proceedings, 53 β 60, 1975. principles of ballistic motion in resistance exercise training. track and field quarterly review, 75 : 58 β 61, 1975. neural control of locomotion - a kinetic analysis of the trot in cats. neural control of locomotion. pp. 759 β 762. plenum publishing corp. 1976. computerized dynamic resistive exercise. abstracts of the international congress of physical activity sciences, quebec city, canada, 1976. computerized dynamic resistive exercise. biomechanics of sports and kinanthropometry. the international congress of physical activity sciences, 1977. equipment safety and effectiveness. therapeutics through exercise. pp. 197 β 201. grune & stratton, inc. 1979. biomechanical analysis of ballistic vs. tracking movements in tennis skills. proceedings of a national symposium on the racquet sports, the university of illinois at urbana - champaign, 105 β 123, 1979. computer use for biomechanical analysis, data acquisition, and human - machine interface. proceedings of the thirteenth hawaii international conference on system science, volume iii : 470 β 475, 1980. human movement analysis. applied ergonomics, 11 : 61 β 62, 1980. resistive training. clinics in sports med, 2 : 55 β 69, 1983. biomechanical aspects of olympic sports medicine. clinics in sports medicine. pp. 31 β 36. w. b. sauders company. 1983. resistive training. clinics in sports medicine. pp. 55 β 70. w. b. saunders company. 1983. biofeedback and biomechanics in athletic training. biofeedback and sports science. pp. 107 β 145. plenum publishing corp. 1985. body mechanics. in : injuries to the throwing arm from the proceedings of the national conference sponsored by the u s o c sports medicine council, philadelphia, 3 - 21
pterygium ( plural pterygia or pterygiums ) refers to any wing - like triangular membrane occurring in the neck, eyes, knees, elbows, ankles or digits. the term comes from the greek word pterygion meaning " wing ". types popliteal pterygium syndrome, a congenital condition affecting the face, limbs, or genitalia but named after the wing - like structural anomaly behind the knee pterygium ( eye ) or surfer's eye, a growth on the cornea of the eye pterygium colli or webbed neck, a congenital skin fold of the neck down to the shoulders pterygium inversum unguis or ventral pterygium, adherence of the distal portion of the nailbed to the ventral surface of the nail plate pterygium unguis or dorsal pterygium, scarring between the proximal nail fold and matrix pterygium of the eye a pterygium reduces the vision in several ways : distortion of the corneal optics. this begins usually when the pterygium is greater than 2mm from the corneal edge ( limbus ). disruption of the tear film. the tear film is the first lens in the eye. pterygia are associated with eyelid inflammation, called blepharitis. growth over the corneal centre, which leads to dramatic reduction of vision. induced anterior corneal scarring, which often remains after surgical removal. a pterygium of the eye grows very slowly. usually it takes several years or decades to progress. surgical removal indications for surgery, in order of decreasing importance : growth over the corneal centre. reduced vision due to corneal distortion. documented growth. symptoms of discomfort. cosmesis. surgery is usually performed under local anaesthetic with light sedation as day surgery. the pterygium is stripped carefully off the surface of the eye. if this is all that is done, the pterygium regrows frequently. the technique with the lowest recurrence rate uses an auto graft of conjunctiva from under the eyelid. this is placed over the defect remaining from the removed
</ctx>
The following are multiple choice questions (with answers) about STEM.
Triple deformity of knee joint is seen in
A. Tuberculosis
B. Trauma
C. Malignant tumour
D. Sarcoidosis
Answer: | [
"Tuberculosis",
"Trauma",
"Malignant tumour",
"Sarcoidosis"
] | A | TB knee has two stages. In the late stage it shows the TRIPLE DEFORMITY: posterior subluxation, external rotation, and flexion. Another condition showing triple deformity of the knee is rheumatoid arthritis. REF: MAHESWARI 9TH ED |
m1_5981 | m1 | <ctx>
the elliptic curve only hash ( ecoh ) algorithm was submitted as a candidate for sha - 3 in the nist hash function competition. however, it was rejected in the beginning of the competition since a second pre - image attack was found. the ecoh is based on the muhash hash algorithm, that has not yet been successfully attacked. however, muhash is too inefficient for practical use and changes had to be made. the main difference is that where muhash applies a random oracle, ecoh applies a padding function. assuming random oracles, finding a collision in muhash implies solving the discrete logarithm problem. muhash is thus a provably secure hash, i. e. we know that finding a collision is at least as hard as some hard known mathematical problem. ecoh does not use random oracles and its security is not strictly directly related to the discrete logarithm problem, yet it is still based on mathematical functions. ecoh is related to the semaev's problem of finding low degree solutions to the summation polynomial equations over binary field, called the summation polynomial problem. an efficient algorithm to solve this problem has not been given so far. although the problem was not proven to be np - hard, it is assumed that such an algorithm does not exist. under certain assumptions, finding a collision in ecoh may be also viewed as an instance of the subset sum problem. besides solving the summation polynomial problem, there exists another way how to find second pre - images and thus collisions, wagner's generalized birthday attack. ecoh is a good example of hash function that is based on mathematical functions ( with the provable security approach ) rather than on classical ad hoc mixing of bits to obtain the hash. the algorithm given, ecoh divides the message into blocks. if the last block is incomplete, it is padded with single 1 and then appropriate number of 0. let furthermore be a function that maps a message block and an integer to an elliptic curve point. then using the mapping, each block is transformed to an elliptic curve point, and these points are added together with two more points. one additional point contains the padding and depends
the signature is bits ( which is some constant times bits, depending on the relative size of and ), and this is the transmission overhead. the computation of the signature at each vertex requires bit operations, where is the in - degree of the vertex. the verification of a signature requires bit operations. proof of security attacker can produce a collision under the hash function. if given points in find and such that and proposition : there is a polynomial time reduction from discrete log on the cyclic group of order on elliptic curves to hash - collision. if, then we get. thus. we claim that and. suppose that, then we would have, but is a point of order ( a prime ) thus. in other words in. this contradicts the assumption that and are distinct pairs in. thus we have that, where the inverse is taken as modulo. if we have r > 2 then we can do one of two things. either we can take and as before and set for > 2 ( in this case the proof reduces to the case when ), or we can take and where are chosen at random from. we get one equation in one unknown ( the discrete log of ). it is quite possible that the equation we get does not involve the unknown. however, this happens with very small probability as we argue next. suppose the algorithm for hash - collision gave us that then as long as, we can solve for the discrete log of q. but the β s are unknown to the oracle for hash - collision and so we can interchange the order in which this process occurs. in other words, given, for, not all zero, what is the probability that the β s we chose satisfies? it is clear that the latter probability is. thus with high probability we can solve for the discrete log of. we have shown that producing hash collisions in this scheme is difficult. the other method by which an adversary can foil our system is by forging a signature. this scheme for the signature is essentially the aggregate signature version of the boneh - lynn - shacham signature scheme. here it is shown that forging a signature is at least as hard as solving the elliptic curve diffie β hellman problem. the only known way to
in cryptography, the merkle β damgard construction or merkle β damgard hash function is a method of building collision - resistant cryptographic hash functions from collision - resistant one - way compression functions. this construction was used in the design of many popular hash algorithms such as md5, sha - 1 and sha - 2. the merkle β damgard construction was described in ralph merkle's ph. d. thesis in 1979. ralph merkle and ivan damgard independently proved that the structure is sound : that is, if an appropriate padding scheme is used and the compression function is collision - resistant, then the hash function will also be collision - resistant. the merkle β damgard hash function first applies an md - compliant padding function to create an input whose size is a multiple of a fixed number ( e. g. 512 or 1024 ) β this is because compression functions cannot handle inputs of arbitrary size. the hash function then breaks the result into blocks of fixed size, and processes them one at a time with the compression function, each time combining a block of the input with the output of the previous round. in order to make the construction secure, merkle and damgard proposed that messages be padded with a padding that encodes the length of the original message. this is called length padding or merkle β damgard strengthening. in the diagram, the one - way compression function is denoted by f, and transforms two fixed length inputs to an output of the same size as one of the inputs. the algorithm starts with an initial value, the initialization vector ( iv ). the iv is a fixed value ( algorithm or implementation specific ). for each message block, the compression ( or compacting ) function f takes the result so far, combines it with the message block, and produces an intermediate result. the last block is padded with zeros as needed and bits representing the length of the entire message are appended. ( see below for a detailed length padding example. ) to harden the hash further, the last result is then sometimes fed through a finalisation function. the finalisation function can have several purposes such as compressing a bigger internal state ( the last result ) into a smaller output
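A minimal sketch of the construction just described, with an invented toy compression function standing in for a real one; note the length padding at the end (Merkle-Damgård strengthening):

```python
import hashlib
import struct

BLOCK, STATE = 32, 16  # toy sizes in bytes

def compress(h: bytes, m: bytes) -> bytes:
    # Stand-in one-way compression function f (illustration only).
    return hashlib.sha256(h + m).digest()[:STATE]

def md_hash(msg: bytes, iv: bytes = b"\x00" * STATE) -> bytes:
    # Pad with a 1 bit, zeros, then the original length in bits
    # (Merkle-Damgard strengthening).
    padded = msg + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % BLOCK)
    padded += struct.pack(">Q", 8 * len(msg))
    h = iv
    for i in range(0, len(padded), BLOCK):   # chain the compression function
        h = compress(h, padded[i:i + BLOCK])
    return h

print(md_hash(b"abc").hex())
```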
between 1 and add the corresponding columns of the matrices to obtain a binary string a length output : hash of size we can now use the merkle β damgard construction to generalize the compression function to accept inputs of arbitrary lengths. example of the compression situation and initialization : hash a message using matrix h that is separated into sub - blocks,,. algorithm : we split the input into parts of length and we get,,. we convert each into an integer and get,,. from the first sub - matrix, we pick the column 2, from the second sub - matrix the column 1 and from the third sub - matrix the column 4. we add the chosen columns and obtain the result. security proof of fsb the merkle β damgard construction is proven to base its security only on the security of the used compression function. so we only need to show that the compression function is secure. a cryptographic hash function needs to be secure in three different aspects : pre - image resistance : given a hash h it should be hard to find a message m such that hash ( m ) = h second pre - image resistance : given a message m1 it should be hard to find a message m2 such that hash ( m1 ) = hash ( m2 ) collision resistance : it should be hard to find two different messages m1 and m2 such that hash ( m1 ) = hash ( m2 ) note that if an adversary can find a second pre - image, then it can certainly find a collision. this means that if we can prove our system to be collision resistant, it will certainly be second - pre - image resistant. usually in cryptography hard means something like β almost certainly beyond the reach of any adversary who must be prevented from breaking the system β. we will however need a more exact meaning of the word hard. we will take hard to mean β the runtime of any algorithm that finds a collision or pre - image will depend exponentially on size of the hash value β. this means that by relatively small additions to the hash size, we can quickly reach high security. pre - image resistance and regular syndrome decoding ( rsd ) as said before, the security of fsb depends on a problem called regular syndrome decoding ( rsd
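The column-picking compression described in the worked example above can be sketched in a few lines; the matrix dimensions and random seed are invented for illustration:

```python
import random

random.seed(0)
SUBBLOCKS, COLS, ROWS = 3, 16, 32        # toy dimensions

# One random binary matrix per sub-block; each column is a ROWS-bit integer.
H = [[random.getrandbits(ROWS) for _ in range(COLS)] for _ in range(SUBBLOCKS)]

def fsb_like_compress(indices):
    """XOR one chosen column from each sub-matrix (addition over GF(2))."""
    assert len(indices) == SUBBLOCKS
    acc = 0
    for sub, col in zip(H, indices):
        acc ^= sub[col]
    return acc

# The input is split into SUBBLOCKS parts, each read as a column index;
# the picks 2, 1, 4 echo the worked example above (0-based in this sketch).
print(bin(fsb_like_compress([2, 1, 4])))
```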
</ctx>
The following are multiple choice questions (with answers) about STEM.
Let $H$ be a hash function based on the Merkle-Damg{\aa}rd construction. The Merkle-Damg{\aa}rd theorem says that \dots
A. \dots $H$ is collision-resistant when the compression function is collision-resistant.
B. \dots the compression function is collision-resistant when $H$ is collision-resistant.
C. \dots $H$ is collision-resistant.
D. \dots $H$ is not collision-resistant.
Answer: | [
"\\dots $H$ is collision-resistant when the compression function is collision-resistant.",
"\\dots the compression function is collision-resistant when $H$ is collision-resistant.",
"\\dots $H$ is collision-resistant.",
"\\dots $H$ is not collision-resistant."
] | A | To determine the correct answer to the question about the Merkle-Damgård theorem, let's analyze the options step-by-step.
### Understanding the Merkle-Damgård Construction
1. **Definition**: The Merkle-Damgård construction is a method for building a hash function from a one-way compression function. The construction processes input data in fixed-size blocks, iteratively applying the compression function and updating the internal state.
2. **Collision Resistance**: Collision resistance means it is computationally infeasible to find two distinct inputs that hash to the same output.
### Analyzing the Options
Now, let's evaluate each of the answer options based on what we know about the Merkle-Damgård construction.
1. **Option 1**: "… $H$ is collision-resistant when the compression function is collision-resistant."
- This statement is consistent with the Merkle-Damgård theorem. It asserts that if the underlying compression function behaves well (i.e., is collision-resistant), then the overall hash function constructed using this method will also be collision-resistant. This is a fundamental result of the theorem.
2. **Option 2**: "… the compression function is collision-resistant when $H$ is collision-resistant."
- This statement is incorrect as it reverses the implication. The theorem says nothing about the converse: $H$ being collision-resistant does not guarantee that the underlying compression function is collision-resistant. Thus, this option does not reflect the essence of the theorem.
3. **Option 3**: "… $H$ is collision-resistant."
- While this could be true under the right conditions, the statement is incomplete. It does not specify the necessary condition regarding the compression function. Hence, it does not capture the theorem accurately.
4. **Option 4**: "… $H$ is not collision-resistant."
- This option is false. The theorem states that if the compression function is collision-resistant, then $H$ will be collision-resistant. Therefore, this statement is incorrect.
### Conclusion
Based on the evaluation of each option, the correct choice is clearly:
**Correct Option**: **1**: "… $H$ is collision-resistant when the compression function is collision-resistant."
This option accurately represents the relationship defined by the Merkle-Damgård theorem regarding collision resistance in hash functions derived from a secure compression function.
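For intuition on why the implication runs only in this direction, the standard proof is constructive: from any collision in $H$ one can extract a collision in the compression function. The sketch below reuses the toy `md_pad`, `compress`, `BLOCK_SIZE`, and `STATE_SIZE` definitions from the Merkle-Damgård sketch earlier in this section, so it is illustrative only, not the proof from any particular textbook:

```python
def chain_states(message: bytes, iv: bytes):
    # Recompute every intermediate chaining value for a padded message.
    padded = md_pad(message)
    blocks = [padded[i:i + BLOCK_SIZE] for i in range(0, len(padded), BLOCK_SIZE)]
    states = [iv]
    for block in blocks:
        states.append(compress(states[-1], block))
    return blocks, states

def extract_f_collision(m1: bytes, m2: bytes, iv: bytes = b"\x01" * STATE_SIZE):
    # Given m1 != m2 with md_hash(m1) == md_hash(m2), walk both chains
    # backwards from the (equal) final state; the first position where the
    # (state, block) inputs differ is a collision in the compression function.
    b1, s1 = chain_states(m1, iv)
    b2, s2 = chain_states(m2, iv)
    i, j = len(b1) - 1, len(b2) - 1
    while i >= 0 and j >= 0:
        if (s1[i], b1[i]) != (s2[j], b2[j]):
            return (s1[i], b1[i]), (s2[j], b2[j])  # distinct inputs, equal f-output
        i, j = i - 1, j - 1
    # Unreachable for m1 != m2: length padding forces the final blocks to
    # differ whenever the message lengths differ, and equal-length messages
    # with identical (state, block) pairs everywhere must be identical.
    return None
```

Length padding (Merkle-Damgård strengthening) is what makes the fallback branch unreachable, which is why the theorem requires an appropriate padding scheme.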
arc_challenge_529 | arc_challenge | <ctx>
sensitivity to order in the environment. given a free choice of activity, the children showed more interest in practical activities and montessori's materials than in toys provided for them and were surprisingly unmotivated by sweets and other rewards. over time, she saw a spontaneous self - discipline emerge. based on her observations, montessori implemented a number of practices that became hallmarks of her educational philosophy and method. she replaced the heavy furniture with child - sized tables and chairs light enough for the children to move, and placed child - sized materials on low, accessible shelves. she expanded the range of practical activities such as sweeping and personal care to include a wide variety of exercises for the care of the environment and the self, including flower arranging, hand washing, gymnastics, care of pets, and cooking. she also included large open - air sections in the classroom encouraging children to come and go as they please in the room's different areas and lessons. in her book she outlines a typical winter's day of lessons, starting at 09 : 00 am and finishing at 04 : 00 pm : 9 – 10. entrance. greeting. inspection as to personal cleanliness. exercises of practical life ; helping one another to take off and put on the aprons. going over the room to see that everything is dusted and in order. language : conversation period : children give an account of the events of the day before. religious exercises. 10 – 11. intellectual exercises. objective lessons interrupted by short rest periods. nomenclature, sense exercises. 11 – 11 : 30. simple gymnastics : ordinary movements done gracefully, normal position of the body, walking, marching in line, salutations, movements for attention, placing of objects gracefully. 11 : 30 – 12. luncheon : short prayer. 12 – 1. free games. 1 – 2. directed games, if possible, in the open air. during this period the older children in turn go through with the exercises of practical life, cleaning the room, dusting, putting the material in order. general inspection for cleanliness : conversation. 2 – 3. manual work. clay modelling, design, etc. 3 – 4. collective gymnastics and songs, if possible in the open air. exercises to
the core curriculum includes language arts, mythology, history, geography, geology, algebra, geometry, mineralogy, biology, astronomy, physics, chemistry, and nutrition. the school day generally begins with a one - and - a - half to two - hour, cognitively oriented academic lesson, or " main lesson ", that focuses on a single theme over one month's time. this typically begins with introductory activities that may include singing, instrumental music, and recitations of poetry, generally including a verse written by rudolf steiner for the start of a school day. there is little reliance on standardized textbooks. waldorf elementary education allows for individual variations in the pace of learning, based upon the expectation that a child will grasp a concept or achieve a skill when he or she is ready. cooperation takes priority over competition. this approach also extends to physical education ; competitive team sports are not introduced until upper grades. each class remains together as a cohort throughout all of elementary school, developing as a quasi - familial social group. in elementary years, a core teacher teaches primary academic subjects. a central role of this teacher is to provide a supportive role model both through personal example and through stories drawn from a variety of cultures, educating by exercising " creative, loving authority ". class teachers are normally expected to teach a cohort for several years, a practice known as looping. starting in first grade, specialized teachers teach many subjects, including music, crafts, movement, and two foreign languages from complementary language families ( in english - speaking countries these are typically german and either spanish or french ). while class teachers serve a valuable role as personal mentors, establishing " lasting relationships with pupils ", ullrich documented problems when the same teacher continues into middle school. noting that there is a danger of any authority figure limiting students' enthusiasm for inquiry and autonomy, he cited a number of schools where the class teacher accompanies the class for six years only, after which specialist teachers play a greater role. four temperaments steiner considered children's cognitive, emotional, and behavioral development to be interlinked. when students in a waldorf school are grouped, it is generally not by a focus on academic abilities. instead, steiner adapted the pseudoscient
grade students take physical setting : earth science, and take the regents as their final exam. as freshmen, students typically take living environment ( biology ), then physical setting : chemistry and physical setting : physics are taken during sophomore and junior year respectively. students interested in pursuing a career in science or medicine frequently take ap biology or ap environmental science as a junior or senior. ap courses include ap physics b, ap environmental science, ap biology, and ap chemistry. the latter has the reputation of being the hardest course offered at pelham, and therefore is only given every other year. the health studies course is required to graduate and is normally offered to sophomores. however, this semester course may be taken at any point. the science research department is a four - year elective program situated in the school's biotechnology laboratory. it allows students with a declared interest in science to conduct original research alongside researchers at college, university, hospital, and medical school laboratories. the courses include introduction to science research, which introduces students to the research experience with the basic skills necessary to conduct research experiments. science instrumentation and problem solving is the next course in the sequence. this teaches students how to adequately prepare reagents, perform sterilization exercises, and run and maintain a biotechnology laboratory. students also are instructed in various molecular biology techniques, such as dna amplification via polymerase chain reaction, restriction enzyme analysis, and bacterial culture, identification, and transformation. in the summer before 11th grade, research students find a mentor to work with on an original research experiment. this is carried throughout most of the junior year with the students working on a paper to present their findings. this is subsequently entered into the siemens - westinghouse and / or intel science talent search, as well as other science competitions. students have previously been named semifinalists and region finalists, winning college scholarship money. teacher steven beltecas has played a critical role in improving the school's performances at these competitions. the world language department prepares students for a life in a contemporary society where globalization brings many different cultures and languages together. modern foreign languages include french, spanish, and italian i, ii and iii. there are also honors / ap courses in french and spanish. spanish and italian have a syracuse university
solve problems through practical applications and exercises. the questions can be about livestock farming, crop management, environmental management, chemical laboratory, microbiological laboratory, business studies, agriculture machinery technology, and rural development. the second part is the oral test where you can choose from the following subjects : german, applied mathematics, crop management, livestock farming, genetic engineering, history, spanish, environmental management, rural development, and religion. after passing the exam, there is a mass and the students celebrate their successful completion of the matura together. projects there are many projects organized by students or teachers to increase the abilities of organization and teamwork. through these projects the students have a chance to accrue experiences relating to social skills, learning, motivating and organizing a team, acting as a team leader, or simply recognizing the importance of workers in our society. company planning the students have to think about a business idea and start a fictional company. thereafter, they interact with other fictional companies in austria. junior company in order to gain more practical experience, the students can also decide to found a " junior company ", instead of a mock one. enmt ( energy management ) gbt ( genetic engineering and biotechnology ) this is also an elective. first, students have a course to get the basics of genetic engineering and biotechnology. then, one can continue with different projects. pma ( project management ) in general, one thinks of a project, which has to be turned into reality in one school year. in 2012, for instance, the schoolyard was redesigned by students. impact every year some students join a course to learn about new agricultural methods or animals. this project lasts about 3 weeks and takes place in different european countries. facilities the hlfs - ursprung has many different outdoor and indoor facilities and its own farm where the students learn practical skills. gym the gym was built in 1993. laboratories the school has three labs. there is one microbiology lab and two chemistry labs. in these labs, the students practice their theoretical knowledge in chemistry and microbiological processes. stables there are two stables, an old one, which was built before the school started, and a new one which was built in 2003. the new stable is
</ctx>
The following are multiple choice questions (with answers) about STEM.
Mrs. Tucker gives her class samples of different soil types. Which action best helps Tanisha identify each soil type?
A. smelling it
B. touching it
C. weighing it
D. measuring it
Answer: | [
"smelling it",
"touching it",
"weighing it",
"measuring it"
] | B | |
qasc_4062 | qasc | <ctx>
a flood is an overflow of water ( or rarely other fluids ) that submerges land that is usually dry. in the sense of " flowing water ", the word may also be applied to the inflow of the tide. floods are an area of study of the discipline hydrology and are of significant concern in agriculture, civil engineering and public health. human changes to the environment often increase the intensity and frequency of flooding, for example land use changes such as deforestation and removal of wetlands, changes in waterway course or flood controls such as with levees, and larger environmental issues such as climate change and sea level rise. in particular climate change's increased rainfall and extreme weather events increase the severity of other causes for flooding, resulting in more intense floods and increased flood risk. flooding may occur as an overflow of water from water bodies, such as a river, lake, or ocean, in which the water overtops or breaks levees, resulting in some of that water escaping its usual boundaries, or it may occur due to an accumulation of rainwater on saturated ground in an areal flood. while the size of a lake or other body of water will vary with seasonal changes in precipitation and snow melt, these changes in size are unlikely to be considered significant unless they flood property or drown domestic animals. floods can also occur in rivers when the flow rate exceeds the capacity of the river channel, particularly at bends or meanders in the waterway. floods often cause damage to homes and businesses if they are in the natural flood plains of rivers. while riverine flood damage can be eliminated by moving away from rivers and other bodies of water, people have traditionally lived and worked by rivers because the land is usually flat and fertile and because rivers provide easy travel and access to commerce and industry. flooding can lead to secondary consequences in addition to damage to property, such as long - term displacement of residents and creating increased spread of waterborne diseases and vector - borne diseases transmitted by mosquitoes. types areal floods can happen on flat or low - lying areas when water is supplied by rainfall or snowmelt more rapidly than it can either infiltrate or run off. the excess accumulates in place, sometimes to hazardous depths. surface soil can become
a flood is an overflow or accumulation of an expanse of water that submerges land. flood ( s ), the flood, flooded or flooding may also refer to : computing flood fill, an algorithm that determines the area connected to a given node in a multi - dimensional array flooding ( computer networking ) internet relay chat flood, a form of denial - of - service attack mac flooding, a technique employed to compromise the security of network switches network flood, a denial - of - service attack on a network film and television the flood ( 1927 film ), a german silent film the flood ( 1931 film ), american film directed by james tinling the flood ( 1958 film ), a czech film the flood ( 1962 film ), an american television film narrated by laurence harvey the flood ( 1963 film ), children's adventure film written by jean scott rogers flood!, a 1976 american television film the flood : who will save our children? a 1993 american film based on real events. the flood ( 1994 film ), a french - russian film flood ( 2007 film ), a 2007 disaster film flood ( 2017 film ), a 2017 canadian animated short film by amanda strong the flood ( 2010 film ) or mabul, an israeli film the flood ( 2019 film ), a british drama film directed by anthony woodley the flood ( 2020 film ), an australian drama film directed by victoria wharfe mcintyre " flooded " ( buffy the vampire slayer ), an episode of buffy the vampire slayer " flood " ( the young ones ), an episode of the young ones " the flood " ( mad men ), an episode of mad men the flood, a fictional viral doctor who villain literature the flood ( al - faydan ), a 1975 short story collection by haidar haidar flood ( baxter novel ), a 2008 novel by stephen baxter halo : the flood, a 2003 novel by william c. dietz flood ( doyle novel ), a 2002 novel by richard doyle the flood ( novel ), a 1986 novel by ian rankin the flood ( novella ), an 1880 novella by emile zola the flood, a 2004 novel by maggie gee flood, a 2002 novel by james heneghan floods, a 2000 volume of poetry by maurice riordan flood,
severe flooding occurred during the 2011 monsoon season in thailand. the flooding began at the end of july triggered by the landfall of tropical storm nock - ten. these floods soon spread through the provinces of northern, northeastern, and central thailand along the mekong and chao phraya river basins. in october floodwaters reached the mouth of the chao phraya and inundated parts of the capital city of bangkok. flooding persisted in some areas until mid - january 2012, and resulted in a total of 815 deaths ( with three missing ) and 13. 6 million people affected. sixty - five of thailand's 76 provinces were declared flood disaster zones, and over of farmland was damaged. the disaster has been described as " the worst flooding yet in terms of... water and people affected. " the world bank has estimated 1. 425 trillion baht ( us $ 46. 5 billion ) in economic damages and losses due to flooding, as of 1 december 2011. most of this was due to the manufacturing industry, as seven major industrial estates were inundated in water as much as deep during the floods. disruptions to manufacturing supply chains affected regional automobile production and caused a global shortage of hard disk drives which lasted throughout 2012. the world bank's estimate for this disaster means it ranks as the world's fourth costliest disaster as of 2011, surpassed only by the 2011 tohoku earthquake and tsunami in japan, great hanshin earthquake in 1995, and hurricane katrina in 2005. a 2015 study suggests increasing odds that flooding similar in intensity to the 2011 flood will occur in the future. thailand's southern provinces also saw seasonal flash - flooding towards the end of the year, although these were not as destructive as the floods in the more northerly provinces. background various regions of thailand are prone to seasonal flash - flooding due to their tropical climate. the floods often occur in the north and spread down the chao phraya river through the central plains, in the northeast along the chi and mun rivers flowing into the mekong, or in the coastal hillsides of the east and south. remnants of tropical storms that strike vietnam or the peninsular south commonly increase precipitation, resulting in further risk of flooding. drainage control systems, including several dams, irrigation
. the following table shows the flow rates of some of the waterways measured on november 2, 1968, flow rates that for the gauging stations listed in 2002 still represented the highest figure in the available time series. the 1968 event was not an isolated one, and numerous other flood waves have caused flood events in the biella area over time. among the most notable ones in the last century are : may 1923 : a flood of the cervo river caused severe damage in piedicavallo and rosazza ; november 1951 : widespread flooding throughout the biella area and particularly in the cervo valley ; october and november 1976 : numerous disruptions caused in particular by the flooding of olobbia, elvo, oremo and quargnasca ; september 1981 : flooding in the upper cervo valley with damage caused mostly by the minor hydrographic network ; april 1986 : a large landslide blocked the former ss 232 near valle mosso ; september 1993 : after 36 hours of bad weather, a flood of the cervo river caused the bypass bridge to collapse. there were no casualties thanks to a roadman who noticed the impending disaster and had the bridge closed half an hour before the collapse. june 2002 : heavy rainfall caused landslides and overflowing in the western biella basins ; the most serious damage occurred in the upper cervo valley, while in the oropa basin a landslide destroyed a long section of the access road to the rosazza tunnel. october 2020 : the flood event mainly affected the strona di mosso valley and sessera valley, with very heavy damage especially to the road system. lakes in the hilly and mountainous part of the biellese region there are numerous lakes, generally of small to medium size with the exception of lake viverone. the latter, with a surface area of almost 6 km², is in fact the third largest lake in piedmont and is an important tourist hub with numerous accommodation and recreational facilities located on its shores. the lake is located on the border with the province of turin ( in fact, about 1 / 6 of its surface area falls within the municipality of azeglio ) ; a public boat line connects the main towns along the coast. lakes in the biella region can
</ctx>
The following are multiple choice questions (with answers) about STEM.
what is caused by flooding?
A. landslides
B. desert
C. sunlight
D. drought
Answer: | [
"landslides",
"desert",
"sunlight",
"drought"
] | A | |
openbookqa_212 | openbookqa | <ctx>
through experiences and development of personal abilities instead of through commercially provided consumption experiences. many of the key traits of voluntary simplification have been exhibited in a less extreme, but more widespread way, through the phenomenon of " downshifting. " downshifting involves a change of lifestyle and consumption patterns that exchange a relatively highly paid career for a lower paid / lower stress but more rewarding one, shifting to a lower level of material consumption but a higher level of quality of life and personal satisfaction. sustainable consumption choices all types of consumption are not equally important in terms of their sustainability impacts. the european environmental impact of product project provides a rigorous analysis of research into the environmental impact of products consumed by households. the project's input / output - based methodology assesses 255 domestic product types against a wide range of environmental impacts. it concludes that 70 – 80 % of total impacts relate to food and drink consumption ; housing ( including domestic energy use ) ; and transport ( including commuting, leisure, and holiday travel ). ideally, all aspects of our consumption behaviors and production systems will become oriented toward sustainability, but initially significant progress would be achieved through the following : sustainable food and drink consumption choices – consumption levels that are more conducive to health ; a reduced consumption of meat products due to their contribution to climate change ; choosing organically produced and locally sourced, seasonal produce ; and greater composting of biodegradable food waste sustainable housing consumption choices – more emphasis on purchasing homes constructed using sustainable materials and choosing and creating homes with high levels of insulation and energy efficiency. this also involves energy usage within the home based on sustainable energy sources, and the avoidance of energy waste while living in the home ( e. g. through energy - efficient refrigerators and energy saving bulbs ). sustainable travel behavior – reducing the amount of travel ( e. g. through home - working or teleconference services ) or finding alternative transport means for journeys, such as cycling for leisure rather than driving. seeking tourism offerings that try to protect the global and local environment and also the cultures within tourism destinations. sustainable choices and motivational imbalance individuals may experience motivational imbalance in which they believe a particular choice has positive personal outcomes but is subject to disapproval by important others or that the
the energy required to grow food as well as the quantity of chemicals utilized during production. a wastewater treatment system and a rainwater collection system are just a couple of the environmentally - friendly amenities the village has to offer. with the aid of these characteristics, the town is able to use less water and discharge fewer pollutants into the environment. the village is able to lessen its environmental effect and set an example for other communities by utilizing renewable energy sources, natural building materials, organic farming methods, and eco - friendly features. economic impacts of huangbaiyu ecovillage huangbaiyu ecovillage, as a model of sustainable living, has benefited the neighborhood's economy. although the village was intended to be self - sufficient and have a minimal impact on the environment, it has also benefited the local economy. by using locals for both the construction of the village and the upkeep and management of the many renewable energy sources, the community has helped to create jobs in the neighborhood. restaurants, cafes, and gift shops, among other regional companies, now have more opportunities, thanks to the village. additionally, the village has drawn visitors from all over the world, which has benefited the local economy. numerous foreign dignitaries have visited the village, and it has appeared in a number of documentaries, both of which have raised awareness of the place. furthermore, the village has served as an example for nearby sustainable communities like the green village. this project has brought more money into the region and helped to create more jobs. local government support of huangbaiyu ecovillage in china's liaoning province, the local government has given huangbaiyu ecovillage substantial backing. the village was intended to be self - sufficient and to have little impact on the environment, and the local government has taken action to help achieve this objective. the village's construction and the installation of renewable energy sources were both financially supported by the local government. the hamlet has grown more accessible and self - sufficient thanks to this funding. additionally, the village's organic farming practices and use of natural building materials have received support from the regional government. the village's rainwater collection system and sewage treatment system are both supported by the government. the village's efforts to
2 they will choose to travel by plane. 27 % of young people claim they will travel to a faraway destination. more specifically, people under the age of 30 are more likely to consider climate implications of vacation spots and air travel. home energy and landscaping reducing home energy use through measures such as insulation, better energy efficiency of appliances, cool roofs, heat reflective paints, lowering water heater temperature, and improving heating and cooling efficiency can significantly reduce an individual's carbon footprint. after home insulation and ventilation has been checked, replacing a failed gas boiler with a heat pump can be considered, especially in climates where both heating and cooling are required. in addition, the choice of energy used to heat, cool, and power homes makes a difference in the carbon footprint of individual homes. many energy suppliers in various countries worldwide have options to purchase part or pure " green energy " ( usually electricity but occasionally also gas ). these methods of energy production emit almost no greenhouse gases once they are up and running. installing rooftop solar, both on a household and community scale, also drastically reduces household emissions, and at scale could be a major contributor to greenhouse gas abatement. low energy products and consumption labels, such as energy star in the us, can be seen on many household appliances, home electronics, office equipment, heating and cooling equipment, windows, residential light fixtures, and other products. energy star is a program in the u. s. that promotes energy efficiency. when buying air conditioning, the choice of coolant is important. carbon emission labels describe the carbon dioxide emissions created as a by - product of manufacturing, transporting, or disposing of a consumer product. environmental product declarations ( epd ) " present transparent, verified and comparable information about the life - cycle environmental impact of products. " these labels may help consumers choose lower energy products. converting appliances such as stoves, water heaters and furnaces from gas to electric reduces emissions of co2 and methane. landscape and gardens plants process carbon dioxide to make organic molecules like cellulose, sugars, starches, plant proteins, and oils. perennials keep a large proportion of those organic molecules for as long as they live, not releasing them until microorganisms decompose
e. g. respiratory diseases, cancer ) ; increases in radiative forcing and global warming ; decreased agricultural productivity due to ozone deposition ; ecosystem acidification and eutrophication. technology impacts the applications of technology often result in unavoidable and unexpected environmental impacts, which according to the i = pat equation is measured as resource use or pollution generated per unit gdp. environmental impacts caused by the application of technology are often perceived as unavoidable for several reasons. first, given that the purpose of many technologies is to exploit, control, or otherwise " improve " upon nature for the perceived benefit of humanity while at the same time, the myriad of processes in nature have been optimized and are continually adjusted by evolution, any disturbance of these natural processes by technology is likely to result in negative environmental consequences. second, the conservation of mass principle and the first law of thermodynamics ( i. e., conservation of energy ) dictate that whenever material resources or energy are moved around or manipulated by technology, environmental consequences are inescapable. third, according to the second law of thermodynamics, order can be increased within a system ( such as the human economy ) only by increasing disorder or entropy outside the system ( i. e., the environment ). thus, technologies can create " order " in the human economy ( i. e., order as manifested in buildings, factories, transportation networks, communication systems, etc. ) only at the expense of increasing " disorder " in the environment. according to several studies, increased entropy is likely to correlate to negative environmental impacts. mining industry the environmental impact of mining includes erosion, formation of sinkholes, loss of biodiversity, and contamination of soil, groundwater and surface water by chemicals from mining processes. in some cases, additional forest logging is done in the vicinity of mines to increase the available room for the storage of the created debris and soil. even though plants need some heavy metals for their growth, excess of these metals is usually toxic to them. plants that are polluted with heavy metals usually exhibit reduced growth, yield and performance. pollution by heavy metals decreases the soil organic matter composition resulting in a decline in soil nutrients which then leads to a decline
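The i = pat identity mentioned above factors environmental impact into population, affluence (consumption per person), and technology (impact per unit of GDP). A quick arithmetic illustration, with every number invented purely for the example:

```python
population = 10_000_000   # people (hypothetical)
affluence = 20_000        # GDP per person per year, in dollars (hypothetical)
technology = 0.0005       # tonnes of CO2 emitted per dollar of GDP (hypothetical)

impact = population * affluence * technology  # I = P * A * T
print(f"{impact:,.0f} tonnes of CO2 per year")  # -> 100,000,000
```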
</ctx>
The following are multiple choice questions (with answers) about STEM.
To positively affect the environment
A. cut down trees and shrubs
B. drive a gas guzzling car
C. use Styrofoam plates and cups
D. cultivate dogwoods and oaks
Answer: | [
"cut down trees and shrubs",
"drive a gas guzzling car",
"use Styrofoam plates and cups",
"cultivate dogwoods and oaks"
] | D | |
arc_easy_1039 | arc_easy | <ctx>
osteoblasts ( from the greek combining forms for " bone ", ὀστέο -, osteo - and βλαστάνω, blastano " germinate " ) are cells with a single nucleus that synthesize bone. however, in the process of bone formation, osteoblasts function in groups of connected cells. individual cells cannot make bone. a group of organized osteoblasts together with the bone made by a unit of cells is usually called the osteon. osteoblasts are specialized, terminally differentiated products of mesenchymal stem cells. they synthesize dense, crosslinked collagen and specialized proteins in much smaller quantities, including osteocalcin and osteopontin, which compose the organic matrix of bone. in organized groups of disconnected cells, osteoblasts produce hydroxyapatite, the bone mineral, that is deposited in a highly regulated manner, into the organic matrix forming a strong and dense mineralized tissue, the mineralized matrix. the mineralized skeleton is the main support for the bodies of air breathing vertebrates. it is also an important store of minerals for physiological homeostasis including both acid - base balance and calcium or phosphate maintenance. bone structure the skeleton is a large organ that is formed and degraded throughout life in the air - breathing vertebrates. the skeleton, often referred to as the skeletal system, is important both as a supporting structure and for maintenance of calcium, phosphate, and acid - base status in the whole organism. the functional part of bone, the bone matrix, is entirely extracellular. the bone matrix consists of protein and mineral. the protein forms the organic matrix. it is synthesized and then the mineral is added. the vast majority of the organic matrix is collagen, which provides tensile strength. the matrix is mineralized by deposition of hydroxyapatite ( alternative name, hydroxylapatite ). this mineral is hard, and provides compressive strength. thus, the collagen and mineral together are a composite material with excellent tensile and compressive strength, which can bend under a strain and recover its shape without
a bone is a rigid organ that constitutes part of the skeleton in most vertebrate animals. bones protect the various other organs of the body, produce red and white blood cells, store minerals, provide structure and support for the body, and enable mobility. bones come in a variety of shapes and sizes and have complex internal and external structures. they are lightweight yet strong and hard and serve multiple functions. bone tissue ( osseous tissue ), which is also called bone in the uncountable sense of that word, is hard tissue, a type of specialised connective tissue. it has a honeycomb - like matrix internally, which helps to give the bone rigidity. bone tissue is made up of different types of bone cells. osteoblasts and osteocytes are involved in the formation and mineralisation of bone ; osteoclasts are involved in the resorption of bone tissue. modified ( flattened ) osteoblasts become the lining cells that form a protective layer on the bone surface. the mineralised matrix of bone tissue has an organic component of mainly collagen called ossein and an inorganic component of bone mineral made up of various salts. bone tissue is mineralized tissue of two types, cortical bone and cancellous bone. other types of tissue found in bones include bone marrow, endosteum, periosteum, nerves, blood vessels and cartilage. in the human body at birth, there are approximately 300 bones present ; many of these fuse together during development, leaving a total of 206 separate bones in the adult, not counting numerous small sesamoid bones. the largest bone in the body is the femur or thigh - bone, and the smallest is the stapes in the middle ear. the greek word for bone is ὀστέον ( " osteon " ), hence the many terms that use it as a prefix – such as osteopathy. in anatomical terminology, including the terminologia anatomica international standard, the word for a bone is os ( for example, os breve, os longum, os sesamoideum ). structure bone is not uniformly solid, but consists of a flexible matrix
osteopathy. in anatomical terminology, including the terminologia anatomica international standard, the word for a bone is os ( for example, os breve, os longum, os sesamoideum ). structure bone is not uniformly solid, but consists of a flexible matrix ( about 30 % ) and bound minerals ( about 70 % ) which are intricately woven and endlessly remodeled by a group of specialized bone cells. their unique composition and design allows bones to be relatively hard and strong, while remaining lightweight. bone matrix is 90 to 95 % composed of elastic collagen fibers, also known as ossein, and the remainder is ground substance. the elasticity of collagen improves fracture resistance. the matrix is hardened by the binding of inorganic mineral salt, calcium phosphate, in a chemical arrangement known as bone mineral, a form of calcium apatite. it is the mineralization that gives bones rigidity. bone is actively constructed and remodeled throughout life by special bone cells known as osteoblasts and osteoclasts. within any single bone, the tissue is woven into two main patterns, known as cortical and cancellous bone, each with a different appearance and characteristics. cortex the hard outer layer of bones is composed of cortical bone, which is also called compact bone as it is much denser than cancellous bone. it forms the hard exterior ( cortex ) of bones. the cortical bone gives bone its smooth, white, and solid appearance, and accounts for 80 % of the total bone mass of an adult human skeleton. it facilitates bone's main functions β to support the whole body, to protect organs, to provide levers for movement, and to store and release chemical elements, mainly calcium. it consists of multiple microscopic columns, each called an osteon or haversian system. each column is multiple layers of osteoblasts and osteocytes around a central canal called the haversian canal. volkmann's canals at right angles connect the osteons together. the columns are metabolically active, and as bone is reabsorbed and created the nature and location of the cells within the osteon
( 1990 ) distinguish three cell types from osteoblast to mature osteocyte : type i preosteocyte ( osteoblastic osteocyte ), type ii preosteocyte ( osteoid osteocyte ), and type iii preosteocyte ( partially surrounded by mineral matrix ). the embedded " osteoid - osteocyte " must do two functions simultaneously : regulate mineralization and form connective dendritic processes, which requires cleavage of collagen and other matrix molecules. the transformation from motile osteoblast to entrapped osteocyte takes about three days, and during this time, the cell produces a volume of extracellular matrix three times its own cellular volume, which results in 70 % volume reduction in the mature osteocyte cell body compared to the original osteoblast volume. the cell undergoes a dramatic transformation from a polygonal shape to a cell that extends dendrites toward the mineralizing front, followed by dendrites that extend to either the vascular space or bone surface. as the osteoblast transitions to an osteocyte, alkaline phosphatase is reduced, and casein kinase ii is elevated, as is osteocalcin. osteocytes appear to be enriched in proteins that are resistant to hypoxia, which appears to be due to their embedded location and restricted oxygen supply. oxygen tension may regulate the differentiation of osteoblasts into osteocytes, and osteocyte hypoxia may play a role in disuse - mediated bone resorption. function although osteocytes are relatively inert cells, they are capable of molecular synthesis and modification, as well as transmission of signals over long distances, in a way similar to the nervous system. they are the most common cell type in bone ( 31, 900 per cubic millimeter in bovine bone to 93, 200 per cubic millimeter in rat bone ). most of the receptor activities that play an important role in bone function are present in the mature osteocyte. osteocytes are an important regulator of bone mass. osteocytes contain
</ctx>
The following are multiple choice questions (with answers) about STEM.
What are the two main jobs of the skeletal system?
A. support and protect the body
B. digest food and carry messages for the body
C. get rid of wastes and defend the body from disease
D. sense changes in the environment and carry food through the body
Answer: | [
"support and protect the body",
"digest food and carry messages for the body",
"get rid of wastes and defend the body from disease",
"sense changes in the environment and carry food through the body"
] | A | |
openbookqa_302 | openbookqa | <ctx>
phenophases were consistently recorded over the 58 years from 1891 to 1948, and a further 14 phenophases were recorded for the 20 years between 1929 and 1948. the returns were summarised each year in the quarterly journal of the rms as the phenological reports. jeffree ( 1960 ) summarised the 58 years of data, which show that flowering dates could be as many as 21 days early and as many as 34 days late, with extreme earliness greatest in summer - flowering species, and extreme lateness in spring - flowering species. in all 25 species, the timings of all phenological events are significantly related to temperature, indicating that phenological events are likely to get earlier as climate warms. the phenological reports ended suddenly in 1948 after 58 years, and britain remained without a national recording scheme for almost 50 years, just at a time when climate change was becoming evident. during this period, individual dedicated observers made important contributions. the naturalist and author richard fitter recorded the first flowering date ( ffd ) of 557 species of british flowering plants in oxfordshire between about 1954 and 1990. writing in science in 2002, richard fitter and his son alistair fitter found that " the average ffd of 385 british plant species has advanced by 4. 5 days during the past decade compared with the previous four decades. " they note that ffd is sensitive to temperature, as is generally agreed, that " 150 to 200 species may be flowering on average 15 days earlier in britain now than in the very recent past " and that these earlier ffds will have " profound ecosystem and evolutionary consequences ". in scotland, david grisenthwaite meticulously recorded the dates he mowed his lawn since 1984. his first cut of the year was 13 days earlier in 2004 than in 1984, and his last cut was 17 days later, providing evidence for an earlier onset of spring and a warmer climate in general. national recording was resumed by tim sparks in 1998 and, from 2000, has been led by citizen science project nature's calendar, run by the woodland trust and the centre for ecology and hydrology. latest research shows that oak bud burst has advanced more than 11 days since the 19th century and that
chemicals and insecticides as well as pharmaceuticals. geography nanchang is located in inland southeastern china, south of the yangtze river and is situated on the right bank of the gan river just below its confluence with the jin river and some southwest of its discharge into poyang lake. climate nanchang has a monsoon - influenced humid subtropical climate ( koppen cfa ) with four distinct seasons. winters are short and cool with occasional frosts ; it begins somewhat sunny and dry but becomes progressively wetter and more overcast. spring begins especially gloomy, and from april to june, each month has more than of rainfall. summer is long and humid, with amongst the highest temperatures of any chinese provincial capital, and with the sun shining close to 60 percent of the time in july and august, is the sunniest time of year. autumn is warm to mild with the lowest rainfall levels of the year. the monthly 24 - hour average temperature ranges from in january to in july, while the annual mean is. annual precipitation stands at around ; with monthly percent possible sunshine ranging from 27 percent in march to 56 percent in august, the city receives 1, 809 hours of sunshine annually in average. extremes since 1951 have ranged from on 29 december 1991 to on 23 july 1961. demographics as of the 2020 chinese census, nanchang had a population of 6, 255, 007, of which 5, 382, 162 lived in built - up ( urbanized ) area consisting of 6 urban districts plus nanchang county largely being urbanized. 37 ethnic groups were counted amongst its prefecture divisions. administration honggutan new district ( ) is an economic management area and not a formal administrative division. economy nanchang is a regional hub for agricultural production in jiangxi province with its grain yield being 16. 146 million tons in 2000. products such as rice and oranges are economic staples. the ford motor company has a plant in nanchang, assembling the ford transit van as part of the jiangling motor joint venture. much of its industry revolves around aircraft manufacturing, automobile manufacturing, metallurgy, electro - mechanics, textile, chemical engineering, traditional chinese medicine, pharmaceuticals and others. nanchang has a rapid economic development and ranks 15th among the fastest growing 20 cities in
diurnal variations, but the modifying effect of the sea restricts the range to less than that in most other parts of the united kingdom. january is the coldest month, with mean minimum temperatures between 1 °c ( 34 °f ) and 2 °c ( 36 °f ). july and august are the warmest, with mean daily maxima around 21 °c ( 70 °f ). in general, december is the dullest month and june the sunniest. the south - west of england enjoys a favoured location, particularly in summer, when the azores high extends its influence north - eastwards towards the uk. cumulus cloud often forms inland, especially near hills, and reduces exposure to sunshine. the average annual sunshine is about 1, 600 hours. rainfall tends to be associated with atlantic depressions or with convection. in summer, convection caused by solar surface heating sometimes forms shower clouds, and a large proportion of the annual precipitation falls from showers and thunderstorms at this time of year. average rainfall is around 800 – 900 mm ( 31 – 35 in ). about 8 – 15 days of snowfall is typical. november to march have the highest mean wind speeds, with june to august having the lightest ; the prevailing wind direction is from the south - west. a combination of the rainfall and geology leads to an estimated average daily runoff from springs and boreholes of some 330, 000 m3 ( 72 million imperial gallons ). bristol waterworks company ( now bristol water ) recognised the value of this resource and between 1846 and 1853 created a series of tunnels, pipes, and aqueducts called the " line of works ", which still carry approximately 18, 200 m3 ( 4 million imperial gallons ) of water a day to barrow gurney reservoirs for filtration and then on to bristol and the surrounding areas. this collection and conveyance of water from the chewton mendip and east and west harptree areas is accomplished by the effect of gravity on the runoff. water from the mendips is also collected in cheddar reservoir, which was constructed in the 1930s and takes water from the springs in cheddar gorge. ecology the area hosts three semi - natural habitats of national importance : ash – maple woodland ( fraxinus spp. and acer
. national recording was resumed by tim sparks in 1998 and, from 2000, has been led by citizen science project nature's calendar, run by the woodland trust and the centre for ecology and hydrology. latest research shows that oak bud burst has advanced more than 11 days since the 19th century and that resident and migrant birds are unable to keep up with this change. continental europe in europe, phenological networks are operated in several countries, e. g. germany's national meteorological service operates a very dense network with approx. 1200 observers, the majority of them on a voluntary basis. the pan european phenology ( pep ) project is a database that collects phenological data from european countries. currently 32 european meteorological services and project partners from across europe have joined and supplied data. in geneva, switzerland, the opening of the first leaf of an official chestnut tree ( a horse chestnut ) has been observed and recorded since 1818, thus forming the oldest set of records of phenological events in switzerland. this task is conducted by the secretary of the grand council of geneva ( the local parliament ), and the opening of the first leaf is announced publicly as indicating the beginning of the spring. data show a trend during the 20th century towards an opening that happens earlier and earlier. other countries there is a usa national phenology network in which both professional scientists and lay recorders participate. many other countries such as canada ( alberta plantwatch and saskatchewan plantwatch ), china and australia also have phenological programs. in eastern north america, almanacs are traditionally used for information on action phenology ( in agriculture ), taking into account the astronomical positions at the time. william felker has studied phenology in ohio, us, since 1973 and now publishes " poor will's almanack ", a phenological almanac for farmers ( not to be confused with a late 18th - century almanac by the same name ). in the amazon rainforests of south america, the timing of leaf production and abscission has been linked to rhythms in gross primary production at several sites. early in their lifespan, leaves reach a peak in their capacity for photosynthesis, and in tropical evergreen forests of some regions of
</ctx>
The following are multiple choice questions (with answers) about STEM.
In a decade spring occurs
A. three times
B. twelve times
C. ten times
D. twice
Answer: | [
"three times",
"twelve times",
"ten times",
"twice"
] | C | |
medmcqa_1429 | medmcqa | <ctx>
stem cells, on the other hand, have the ability to divide for indefinite periods and to give rise to specialized cells. they are best described in the context of normal human development. development begins when a sperm fertilizes an egg and creates a single cell that has the potential to form an entire organism. in the first hours after fertilization, this cell divides into identical cells. in humans, approximately four days after fertilization and after several cycles of cell division, these cells begin to specialize, forming a hollow sphere of cells, called a blastocyst. the blastocyst has an outer layer of cells, and inside this hollow sphere, there is a cluster of cells called the inner cell mass. the cells of the inner cell mass go on to form virtually all of the tissues of the human body. although the cells of the inner cell mass can form virtually every type of cell found in the human body, they cannot form an organism. these cells are referred to as pluripotent. pluripotent stem cells undergo further specialization into multipotent progenitor cells that then give rise to functional cells. examples of stem and progenitor cells include : radial glial cells ( embryonic neural stem cells ) that give rise to excitatory neurons in the fetal brain through the process of neurogenesis. hematopoietic stem cells ( adult stem cells ) from the bone marrow that give rise to red blood cells, white blood cells, and platelets. mesenchymal stem cells ( adult stem cells ) from the bone marrow that give rise to stromal cells, fat cells, and types of bone cells. epithelial stem cells ( progenitor cells ) that give rise to the various types of skin cells. muscle satellite cells ( progenitor cells ) that contribute to differentiated muscle tissue. a pathway that is guided by the cell adhesion molecules consisting of four amino acids, arginine, glycine, aspartic acid, and serine, is created as the cellular blastomere differentiates from the single - layered blastula to the three primary layers of germ cells in mammals, namely the ectoderm, mesoderm and endoder
probability of double - hit mutant generation corresponds to purely symmetric division of scs with a large proliferation rate of border stem cells along with a small, but non - zero, proliferation rate of central stem cells. stem cell niches harboring continuously dividing cells, such as those located at the base of the intestinal gland, are maintained at small population size. this presents a challenge to the maintenance of multicellular tissues, as small populations of asexually dividing individuals will accumulate deleterious mutations through genetic drift and succumb to mutational meltdown. mathematical modeling of the intestinal gland reveals that the small population size within the stem cell niche minimizes the probability of carcinogenesis occurring anywhere, at the expense of gradually accumulated deleterious mutations throughout organismal lifetime – a process that contributes to tissue degradation and aging. therefore, the population size of the stem cell niche represents an evolutionary trade - off between the probability of cancer formation and the rate of aging. examples germline germline stem cells ( gscs ) are found in organisms that continuously produce sperm and eggs until they are sterile. these specialized stem cells reside in the gsc niche, the initial site for gamete production, which is composed of the gscs, somatic stem cells, and other somatic cells. in particular, the gsc niche is well studied in the genetic model organism drosophila melanogaster and has provided an extensive understanding of the molecular basis of stem cell regulation. gsc niche in drosophila ovaries in drosophila melanogaster, the gsc niche resides in the anterior - most region of each ovariole, known as the germarium. the gsc niche consists of necessary somatic cells - terminal filament cells, cap cells, escort cells, and other stem cells which function to maintain the gscs. the gsc niche holds on average 2 – 3 gscs, which are directly attached to somatic cap cells and escort stem cells, which send maintenance signals directly to the gscs. gscs are easily identified through histological staining against vasa protein ( to identify germ cells ) and 1b1 protein ( to outline cell structures and a germline specific fu
the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases in size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ftsz polymerization and " z - ring " formation ). the new cell wall ( septum ) fully develops, resulting in the complete split of the bacterium. the new daughter cells have tightly coiled dna rods, ribosomes, and plasmids. genetics inheritance genetics is the scientific study of inheritance. mendelian inheritance, specifically, is the process by which genes and traits are passed on from parents to offspring. it has several principles. the first is that genetic characteristics, alleles, are discrete and have alternate forms ( e. g., purple vs. white or tall vs. dwarf ), each inherited from one of two parents. based on the law of dominance and uniformity, which states that some alleles are dominant while others are recessive ; an organism with at least one dominant allele will display the phenotype of that dominant allele. during gamete formation, the alleles for each gene segregate, so that each gamete carries only one allele for each gene. heterozygotic individuals produce gametes with an equal frequency
inhibitors and shh. when expressed during early development, bmps are responsible for the differentiation of epidermal cells from the ventral ectoderm. inhibitors of bmps, such as nog and chrd, promote differentiation of ectoderm cells into prospective neural tissue on the dorsal side. if any of these genes are improperly regulated, then proper formation and differentiation will not occur. bmp also plays a very important role in the patterning that occurs after the formation of the neural tube. due to the graded response the cells of the neural tube have to bmp and shh signaling, these pathways are in competition to determine the fate of preneural cells. bmp promotes dorsal differentiation of pre - neural cells into sensory neurons and shh promotes ventral differentiation into motor neurons. many other genes that help to determine neural fate and proper development include reln, sox9, wnt, notch and delta coding genes, hox, and various cadherin coding genes like cdh1 and cdh2. some recent research has shown that the level of gene expression changes drastically in the brain at different periods throughout the life cycle. for example, during prenatal development the amount of mrna in the brain ( an indicator of gene expression ) is exceptionally high, and drops to a significantly lower level not long after birth. the only other point of the life cycle during which expression is this high is during the mid - to late - life period, during 50 – 70 years of age. while the increased expression during the prenatal period can be explained by the rapid growth and formation of the brain tissue, the reason behind the surge of late - life expression remains a topic of ongoing research. current research neurogenetics is a field that is rapidly expanding and growing. the current areas of research are very diverse in their focuses. one area deals with molecular processes and the function of certain proteins, often in conjunction with cell signaling and neurotransmitter release, cell development and repair, or neuronal plasticity. behavioral and cognitive areas of research continue to expand in an effort to pinpoint contributing genetic factors. as a result of the expanding neurogenetics field a better
</ctx>
The following are multiple choice questions (with answers) about STEM.
Why do fetal cells continue to divide while terminally differentiated adult cells do not?
A. There are many cyclin inhibitors which prevent cells from entering S phase in adults
B. Phosphatase absent in fetal cells
C. Proteinase is absent in fetus
D. Absence of CDK (cyclin-dependent kinase)
Answer: | [
"There are many cyclin inhibitors which prevent cell to enter into S phase in adult",
"Phosphatase absent in fetal cells",
"Proteinase is absent in fetus",
"Absence of CD kinase"
] | A | A cyclin-dependent kinase inhibitor protein is a protein which inhibits cyclin-dependent kinase. Several function as tumor suppressor genes. Cell cycle progression is negatively controlled by cyclin-dependent kinases inhibitors (called CDIs, CKIs or CDKIs). CDIs are involved in cell cycle arrest at the G1 phase. |
medmcqa_3016 | medmcqa | <ctx>
falls during the rainy season ( october to april ). while water resources are quite abundant in sumatra, kalimantan, sulawesi, maluku and irian, water shortages occur during the dry season in parts of java, bali and the lesser sunda islands. in particular in java, the dry season flows are inadequate to meet the demand, leading to irrigation shortages. irrigation accounts for 93 % of water use. the total storage capacity amounts to only 5–6 % of the river flows. construction of reservoirs is constrained by lack of good reservoir sites, high density of population at possible reservoir sites and expected short reservoir lifetimes due to siltation. groundwater potential in indonesia is very limited. however, much of the eastern islands depend on groundwater because of surface water scarcity. groundwater overexploitation occurs in heavily populated coastal areas of java, including in jakarta and semarang. in jakarta, it has caused seawater intrusion up to from the coast and land subsidence at a rate of 2–34 cm / year in east jakarta. in semarang, land subsidence occurs at a rate of 9 cm / year. water utilities abstract water from rivers and lakes ( 60 % ), springs ( 25 % ) and groundwater ( 15 % ). for example, the main water source for jakarta is the jatiluhur dam on the citarum river southeast of the city. for those who are self - supplied or receive water from community - based organizations, shallow groundwater and springs are by far the main sources of water on most islands. on sumatra and irian, however, rainwater harvesting is also an important water source. pollution domestic sewage, industrial effluents, agricultural runoff, and mismanaged solid waste are polluting surface and groundwater, especially in java. indonesia ranks among the worst countries in asia in sewerage and sanitation coverage. few indonesian cities possess even minimal sanitation systems. the absence of an established sanitation network forces many households to rely upon private septic tanks or to dispose of their waste directly into rivers and canals. the commonality of the latter practice, together with the prevalence of polluted shallow wells used for drinking water supply in urban areas, has led to repeated epidemics of gas
surface water is water located on top of land, forming terrestrial ( surrounded by land on all sides ) waterbodies, and may also be referred to as blue water, as opposed to seawater and waterbodies like the ocean. the vast majority of surface water is produced by precipitation. as the climate warms in the spring, snowmelt runs off towards nearby streams and rivers contributing towards a large portion of human drinking water. levels of surface water lessen as a result of evaporation as well as water moving into the ground, becoming groundwater. alongside being used for drinking water, surface water is also used for irrigation, wastewater treatment, livestock, industrial uses, hydropower, and recreation. for usgs water - use reports, surface water is considered freshwater when it contains less than 1, 000 milligrams per liter ( mg / l ) of dissolved solids. there are three major types of surface water. permanent ( perennial ) surface waters are present year round, and include lakes, rivers and wetlands ( marshes and swamps ). semi - permanent ( ephemeral ) surface water refers to bodies of water that are only present at certain times of the year, including seasonally dry channels such as creeks, lagoons and waterholes. human - made surface water is water that is contained by infrastructure that humans have assembled. this would be dammed artificial lakes, canals and artificial ponds ( e. g. garden ponds ) or swamps. the surface water held by dams can be used for renewable energy in the form of hydropower. hydropower is the channeling of surface water sourced from rivers and streams to produce energy. measurement surface water can be measured as annual runoff. this includes the amount of rain and snowmelt drainage left after the uptake of nature, evaporation from land, and transpiration from vegetation. in areas such as california, the california water science center records the flow of surface water and annual runoff by utilizing a network of approximately 500 stream gages collecting real time data from all across the state. this then contributes to the 8, 000 stream gage stations that are overseen by the usgs national stream gage record. this in turn has provided up - to - date records and documents of water data over the years.
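The runoff definition above (precipitation left over after evaporation, transpiration, and infiltration) is just a subtraction, which a short sketch can make explicit. All figures below are made-up illustrative values, not measurements from the passage or from USGS gages.

```python
# simple annual water balance, all values in millimetres per year (illustrative)
precipitation = 900.0
evaporation   = 300.0   # from land and open water
transpiration = 250.0   # water taken up and released by vegetation
infiltration  = 200.0   # water that moves into the ground and becomes groundwater

annual_runoff = precipitation - evaporation - transpiration - infiltration
print(f"annual surface runoff: {annual_runoff:.0f} mm")   # 150 mm
```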
( 6 ) ; the supply reservoir, which is physically closest to the compressor, is also called a wet tank because the majority of oil and water from the compressor gather here. a second, downstream reservoir is called the service reservoir, and forms the primary source for brake operation. the supply and service reservoirs are sized to allow several service brake applications if the compressor fails or the engine stops. from the service reservoir, compressed air is then distributed via a four - way protection valve ( 5 ) into the primary reservoir ( rear brake reservoir ) and the secondary reservoir ( front / trailer - brake reservoir ), a parking - brake reservoir, and an auxiliary air supply distribution point. the supply system also includes various check, pressure limiting, drain and safety valves. control system the control system is further divided into two service brake - circuits, the parking brake - circuit, and the trailer brake - circuit. the dual - service brake circuits are further split into front - and rear - wheel circuits which receive compressed air from their individual supply reservoirs for added safety in case of an air leak. the service brakes are applied by means of a brake pedal air valve ( 9 ) which regulates both circuits. depressing the brake pedal allows compressed air stored in the respective supply reservoirs to enter each service brake chamber ( 10 ) and ( 12 ), causing the service brakes to actuate. releasing the brake pedal disconnects the supply reservoirs ; a brake return spring forces the service brakes to release, and the compressed air that was used to actuate the service brakes is exhausted to the atmosphere. repeated application of the service brakes will deplete the air reservoir pressure, prompting the governor to load the compressor again. relay valve ( s ) are used to improve brake response with long lines. routing the lines for the rear and trailer service brakes through the brake pedal air valve could result in unacceptably slow actuation, so these brakes are plumbed directly to the service reservoir through a local relay valve ( 11 ) and the line from the brake pedal air valve is used to actuate the relay valve instead of the service brakes. similarly, quick - release valve ( s ) are located close to their respective brakes and allow the air to be exhausted more quickly when the brakes are released.
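The interaction described here, repeated service-brake applications drawing down reservoir pressure until the governor reloads the compressor, can be sketched as a small simulation. All pressures, per-application losses, and recharge rates below are invented round numbers for illustration, not values from any brake standard.

```python
pressure = 120.0                       # service reservoir pressure, psi (illustrative)
CUT_IN, CUT_OUT = 100.0, 120.0         # governor thresholds (illustrative)
compressor_loaded = False

for application in range(1, 9):
    pressure -= 8.0                    # each brake application exhausts some air
    if pressure <= CUT_IN:
        compressor_loaded = True       # governor loads the compressor
    if compressor_loaded:
        pressure = min(pressure + 12.0, CUT_OUT)
        if pressure >= CUT_OUT:
            compressor_loaded = False  # governor unloads at cut-out
    print(f"application {application}: {pressure:5.1f} psi "
          f"(compressor {'loaded' if compressor_loaded else 'unloaded'})")
```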
1996 estimated that world reserves are adequate for at least 50 years. a similar prediction for platinum use in fuel cells notes that the metal could be easily recycled. phosphorus phosphorus supplies are essential to farming and depletion of reserves is estimated at somewhere from 60 to 130 years. individual countries' supplies vary widely ; without a recycling initiative america's supply is estimated around 30 years. phosphorus supplies affect total agricultural output which in turn limits alternative fuels such as biodiesel and ethanol. peak water hubbert's original analysis did not apply to renewable resources. however over - exploitation often results in a hubbert peak nonetheless. a modified hubbert curve applies to any resource that can be harvested faster than it can be replaced. for example, a reserve such as the ogallala aquifer can be mined at a rate that far exceeds replenishment. this turns much of the world's underground water and lakes into finite resources with peak usage debates similar to oil. these debates usually center around agriculture and suburban water usage but generation of electricity from nuclear energy or coal and tar sands mining mentioned above is also water resource intensive. the term fossil water is sometimes used to describe aquifers whose water is not being recharged. renewable resources fisheries : at least one researcher has attempted to perform hubbert linearization ( hubbert curve ) on the whaling industry, as well as charting the transparently dependent price of caviar on sturgeon depletion. another example is the cod of the north sea. the comparison of the cases of fisheries and of mineral extraction tells us that the human pressure on the environment is causing a wide range of resources to go through a depletion cycle which follows a hubbert curve. sustainability gaps sustainability measurements and indicators are part of an ever - evolving and changing process and have various gaps to be filled to achieve an integrated framework and model. the following are some of the breaks in continuity : global indicators - due to differences in social, economical, and environmental conditions of countries, each country has its own indicators and indexes to measure sustainability, which can lead to improper and varying interpretation at the global level. hence, common indexes and measuring parameters would allow comparisons among countries. in agriculture, comparable indicators are already in use
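A Hubbert curve as mentioned in the passage is simple to compute: cumulative extraction follows a logistic, and the production rate is its derivative, a symmetric bell peaking at some time. The sketch below assumes illustrative parameter values (total recoverable resource, steepness, peak year) chosen only to show the shape, not fitted to any real data.

```python
import math

def hubbert_rate(t, urr=2000.0, k=0.08, t_peak=1970.0):
    """Production rate dQ/dt for a logistic cumulative-extraction curve Q(t).
    urr: ultimately recoverable resource; k: steepness; t_peak: peak time.
    All parameter values here are illustrative assumptions."""
    e = math.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

# crude text plot of the bell-shaped production curve
for year in range(1930, 2011, 10):
    rate = hubbert_rate(year)
    print(f"{year}: {rate:5.1f}  {'#' * int(rate)}")
```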
</ctx>
The following are multiple choice questions (with answers) about STEM.
Man is the only reservoir for
A. Salmonella
B. Campylobacter jejuni
C. E. histolytica
D. Y. enterocolitica
Answer: | [
"Salmonella",
"Campylobacter jejuni",
"E. histolytica",
"Y. enterocolitica"
] | C | Entamoeba histolytica is predominantly found in primates (including humans) and occasionally in dogs, cats, cattle and pigs.
The parasite has a worldwide distribution and is prevalent in tropical and subtropical countries.
Salmonella - The most common animal reservoirs are chickens, turkeys, pigs, and cows; dozens of other domestic and wild animals also harbor these organisms.
Campylobacter jejuni - C. jejuni is carried by most of these animal reservoirs and is the predominant species isolated from chickens and cattle.
Y. enterocolitica - dogs, sheep, wild rodents, and environmental water may also be reservoirs of pathogenic Y. enterocolitica strains.
Ref : http://parasite.org.au/para-site/text/entamoeba-text.html#:~:text=Host%20range%3A%20Entamoeba%20histolytica%20is,in%20tropical%20and%20subtropical%20countries |
aquarat_964 | aquarat | <ctx>
##2261917482286102 x^4 - 13392499389128176685 x^3 + 16875252458877684989 x^2 + 3759900174855208738 x - 46769930553931905995 which has a root of 12574411168418005980468 modulo rsa - 130. rsa - 140 rsa - 140 has 140 decimal digits ( 463 bits ), and was factored on february 2, 1999, by a team led by herman te riele and composed of stefania cavallar, bruce dodson, arjen k. lenstra, paul leyland, walter lioen, peter l. montgomery, brian murphy and paul zimmermann. the value and factorization are as follows : rsa - 140 = rsa - 140 = 3398717423028438554530123627613875835633986495969597423490929302771479 × 6264200187401285096151654948264442219302037178623509019111660653946049 the factorization was found using the number field sieve algorithm and an estimated 2000 mips - years of computing time. rsa - 150 rsa - 150 has 150 decimal digits ( 496 bits ), and was withdrawn from the challenge by rsa security. rsa - 150 was eventually factored into two 75 - digit primes by aoki et al. in 2004 using the general number field sieve ( gnfs ), years after bigger rsa numbers that were still part of the challenge had been solved. the value and factorization are as follows : rsa - 150 = rsa - 150 = 348009867102283695483970451047593424831012817350385456889559637548278410717 × 4456
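The factorization claim above is directly checkable with arbitrary-precision integer arithmetic. The sketch below multiplies the two RSA-140 factors quoted in the passage and confirms the product has 140 decimal digits and 463 bits (the modulus itself is not reproduced in this excerpt, so only the sizes are verified); it assumes the factor strings survived extraction intact.

```python
# the two RSA-140 prime factors, copied verbatim from the passage
p = 3398717423028438554530123627613875835633986495969597423490929302771479
q = 6264200187401285096151654948264442219302037178623509019111660653946049

n = p * q
assert len(str(n)) == 140, "product should be a 140-digit number"
assert n.bit_length() == 463, "product should be a 463-bit number"
print(f"RSA-140 = p * q has {len(str(n))} decimal digits ({n.bit_length()} bits)")
```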
all the digits have been used, write " 00 " ) and write them to the right of the remainder from the previous step ( on the first step, there will be no remainder ). in other words, multiply the remainder by 100 and add the two digits. this will be the current value c. find p, y and x, as follows : let p be the part of the root found so far, ignoring any decimal point. ( for the first step, p = 0. ) determine the greatest digit x such that x ( 20p + x ) <= c. we will use a new variable y = x ( 20p + x ). note : 20p + x is simply twice p, with the digit x appended to the right. note : x can be found by guessing what c / ( 20 · p ) is and doing a trial calculation of y, then adjusting x upward or downward as necessary. place the digit as the next digit of the root, i. e., above the two digits of the square you just brought down. thus the next p will be the old p times 10 plus x. subtract y from c to form a new remainder. if the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. otherwise go back to step 1 for another iteration. examples find the square root of 152. 2756 ( digit pairs : 01 52 . 27 56 ) :
step 1 : bring down 01, so c = 1 ; 1 * 1 <= 1 < 2 * 2, so x = 1 ; y = x * x = 1 * 1 = 1 ; remainder 0 ; root so far : 1
step 2 : bring down 52, so c = 52 ; 22 * 2 <= 52 < 23 * 3, so x = 2 ; y = ( 20 + x ) * x = 22 * 2 = 44 ; remainder 8 ; root so far : 12
step 3 : bring down 27, so c = 827 ; 243 * 3 <= 827 < 244 * 4, so x = 3 ; y = ( 240 + x ) * x = 243 * 3 = 729 ; remainder 98 ; root so far : 12. 3
step 4 : bring down 56, so c = 9856 ; 2464 * 4 <= 9856 < 2465 * 5, so x = 4 ; y = ( 2460 + x ) * x = 2464 * 4 = 9856 ; remainder 0
algorithm terminates : answer = 12. 34 binary numeral system ( base 2 ) this section uses the formalism from the digit - by - digit calculation
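The decimal procedure above translates directly into code. Below is a minimal Python sketch (the function name and the integer-only framing are my own): it processes the radicand two digits at a time, choosing the greatest digit x with x(20p + x) <= c exactly as in the worked example. Running it on 1522756 reproduces the 12.34 result, with the decimal point placed afterwards.

```python
def digit_pair_sqrt(n: int) -> int:
    """Integer square root via the digit-by-digit (digit pair) method."""
    # split n into base-100 "digit pairs", most significant first
    pairs = []
    while n:
        pairs.append(n % 100)
        n //= 100
    p, c = 0, 0
    for pair in reversed(pairs):
        c = c * 100 + pair                      # bring down the next two digits
        x = 0
        while (x + 1) * (20 * p + (x + 1)) <= c:
            x += 1                              # greatest x with x(20p + x) <= c
        c -= x * (20 * p + x)                   # subtract y to form the new remainder
        p = p * 10 + x                          # append x as the next root digit
    return p

assert digit_pair_sqrt(1522756) == 1234         # sqrt(152.2756) = 12.34
print(digit_pair_sqrt(1522756))
```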
##662855699269624207451365738384255497823390996283918 3287667419172988072221996532403300258906083211160744508191024837057033 rsa - 460 rsa - 460 has 460 decimal digits ( 1, 526 bits ), and has not been factored so far. rsa - 460 = 1786856020404004433262103789212844585886400086993882955081051578507634807524 1464078819812169681394445771476334608488687746254318292828603396149562623036 3564554675355258128655971003201417831521222464468666642766044146641933788836 8932452217321354860484353296131403821175862890998598653858373835628654351880 4806362231643082386848731052350115776715521149453708868428108303016983133390 0416365515466857004900847501644808076825638918266848964153626486460448430073 4909 rsa - 1536 rsa - 1536 has 463 decimal digits ( 1, 536 bits ), and has not been factored so far. $ 150, 000 was previously offered for successful factorization. rsa - 1536 = 1847699703211741474306835620200164403018
##0 = d. then γ an integer and γ = 1 γ ≤ 0 the value of is. to begin with, we shall simplify matters by concentrating on a particular value of and generalise the result at a later stage. we shall use the value. the indicial equation has a root at, and we see from the recurrence relation that when the denominator has a factor which vanishes when. in this case, a solution can be obtained by putting where is a constant. with this substitution, the coefficients of vanish when and. the factor of in the denominator of the recurrence relation cancels with that of the numerator when. hence, our solution takes the form if we start the summation at rather than we see that the result ( as we have written it ) generalises easily. for, with then obviously, if, then. the expression for we have just given looks a little inelegant since we have a multiplicative constant apart from the usual arbitrary multiplicative constant. later, we shall see that we can recast things in such a way that this extra constant never appears. the other root to the indicial equation is, but this gives us ( apart from a multiplicative constant ) the same result as found using. this means we must take the partial derivative ( w. r. t. ) of the usual trial solution in order to find a second independent solution. if we define the linear operator as then since in our case, ( we insist that. ) taking the partial derivative w. r. t, note that we must evaluate the partial derivative at ( and not at the other root ). otherwise the right hand side is non - zero in the above, and we do not have a solution of. the factor is not cancelled for and. this part of the second independent solution is now we can turn our attention to the terms where the factor cancels. first after this, the recurrence relations give us so, if we have we need the partial derivatives similarly, we can write and it becomes clear that for here, is the th partial sum of the harmonic series, and by definition and. putting these together, for the case we
</ctx>
The following are multiple choice questions (with answers) about STEM.
Find the value of
10^150 ÷ 10^146
A. A)10
B. B)100
C. C)1000
D. D)10000
Answer: | [
"A)10",
"B)100",
"C)1000",
"D)10000"
] | D | Explanation:
= 10^150 / 10^146 = 10^(150 - 146) = 10^4 = 10000
Option D |
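The quotient-of-powers step used in this explanation can be checked directly with exact integer arithmetic; a one-line sanity check (an editorial addition, not part of the original solution):

```python
assert 10**150 // 10**146 == 10**(150 - 146) == 10_000
print("10^150 / 10^146 =", 10**150 // 10**146)
```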
m1_2684 | m1 | <ctx>
random forests or random decision forests is an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time. for classification tasks, the output of the random forest is the class selected by most trees. for regression tasks, the mean or average prediction of the individual trees is returned. random decision forests correct for decision trees'habit of overfitting to their training set. random forests generally outperform decision trees, but their accuracy is lower than gradient boosted trees. however, data characteristics can affect their performance. the first algorithm for random decision forests was created in 1995 by tin kam ho using the random subspace method, which, in ho's formulation, is a way to implement the " stochastic discrimination " approach to classification proposed by eugene kleinberg. an extension of the algorithm was developed by leo breiman and adele cutler, who registered " random forests " as a trademark in 2006 (, owned by minitab, inc. ). the extension combines breiman's " bagging " idea and random selection of features, introduced first by ho and later independently by amit and geman in order to construct a collection of decision trees with controlled variance. history the general method of random decision forests was first proposed by ho in 1995. ho established that forests of trees splitting with oblique hyperplanes can gain accuracy as they grow without suffering from overtraining, as long as the forests are randomly restricted to be sensitive to only selected feature dimensions. a subsequent work along the same lines concluded that other splitting methods behave similarly, as long as they are randomly forced to be insensitive to some feature dimensions. note that this observation of a more complex classifier ( a larger forest ) getting more accurate nearly monotonically is in sharp contrast to the common belief that the complexity of a classifier can only grow to a certain level of accuracy before being hurt by overfitting. the explanation of the forest method's resistance to overtraining can be found in kleinberg's theory of stochastic discrimination. the early development of breiman's notion of random forests was influenced by the work of amit and geman who introduced the idea of searching over a random
of irrelevant features, and produces inspectable models. however, they are seldom accurate ". in particular, trees that are grown very deep tend to learn highly irregular patterns : they overfit their training sets, i. e. have low bias, but very high variance. random forests are a way of averaging multiple deep decision trees, trained on different parts of the same training set, with the goal of reducing the variance. this comes at the expense of a small increase in the bias and some loss of interpretability, but generally greatly boosts the performance in the final model. bagging the training algorithm for random forests applies the general technique of bootstrap aggregating, or bagging, to tree learners. given a training set x = x1,..., xn with responses y = y1,..., yn, bagging repeatedly ( B times ) selects a random sample with replacement of the training set and fits trees to these samples : for b = 1,..., B : sample, with replacement, n training examples from x, y ; call these xb, yb. train a classification or regression tree fb on xb, yb. after training, predictions for unseen samples x' can be made by averaging the predictions from all the individual regression trees on x' : f = ( 1 / B ) Σb fb ( x' ), or by taking the plurality vote in the case of classification trees. this bootstrapping procedure leads to better model performance because it decreases the variance of the model, without increasing the bias. this means that while the predictions of a single tree are highly sensitive to noise in its training set, the average of many trees is not, as long as the trees are not correlated. simply training many trees on a single training set would give strongly correlated trees ( or even the same tree many times, if the training algorithm is deterministic ) ; bootstrap sampling is a way of de - correlating the trees by showing them different training sets. additionally, an estimate of the uncertainty of the prediction can be made as the standard deviation of the predictions from all the individual regression trees on x' : σ = sqrt( Σb ( fb ( x' ) - f )^2 / ( B - 1 ) ). the number of samples / trees, B, is a free parameter. typically, a few hundred to several thousand trees are used, depending on the size and nature of the training set. an optimal number of trees can be found using cross - validation, or by observing
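The bagging loop reconstructed above maps almost line-for-line onto code. This is a minimal sketch, assuming scikit-learn's DecisionTreeRegressor as the base learner; the toy sine data and B = 200 are illustrative choices. It averages the per-tree predictions and uses their standard deviation as the uncertainty estimate mentioned in the passage.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))            # toy training set
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 200)    # noisy responses

B = 200
trees = []
for _ in range(B):
    idx = rng.integers(0, len(X), len(X))        # sample n examples with replacement
    trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))

x_new = np.array([[1.0]])
preds = np.array([t.predict(x_new)[0] for t in trees])
# average of the B trees, plus standard deviation as an uncertainty estimate
print(f"prediction: {preds.mean():.3f} +/- {preds.std():.3f}")
```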
explanation of the forest method's resistance to overtraining can be found in kleinberg's theory of stochastic discrimination. the early development of breiman's notion of random forests was influenced by the work of amit and geman who introduced the idea of searching over a random subset of the available decisions when splitting a node, in the context of growing a single tree. the idea of random subspace selection from ho was also influential in the design of random forests. in this method a forest of trees is grown, and variation among the trees is introduced by projecting the training data into a randomly chosen subspace before fitting each tree or each node. finally, the idea of randomized node optimization, where the decision at each node is selected by a randomized procedure, rather than a deterministic optimization was first introduced by thomas g. dietterich. the proper introduction of random forests was made in a paper by leo breiman. this paper describes a method of building a forest of uncorrelated trees using a cart like procedure, combined with randomized node optimization and bagging. in addition, this paper combines several ingredients, some previously known and some novel, which form the basis of the modern practice of random forests, in particular : using out - of - bag error as an estimate of the generalization error. measuring variable importance through permutation. the report also offers the first theoretical result for random forests in the form of a bound on the generalization error which depends on the strength of the trees in the forest and their correlation. algorithm preliminaries : decision tree learning decision trees are a popular method for various machine learning tasks. tree learning " come [ s ] closest to meeting the requirements for serving as an off - the - shelf procedure for data mining ", say hastie et al., " because it is invariant under scaling and various other transformations of feature values, is robust to inclusion of irrelevant features, and produces inspectable models. however, they are seldom accurate ". in particular, trees that are grown very deep tend to learn highly irregular patterns : they overfit their training sets, i. e. have low bias, but very high variance. random forests are a way
. the next few sections talk about how the random forest algorithm works in more detail. creation of decision trees the next step of the algorithm involves the generation of decision trees from the bootstrapped dataset. to achieve this, the process examines each gene / feature and determines for how many samples the feature's presence or absence yields a positive or negative result. this information is then used to compute a confusion matrix, which lists the true positives, false positives, true negatives, and false negatives of the feature when used as a classifier. these features are then ranked according to various classification metrics based on their confusion matrices. some common metrics include estimate of positive correctness ( calculated by subtracting false positives from true positives ), measure of " goodness ", and information gain. these features are then used to partition the samples into two sets : those who possess the top feature, and those who do not. the diagram below shows a decision tree of depth two being used to classify data. for example, a data point that exhibits feature 1, but not feature 2, will be given a " no ". another point that does not exhibit feature 1, but does exhibit feature 3, will be given a " yes ". this process is repeated recursively for successive levels of the tree until the desired depth is reached. at the very bottom of the tree, samples that test positive for the final feature are generally classified as positive, while those that lack the feature are classified as negative. these trees are then used as predictors to classify new data. random forests the next part of the algorithm involves introducing yet another element of variability amongst the bootstrapped trees. in addition to each tree only examining a bootstrapped set of samples, only a small but consistent number of unique features are considered when ranking them as classifiers. this means that each tree only knows about the data pertaining to a small constant number of features, and a variable number of samples that is less than or equal to that of the original dataset. consequently, the trees are more likely to return a wider array of answers, derived from more diverse knowledge. this results in a random forest, which possesses numerous benefits over a single decision tree
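To add the per-tree feature restriction described here on top of plain bagging, each tree can be limited to a small random subset of the feature columns. A minimal sketch follows; the feature counts and subset size are illustrative, and note that many real implementations re-draw a feature subset at every split rather than once per tree as done here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n, d = 300, 16
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 3] > 0).astype(int)          # toy labels from 2 informative features

k = int(np.sqrt(d))                              # common heuristic: sqrt(d) features per tree
forest = []
for _ in range(100):
    rows = rng.integers(0, n, n)                 # bootstrap sample of the rows
    cols = rng.choice(d, size=k, replace=False)  # random feature subset for this tree
    tree = DecisionTreeClassifier().fit(X[np.ix_(rows, cols)], y[rows])
    forest.append((tree, cols))

x = rng.normal(size=(1, d))
votes = [tree.predict(x[:, cols])[0] for tree, cols in forest]
print("majority vote:", max(set(votes), key=votes.count))
```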
</ctx>
The following are multiple choice questions (with answers) about STEM.
Which of the following is true regarding the random forest classification algorithm?
A. It is not suitable for parallelization.
B. It uses only a subset of features for learning in each weak learner.
C. We compute a prediction by randomly selecting the decision of one weak learner.
D. It produces a human interpretable model.
Answer: | [
"It is not suitable for parallelization.",
"It uses only a subset of features for learning in each weak learner.",
"We compute a prediction by randomly selecting the decision of one weak learner.",
"It produces a human interpretable model."
] | B | It uses only a subset of features for learning in each weak learner.
Reasoning: Random forest employs a technique called "feature bagging," where each decision tree (weak learner) is trained on a random subset of features, promoting diversity among the trees. This approach helps in reducing overfitting and enhances model performance. The other options are incorrect as random forests can be parallelized, predictions are made by aggregating decisions from all weak learners, and the model is generally not easily interpretable. |
m1_1498 | m1 | <ctx>
to tn, the extra column added to the matrix does not perturb the solution when a zero is used to extend the forward vector. however, the extra row added to the matrix has perturbed the solution ; and it has created an unwanted error term εf which occurs in the last place. the above equation gives it the value of : this error will be returned to shortly and eliminated from the new forward vector ; but first, the backwards vector must be extended in a similar ( albeit reversed ) fashion. for the backwards vector, as before, the extra column added to the matrix does not perturb this new backwards vector ; but the extra row does. here we have another unwanted error εb with value : these two error terms can be used to form higher - order forward and backward vectors described as follows. using the linearity of matrices, the following identity holds for all : if α and β are chosen so that the right hand side yields e1 or en, then the quantity in the parentheses will fulfill the definition of the nth forward or backward vector, respectively. with those alpha and beta chosen, the vector sum in the parentheses is simple and yields the desired result. to find these coefficients,, are such that : and respectively, are such that : by multiplying both previous equations by one gets the following equation : now, all the zeroes in the middle of the two vectors above being disregarded and collapsed, only the following equation is left : with these solved for ( by using the cramer 2×2 matrix inverse formula ), the new forward and backward vectors are : performing these vector summations, then, gives the nth forward and backward vectors from the prior ones. all that remains is to find the first of these vectors, and then some quick sums and multiplications give the remaining ones. the first forward and backward vectors are simply : using the backward vectors the above steps give the n backward vectors for m. from there, a more arbitrary equation is : the solution can be built in the same recursive way that the backwards vectors were built. accordingly, must be generalized to a sequence of intermediates, such that. the solution is then built recursively by noticing that if then,
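The forward/backward-vector recursion described above is what Levinson-type solvers implement, and in practice one can call a library routine rather than hand-rolling the update. A usage sketch with SciPy (the 4x4 system below is an arbitrary illustrative example); solve_toeplitz performs this kind of O(n^2) recursion internally:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

c = np.array([4.0, 1.0, 0.5, 0.2])    # first column of the Toeplitz matrix
r = np.array([4.0, 0.3, 0.1, 0.05])   # first row
b = np.array([1.0, 2.0, 3.0, 4.0])

x = solve_toeplitz((c, r), b)          # Levinson-style O(n^2) solve
assert np.allclose(toeplitz(c, r) @ x, b)
print(x)
```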
element this can be accomplished by reversing the sign of if necessary. finding all solutions so far one possible solution for and has been established given. it is, however, not the only possible solution and it may not even be a valid solution from a practical point of view. to begin with, since the scaling of is undefined, the scaling of is also undefined. it must lie in the null space of since for the subsequent analysis of the solutions, however, the exact scaling of is not so important as its " sign ", i. e., in which direction it points. let be normalized vector in the null space of. it is then the case that both and are valid translation vectors relative. it is also possible to change into in the derivations of and above. for the translation vector this only causes a change of sign, which has already been described as a possibility. for the rotation, on the other hand, this will produce a different transformation, at least in the general case. to summarize, given there are two opposite directions which are possible for and two different rotations which are compatible with this essential matrix. in total this gives four classes of solutions for the rotation and translation between the two camera coordinate systems. on top of that, there is also an unknown scaling for the chosen translation direction. it turns out, however, that only one of the four classes of solutions can be realized in practice. given a pair of corresponding image coordinates, three of the solutions will always produce a 3d point which lies behind at least one of the two cameras and therefore cannot be seen. only one of the four classes will consistently produce 3d points which are in front of both cameras. this must then be the correct solution. still, however, it has an undetermined positive scaling related to the translation component. the above determination of and assumes that satisfy the internal constraints of the essential matrix. if this is not the case which, for example, typically is the case if has been estimated from real ( and noisy ) image data, it has to be assumed that it approximately satisfy the internal constraints. the vector is then chosen as right singular vector of corresponding to the smallest singular value. 3d points from corresponding image points many
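The four rotation/translation classes discussed above come out of the singular value decomposition of the essential matrix. Below is a sketch of the standard recipe (numpy-based; the variable names are mine): it enumerates the two rotations and two translation signs, leaving the cheirality test, i.e. checking which candidate puts triangulated points in front of both cameras, to the caller.

```python
import numpy as np

def decompose_essential(E):
    """Return the four (R, t) candidates encoded by an essential matrix.
    Standard recipe; only one candidate passes the cheirality test."""
    U, _, Vt = np.linalg.svd(E)
    # force proper rotations (det = +1) so the results stay in SO(3)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                        # translation direction; scale stays undetermined
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```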
original rows, and that all other rows are unchanged. explicitly, if σ and τ satisfy the above equation, then for and ( which divisions are possible by the definition of β ) one has so that the matrix is invertible, with inverse now l can be obtained by fitting into rows and columns t and k of the identity matrix. by construction the matrix obtained after left - multiplying by l has entry β at position ( t, jt ) ( and due to our choice of α and γ it also has an entry 0 at position ( k, jt ), which is useful though not essential for the algorithm ). this new entry β divides the entry that was there before, and so in particular ; therefore repeating these steps must eventually terminate. one ends up with a matrix having an entry at position ( t, jt ) that divides all entries in column jt. step iii : eliminating entries finally, adding appropriate multiples of row t, it can be achieved that all entries in column jt except for that at position ( t, jt ) are zero. this can be achieved by left - multiplication with an appropriate matrix. however, to make the matrix fully diagonal we need to eliminate nonzero entries on the row of position ( t, jt ) as well. this can be achieved by repeating the steps in step ii for columns instead of rows, and using multiplication on the right by the transpose of the obtained matrix l. in general this will result in the zero entries from the prior application of step iii becoming nonzero again. however, notice that each application of step ii for either rows or columns must continue to reduce the value of, and so the process must eventually stop after some number of iterations, leading to a matrix where the entry at position ( t, jt ) is the only non - zero entry in both its row and column. at this point, only the block of a to the lower right of ( t, jt ) needs to be diagonalized, and conceptually the algorithm can be applied recursively, treating this block as a separate matrix. in other words, we can increment t by one and go back to step i. final step applying
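Rather than iterating the row/column reduction by hand, the diagonal form this procedure builds toward can be obtained from a computer algebra system. A usage sketch, assuming SymPy's normal-forms module is available (the example matrix is arbitrary):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[ 2, 4,  4],
            [-6, 6, 12],
            [10, 4, 16]])
# diagonal matrix in which each diagonal entry divides the next
print(smith_normal_form(A, domain=ZZ))
```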
causes. the table below shows, the earth being considered as the physical system and for different physical phenomena, the model parameters that describe the system, the physical quantity that describes the state of the physical system and observations commonly made on the state of the system. in the inverse problem approach we, roughly speaking, try to know the causes given the effects. general statement of the inverse problem the inverse problem is the " inverse " of the forward problem : instead of determining the data produced by particular model parameters, we want to determine the model parameters that produce the data that is the observation we have recorded ( the subscript obs stands for observed ). our goal, in other words, is to determine the model parameters such that ( at least approximately ) where is the forward map. we denote by the ( possibly infinite ) number of model parameters, and by the number of recorded data. we introduce some useful concepts and the associated notations that will be used below : the space of models denoted by : the vector space spanned by model parameters ; it has dimensions ; the space of data denoted by : if we organize the measured samples in a vector with components ( if our measurements consist of functions, is a vector space with infinite dimensions ) ; : the response of model ; it consists of the data predicted by model ; : the image of by the forward map, it is a subset of ( but not a subspace unless is linear ) made of responses of all models ; : the data misfits ( or residuals ) associated with model : they can be arranged as a vector, an element of. the concept of residuals is very important : in the scope of finding a model that matches the data, their analysis reveals if the considered model can be considered as realistic or not. systematic unrealistic discrepancies between the data and the model responses also reveals that the forward map is inadequate and may give insights about an improved forward map. when operator is linear, the inverse problem is linear. otherwise, that is most often, the inverse problem is nonlinear. also, models cannot always be described by a finite number of parameters. it is the case when we look for distributed parameters ( a distribution of wave - speeds for instance ) : in such cases
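When the forward map is linear, finding the model that minimizes the data misfits defined above is a least-squares problem. A minimal numpy sketch (the 3-parameter toy forward map and noise level are invented for illustration): it recovers model parameters m from observed data d_obs = F m + noise and reports the residuals used to judge whether the model is realistic.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(20, 3))                    # toy linear forward map (20 data, 3 parameters)
m_true = np.array([1.0, -2.0, 0.5])
d_obs = F @ m_true + rng.normal(0, 0.05, 20)    # observed data with noise

m_est, *_ = np.linalg.lstsq(F, d_obs, rcond=None)
residuals = d_obs - F @ m_est                   # data misfits for the recovered model
print("recovered model:", np.round(m_est, 3))
print("rms misfit:", np.sqrt(np.mean(residuals**2)))
```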
</ctx>
The following are multiple choice questions (with answers) about STEM.
Which of the following is WRONG about inverted files? (Slide 24,28 Week 3)
A. The space requirement for the postings file is O(n)
B. Variable length compression is used to reduce the size of the index file
C. The index file has space requirement of O(n^beta), where beta is about 1/2
D. Storing differences among word addresses reduces the size of the postings file
Answer: | [
"The space requirement for the postings file is O(n)",
"Variable length compression is used to reduce the size of the index file",
"The index file has space requirement of O(n^beta), where beta is about 1β2",
"Storing differences among word addresses reduces the size of the postings file"
] | B | To determine which statement about inverted files is incorrect, let's first break down the concepts involved and analyze each option systematically. An inverted file (or inverted index) is a data structure used in information retrieval systems, such as search engines, to efficiently retrieve documents containing specific terms.
### Step-by-Step Analysis
1. **Understand Inverted Files**:
- An inverted file consists of two main components: the *index file* and the *postings file*.
- The *index file* contains a list of terms (words) and pointers to their corresponding postings.
- The *postings file* contains the actual document IDs (or positions) where each term appears.
2. **Analyze Each Option**:
- **Option A**: "The space requirement for the postings file is O(n)"
- Here, *n* typically represents the number of terms or documents in the collection.
- This statement suggests that the space required for the postings file grows linearly with the number of documents. This is *generally correct* because the postings file must store information for each term across all documents.
- **Option B**: "Variable length compression is used to reduce the size of the index file"
- Variable length compression techniques, such as those used in data compression algorithms, are often employed to reduce the size of files, including index files.
- This statement is also *correct*, as variable-length encoding can effectively decrease the size of the index file.
- **Option C**: "The index file has space requirement of O(n^beta), where beta is about 1/2"
- This statement indicates that the space requirement for the index file grows at a rate proportional to \(n^{\beta}\) where \(\beta\) is approximately \(1/2\).
- Inverted indexes typically have a space requirement that is proportional to the number of terms and the average document size, but stating \(O(n^{1/2})\) for the index file is a bit misleading and may not hold true universally. It could depend on various factors like term distribution and document length.
- This option is likely *incorrect* or misleading.
- **Option D**: "Storing differences among word addresses reduces the size of the postings file"
- This option refers to a common technique called *delta encoding*, where instead of storing absolute positions, the differences between consecutive addresses are stored.
- This is a well-known method to save space and is *correct*.
3. **Conclusion**:
- After examining all options, it's clear that **Option C** is the one that is likely wrong or misleading regarding the space requirement of the index file.
### Final Answer
The statement that is WRONG about inverted files is **C. The index file has space requirement of O(n^beta), where beta is about 1/2**.
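The two size-reduction techniques the options refer to, storing differences between postings and variable-length compression, compose naturally, as a short sketch shows. This is a from-scratch illustration using variable-byte codes, not the indexing code from the course slides the question cites.

```python
def vbyte_encode(n):
    """Variable-byte code: 7 payload bits per byte; high bit set on the final byte."""
    out = []
    while True:
        out.insert(0, n % 128)
        if n < 128:
            break
        n //= 128
    out[-1] += 128                            # mark the last (least-significant) byte
    return bytes(out)

postings = [3, 7, 154, 159, 202]              # sorted document IDs for one term
# delta (gap) encoding: store differences between successive document IDs
deltas = [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]
encoded = b"".join(vbyte_encode(g) for g in deltas)
print(deltas)                                 # [3, 4, 147, 5, 43] -- small numbers
print(f"{len(encoded)} bytes vs {4 * len(postings)} with fixed 4-byte integers")
```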
mmlu_aux_4851 | mmlu_auxiliary | <ctx>
24. 2 production of electromagnetic waves learning objectives by the end of this section, you will be able to : • describe the electric and magnetic waves as they move out from a source, such as an ac generator. • explain the mathematical relationship between the magnetic field strength and the electrical field strength. • calculate the maximum strength of the magnetic field in an electromagnetic wave, given the maximum electric field strength. we can get a good understanding of electromagnetic waves ( em ) by considering how they are produced. whenever a current varies, associated electric and magnetic fields vary, moving out from the source like waves. perhaps the easiest situation to visualize is a varying current in a long straight wire, produced by an ac generator at its center, as illustrated in figure 24. 5. figure 24. 5 this long straight gray wire with an ac generator at its center becomes a broadcast antenna for electromagnetic waves. shown here are the charge distributions at four different times. the electric field ( e ) propagates away from the antenna at the speed of light, forming part of an electromagnetic wave. the electric field ( e ) shown surrounding the wire is produced by the charge distribution on the wire. both the e field and the charge distribution vary as the current changes. the changing field propagates outward at the speed of light. there is an associated magnetic field ( b ) which propagates outward as well ( see figure 24. 6 ). the electric and magnetic fields are closely related and propagate as an electromagnetic wave. this is what happens in broadcast antennae such as those in radio and tv stations. closer examination of the one complete cycle shown in figure 24. 5 reveals the periodic nature of the generator - driven charges oscillating up and down in the antenna and the electric field produced. at time t = 0, there is the maximum separation of charge, with negative charges at the top and positive charges at the bottom, producing the maximum magnitude of the electric field ( or e - field ) in the upward direction. one - fourth of a cycle later, there is no charge separation and the field next to the antenna is zero, while the maximum e - field has moved away at speed c. as the process continues, the charge separation reverses and the field reaches its maximum downward value, returns to zero
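The third learning objective above, computing the maximum magnetic field strength from the maximum electric field strength, uses the plane-wave relation B = E / c. A one-step example with an assumed field strength:

```python
c = 3.0e8                          # speed of light, m/s
E_max = 1000.0                     # assumed maximum electric field, V/m
B_max = E_max / c                  # plane-wave relation B = E / c
print(f"B_max = {B_max:.2e} T")    # 3.33e-06 T
```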
radio broadcasting is the broadcasting of audio ( sound ), sometimes with related metadata, by radio waves to radio receivers belonging to a public audience. in terrestrial radio broadcasting the radio waves are broadcast by a land - based radio station, while in satellite radio the radio waves are broadcast by a satellite in earth orbit. to receive the content the listener must have a broadcast radio receiver ( radio ). stations are often affiliated with a radio network that provides content in a common radio format, either in broadcast syndication or simulcast, or both. radio stations broadcast with several different types of modulation : am radio stations transmit in am ( amplitude modulation ), fm radio stations transmit in fm ( frequency modulation ), which are older analog audio standards, while newer digital radio stations transmit in several digital audio standards : dab ( digital audio broadcasting ), hd radio, drm ( digital radio mondiale ). television broadcasting is a separate service that also uses radio frequencies to broadcast television ( video ) signals. history the earliest radio stations were radiotelegraphy systems and did not carry audio. for audio broadcasts to be possible, electronic detection and amplification devices had to be incorporated. the thermionic valve ( a kind of vacuum tube ) was invented in 1904 by the english physicist john ambrose fleming. he developed a device he called an " oscillation valve " ( because it passes current in only one direction ). the heated filament, or cathode, was capable of thermionic emission of electrons that would flow to the plate ( or anode ) when it was at a higher voltage. electrons, however, could not pass in the reverse direction because the plate was not heated and thus not capable of thermionic emission of electrons. later known as the fleming valve, it could be used as a rectifier of alternating current and as a radio wave detector. this greatly improved the crystal set which rectified the radio signal using an early solid - state diode based on a crystal and a so - called cat's whisker. however, what was still required was an amplifier. the triode ( mercury - vapor filled with a control grid ) was created on march 4, 1906, by the austrian robert von lieben independent
a bell, producing a brief oscillating current which is radiated as electromagnetic waves by the antenna. the transmitter repeats this cycle at a rapid rate, so the spark appeared continuous, and the radio signal sounded like a whine or buzz in a radio receiver. the cycle begins when current from the transformer charges up the capacitor, storing positive electric charge on one of its plates and negative charge on the other. while the capacitor is charging the spark gap is in its nonconductive state, preventing the charge from escaping through the coil. when the voltage on the capacitor reaches the breakdown voltage of the spark gap, the air in the gap ionizes, starting an electric spark, reducing its resistance to a very low level ( usually less than one ohm ). this closes the circuit between the capacitor and the coil. the charge on the capacitor discharges as a current through the coil and spark gap. due to the inductance of the coil when the capacitor voltage reaches zero the current doesn't stop but keeps flowing, charging the capacitor plates with an opposite polarity, until the charge is stored in the capacitor again, on the opposite plates. then the process repeats, with the charge flowing in the opposite direction through the coil. this continues, resulting in oscillating currents flowing rapidly back and forth between the plates of the capacitor through the coil and spark gap. the resonant circuit is connected to the antenna, so these oscillating currents also flow in the antenna, charging and discharging it. the current creates an oscillating magnetic field around the antenna, while the voltage creates an oscillating electric field. these oscillating fields radiate away from the antenna into space as an electromagnetic wave ; a radio wave. the energy in the resonant circuit is limited to the amount of energy originally stored in the capacitor. the radiated radio waves, along with the heat generated by the spark, uses up this energy, causing the oscillations to decrease quickly in amplitude to zero. when the oscillating electric current in the primary circuit has decreased to a point where it
electrons in the receiving antenna back and forth, creating a tiny oscillating voltage which is a weaker replica of the current in the transmitting antenna. this voltage is applied to the radio receiver, which extracts the information signal. the receiver first uses a bandpass filter to separate the desired radio station's radio signal from all the other radio signals picked up by the antenna, then amplifies the signal so it is stronger, then finally extracts the information - bearing modulation signal in a demodulator. the recovered signal is sent to a loudspeaker or earphone to produce sound, or a television display screen to produce a visible image, or other devices. a digital data signal is applied to a computer or microprocessor, which interacts with a human user. the radio waves from many transmitters pass through the air simultaneously without interfering with each other. they can be separated in the receiver because each transmitter's radio waves oscillate at a different rate, in other words each transmitter has a different frequency, measured in kilohertz ( khz ), megahertz ( mhz ) or gigahertz ( ghz ). the bandpass filter in the receiver consists of one or more tuned circuits which act like a resonator, similarly to a tuning fork. the tuned circuit has a natural resonant frequency at which it oscillates. the resonant frequency is set equal to the frequency of the desired radio station. the oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. radio signals at other frequencies are blocked by the tuned circuit and not passed on. biological and environmental effects radio waves are non - ionizing radiation, which means they do not have enough energy to separate electrons from atoms or molecules, ionizing them, or break chemical bonds, causing chemical reactions or dna damage. the main effect of absorption of radio waves by materials is to heat them, similarly to the infrared waves radiated by sources of heat such as a space heater or wood fire. the oscillating electric field of the wave causes polar molecules to vibrate back and forth, increasing the
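The tuned circuit's natural resonant frequency mentioned here follows the standard formula f0 = 1 / (2 * pi * sqrt(L * C)). A quick check with assumed component values, chosen to land in the AM broadcast band:

```python
import math

L = 200e-6                                    # assumed inductance, henries
C = 100e-12                                   # assumed capacitance, farads
f0 = 1.0 / (2 * math.pi * math.sqrt(L * C))   # resonant frequency of the tuned circuit
print(f"f0 = {f0 / 1e6:.2f} MHz")             # about 1.13 MHz
```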
</ctx>
The following are multiple choice questions (with answers) about STEM.
Turn on the radio. A program will come out from a broadcasting station miles and miles away. We know that sounds and music themselves couldn't travel that distance through space. There must be something that brings the program from the station. But what is this silent carrier? This is known as radio waves. However, we can't see them or even hear them. In fact, nobody knows exactly what they are. Nevertheless, we know that they are made by electricity. At the broadcasting station, people talk, sing, play musical instruments or make many other sounds. These sounds are changed into electricity. Then from the broadcasting aerial of a tall tower, electricity sends out radio waves which travel in all directions. And some of them reach your radio aerial. What happens next is a magical thing. The radio waves start an electric current in your aerial like the one that was first made in the broadcasting station. Finally, the loudspeaker in your radio set changes the electricity into sound. Therefore, you hear the program. You may find this passage in a _ .
A. guidebook
B. science magazine
C. fashion newspaper
D. fictional novel
Answer: | [
"guidebook",
"science magazine",
"fashion newspaper",
"fictional novel"
] | B | null |
medmcqa_8265 | medmcqa | <ctx>
layouts of clinical laboratories in health institutions vary greatly from one facility to another. for instance, some health facilities have a single laboratory for the microbiology section, while others have a separate lab for each specialty area. the following is an example of a typical breakdown of the responsibilities of each area : microbiology includes culturing of the bacteria in clinical specimens, such as feces, urine, blood, sputum, cerebrospinal fluid, and synovial fluid, as well as possible infected tissue. the work here is mainly concerned with cultures, to look for suspected pathogens which, if found, are further identified based on biochemical tests. also, sensitivity testing is carried out to determine whether the pathogen is sensitive or resistant to a suggested medicine. results are reported with the identified organism ( s ) and the type and amount of drug ( s ) that should be prescribed for the patient. parasitology is where specimens are examined for parasites. for example, fecal samples may be examined for evidence of intestinal parasites such as tapeworms or hookworms. virology is concerned with identification of viruses in specimens such as blood, urine, and cerebrospinal fluid. hematology analyzes whole blood specimens to perform full blood counts, and includes the examination of blood films. other specialized tests include cell counts on various bodily fluids. coagulation testing determines various blood clotting times, coagulation factors, and platelet function. clinical biochemistry commonly performs dozens of different tests on serum or plasma. these tests, mostly automated, includes quantitative testing for a wide array of substances, such as lipids, blood sugar, enzymes, and hormones. toxicology is mainly focused on testing for pharmaceutical and recreational drugs. urine and blood samples are the common specimens. immunology / serology uses the process of antigen - antibody interaction as a diagnostic tool. compatibility of transplanted organs may also be determined with these methods. immunohematology, or blood bank determines blood groups, and performs compatibility testing on donor blood and recipients. it also prepares blood components, derivatives, and products for transfusion. this area determines a patient's blood type and rh status, checks for antibodies to common
was 82 years, which implies a life expectancy in these patients similar to that of an age - matched general population. more than 90 % of patients with primary cad have cold - induced circulatory symptoms ranging from moderate acrocyanosis to severe raynaud phenomena precipitated even by very slight cold exposure. history cold hemagglutination was first reported by landsteiner in 1903 and found to occur in human beings in 1918. the association of cold hemagglutination with hemolysis was described in 1937 by rosenthal and corten. during the 1960s, dacie and schubothe published systematic descriptions of 16 cad patients each. the autoantibodies responsible for hemagglutination at low temperatures, cold agglutinins ( ca ), may be found in the sera of healthy subjects as well as in patients with aiha of the cold reactive types. ca bind to erythrocyte surface antigens at a temperature optimum of 0–4 °c. in contrast to polyclonal ca in healthy individuals, monoclonal ca often have a high - thermal amplitude, which contributes to their pathogenicity at temperatures approaching 37 °c. binding of ca causes agglutination of erythrocytes and the antigen–antibody complex induces complement ( c ) activation and hemolysis. essential clinical manifestations of primary cad are hemolytic anemia and cold - induced circulatory symptoms. exact estimates of the severity of anemia and the frequency of cold - induced symptoms, however, have not been provided until recent years. see also cold shock response donath–landsteiner hemolytic anemia list of hematologic conditions paroxysmal cold hemoglobinuria warm antibody autoimmune hemolytic anemia references external links autoimmune diseases acquired hemolytic anemia cold autoimmune hemolytic anemia
as recessive. other examples may include noble families, which are inbred due to custom rather than local genetic variety. hemolytic disease of the newborn in theory, the maternal production of anti - h during pregnancy might cause hemolytic disease in a fetus who did not inherit the mother's bombay phenotype. in practice, cases of hdn caused in this way have not been described. this may be possible due to the rarity of the bombay phenotype but also because of the igm produced by the immune system of the mother. since igms are not transported across the microscopic placental blood vessels ( like igg are ) they cannot reach the blood stream of the fetus to provoke the expected acute hemolytic reaction. references external links hh at bgmut blood group antigen gene mutation database at ncbi, nih rmit university the bombay, para - bombay and other h deficiencies bombaybloodgroup. org an initiative to connect individuals who donate and who are in need of bombay blood group. genetics of the bombay phenotype know more blood antigen systems transfusion medicine south asia
fully understood ; however, after administration the anti - d coated red blood cell complexes saturate fcγ receptor sites on macrophages, resulting in preferential destruction of red blood cells ( rbcs ), therefore sparing antibody - coated platelets. anti - d is recommended as a first - line therapy for itp, along with corticosteroids and intravenous immune globulin ( ivig ). winrho sdf is an anti - d manufactured, distributed and marketed by cangene in the us. there is a black box warning on winrho sdf due to the risk of potentially fatal intravascular hemolysis when used in the treatment of itp. life - threatening anemia, kidney failure, and disseminated intravascular coagulation ( dic ) have occurred in people treated with winrho sdf for itp. contraindications the following females are not candidates for rhig : d - negative females whose fetus is known to be d - negative d - negative females who have been previously alloimmunized to d ( they have already formed an anti - d alloantibody ) any d - positive females women who test positive for one of the weak d mutations by molecular testing should be considered rhd positive and not receive rhig women who test positive for one of the partial d mutations ( by molecular testing ) should be treated as rhd negative and receive rhig as clinically indicated history the first rho ( d ) immune globulin treatment " skymed " was introduced by ortho clinical diagnostics, a subsidiary holding of jskymed, and was first administered on may 29, 1968, to marianne cummins in teaneck, new jersey. in 1996, zlb bioplasma ( part of csl behring ) was given approval to sell rhophylac in europe. effectiveness was demonstrated in a clinical trial in 2003 and in 2004 rhophylac was approved in the united states. society and culture manufacturing and safety rho ( d ) immune globulin is a derivative of human plasma. the most common way anti - d products are manufactured is by a
</ctx>
The following are multiple choice questions (with answers) about STEM.
Hemolytic Uremic Syndrome is characterized by - a) Microangiopathic haemolytic anaemia b) ↑ed LDH c) Thrombocytopenia d) Renal failure e) Positive Coombs test
A. abc
B. ad
C. acd
D. bde
Answer: | [
"abc",
"ad",
"acd",
"bde"
] | C | Hemolytic uremic syndrome
Hemolytic uremic syndrome is characterized by the triad of:
Anemia (microangiopathic hemolytic anemia).
Renal failure (microangiopathy of the kidney involving glomerular capillaries and arterioles).
Thrombocytopenia (due to platelet consumption).
Hemolytic uremic syndrome is most common in children under 2 years of age.
It usually follows an episode of acute gastroenteritis, often triggered by E. coli.
The prodrome is usually of abdominal pain, diarrhoea and vomiting.
Shortly thereafter, signs and symptoms of acute hemolytic anemia, thrombocytopenia and acute renal failure ensue.
Sometimes neurological findings also occur (but they are usually absent, and their absence helps differentiate HUS from TTP).
Rarely HUS may follow respiratory tract infection.
Etiology
Gastrointestinal infection in infants with the following organisms:
E. coli, Shigella dysenteriae, Streptococcus pneumoniae
Hematological findings in a case of microangiopathic hemolytic anemia:
Presence of schistocytes (fragmented red cells). This is the hallmark of microangiopathic hemolytic anemia.
Neutrophil leukocytosis.
Thrombocytopenia.
Hemoglobinuria is mild to moderate with hemosiderinuria.
Elevated blood urea and serum creatinine.
Elevated serum LDH |
arc_challenge_11 | arc_challenge | <ctx>
1. 3 physical and chemical properties learning objectives by the end of this section, you will be able to : β’ identify properties of and changes in matter as physical or chemical β’ identify properties of matter as extensive or intensive the characteristics that distinguish one substance from another are called properties. a physical property is a characteristic of matter that is not associated with a change in its chemical composition. familiar examples of physical properties include density, color, hardness, melting and boiling points, and electrical conductivity. some physical properties, such as density and color, may be observed without changing the physical state of the matter. other physical properties, such as the melting temperature of iron or the freezing temperature of water, can only be observed as matter undergoes a physical change. a physical change is a change in the state or properties of matter without any accompanying change in the chemical identities of the substances contained in the matter. physical changes are observed when wax melts, when sugar dissolves in coffee, and when steam condenses into liquid water ( figure 1. 18 ). other examples of physical changes include magnetizing and demagnetizing metals ( as is done with common antitheft security tags ) and grinding solids into powders ( which can sometimes yield noticeable changes in color ). in each of these examples, there is a change in the physical state, form, or properties of the substance, but no change in its chemical composition. figure 1. 17 almost one - third of naturally occurring elements are used to make a cell phone. ( credit : modification of work by john taylor )
1. 3 β’ physical and chemical properties 23 figure 1. 18 ( a ) wax undergoes a physical change when solid wax is heated and forms liquid wax. ( b ) steam condensing inside a cooking pot is a physical change, as water vapor is changed into liquid water. ( credit a : modification of work by β 95jb14 β / wikimedia commons ; credit b : modification of work by β mjneuby β / flickr ) the change of one type of matter into another type ( or the inability to change ) is a chemical property. examples of chemical properties include flammability, toxicity, acidity, and many other types of reactivity. iron, for example, combines with oxygen in the presence of water to form rust ; chromium does not oxidize ( figure 1. 19 ). nitroglycerin is very dangerous because it explodes easily ; neon poses almost no hazard because it is very unreactive. figure 1. 19 ( a ) one of the chemical properties of iron is that it rusts ; ( b ) one of the chemical properties of chromium is that it does not. ( credit a : modification of work by tony hisgett ; credit b : modification of work by β atoma β / wikimedia commons ) a chemical change always produces one or more types of matter that differ from the matter present before the change. the formation of rust is a chemical change because rust is a different kind of matter than the iron, oxygen, and water present before the rust formed. the explosion of nitroglycerin is a chemical change because the gases produced are very different kinds of matter from the original substance. other examples of chemical changes include reactions that are performed in a lab ( such as copper reacting with nitric acid ), all forms of combustion ( burning ), and food being cooked, digested, or rotting ( figure 1. 20 ). 24 1 β’ essential ideas access for free at openstax. org figure 1. 20 ( a ) copper and nitric acid undergo a chemical change to form copper nitrate and brown, gaseous nitrogen dioxide. ( b ) during the combustion of a match, cellulose in the
chemical changes occur when a substance combines with another to form a new substance, called chemical synthesis or, alternatively, chemical decomposition into two or more different substances. these processes are called chemical reactions and, in general, are not reversible except by further chemical reactions. some reactions produce heat and are called exothermic reactions and others may require heat to enable the reaction to occur, which are called endothermic reactions. understanding chemical changes is a major part of the science of chemistry. when chemical reactions occur, the atoms are rearranged and the reaction is accompanied by an energy change as new products are generated. an example of a chemical change is the reaction between sodium and water to produce sodium hydroxide and hydrogen. so much energy is released that the hydrogen gas released spontaneously burns in the air. this is an example of a chemical change because the end products are chemically different from the substances before the chemical reaction. types chemists categorize chemical changes into three main classes : inorganic chemical changes, organic chemical changes and biochemical changes. inorganic changes inorganic chemistry describes the reactions of elements and compounds that, in general, do not involve carbon. the changes typically take place in laboratories, or on a larger scale in heavy industries. typical types of change include neutralization ( mixing an acid with a base, resulting in water and salt ), oxidization including combustion, redox reactions etc. organic changes organic chemistry is concerned with the chemistry of carbon and the elements and compound with which it reacts. these compounds include mineral oil and all of its products and much of the output of industries manufacturing pharmaceuticals, paints, detergents, cosmetics, fuels etc. typical examples of organic chemical changes include cracking heavy hydrocarbons at an oil refinery to create more gasoline from crude oil, as gasoline is in higher demand than the heavier hydrocarbons, such as residual fuel oils. other reactions include, methylation, condensation reactions, polymerisation, halogenation etc. biochemical change biochemistry deals with the chemistry of the growth and activity of living organisms. it is a chemistry where most reactions are controlled by complex proteins called enzymes and are moderated and limited by hormones. the chemistry is always highly complex and is still not fully understood.
20 ). 24 1 β’ essential ideas access for free at openstax. org figure 1. 20 ( a ) copper and nitric acid undergo a chemical change to form copper nitrate and brown, gaseous nitrogen dioxide. ( b ) during the combustion of a match, cellulose in the match and oxygen from the air undergo a chemical change to form carbon dioxide and water vapor. ( c ) cooking red meat causes a number of chemical changes, including the oxidation of iron in myoglobin that results in the familiar red - to - brown color change. ( d ) a banana turning brown is a chemical change as new, darker ( and less tasty ) substances form. ( credit b : modification of work by jeff turner ; credit c : modification of work by gloria cabada - leman ; credit d : modification of work by roberto verzo ) properties of matter fall into one of two categories. if the property depends on the amount of matter present, it is an extensive property. the mass and volume of a substance are examples of extensive properties ; for instance, a gallon of milk has a larger mass than a cup of milk. the value of an extensive property is directly proportional to the amount of matter in question. if the property of a sample of matter does not depend on the amount of matter present, it is an intensive property. temperature is an example of an intensive property. if the gallon and cup of milk are each at 20 Β°c ( room temperature ), when they are combined, the temperature remains at 20 Β°c. as another example, consider the distinct but related properties of heat and temperature. a drop of hot cooking oil spattered on your arm causes brief, minor discomfort, whereas a pot of hot oil yields severe burns. both the drop and the pot of oil are at the same temperature ( an intensive property ), but the pot clearly contains much more heat ( extensive property ). chemistry in everyday life hazard diamond you may have seen the symbol shown in figure 1. 21 on containers of chemicals in a laboratory or workplace. sometimes called a β fire diamond β or β hazard diamond, β this chemical hazard diamond provides valuable information that briefly summarizes the various dangers of which to be aware when working with a
</ctx>
The following are multiple choice questions (with answers) about STEM.
Which of the following is an example of a physical change?
A. lighting a match
B. breaking a glass
C. burning of gasoline
D. rusting of iron
Answer: | [
"lighting a match",
"breaking a glass",
"burning of gasoline",
"rusting of iron"
] | B | |
openbookqa_2133 | openbookqa | <ctx>
energy from waves as they hit the coast. they keep coastal waters healthy by absorbing bacteria and nutrients, and slow the speed of climate change by sequestering carbon dioxide into the sediment of the ocean floor. seagrasses evolved from marine algae which colonized land and became land plants, and then returned to the ocean about 100 million years ago. however, today seagrass meadows are being damaged by human activities such as pollution from land runoff, fishing boats that drag dredges or trawls across the meadows uprooting the grass, and overfishing which unbalances the ecosystem. seagrass meadows are currently being destroyed at a rate of about two football fields every hour. kelp forests kelp forests occur worldwide throughout temperate and polar coastal oceans. in 2007, kelp forests were also discovered in tropical waters near ecuador. physically formed by brown macroalgae, kelp forests provide a unique habitat for marine organisms and are a source for understanding many ecological processes. over the last century, they have been the focus of extensive research, particularly in trophic ecology, and continue to provoke important ideas that are relevant beyond this unique ecosystem. for example, kelp forests can influence coastal oceanographic patterns and provide many ecosystem services. however, the influence of humans has often contributed to kelp forest degradation. of particular concern are the effects of overfishing nearshore ecosystems, which can release herbivores from their normal population regulation and result in the overgrazing of kelp and other algae. this can rapidly result in transitions to barren landscapes where relatively few species persist. already due to the combined effects of overfishing and climate change, kelp forests have all but disappeared in many especially vulnerable places, such as tasmania's east coast and the coast of northern california. the implementation of marine protected areas is one management strategy useful for addressing such issues, since it may limit the impacts of fishing and buffer the ecosystem from additive effects of other environmental stressors. coastal ecology coastal food webs coastal waters include the waters in estuaries and over continental shelves. they occupy about 8 percent of the total ocean area and account for about half of all the ocean productivity. the key nutrients determining eutrophication are nitrogen in coastal waters and
. opportunities and challenges for the development of an integrated seaweed - based aquaculture activity in chile : determining the physiological capabilities of macrocystis and gracilaria as biofilters. journal of applied phycology, 20 ( 5 ), 571 - 577. buschmann, a. h., hernandez - gonzalez, m. c., astudillo, c., fuente, l. d. l., gutierrez, a., & aroca, g. ( 2005 ). seaweed cultivation, product development and integrated aquaculture studies in chile. world aquaculture, 36 ( 3 ), 51 - 53. chaoyuan, w., & guangheng, l. ( 1987 ). progress in the genetics and breeding of economic seaweeds in china. hydrobiologia, 151 - 152 ( 1 ), 57 - 61. connor, judith & charles baxter. ( 1989 ) kelp forests. monterey, california : monterey bay aquarium. fishery and aquaculture statistics ( 2007 ). retrieved from ftp. fao. org graham, m. h., vasquez, j. a. and buschmann, a. h. ( 2007 ) global ecology of the giant kelp macrocystis : from ecotypes to ecosystems. oceanography and marine biology : an annual review 45 : 39 - 88. huisman, j m ( 2000 ) marine plants of australia. university of western australia press. lopez, james. " macrocystis pyrifera. " monterey bay aquarium research institute. 2001. monterey bay aquarium research institute. 10 jan 2007 macchiavello, j., araya, e., & bulboa, c. production of macrocystis pyrifera ( laminariales ; phaeophyceae ) in northern chile on spore - based culture. journal of applied phycology, 1 - 7. mariculture of seaweeds. ( 2010 ). retrieved from https : / / web. archive. org / web / 20101226110745 / http : / / aquanic. org / species / documents / 6 _ algae _ 3 _ _ culturing. pdf
##thesis, a sufficient water depth not hindered by low tide exposure, an optimum water temperature between 27 and 30 degrees celsius, salinity levels of 30 β 35 0 / 00, and areas with little presence of grazers, microorganisms, suspended silt, and epiphytes. seed stocks are then prepared by tying eucheuma cuttings with soft plastic tying materials to monolines in both the bottom monoline and floating methods, where the nylon lines run parallel to each other at one - meter intervals to allow for water currents to flow through. the seaweed is than harvested 10 β 12 weeks after planting in order to allow the crop to mature and increase its carrageenan content. the farming of eucheuma has raised certain environmental issues, mostly centered on the ecology and biodiversity of coastal environments. the ecology of eucheuma farm sites may be characterized by overcrowding, as other farmers may be attracted to the farming site, eventually exceeding the carrying capacity of the environment. this in turn may change the overall hydrology of the area, impacting on other species that may be present. domestic pollution from farmhouse waste disposal may also impact the nearby environment. see also caulerpa lentillifera ( lato ) gamet agar carrageenan seaweed farming references further reading red algae genera solieriaceae edible algae
found in the protist kingdom are red, green and brown algae. kingdom plantae the plantae kingdoms consists of angiosperms - plants that produce seeds or flower as a part of their reproductive system. about 0. 085 % of the 300, 000 angiosperms believed to exist can be found in marine like environments. some examples of what plants in this kingdom exist are mosses, ferns, seagrasses, mangroves, and salt marsh plants β the last three being the three major communities of angiosperms in marine waters. seagrasses are recognized as some of the most important member to marine communities. it is the only true submerged angiosperm and can help determine the state of an ecosystem. seagrass helps identify the conditions of an ecosystem, as the presence of this plant aids the environment by : stabilizing the water's bottom, providing shelter and food for animals, and maintaining water quality. marine ecology marine ecology and marine botany include : benthic zone coral reef kelp forests mangroves phytoplankton salt marsh sea grass seaweed see also aquatic plants aquatic ecology " aquatic botany " β scientific journal phycology β study of algae index : marine botany marine primary production references biological oceanography aquatic ecology. seaweeds branches of botany oceanographical terminology
</ctx>
The following are multiple choice questions (with answers) about STEM.
Kelp has to have provisions to
A. maintain life
B. die
C. hibernate
D. relocate
Answer: | [
"maintain life",
"die",
"hibernate",
"relocate"
] | A | |
aquarat_41544 | aquarat | <ctx>
the answer should also have three significant figures. the answer 30. 0 km / hr does indeed have three significant figures, so this is appropriate. note that the significant figures in the conversion factor are not relevant because an hour is defined to be 60 minutes, so the precision of the conversion factor is perfect. ( 4 ) next, check whether the answer is reasonable. let us consider some information from the problem β if you travel 10 km in a third of an hour ( 20 min ), you would travel three times that far in an hour. the answer does seem reasonable. solution for ( b ) there are several ways to convert the average speed into meters per second. ( 1 ) start with the answer to ( a ) and convert km / h to m / s. two conversion factors are needed β one to convert hours to seconds, and another to convert kilometers to meters. ( 2 ) multiplying by these yields 1. 2 1. 3
1. 2 β’ physical quantities and units 21 example 1. 1 unit conversions : a short drive home suppose that you drive the 10. 0 km from your school to home in 20. 0 min. calculate your average speed ( a ) in kilometers per hour ( km / h ) and ( b ) in meters per second ( m / s ). ( note : average speed is distance traveled divided by time of travel. ) strategy first we calculate the average speed using the given units. then we can get the average speed into the desired units by picking the correct conversion factor and multiplying by it. the correct conversion factor is the one that cancels the unwanted unit and leaves the desired unit in its place. solution for ( a ) ( 1 ) calculate average speed. average speed is distance traveled divided by time of travel. ( take this definition as a given for now β average speed and other motion concepts will be covered in a later module. ) in equation form, ( 2 ) substitute the given values for distance and time. ( 3 ) convert km / min to km / h : multiply by the conversion factor that will cancel minutes and leave hours. that conversion factor is discussion for ( a ) to check your answer, consider the following : ( 1 ) be sure that you have properly cancelled the units in the unit conversion. if you have written the unit conversion factor upside down, the units will not cancel properly in the equation. if you accidentally get the ratio upside down, then the units will not cancel ; rather, they will give you the wrong units as follows : which are obviously not the desired units of km / h. ( 2 ) check that the units of the final answer are the desired units. the problem asked us to solve for average speed in units of km / h and we have indeed obtained these units. ( 3 ) check the significant figures. because each of the values given in the problem has three significant figures, the answer should also have three significant figures. the answer 30. 0 km / hr does indeed have three significant figures, so this is appropriate. note that the significant figures in the conversion factor are not relevant because an hour is defined to be 60 minutes, so the precision of the conversion factor is perfect
suppose that at one time during a shopping trip your instantaneous velocity is 40 km / h due north. your instantaneous speed at that instant would be 40 km / h β the same magnitude but without a direction. average speed, however, is very different from average velocity. average speed is the distance traveled divided by elapsed time. we have noted that distance traveled can be greater than the magnitude of displacement. so average speed can be greater than average velocity, which is displacement divided by time. for example, if you drive to a store and return home in half an hour, and your car β s odometer shows the total distance traveled was 6 km, then your average speed was 12 km / h. your average velocity, however, was zero, because your displacement for the round trip is zero. ( displacement is change in position and, thus, is zero for a round trip. ) thus average speed is not simply the magnitude of average velocity. 44 2 β’ kinematics access for free at openstax. org figure 2. 10 during a 30 - minute round trip to the store, the total distance traveled is 6 km. the average speed is 12 km / h. the displacement for the round trip is zero, since there was no net change in position. thus the average velocity is zero. another way of visualizing the motion of an object is to use a graph. a plot of position or of velocity as a function of time can be very useful. for example, for this trip to the store, the position, velocity, and speed - vs. - time graphs are displayed in figure 2. 11. ( note that these graphs depict a very simplified model of the trip. we are assuming that speed is constant during the trip, which is unrealistic given that we β ll probably stop at the store. but for simplicity β s sake, we will model it with no stops or changes in speed. we are also assuming that the route between the store and the house is a perfectly straight line. )
2. 3 time, velocity, and speed 5. ( a ) calculate earth β s average speed relative to the sun. ( b ) what is its average velocity over a period of one year? 92 2 β’ problems & exercises access for free at openstax. org 6. a helicopter blade spins at exactly 100 revolutions per minute. its tip is 5. 00 m from the center of rotation. ( a ) calculate the average speed of the blade tip in the helicopter β s frame of reference. ( b ) what is its average velocity over one revolution? 7. the north american and european continents are moving apart at a rate of about 3 cm / y. at this rate how long will it take them to drift 500 km farther apart than they are at present? 8. land west of the san andreas fault in southern california is moving at an average velocity of about 6 cm / y northwest relative to land east of the fault. los angeles is west of the fault and may thus someday be at the same latitude as san francisco, which is east of the fault. how far in the future will this occur if the displacement to be made is 590 km northwest, assuming the motion remains constant? 9. on may 26, 1934, a streamlined, stainless steel diesel train called the zephyr set the world β s nonstop long - distance speed record for trains. its run from denver to chicago took 13 hours, 4 minutes, 58 seconds, and was witnessed by more than a million people along the route. the total distance traveled was 1633. 8 km. what was its average speed in km / h and m / s? 10. tidal friction is slowing the rotation of the earth. as a result, the orbit of the moon is increasing in radius at a rate of approximately 4 cm / year. assuming this to be a constant rate, how many years will pass before the radius of the moon β s orbit increases by ( 1 % )? 11. a student drove to the university from their home and noted that the odometer reading of their car increased by 12. 0 km. the trip took 18. 0 min. ( a ) what was their average speed? ( b ) if the straight - line distance from their home
</ctx>
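The passage above walks through the average-speed computation (10.0 km in 20.0 min, converted to km/h and then to m/s). A minimal Python check of that arithmetic; the variable names are mine, the values are taken from the passage:

# Average speed from the worked example: 10.0 km in 20.0 min.
distance_km = 10.0
time_min = 20.0

speed_km_per_h = distance_km / (time_min / 60.0)   # minutes -> hours
speed_m_per_s = speed_km_per_h * 1000.0 / 3600.0   # km/h -> m/s

print(f"{speed_km_per_h:.1f} km/h")  # 30.0 km/h
print(f"{speed_m_per_s:.2f} m/s")    # 8.33 m/s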
The following are multiple choice questions (with answers) about STEM.
A car traveling at a certain constant speed takes 4 seconds longer to travel 1 km than it would take to travel 1 km at 80 km/hour. At what speed, in km/hr, is the car traveling?
A. A)70
B. B)72
C. C)74
D. D)75
Answer: | [
"A)70",
"B)72",
"C)74",
"D)75"
] | C | Time to cover 1 kilometer at 80 kilometers per hour is 1/80 hours = 3,600/80 seconds = 45 seconds;
Time to cover 1 kilometer at the regular speed is 45 + 4 = 49 seconds = 49/3,600 hours;
So the regular speed is 3,600/49 ≈ 73.5 kilometers per hour (rate is the reciprocal of the time per kilometer, or rate = distance/time), and the closest option is 74 kilometers per hour.
Answer: C |
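A quick numeric check of the reasoning above (a sketch; the variable names are mine, not from the item):

# 1 km takes 4 s longer than it would at 80 km/h.
baseline_s = 3600.0 / 80.0      # 45 s to cover 1 km at 80 km/h
actual_s = baseline_s + 4.0     # 49 s at the unknown speed
speed_kmh = 3600.0 / actual_s   # 1 km divided by time in hours

print(speed_kmh)                # ~73.47 km/h, closest option: C) 74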
mmlu_aux_8067 | mmlu_auxiliary | <ctx>
14. 39 600 14 β’ heat and heat transfer methods access for free at openstax. org moving air temperature wind speed ( m / s ) table 14. 4 wind - chill factors although air can transfer heat rapidly by convection, it is a poor conductor and thus a good insulator. the amount of available space for airflow determines whether air acts as an insulator or conductor. the space between the inside and outside walls of a house, for example, is about 9 cm ( 3. 5 in ) β large enough for convection to work effectively. the addition of wall insulation prevents airflow, so heat loss ( or gain ) is decreased. similarly, the gap between the two panes of a double - paned window is about 1 cm, which prevents convection and takes advantage of air β s low conductivity to prevent greater loss. fur, fiber, and fiberglass also take advantage of the low conductivity of air by trapping it in spaces too small to support convection, as shown in the figure. fur and feathers are lightweight and thus ideal for the protection of animals. figure 14. 19 fur is filled with air, breaking it up into many small pockets. convection is very slow here, because the loops are so small. the low conductivity of air makes fur a very good lightweight insulator. some interesting phenomena happen when convection is accompanied by a phase change. it allows us to cool off by sweating, even if the temperature of the surrounding air exceeds body temperature. heat from the skin is required for sweat to evaporate from the skin, but without air flow, the air becomes saturated and evaporation stops. air flow caused by convection replaces the saturated air by dry air and evaporation continues.
to be further discussed in the next chapter ), electric currents preferentially flow through paths in the human body that have a minimum resistance in a direct path to earth. the earth is a natural electron sink. wearing insulating shoes, a requirement in many professions, prohibits a pathway for electrons by providing a large resistance in that path. whenever working with high - power tools ( drills ), or in risky situations, ensure that you do not provide a pathway for current flow ( especially through the heart ). 3 for an average male shocked through trunk of body for 1 s by 60 - hz ac. values for females are 60 β 80 % of those listed.
18. 2 conductors and insulators learning objectives by the end of this section, you will be able to : β’ define conductor and insulator, explain the difference, and give examples of each. β’ describe three methods for charging an object. β’ explain what happens to an electric force as you move farther from the source. β’ define polarization. figure 18. 10 this power adapter uses metal wires and connectors to conduct electricity from the wall socket to a laptop computer. the conducting wires allow electrons to move freely through the cables, which are shielded by rubber and plastic. these materials act as insulators that don β t allow electric charge to escape outward. ( credit : evan - amos, wikimedia commons ) some substances, such as metals and salty water, allow charges to move through them with relative ease. some of the electrons in metals and similar conductors are not bound to individual atoms or sites in the material. these free electrons can move through the material much as air moves through loose sand. any substance that has free electrons and allows charge to move relatively freely through it is called a conductor. the moving electrons may collide with fixed atoms and molecules, losing some energy, but they can move in a conductor. superconductors allow the movement of charge without any loss of energy. salty water and other similar conducting materials contain free ions that can move through them. an ion is an atom or molecule having a positive or negative ( nonzero ) total charge. in other words, the total number of electrons is not equal to the total number of protons. other substances, such as glass, do not allow charges to move through them. these are called insulators. electrons and ions in insulators are bound in the structure and cannot move easily β as much as times more slowly than in conductors. pure water and dry table salt are insulators, for example, whereas molten salt and salty water are conductors. figure 18. 11 an electroscope is a favorite instrument in physics demonstrations and student laboratories. it is typically made with gold foil leaves hung from a ( conducting ) metal stem and is insulated from the room air in a glass - walled container. ( a ) a positively charged glass rod is
conduction, convection, or radiation β is approximately proportional to the temperature difference Ξ΄t. frozen food will warm up faster in a warm room than in a cold room. note that the rate of cooling experienced on a cold day can be increased by the added convection effect of the wind. this is referred to as wind chill. for example, a wind chill of - 20 Β°c means that heat is being lost at the same rate as if the temperature were - 20 Β°c without wind. applicable situations this law describes many situations in which an object has a large thermal capacity and large conductivity, and is suddenly immersed in a uniform bath which conducts heat relatively poorly. it is an example of a thermal circuit with one resistive and one capacitative element. for the law to be correct, the temperatures at all points inside the body must be approximately the same at each time point, including the temperature at its surface. thus, the temperature difference between the body and surroundings does not depend on which part of the body is chosen, since all parts of the body have effectively the same temperature. in these situations, the material of the body does not act to " insulate " other parts of the body from heat flow, and all of the significant insulation ( or " thermal resistance " ) controlling the rate of heat flow in the situation resides in the area of contact between the body and its surroundings. across this boundary, the temperature - value jumps in a discontinuous fashion. in such situations, heat can be transferred from the exterior to the interior of a body, across the insulating boundary, by convection, conduction, or diffusion, so long as the boundary serves as a relatively poor conductor with regard to the object's interior. the presence of a physical insulator is not required, so long as the process which serves to pass heat across the boundary is " slow " in comparison to the conductive transfer of heat inside the body ( or inside the region of interest β the " lump " described above ). in such a situation, the object acts as the " capacitative " circuit element, and the resistance of the thermal contact at the boundary acts as the ( single ) thermal resistor. in electrical circuits, such a combination would charge
</ctx>
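The last passage above describes Newton's law of cooling (heat flow proportional to the temperature difference, with the body acting like the capacitor of an RC circuit). A minimal sketch of the resulting exponential decay; the time constant and temperatures here are illustrative values, not from the passage:

import math

# Newton's cooling: dT/dt = -(T - T_env)/tau, so T(t) = T_env + (T0 - T_env)*exp(-t/tau).
T_env = -20.0   # ambient temperature in deg C (illustrative)
T0 = 37.0       # initial body temperature in deg C (illustrative)
tau = 600.0     # thermal time constant in seconds (illustrative)

for t in (0, 300, 600, 1200):
    T = T_env + (T0 - T_env) * math.exp(-t / tau)
    print(t, round(T, 1))
# A stronger wind (wind chill) effectively lowers tau, steepening the decay.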
The following are multiple choice questions (with answers) about STEM.
Electrical flow can be slowed down if there is an insulator
A. protected
B. avoided
C. active
D. demoted
Answer: | [
"protected",
"avoided",
"active",
"demoted"
] | C | null |
openbookqa_2468 | openbookqa | <ctx>
to study their immediate ecological impacts on intertidal flat communities at sendai bay and the sanriku ria coast. pre - and post - event surveys show a reduction in animal taxon richness and change in taxon composition mainly attributed to the tsunami and its physical impacts. in particular, sessile epibenthic animals and endobenthic animals both decreased in taxon richness. mobile epibenthic animals, such as hermit crabs, were not as affected. post - surveys also recorded taxa that were not previously recorded before, suggesting that tsunamis have the potential to introduce species and change taxon composition and local community structure. the long term ecological impacts at sendai bay and the greater east coast of japan require further study. see also health crisis humanitarian response to the 2011 tohoku earthquake and tsunami lists of earthquakes list of earthquakes in 2011 list of earthquakes in japan list of tsunamis list of megathrust earthquakes nuclear power in japan Β§ seismicity ryou - un maru seismicity of the sanriku coast takashi shimokawara 3. 11 : surviving japan pray for japan explanatory notes references further reading [ council for central ] disaster management, cabinet office, government of japan ( march 2015 ). disaster management in japan. sources external links japan's killer quake β nova the m9. 0 great tohoku earthquake ( northeast honshu, japan ) of march 11, 2011 from united states geological survey ( usgs ) pacific tsunami warning center at national oceanic and atmospheric administration ( noaa ) map of tsunami inundation areas in japan from reliefweb massive earthquake hits japan photos from the boston globe japan earthquake : before and after aerial and satellite images from abc news, credited to post - earthquake images of japan satellite photos of japan, before and after the quake and tsunami the new york times japan in crisis : a series of interviews with scholars by peter shea at the university of minnesota special : the tohoku - oki earthquake, japan β free - access scientific papers from science magazine the east japan earthquake archive ( testimonies of survivors, photographs and videos on google earth ) preventionweb great east japan earthquake 2011 from fuji news network interactive comparisons of tohoku earthquake and tsunami at be
and therefore could have been enough to cause a marine mass extinction, but this change would not have caused the extinction of the ammonites. the regression would also have caused climate changes, partly by disrupting winds and ocean currents and partly by reducing the earth's albedo and increasing global temperatures. marine regression also resulted in the loss of epeiric seas, such as the western interior seaway of north america. the loss of these seas greatly altered habitats, removing coastal plains that ten million years before had been host to diverse communities such as are found in rocks of the dinosaur park formation. another consequence was an expansion of freshwater environments, since continental runoff now had longer distances to travel before reaching oceans. while this change was favorable to freshwater vertebrates, those that prefer marine environments, such as sharks, suffered. multiple causes proponents of multiple causation view the suggested single causes as either too small to produce the vast scale of the extinction, or not likely to produce its observed taxonomic pattern. in a review article, j. david archibald and david e. fastovsky discussed a scenario combining three major postulated causes : volcanism, marine regression, and extraterrestrial impact. in this scenario, terrestrial and marine communities were stressed by the changes in, and loss of, habitats. dinosaurs, as the largest vertebrates, were the first affected by environmental changes, and their diversity declined. at the same time, particulate materials from volcanism cooled and dried areas of the globe. then an impact event occurred, causing collapses in photosynthesis - based food chains, both in the already - stressed terrestrial food chains and in the marine food chains. based on studies at seymour island in antarctica, sierra petersen and colleagues argue that there were two separate extinction events near the cretaceous β paleogene boundary, with one correlating to deccan trap volcanism and one correlated with the chicxulub impact. the team analyzed combined extinction patterns using a new clumped isotope temperature record from a hiatus - free, expanded k β pg boundary section. they documented a 7. 8Β±3. 3 Β°c warming synchronous with the onset of deccan traps volcanism and a second, smaller warming at
risks, concluding that the greatest risks are presented by unaligned artificial intelligence and biotechnology. causes potential anthropogenic causes of human extinction include global thermonuclear war, deployment of a highly effective biological weapon, an ecological collapse, runaway artificial intelligence, runaway nanotechnology ( such as a grey goo scenario ), a scientific accident involving a micro black hole or vacuum metastability disaster, overpopulation and increased consumption pose the risk of resource depletion and a concomitant population crash, population decline by choosing to have fewer children, displacement of naturally evolved humans by a new species produced by genetic engineering or technological augmentation. natural and external extinction risks include high - fatality - rate pandemic, supervolcanic eruption, asteroid impact, nearby supernova or gamma - ray bursts, extreme solar flare, or alien invasion. humans ( e. g. homo sapiens sapiens ) as a species may also be considered to have " gone extinct " simply by being replaced with distant descendants whose continued evolution may produce new homo species or subspecies. without intervention by unexpected forces, the stellar evolution of the sun is expected to make earth uninhabitable, then destroy it. depending on its ultimate fate, the entire universe may eventually become uninhabitable. probability natural vs. anthropogenic experts generally agree that anthropogenic existential risks are ( much ) more likely than natural risks. a key difference between these risk types is that empirical evidence can place an upper bound on the level of natural risk. humanity has existed for at least 200, 000 years, over which it has been subject to a roughly constant level of natural risk. if the natural risk were sufficiently high, then it would be highly unlikely that humanity would have survived as long as it has. based on a formalization of this argument, researchers have concluded that we can be confident that natural risk is lower than 1 in 14, 000 per year. another empirical method to study the likelihood of certain natural risks is to investigate the geological record. for example, a comet or asteroid impact event sufficient in scale to cause an impact winter that would cause human extinction before the year 2100 has been estimated at one - in
down to its soil. this was previously not thought possible in a volcanic eruption of this magnitude. one specific example was the survival of arthropods, as most found during 2009 were either survivors of the disaster or their progeny. one of the greater contributing factors to recolonization on kasatochi is the different methods of dispersal. these are the methods that are planned to bring back the largest amount of species ; however, the survival of these species depends on the ability of the island to stabilize its present ecosystem. at present, it is found that most immigrants ( whether by wind, flight, water, etc. ) fail to find suitable habitats, so while they are continually being introduced, the majority will die. these dispersal methods will lead to greater species diversity only when the terrestrial habitat stabilizes in response to abiotic and external forces.'' these few examples of learned patterns by monitoring how the ecosystem of kasatochi returns to a new equilibrium and comparing the data to similar islands are helping further scientists'understanding of terrestrial - marine interactions and allowing for new knowledge of ecosystem reassembly after a devastating natural disaster. see also list of volcanoes in the united states of america references external links usgs : kasatochi volcano kasatochi island photos photos from kasatochi island, july 2008 alaska volcano observatory, kasatochi description and statistics geology : kasatochi volcano national museum of natural history : global volcanism program andreanof islands volcanoes of alaska subduction volcanoes stratovolcanoes of the united states volcanic crater lakes 2008 in alaska 21st - century volcanic events uninhabited islands of alaska islands of alaska islands of unorganized borough, alaska landforms of aleutians west census area, alaska volcanoes of unorganized borough, alaska active volcanoes
</ctx>
The following are multiple choice questions (with answers) about STEM.
natural disasters can cause animals to
A. deep fry
B. dance
C. trek
D. nap
Answer: | [
"deep fry",
"dance",
"trek",
"nap"
] | C | |
aquarat_43670 | aquarat | <ctx>
functions for simple and compound interest are if, then these two functions are the same. continuous compounding as n, the number of compounding periods per year, increases without limit, the case is known as continuous compounding, in which case the effective annual rate approaches an upper limit of, where is a mathematical constant that is the base of the natural logarithm. continuous compounding can be thought of as making the compounding period infinitesimally small, achieved by taking the limit as n goes to infinity. see definitions of the exponential function for the mathematical proof of this limit. the amount after t periods of continuous compounding can be expressed in terms of the initial amount p0 as force of interest as the number of compounding periods tends to infinity in continuous compounding, the continuous compound interest rate is referred to as the force of interest. in mathematics, the accumulation functions are often expressed in terms of e, the base of the natural logarithm. this facilitates the use of calculus to manipulate interest formulae. for any continuously differentiable accumulation function a ( t ), the force of interest, or more generally the logarithmic or continuously compounded return is a function of time defined as follows : this is the logarithmic derivative of the accumulation function. conversely : ( since ; this can be viewed as a particular case of a product integral. ) when the above formula is written in differential equation format, then the force of interest is simply the coefficient of amount of change : for compound interest with a constant annual interest rate r, the force of interest is a constant, and the accumulation function of compounding interest in terms of force of interest is a simple power of e : or the force of interest is less than the annual effective interest rate, but more than the annual effective discount rate. it is the reciprocal of the e - folding time. see also notation of interest rates. a way of modeling the force of inflation is with stoodley's formula : where p, r and s are estimated. compounding basis to convert an interest rate from one compounding basis to another compounding basis, so that use where r1 is the interest rate with compounding frequency n1, and r2 is
a mortgage ( 1 million for 20 years @ r = 10 % ) calculated firstly according to the above time continuous model and secondly using the excel pv function. as may be seen the curves are virtually indistinguishable β calculations effected using the model differ from those effected using the excel pv function by a mere 0. 3 % ( max ). the data from which the graph ( s ) were derived can be viewed comparison with similar physical systems define the " reverse time " variable z = t β t. ( t = 0, z = t and t = t, z = 0 ). then : this may be recognized as a solution to the " reverse time " differential equation : electrical / electronic engineers and physicists will be familiar with an equation of this nature : it is an exact analogue of the type of differential equation which governs ( for example ) the charging of a capacitor in an rc circuit. the key characteristics of such equations are explained in detail at rc circuits. for home owners with mortgages the important parameter to keep in mind is the time constant of the equation which is simply the reciprocal of the annual interest rate r. so ( for example ) the time constant when the interest rate is 10 % is 10 years and the period of a home loan should be determined β within the bounds of affordability β as a minimum multiple of this if the objective is to minimise interest paid on the loan. mortgage difference and differential equation the conventional difference equation for a mortgage loan is relatively straightforward to derive - balance due in each successive period is the previous balance plus per period interest less the per period fixed payment. given an annual interest rate r and a borrower with an annual payment capability mn ( divided into n equal payments made at time intervals Ξ΄t where Ξ΄t = 1 / n years ), we may write : if n is increased indefinitely so that Ξ΄t β 0, we obtain the continuous time differential equation : note that for there to be a continually diminishing mortgage balance, the following inequality must hold : p0 is the same as p ( 0 ) β the original loan amount or loan balance at time t = 0. solving the difference equation we begin by re - writing the difference equation in rec
: note that for there to be a continually diminishing mortgage balance, the following inequality must hold : p0 is the same as p ( 0 ) β the original loan amount or loan balance at time t = 0. solving the difference equation we begin by re - writing the difference equation in recursive form : using the notation pn to indicate the mortgage balance after n periods, we may apply the recursion relation iteratively to determine p1 and p2 : it can already be seen that the terms containing mn form a geometric series with common ratio 1 + rΞ΄ t. this enables us to write a general expression for pn : finally noting that r Ξ΄ t = i the per - period interest rate and the per period payment, the expression may be written in conventional form : if the loan timespan is m periods, then pm = 0 and we obtain the standard present value formula : solving the differential equation one method of solving the equation is to obtain the laplace transform p ( s ) : using a table of laplace transforms and their time domain equivalents, p ( t ) may be determined : in order to fit this solution to the particular start and end points of the mortgage function we need to introduce a time shift of t years ( t = loan period ) to ensure the function reaches zero at the end of the loan period : note that both the original solution and " time - shifted " version satisfy the original differential equation whence both are derived. similar to the expression derived above for pn in the difference equation, the expression for p ( t ) may be written in the following algebraically equivalent form : calculation of accumulated interest and principal payments re - arranging the original differential equation we obtain : integrating both sides of the equation yields : the first integral on the right hand side determines the accumulated interest payments from time of inception to time t whilst the second determines the accumulated principal payments over the same period. the sum of these interest and principal payments must equal the cumulative fixed payments at time t i. e. mat. evaluating the first integral on the right we obtain an expression for i ( t ), the interest paid : unsurprisingly the second integral evaluates to p0 β p ( t )
of these interest and principal payments must equal the cumulative fixed payments at time t i. e. mat. evaluating the first integral on the right we obtain an expression for i ( t ), the interest paid : unsurprisingly the second integral evaluates to p0 β p ( t ) and therefore : the reader may easily verify that this expression is algebraically identical to the one above. loan cost factor the cost of a loan is simply the annual rate multiplied by loan period : let s = rt. then we may define loan cost factor c ( s ) such that c = p0c ( s ) i. e. : c ( s ) is the cost per unit of currency loaned. the function c ( s ) is characterised by having a limiting value of 1 when s is close to zero since for small values of s, exp ( βs ) β 1 β s and the denominator simplifies to s. also when s is very large, exp ( βs ) is small so c ( s ) β s and thus loan cost c β p0rt ( rt > > 0 ). by way of example, consider a loan of 1000000 at 10 % repaid over 20 years. then s = 0. 1 Γ 20 = 2. the product rt is an easily obtained but important parameter in determining loan cost according to the equation c = p0xc ( s ). this is best illustrated by plotting the cost factor function for s values in domain [ 0 ; 5 ]. the linear behaviour of the function for higher values of s is clear. equivalent simple interest cost factor for a fixed term loan of t years, we may compare the above loan cost factor against an equivalent simple interest cost factor 1 + se where se = ret and re is the equivalent simple interest rate : it is straightforward to determine se in terms of s. dividing by loan time period t will then give the equivalent simple interest rate. more challenging is the reverse determination of s given se. in his book problem solving with true basic, dr b. d. hahn has a short section on certain'hire purchase'schemes in which interest is calculated in advance in one lump sum, which is added to the
</ctx>
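The passage above relates discrete compounding to its continuous limit e^(rt). A small sketch of that convergence, with illustrative principal, rate, and term:

import math

P, r, t = 1.0, 0.10, 1.0   # principal, annual rate, years (illustrative)
for n in (1, 4, 12, 365):
    # Discrete compounding with n periods per year.
    print(n, P * (1 + r / n) ** (n * t))
print("limit", P * math.exp(r * t))   # continuous compounding e^(rt)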
The following are multiple choice questions (with answers) about STEM.
Find compound interest on Rs. 7500 at 4% per annum for 2 years, compounded annually
A. A)Rs 312
B. B)Rs 412
C. C)Rs 512
D. D)Rs 612
Answer: | [
"A)Rs 312",
"B)Rs 412",
"C)Rs 512",
"D)Rs 612"
] | D | Explanation:
Apply the formula:
Amount = P(1 + R/100)^n and C.I. = Amount - P.
Here Amount = 7500 × (1 + 4/100)^2 = 7500 × 1.0816 = Rs 8112, so C.I. = 8112 - 7500 = Rs 612.
Answer: D |
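A one-line check of the figures above (a sketch; the names are mine):

P, R, n = 7500.0, 4.0, 2
amount = P * (1 + R / 100) ** n   # 7500 * 1.04**2 = 8112
ci = amount - P                   # compound interest
print(round(ci, 2))               # 612.0 -> option D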
aquarat_36734 | aquarat | <ctx>
list of elements is this : ( ( 3, 8, 0 ), ( 2, 8, 1 ), ( 4, 8, 1 ) ) after 2 turns : ( ( 3, 8, 0 ), ( 2, 8, 1 ), ( 4, 8, 1 ), ( 1, 8, 2 ), ( 4, 7, 2 ) ) after 3 turns : (... ( 1, 7, 3 ), ( 4, 6, 3 ), ( 5, 7, 3 ) ) after 4 turns : (... ( 1, 6, 4 ), ( 3, 6, 4 ), ( 6, 7, 4 ) ) after 5 turns : (... ( 1, 5, 5 ), ( 3, 5, 5 ), ( 6, 6, 5 ), ( 6, 8, 5 ) ) after 6 turns : (... ( 1, 4, 6 ), ( 2, 5, 6 ), ( 3, 4, 6 ), ( 6, 5, 6 ), ( 7, 8, 6 ) ) after 7 turns : (... ( 1, 3, 7 ) ) β problem solved, end this stage of the algorithm β note that if you have multiple units chasing the same target ( as in many games β the finish to start approach of the algorithm is intended to make this easier ), you can continue until the entire map is taken up, all units are reached or a set counter limit is reached now, map the counters onto the map, getting this : 1 2 3 4 5 6 7 8 x x x x x x x x x x x _ _ _ x x _ x _ x 1 x _ x _ _ x _ _ _ x 2 x s x x _ _ _ x _ x 3 x 6 x 6 _ x _ _ _ x 4 x 5 6 5 x x 6 x _ x 5 x 4 x 4 3 x 5 x _ x 6 x 3 x x 2 3 4 x _ x 7 x 2 1 0 1 x 5 6 _ x 8 x x x x x x x x x x now, start at s ( 7 ) and go to the
nato round. the round was designed to the same ballistics requirement as the 5. 56Γ45mm nato round as outlined in evaluation procedures for future nato weapons systems ( document 14 ). however, the 4. 73mm is much less likely to tumble when hitting or penetrating a soft target, and thus not as lethal. the effect on soft targets is in accordance with international conventions. even at short range, the round does not fragment in the soft target medium. this was confirmed in tests with gelatine. ( see terminal ballistics ) the design principle was to increase target hit probability by firing high rate multi - round bursts ( salvos ). tests have been run using a prototype shotgun test - bed called caws to see whether a single - shot, multi - projectile system could achieve the range and hit probability requirements. the results indicated that the use of serially fired projectiles at a high rate of fire would achieve a tight shotgun - like pattern with rifle - like accuracy up to the required range. the rifle was designed to have a dispersion such that a man - target running at a speed of 6 km / h at a distance of 250 m would be hit even if the lead angle error ( 2 mil ) was off by 51 cm. the weapon itself has three firing modes : semi - auto, full - auto at 460 rounds per minute, and three - round burst at over 2100 cyclic rounds per minute, or approximately 36 rounds per second. the loading and feed mechanism is physically very complicated but exceptionally fast and reliable. rounds are fed into the weapon from a magazine that lies above and parallel with the barrel. the rounds are oriented vertically ( at 90 degrees to the bore ) and are fed downwards into the rotary chamber so that they can be rotated 90 degrees for firing. the firing cycle process is roughly : as the cocking handle on the side is rotated clockwise by the weapon operator : a round is dropped into the revolving chamber vertically ( a loading piston assists ). the chamber rotates 90Β° so it is lined up with the barrel. this completes the chambering of the round and cocking of the firing pin. when the trigger is pulled, a firing pin ignites the primer, which then ignites a
by epic ) added statistics tracking. the range of statistics being tracked is very wide, such as : precision of fire with each weapon ( percentage of hits to fired ammunition ) kills with each weapon, being killed by particular weapon, and being killed when holding particular weapon. headshots ( lethal hits of combatant heads with sniper rifles and some other powerful weapons ) killing sprees : killing 5, 10, 15, 20 or 25 combatants without dying is called a killing spree, each greater kill count being considered more valuable and having a unique title ( respectively ; killing spree, rampage, dominating, unstoppable, godlike ). the game tracked how many times has the player achieved each of these titles. consecutive kills : when a player kills a combatant within 5 seconds after a previous kill, a consecutive kill occurs. the timer starts ticking anew, allowing a third kill, a fourth kill etc. alternatively, killing several enemies with a mega weapon ( such as the redeemer, which resembles a nuclear rocket ) also counts as consecutive kill. the titles of these kills are : double kill ( 2 ), multi kill ( 3 ), ultra kill ( 4 ), megakill ( 5 ), monsterkill ( 6 ; 5 in the original unreal tournament ). for comparison, id software's " quake iii arena " tracks double kills, but a third kill soon after results in another double kill award. quake iii arena this game's approach to combat achievements tracking is different from unreal tournament. in deathmatch, the player might be rewarded with awards for the following tricks : " perfect! " β winning a round of deathmatch without getting killed " impressive! " β hitting with two consecutive shots or hitting two enemies with one shot from the railgun ( a powerful, long - range hitscan weapon with a slow rate of fire ) " humiliation! " β killing an opponent with the melee razor - like gauntlet ( the killed player hears the announcement too, but the fact of being humiliated is not tracked for him ). " accuracy " β having over 50 % of hits - to - shots ratio. last man standing the last man standing ( lms ) version of deathmatch is fundamentally different
truel and triel are neologisms for a duel between three opponents, in which players can fire at one another in an attempt to eliminate them while surviving themselves. game theory overview a variety of forms of truels have been studied in game theory. features that determine the nature of a truel include the probability of each player hitting their chosen targets ( often not assumed to be the same for each player ) whether the players shoot simultaneously or sequentially, and, if sequentially, whether the shooting order is predetermined, or determined at random from among the survivors ; the number of bullets each player has ( in particular, whether this is finite or infinite ) ; whether or not intentionally missing is allowed. whether or not self - targeting or random selection of targets is allowed. there is usually a general assumption that each player in the truel wants to be the only survivor, and will behave logically in a manner that maximizes the probability of this. ( if each player only wishes to survive and does not mind if the others also survive, then the rational strategy for all three players can be to miss every time. ) in the widely studied form, the three have different probabilities of hitting their target. if a single bullet is used, the probabilities of hitting the target are equal and deliberate missing is allowed, the best strategy for the first shooter is to deliberately miss. since he is now disarmed, the next shooter will have no reason to shoot the first one and so will shoot at the third shooter. while the second shooter might miss deliberately, there would then be the risk that the third one would shoot him. if the first shooter does not deliberately miss, he will presumably be shot by whichever shooter remained. if an unlimited number of bullets are used, then deliberate missing may be the best strategy for a duelist with lower accuracy than both opponents. if both have better than 50 % success rate, he should continue to miss until one of his opponents kills the other. then he will get the first shot at the remaining opponent. but if the " middle " opponent is weak, it can be better to team up with him until the strongest is eliminated. the details depend on the firing order. for example,
</ctx>
The following are multiple choice questions (with answers) about STEM.
A man can hit a target once in 8 shots. If he fires 4 shots in succession, what is the probability that he will not hit his target even once?
A. A)71/23
B. B)81/13
C. C)81/29
D. D)17/29
Answer: | [
"A)71/23",
"B)81/13",
"C)81/29",
"D)17/29"
] | D | The man will hit the target if he hits it once or twice or thrice or all four times in the four shots that he takes.
So, the only possibility when the man will not hit the target is when he fails to hit the target in even one of the four shots that he takes.
The event of not hitting the target even once is the complement of the event of hitting the target at least once.
The probability that he will not hit the target in any given shot = 1 - 1/8 = 7/8
Therefore, the probability that he will not hit the target in any of the four shots = (7/8)^4 = 2401/4096 ≈ 0.586, which the option list approximates as 17/29
D |
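A quick exact-arithmetic check of the explanation above, using Python's standard fractions module; it also shows that option D is only an approximation of (7/8)^4:

```python
# Verifying the arithmetic in the explanation above with exact fractions.
from fractions import Fraction

p_hit = Fraction(1, 8)                # hits once in 8 shots
p_miss_all = (1 - p_hit) ** 4         # misses all four shots
print(p_miss_all, float(p_miss_all))  # 2401/4096 ~ 0.5862
print(float(Fraction(17, 29)))        # ~ 0.5862, so option D approximates (7/8)^4
print(1 - p_miss_all)                 # 1695/4096: probability of at least one hit
```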
medmcqa_9813 | medmcqa | <ctx>
beta - lactams. in recognition of her contributions to the field of microbiology, silver was elected as a fellow of the american academy of microbiology in 2018. professional activities editorial board, antimicrobial agents and chemotherapy scientific advisory board, combating antibacterial resistant bacteria ( carb - x ) expert advisor, global antibiotic research and development partnership ( gardp ) discovery expert advisor, pew charitable trust shared platform for antibiotic research and knowledge ( spark ) reviewer on nih study sections patent issued us patent number 6, 221, 859 carbapenem antibacterial compositions and methods of the treatment. dorso ; karen l. ( franklin park, nj ), jackson ; jesse j. ( howell, nj ), gill ; charles j. ( beachwood, nj ), kohler ; joyce ( woodbridge, nj ), silver ; lynn l. ( westfield, nj ) representative publications in peer reviewed journals there are over 11000 citations of silver's publications, and she has an h - index of 38 silver, lynn l. challenges of antibacterial discovery. clinical micro reviews 2011, 24 : 71 - 109. theuretzbacher u, gottwalt s, beyer p, butler m, czaplewski l, lienhardt c, moja l, paul m, paulin s, rex jh, silver ll, spigelman m, thwaites ge, paccaud jp, harbarth s. analysis of the clinical antibacterial and antituberculosis pipeline. lancet infect dis. 2019 feb ; 19 ( 2 ) : e40 - e50. doi : 10. 1016 / s1473 - 3099 ( 18 ) 30513 - 9. tacconelli e, carrara e, savoldi a, harbarth s, mendelson m, monnet dl, pulcini c, kahlmeter g, kluytmans j, carmeli y, ouellette m, outterson k, patel j, cavaleri m, cox em, houchens cr, grayson ml, hansen p, singh n
can be targeted for the treatment of disease. techniques involved in this interdisciplinary research include proteomics, x - ray crystallography, nuclear magnetic resonance ( nmr ) spectroscopy, biological mass spectrometry, molecular biology, enzyme kinetics, protein - directed dynamic combinatorial chemistry and organic synthesis / medicinal chemistry. antibiotics : biosynthesis and resistance mechanisms most clinically used antibiotics are based upon natural products. the most important family of antibiotics contains a β - lactam ring, and includes the penicillin, cephalosporin, clavam, and carbapenem antibiotics. the group's biosynthetic work has focused on the clavams and carbapenems, with a particular focus being on the mechanism and structures of enzymes that catalyse chemically'interesting'steps. the biggest threat to the continued use of β - lactam antibiotics is that of bacterial resistance. schofield is currently working on the design and synthesis of enzyme inhibitors for the metallo β - lactamases - there are no clinically used inhibitors of these enzymes but they pose a significant threat as they catalyse the hydrolysis of almost all clinically used β - lactam antibiotics. a particular interest involves human metallo β - lactamases which share the same fold. awards and honours 2015 - 2020 : wellcome trust advanced investigator award ( with sir peter ratcliffe ) 2013 : fellow of the royal society ( london ) ; member of embo ; fellow of the royal society of biology, uk ; member of the biochemical society ; member of the society for experimental biology, uk 2012 : finalist - biotechnology and biological sciences research council'innovator of the year'2011 : royal society of chemistry, jeremy knowles award, uk ; highly cited paper awards ( e. g. biochemical journal, bioorganic & medicinal chemistry letters ) 2009 - 2014 : pi of erc advanced investigator grant spa ga 2008 233240 ( with sir peter ratcliffe ) ; molecular mechanism of oxygen sensing by enzymes ( moose ) 2000 : fellow of the royal society of chemistry ( london ) references external links http : / / schofield. chem. ox. ac. uk / http
other dna sequencing analysis methods. dna microarrays and chips use the binding of complementary dna to a target gene or nucleic acid sequence. the benefit of this is that multiple genes can be assessed simultaneously. maldi - tof matrix - assisted laser desorption ionisation - time of flight mass spectrometry ( maldi - tof ms ) is another method of susceptibility testing. this is a form of time - of - flight mass spectrometry, in which the molecules of a bacterium are subject to matrix - assisted laser desorption. the ionised particles are then accelerated, and spectral peaks recorded, producing an expression profile, which is capable of differentiating specific bacterial strains after being compared to known profiles. this includes, in the context of antibiotic susceptibility testing, strains such as beta - lactamase producing e coli. maldi - tof is rapid and automated. there are limitations to testing in this format however ; results may not match the results of phenotypic testing, and acquisition and maintenance is expensive. reporting the results of the testing are reported as a table, sometimes called an antibiogram. bacteria are marked as sensitive, resistant, or having intermediate resistance to an antibiotic based on the minimum inhibitory concentration ( mic ), which is the lowest concentration of the antibiotic that stops the growth of bacteria. the mic is compared to standard threshold values ( called " breakpoints " ) for a given bacterium and antibiotic. breakpoints for the same organism and antibiotic may differ based on the site of infection : for example, the clsi generally defines streptococcus pneumoniae as sensitive to intravenous penicillin if mics are ≤0. 06 μg / ml, intermediate if mics are 0. 12 to 1 μg / ml, and resistant if mics are ≥2 μg / ml, but for cases of meningitis, the breakpoints are considerably lower. sometimes, whether an antibiotic is marked as resistant is also based on bacterial characteristics that are associated with known methods of resistance such as the potential for beta - lactamase production. specific patterns of drug resistance or multidrug resistance may be noted,
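To make the breakpoint logic above concrete, here is a toy interpreter using the quoted non-meningitis S. pneumoniae / intravenous penicillin values. It is a sketch of the classification rule only, not clinical software; real MICs come in doubling dilutions, so the exact cut between the intermediate and resistant bands is an assumption:

```python
# Toy breakpoint interpretation for S. pneumoniae vs intravenous penicillin
# (non-meningitis), using the CLSI values quoted in the passage above.
def interpret_mic(mic_ug_per_ml: float) -> str:
    if mic_ug_per_ml <= 0.06:
        return "sensitive"
    if mic_ug_per_ml < 2.0:   # passage: intermediate at 0.12-1 ug/ml
        return "intermediate"
    return "resistant"        # passage: resistant at >= 2 ug/ml

for mic in (0.03, 0.5, 4.0):  # one example value per band
    print(f"MIC {mic} ug/ml -> {interpret_mic(mic)}")
```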
united kingdom at cambridge university, returned to washington university in st. louis, missouri as an assistant professor of pharmacology. here, he discovered that the uridine nucleotide that accumulated in the penicillin - treated bacterium staphylococcus aureus was a precursor of the bacterial cell wall. strominger joined the university of wisconsin, madison, as chairman of the department of pharmacology from 1964 to 1968. there, with donald j. tipper in 1965, he demonstrated the mechanism of action by which antibiotic penicillins kill bacteria by inhibiting the completion of the synthesis of structural components of bacterial cell walls known as peptidoglycans. penicillins specifically inhibit the activity of enzymes that are needed for the cross - linking of peptidoglycans during the final step in cell wall biosynthesis. these antibiotics do this by binding to the group of enzymes known as penicillin - binding proteins using a chemical structure found on penicillin molecules known as a β - lactam ring. β - lactam imitates the naturally occurring acyl - d - alanyl - d - alanine substrate for the enzymes. he joined the harvard faculty in 1968 to work in the biochemistry and molecular biology department specializing in microbial biochemistry, with a small portion of his time being devoted to organ transplantation biology. knowledge was scarce with respect to the mechanisms of allograft rejection. there was none for the transplantation antigens. graft acceptance or rejection was only hinted at through previous knowledge of blood type erythrocyte transfusion. in the mid - 1960's, allan davies from the united kingdom had discovered a number of the 3, 6 - dideoxyhexoses that could be utilized to distinguish bacterial surfaces. davies speculated that the specificity of transplantation antigen might also be determined by cell surface arrangements of sugars. later, stan nathenson worked with davies to characterize transplantation antigens and discovered that they could be solubilized from the surfaces of cells by the protease papain. in 1974, strominger became a member of the dana - farber cancer institute, a cancer treatment and research institution in boston
</ctx>
The following are multiple choice questions (with answers) about STEM.
A 65-year-old male from bihar with pneumonia has a sputum culture that is positive for a staphylococcal strain that is b-lactamase-positive. Which is the best choice of penicillin therapy in this patient?
A. Ampicillin
B. Oxacillin
C. Ticarcillin
D. Penicillin G
Answer: | [
"Ampicillin",
"Oxacillin",
"Ticarcillin",
"Penicillin G"
] | B | Oxacillin is classified as a penicillinase-resistant penicillin that is relatively acid-stable and, therefore, is useful for oral administration. Major adverse reactions include penicillin hypersensitivity and interstitial nephritis. With the exception of methicillin, which is 35% bound to serum proteins, all penicillinase-resistant penicillins are highly bound to plasma proteins. Oxacillin has a very narrow spectrum and is used primarily as an antistaphylococcal agent. |
aquarat_30276 | aquarat | <ctx>
1. 3 accuracy, precision, and significant figures express your answers to problems in this section to the correct number of significant figures and proper units. 11. suppose that your bathroom scale reads your mass as 65 kg with a 3 % uncertainty. what is the uncertainty in your mass ( in kilograms )? 12. a good - quality measuring tape can be off by 0. 50 cm over a distance of 20 m. what is its percent uncertainty? 13. ( a ) a car speedometer has a uncertainty. what is the range of possible speeds when it reads? ( b ) convert this range to miles per hour. 14. an infant's pulse rate is measured to be beats / min. what is the percent uncertainty in this measurement? 15. ( a ) suppose that a person has an average heart rate of 72. 0 beats / min. how many beats do they have in 2. 0 y? ( b ) in 2. 00 y? ( c ) in 2. 000 y? 16. a can contains 375 ml of soda. how much is left after 308 ml is removed? 17. state how many significant figures are proper in the results of the following calculations : ( a ) ( b ) ( c ). 18. ( a ) how many significant figures are in the numbers 99 and 100? ( b ) if the uncertainty in each number is 1, what is the percent uncertainty in each? ( c ) which is a more meaningful way to express the accuracy of these two numbers, significant figures or percent uncertainties? 19. ( a ) if your speedometer has an uncertainty of at a speed of, what is the percent uncertainty? ( b ) if it has the same percent uncertainty when it reads, what is the range of speeds you could be going? 20. ( a ) a person's blood pressure is measured to be. what is its percent uncertainty? ( b ) assuming the same percent uncertainty, what is the uncertainty in a blood pressure measurement of? 21. a person measures their heart rate by counting the number of beats in. if beats are counted in, what is the heart rate and its uncertainty in beats per minute? 22. what is the area of a circle in diameter? 23.
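Exercises 11 and 12 above both rest on the same relation between absolute and percent uncertainty (absolute = value × percent / 100, and conversely). A minimal worked check, using only the numbers that survive in the excerpt:

```python
# Exercise 11: absolute uncertainty from a percent uncertainty.
mass_kg, percent_unc = 65, 3
print(mass_kg * percent_unc / 100)   # 1.95 -> about 2 kg at one significant figure

# Exercise 12: percent uncertainty from an absolute one (0.50 cm over 20 m).
delta_cm, length_cm = 0.50, 20 * 100
print(delta_cm / length_cm * 100)    # 0.025 %
```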
each answer with the correct number of significant figures. ( a ) 62. 8 34 ( b ) 0. 147 + 0. 0066 + 0. 012 ( c ) 38 95
</ctx>
The following are multiple choice questions (with answers) about STEM.
Find the missing figures :
?% of 30 = 2.125
A. A)8.5
B. B)6.5
C. C)8.07
D. D)7.08
Answer: | [
"A)8.5",
"B)6.5",
"C)8.07",
"D)7.08"
] | D | Let x% of 30 = 2.125. Then, (x/100)*30 = 2.125
x = (2.125 * 100/30) = 7.08
Answer is D. |
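A one-line check of the percentage arithmetic in the row above:

```python
# Solve x% of 30 = 2.125 for x.
x = 2.125 * 100 / 30
print(round(x, 2))  # 7.08 -> option D
```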
aquarat_29339 | aquarat | <ctx>
when it comes to who and when they marry. firstly, many older generations consider the ideal age to get married to be 23 for women and 25 for men. in particular, females are expected to marry before their late twenties, or they would be titled " sheng nu ", in other words, " leftover women. " however, as many young women pursue education and career, the average age of first marriage is delayed. for example, in 2005, people's average age of the first marriage is 24. 37 for women and 26. 68 for men in shanghai. however, it becomes 28. 14 for women and 30. 11 for men in 2014. therefore, parents get more agitated when young people pursue their education and career in their twenties without a partner in their late twenties. secondly, members of the older generation consider marriage a necessary safeguard in sudden sickness or unexpected unemployment. in some respects, it conflicts with the younger generation, who value freedom and independence. therefore, cui hun is a common phenomenon in china that many young people should deal with. however, parents urge their children to get married is not a completely outdated behaviour. when parents are getting old, they expect to rely on their children financially and mentally. they worry that unmarried children will not have a family support network as the relationship network plays a significant role in china. see also chinese marriage ghost marriage ( chinese ) walking marriage shim - pua marriage heqin may fourth movement new marriage law back - up partner sheng nu mosuo shanghai lalas recognition of same - sex unions in china arranged marriage references china
included. in addition, people were getting married later. in 2018, the average age at first marriage for a man was 36. 4, up from 33. 6 in 2008 ; for a woman, those figures were 34. 4 and 31. 7, respectively. usually, rising birth and marriage rates correspond to a healthy economy, but the present statistics seem to have buckled that trend. as of 2016, ireland was, demographically, a young country by european standards. however, the country is aging quite quickly. according to the central statistics office, although ireland had more people below the age of 14 than above the age of 65 in 2016, the situation could flip by 2031 in all projected scenarios, which will pose a problem for public policy. for instance, ireland's healthcare system, already operating on a tight budget, will be under even more pressure. england and wales were experiencing a baby boom in the early 2010s which peaked in 2012. with more children being born in the uk between the middle of 2011 and the middle of 2012 than any year since 1972. however, fertility rates declined as the decade progressed. according to the united kingdom office for national statistics, the fertility rates of england and wales fell to a record low in 2018. moreover, they fell for women of all age groups except those in their 40s. a total of 657, 076 children were born in england and wales in 2018, down 10 % from 2012. 1977 and 1992 - 2002 were the only years where these jurisdictions had lower fertility rates on record. as has been the case since the start of the new millennium, the birth rate of women below the age of 20 continues to fall, down to 11. 9 in 2018. before 2004, women in their mid - to late - 20s had the highest fertility rate, but between the mid - 2000s and the late - 2010s, those in their early - to mid - 30s held that position. social statistician and demographer ann berrington of the university of southampton told the guardian that access to education, " changing aspirations " in life, the availability of emergency and long - acting contraception, and the lack of affordable housing were among the reasons behind the decline in fertility among people in their 20s
age of 15 have been registered officially for marriage. 90 percent of the statistics are related to young girls. but research shows that the number of children's marriages is not limited to statistics, since in rural areas children are married and live for years without formal marriage. also according to the statistics in 2012, 37, 000 children aged 10 to 18 were divorced or widowed. each year, 800 girls from 10 to 14 years old and 15, 000 girls aged 15 to 19 are divorced in iran. poverty and traditional beliefs are the cause of these early marriages. in most cases, the son - in - law's family pays money to the bride's family, who are often poor, to marry their under - aged daughter. the implications of early marriage include increased illiteracy and ill - treatment among women, polygamy, house - venting, and spousal phenomena. see also convention on the rights of the child declaration of the rights of the child human rights in iran human rights in the islamic republic of iran notes 1. a lunar year, which is the measurement that the lunar calendar is based on, differs in length from the solar year by eleven to twelve days. therefore, nine lunar years correspond to approximately eight solar years and eight months and fifteen lunar years are approximately fourteen solar years and seven months. 2. ihsan is the status of a married man who can have vaginal intercourse with his wife " whenever he wishes ". ihsan also refers to the status of a woman who can have vaginal intercourse with her husband. ( iranian penal code, article 226 ) 3. the un defines a child laborer as either a ) a child 5 - 11 years old who either takes part in at least 1 hour of economic activity or at least 28 hours of household chores per week or b ) a child 12 - 14 years old who takes part in at least 14 hours of economic activity or at least 28 hours of household chores per week. ( unicef, a state of the worlds children 2016 ). another definition proposed by ilo's statistical information and monitoring program on child labor ( simpoc ) defines a child as a child laborer if he / she is involved in economic activity, is under 12 years old
increased by 51 %. the percentage of single households has also steadily increased. in 1970, only 17 % of households consisted of singles. in 2000 that figure had increased by 50 % with singles constituting 25. 5 % of households. the most drastic increase was among the percentage of households made up of single males, which nearly doubled from 5. 6 % in 1970 to 10. 7 % in 2000. today, one can no longer refer to the nuclear family as the average american household, neither can one identify the current plurality of married couples without children as " the average. " recent statistics indeed indicate that there is no average american family arrangement, but that american society is home to a wide and diverse variety of family arrangements. the one thing the data does indicate is that the average joe most likely does not reside in a nuclear 4 - person family. a statement that can be made, however, is that most americans will marry at least once in their lifetime with the first marriage most commonly ending in divorce. today a little over half ( 52. 3 % ) of us household include a married couple, showing a significant decrease since 1970 when 70. 6 % of households included a married couple. current trends indicate that people in the us are marrying later and less often with higher divorce rates. despite the declining prevalence of marriage, more than three - quarters of americans will marry at least once in their lifetime. the average age for marriage for a male was 26. 8 and 25. 1 for a female. americans are also likely to remarry after their first divorce. in 1990, 40 % of all marriages were remarriages. all together one can conclude that while there is no prevalent average household arrangement, most americans ( the average joe ) will get married and divorced once with a considerable number of americans remarrying at least once. income income in the united states is most commonly measured either by individual or household. in 2005 the median personal income, the amount earned by the average joe or average jane, is largely determined by who is or who is not included in the statistics. personal income statistics are given in accordance to a variety of demographic characteristics. the median income for all 233 million persons over the age of 15 was $ 28, 56
</ctx>
The following are multiple choice questions (with answers) about STEM.
The average age of husband, wife and their child 3 years ago was 27 years and that of wife and the child 5 years ago was 20 years. The present age of the husband is
A. A)40
B. B)35
C. C)45
D. D)55
Answer: | [
"A)40",
"B)35",
"C)45",
"D)55"
] | A | Explanation:
Sum of the present ages of husband, wife and child = (27 * 3 + 3 * 3) years = 90 years.
Sum of the present ages of wife and child = (20 * 2 + 5 * 2) years = 50 years.
Husband's present age = (90 - 50) years = 40 years
Option A |
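Re-checking the age arithmetic in the row above:

```python
# Three people averaged 27 three years ago -> add 3 years each for today.
sum_three_now = 27 * 3 + 3 * 3   # 81 + 9 = 90
# Wife and child averaged 20 five years ago -> add 5 years each.
sum_two_now = 20 * 2 + 5 * 2     # 40 + 10 = 50
print(sum_three_now - sum_two_now)  # husband's present age: 40 -> option A
```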
openbookqa_2369 | openbookqa | <ctx>
pitot tubes that take advantage of the pressure differential between an inner tube and an outer tube that is exposed to the wind to determine the dynamic pressure, which is then used to compute the wind speed. sustained wind speeds are reported globally at a height and are averaged over a 10 - minute time frame. the united states reports winds over a 1 - minute average for tropical cyclones, and a 2 - minute average within weather observations. india typically reports winds over a 3 - minute average. knowing the wind sampling average is important, as the value of a one - minute sustained wind is typically 14 % greater than a ten - minute sustained wind. a short burst of high speed wind is termed a wind gust ; one technical definition of a wind gust is : the maxima that exceed the lowest wind speed measured during a ten - minute time interval by for periods of seconds. a squall is an increase of the wind speed above a certain threshold, which lasts for a minute or more. to determine winds aloft, radiosondes determine wind speed by gps, radio navigation, or radar tracking of the probe. alternatively, movement of the parent weather balloon position can be tracked from the ground visually using theodolites. remote sensing techniques for wind include sodar, doppler lidars and radars, which can measure the doppler shift of electromagnetic radiation scattered or reflected off suspended aerosols or molecules, and radiometers and radars can be used to measure the surface roughness of the ocean from space or airplanes. ocean roughness can be used to estimate wind velocity close to the sea surface over oceans. geostationary satellite imagery can be used to estimate the winds at cloud top based upon how far clouds move from one image to the next. wind engineering describes the study of the effects of the wind on the built environment, including buildings, bridges and other artificial objects. models models can provide spatial and temporal information about airflow. spatial information can be obtained through the interpolation of data from various measurement stations, allowing for horizontal data calculation. alternatively, profiles, such as the logarithmic wind profile, can be utilized to derive vertical information. temporal information is typically computed by solving the navier - stokes equations within numerical weather
a method for harvesting energy from trees. these energy harvesters are being used to power remote sensors and mesh networks as the basis for a long term deployment system to monitor forest fires and weather in the forest. according to voltree's website, the useful life of such a device should be limited only by the lifetime of the tree to which it is attached. a small test network was recently deployed in a us national park forest. other sources of energy from trees include capturing the physical movement of the tree in a generator. theoretical analysis of this source of energy shows some promise in powering small electronic devices. a practical device based on this theory has been built and successfully powered a sensor node for a year. metamaterial a metamaterial - based device wirelessly converts a 900 mhz microwave signal to 7. 3 volts of direct current ( greater than that of a usb device ). the device can be tuned to harvest other signals including wi - fi signals, satellite signals, or even sound signals. the experimental device used a series of five fiberglass and copper conductors. conversion efficiency reached 37 percent. when traditional antennas are close to each other in space they interfere with each other. but since rf power goes down by the cube of the distance, the amount of power is very very small. while the claim of 7. 3 volts is grand, the measurement is for an open circuit. since the power is so low, there can be almost no current when any load is attached. atmospheric pressure changes the pressure of the atmosphere changes naturally over time from temperature changes and weather patterns. devices with a sealed chamber can use these pressure differences to extract energy. this has been used to provide power for mechanical clocks such as the atmos clock. ocean energy a relatively new concept of generating energy is to generate energy from oceans. large masses of waters are present on the planet which carry with them great amounts of energy. the energy in this case can be generated by tidal streams, ocean waves, difference in salinity and also difference in temperature., efforts are underway to harvest energy this way. united states navy recently was able to generate electricity using difference in temperatures present in the ocean. one method to use the temperature difference across different levels of the therm
controllers. wind in a wind turbine, the wind's kinetic energy is converted into mechanical energy to drive a generator, which in turn generates electricity. the basic components of a wind power system consist of a tower with rotating blades containing an electricity generator and a transformer to step up voltage for electricity transmission to a substation on the grid. cabling and electronics are also important components. the harsh environment offshore wind farms means that the individual components need to be more rugged and corrosion protected than their onshore components. increasingly long connections to shore with subsea mv and hv cables are required at this time. the need for corrosion protection favors copper nickel cladding as the preferred alloy for the towers. copper is an important conductor in wind power generation. wind farms can contain several hundred - thousand feet of copper weighing between 4 million to 15 million pounds, mostly in wiring, cable, tubing, generators and step - up transformers. copper usage intensity is high because turbines in wind generation farms are spread over large areas. in land - based wind farms, copper intensity can range between 5, 600 and 14, 900 pounds per mw, depending on whether the step - up transformers have copper or aluminium conductors. in the off - shore environment, copper intensity is much higher : approximately 21, 000 pounds per mw, which includes submarine cables to shore. in both onshore and offshore environments, additional copper cabling is used to connect wind farms to main electrical grids. the amount of copper used for wind energy systems in 2011 was estimated to be 120 kt. the cumulative amount of copper installed through 2011 was estimated to be 714 kt., global production of wind turbines use 450, 000 tonnes of copper per year. for wind farms with three - stage gearbox doubly fed 3 mw induction generators, approximately 2. 7 t per mw is needed with standard wind turbines. for wind turbines with lv / mv transformers in the nacelle, 1. 85 t per mw is needed. copper is primarily used in coil windings in the stator and rotor portions of generators ( which convert mechanical energy into electrical energy ), in high voltage and low voltage cable conductors including the vertical electrical cable that connects the nacelle to the base of the wind
##utakorpi, and r. silvennoinen. estimation of the mechanical energy output of the kite wind generator. renewable energy, 34 : 1525 - 1532, 2009. optimal cross - wind towing and power generation with tethered kites kites renewable energy airborne wind power
</ctx>
The following are multiple choice questions (with answers) about STEM.
What type of useful product can be made from the moving winds?
A. metal
B. wood
C. bananas
D. electricity
Answer: | [
"metal",
"wood",
"bananas",
"electricity"
] | D | |
m1_6018 | m1 | <ctx>
multiple frequency bands and / or radios allow the establishment of a wps session for a specific band and / or radio for connection with clients which cannot have the ssid or band ( e. g., 2. 4 / 5 ghz ) explicitly selected by the user on the client for connection with wps ( e. g. pushing the 5 ghz, where supported, wps button on the wireless router will force a client device to connect via wps on only the 5 ghz band after a wps session has been established by the client device which cannot explicitly allow the selection of wireless network and / or band for the wps connection method ). vulnerabilities online brute - force attack in december 2011, researcher stefan viehbock reported a design and implementation flaw that makes brute - force attacks against pin - based wps feasible to be performed on wps - enabled wi - fi networks. a successful attack on wps allows unauthorized parties to gain access to the network, and the only effective workaround is to disable wps. the vulnerability centers around the acknowledgement messages sent between the registrar and enrollee when attempting to validate a pin, which is an eight - digit number used to add new wpa enrollees to the network. since the last digit is a checksum of the previous digits, there are seven unknown digits in each pin, yielding 10^7 = 10, 000, 000 possible combinations. when an enrollee attempts to gain access using a pin, the registrar reports the validity of the first and second halves of the pin separately. since the first half of the pin consists of four digits ( 10, 000 possibilities ) and the second half has only three active digits ( 1000 possibilities ), at most 11, 000 guesses are needed before the pin is recovered. this is a reduction by three orders of magnitude from the number of pins that would be required to be tested. as a result, an attack can be completed in under four hours. the ease or difficulty of exploiting this flaw is implementation - dependent, as wi - fi router manufacturers could defend against such attacks by slowing or disabling the wps feature after several failed pin validation attempts. a young developer based out of a small town in eastern
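The search-space arithmetic in the passage above is easy to reproduce; all the numbers below are taken directly from it:

```python
# Why splitting WPS PIN validation in two collapses the search space.
full_space = 10 ** 7    # 8-digit PIN, last digit is a checksum -> 7 free digits
first_half = 10 ** 4    # first 4 digits are confirmed or rejected on their own
second_half = 10 ** 3   # 3 free digits remain in the second half
print(full_space)                                 # 10000000
print(first_half + second_half)                   # 11000 worst-case guesses
print(full_space // (first_half + second_half))   # ~909x fewer, i.e. ~3 orders of magnitude
```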
is to brute - force the actual pin. the access point sends two hashes, e - hash1 and e - hash2, to the client, proving that it also knows the pin. e - hash1 and e - hash2 are hashes of ( e - s1 | psk1 | pke | pkr ) and ( e - s2 | psk2 | pke | pkr ), respectively. the hashing function is hmac - sha - 256 and uses the " authkey " that is the key used to hash the data. physical security issues all wps methods are vulnerable to usage by an unauthorized user if the wireless access point is not kept in a secure area. many wireless access points have security information ( if it is factory - secured ) and the wps pin printed on them ; this pin is also often found in the configuration menus of the wireless access point. if this pin cannot be changed or disabled, the only remedy is to get a firmware update to enable the pin to be changed, or to replace the wireless access point. it is possible to extract a wireless passphrase with the following methods using no special tools : a wireless passphrase can be extracted using wps under windows vista and newer versions of windows, under administrative privileges by connecting with this method then bringing up the properties for this wireless network and clicking on " show characters ". within most linux desktop and unix distributions ( like ubuntu ), all network connections and their details are visible to a regular user, including the password obtained through wps. furthermore, root ( aka admin ) can always access all network details through terminal, i. e. even if there is no window manager active for regular users. a simple exploit in the intel proset wireless client utility can reveal the wireless passphrase when wps is used, after a simple move of the dialog box which asks if you want to reconfigure this access point. references external links wi - fi protected setup knowledge center at the wi - fi alliance upnp device architecture us - cert vu # 723755 waiting for wps fix wps pixie dust attack ( offline wps attack ) hack
set as the " ssid " and therefore will be transmitted at regular intervals. it is also possible to use a ddns " push " request to automatically send an amateur call sign in plain text ( ascii ) every 10 minutes. this requires that a computer's hostname be set to the call sign of the amateur operator and that the dhcp servers lease time be set to less than or equal to 10 minutes. with this method implemented the computer will send a dns " push " request that includes the local computers hostname every time the dhcp lease is renewed. this method is supported by all modern operating systems including but not limited to windows, mac os x, bsd, and linux. 802. 11 hardware may transmit and receive the entire time it is powered on even if the user is not sending data. security because the meaning of amateur transmissions may not be obscured, security measures that are implemented must be published. this does not necessarily restrict authentication or login schemes, but it does restrict fully encrypted communications. this leaves the communications vulnerable to various attacks once the authentication has been completed. this makes it very difficult to keep unauthorized users from accessing hsmm networks, although casual eavesdroppers can effectively be deterred. current schemes include using mac address filtering, wep and wpa / wpa2. mac address filtering and wep are all hackable by using freely available software from the internet, making them the less secure options. per fcc rules the encryption keys themselves must be published in a publicly accessible place if using wep, wpa / wpa2 or any other encryption, thereby undermining the security of their implementation. such measures however are effective against casual or accidental wireless intrusions. using professional or modified hardware it is possible to operate on 802. 11a channels that are outside the fcc authorized part 15 bands but still inside the 5. 8 ghz ( 5 cm ) or 2. 4 ghz ( 13 cm ) amateur radio bands. transverters or " frequency converters " can also be used to move hsmm 802. 11b / g / n operations from the 2. 4 ghz ( 13 cm ) band to the 3. 4 ghz ( 9 cm ) amateur radio
wired equivalent privacy ( wep ) was a severely flawed security algorithm for 802. 11 wireless networks. introduced as part of the original ieee 802. 11 standard ratified in 1997, its intention was to provide data confidentiality comparable to that of a traditional wired network. wep, recognizable by its key of 10 or 26 hexadecimal digits ( 40 or 104 bits ), was at one time widely used, and was often the first security choice presented to users by router configuration tools. subsequent to a 2001 disclosure of a severe design flaw in the algorithm, wep was never again secure in practice. in the vast majority of cases, wi - fi hardware devices relying on wep security could not be upgraded to secure operation. some of the design flaws were addressed in wep2, but wep2 also proved insecure, and another generation of hardware could not be upgraded to secure operation. in 2003, the wi - fi alliance announced that wep and wep2 had been superseded by wi - fi protected access ( wpa ). in 2004, with the ratification of the full 802. 11i standard ( i. e. wpa2 ), the ieee declared that both wep - 40 and wep - 104 have been deprecated. wpa retained some design characteristics of wep that remained problematic. wep was the only encryption protocol available to 802. 11a and 802. 11b devices built before the wpa standard, which was available for 802. 11g devices. however, some 802. 11b devices were later provided with firmware or software updates to enable wpa, and newer devices had it built in. history wep was ratified as a wi - fi security standard in 1999. the first versions of wep were not particularly strong, even for the time they were released, due to u. s. restrictions on the export of various cryptographic technologies. these restrictions led to manufacturers restricting their devices to only 64 - bit encryption. when the restrictions were lifted, the encryption was increased to 128 bits. despite the introduction of 256 - bit wep, 128 - bit remains one of the most common implementations. encryption details wep was included as the privacy component of the original ieee 802.
</ctx>
The following are multiple choice questions (with answers) about STEM.
Tick the \textbf{incorrect} assertion regarding WEP and WPA2.
A. WPA2 seems to be more secure than WEP.
B. WPA2 seems to ensure confidentiality.
C. WPA2 uses AES.
D. WEP seems to ensure message integrity.
Answer: | [
"WPA2 seems to be more secure than WEP.",
"WPA2 seems to ensure confidentiality.",
"WPA2 uses AES.",
"WEP seems to ensure message integrity."
] | D | The incorrect assertion is: **"WEP seems to ensure message integrity."** WEP (Wired Equivalent Privacy) does not provide strong message integrity due to vulnerabilities in its integrity-checking mechanism. This allows for potential attacks on the data being transmitted. In contrast, WPA2 uses stronger methods for ensuring both confidentiality and message integrity. |
medmcqa_8815 | medmcqa | <ctx>
is also expressed in the stomach, colon, kidneys, and lymph nodes. newborns do not express sd ( a ) on their red cells until about 10 weeks after birth ( although they do express the antigen in their bodily fluids ), and expression of sd ( a ) antigen on red blood cells often decreases during pregnancy. clinical significance the expression of sd ( a ) antigen in positive individuals is highly variable, and ranges from expression so weak that it is barely detectable, to expression so strong that the cells are agglutinated by plasma from most human donors ( polyagglutination ). extremely strong expression of sd ( a ) is denoted as sd ( a + + ). the sd ( a + + ) phenotype is sometimes referred to as the cad phenotype, after a 1968 paper that identified a novel antigen in members of the cad family from mauritius. the cad positive cells showed polyagglutination and reacted with dolichos biflorus lectin, a reagent used to identify type a1 red blood cells, even though the cells were type b or o. later research showed that strong examples of sd ( a ) also exhibited polyagglutination and reaction with dolichos biflorus, and that cad was likely an exceptionally strong sd ( a ) positive phenotype. it is believed that sd ( a ) and cad share the same antigenic determinant and are likely synthesized by the same enzyme, but there may be structural and quantitative differences in the expression of the two substances. the cad / sd ( a + + ) phenotype, like the sd ( a ) positive phenotype, shows autosomal dominant inheritance. anti - sd ( a ) is a naturally occurring antibody, meaning sd ( a ) negative individuals produce it without having been exposed to sd ( a ) positive blood through transfusion or pregnancy. anti - sd ( a ) is not typically considered to pose a hazard in blood transfusion, but as of 2018, two cases of transfusion reactions following the transfusion of sd ( a + + ) blood have been documented. it is suggested that people with anti - sd ( a ) are transfused with " least incompatible "
as recessive. other examples may include noble families, which are inbred due to custom rather than local genetic variety. hemolytic disease of the newborn in theory, the maternal production of anti - h during pregnancy might cause hemolytic disease in a fetus who did not inherit the mother's bombay phenotype. in practice, cases of hdn caused in this way have not been described. this may be possible due to the rarity of the bombay phenotype but also because of the igm produced by the immune system of the mother. since igms are not transported across the microscopic placental blood vessels ( like igg are ) they cannot reach the blood stream of the fetus to provoke the expected acute hemolytic reaction. references external links hh at bgmut blood group antigen gene mutation database at ncbi, nih rmit university the bombay, para - bombay and other h deficiencies bombaybloodgroup. org an initiative to connect individuals who donate and who are in need of bombay blood group. genetics of the bombay phenotype know more blood antigen systems transfusion medicine south asia
dizziness headaches cold hands and feet pale skin dark urine jaundice chest pain pain in the back or legs vomiting or diarrhea heart problems such as an irregular heartbeat ( arrhythmia ), a heart murmur, an enlarged heart, or heart failure. these may occur because the heart has to work harder to make sure the body gets enough healthy red blood cells. many people with cad also experience pain and bluish coloring of the hands and feet ( acrocyanosis ) or raynaud's disease. these symptoms result from slow or poor circulation and can range from mild to disabling. other signs and symptoms of cad may include enlargement of the spleen ( splenomegaly ) and mottled discoloration of the skin ( livedo reticularis ). in people with secondary cad ( associated with another underlying condition ), there may be additional signs and symptoms depending on the condition present. for example : mycoplasma pneumoniae infection ( the most common cause of secondary cad ) may cause respiratory symptoms. various infections or cancers may cause enlarged or swollen lymph nodes. 80 - 99 % of people have these symptoms arthralgia autoimmunity fatigue hemolytic anemia muscular weakness pallor 5 - 29 % of people have these symptoms abnormal urine color back pain diarrhea headache hepatomegaly lymphadenopathy nausea and vomiting splenomegaly cause cold agglutinin disease can be either primary ( unknown cause ) or secondary ( a result of another pathology / caused by an underlying condition ). primary cold agglutinin disease the primary form is caused by excessive cell proliferation of b lymphocytes, characterized by clonal lymphoproliferative disorder. primary cold agglutinin disease occurs after the fifth decade of life and peaks prevalence in a person's 70s and 80s. secondary cold agglutinin disease secondary cold agglutinin syndrome occurs when autoantibodies bind to red blood cells, rendering them subject to attack by the complement system. it is a result of an underlying condition potentially associated with either monoclonal cold - reacting autoantibodies or polyclonal
in immunology, autoimmunity is the system of immune responses of an organism against its own healthy cells, tissues and other normal body constituents. any disease resulting from this type of immune response is termed an " autoimmune disease ". prominent examples include celiac disease, post - infectious ibs, diabetes mellitus type 1, henoch - schonlein purpura ( hsp ) sarcoidosis, systemic lupus erythematosus ( sle ), sjogren syndrome, eosinophilic granulomatosis with polyangiitis, hashimoto's thyroiditis, graves'disease, idiopathic thrombocytopenic purpura, addison's disease, rheumatoid arthritis ( ra ), ankylosing spondylitis, polymyositis ( pm ), dermatomyositis ( dm ), and multiple sclerosis ( ms ). autoimmune diseases are very often treated with steroids. autoimmunity means presence of antibodies or t cells that react with self - protein and is present in all individuals, even in normal health state. it causes autoimmune diseases if self - reactivity can lead to tissue damage. history in the later 19th century it was believed that the immune system was unable to react against the body's own tissues. paul ehrlich, at the turn of the 20th century, proposed the concept of horror autotoxicus. ehrlich later adjusted his theory to recognize the possibility of autoimmune tissue attacks, but believed certain innate protection mechanisms would prevent the autoimmune response from becoming pathological. in 1904 this theory was challenged by the discovery of a substance in the serum of patients with paroxysmal cold hemoglobinuria that reacted with red blood cells. during the following decades, a number of conditions could be linked to autoimmune responses. however, the authoritative status of ehrlich's postulate hampered the understanding of these findings. immunology became a biochemical rather than a clinical discipline. by the 1950s the modern understanding of autoantibodies and autoimmune diseases
</ctx>
The following are multiple choice questions (with answers) about STEM.
Anti-A, anti-B antibodies appear in a child -
A. Just after birth
B. 1 week after
C. 6 week after
D. After 6 months
Answer: | [
"Just after birth",
"1 week after",
"6 week after",
"After 6 months"
] | D | Ans. is 'd' i.e., After 6 months. o Titre of anti-A and anti-B antibodies is almost zero immediately after birth. o Infants start producing anti-A and anti-B antibodies at the age of 2-8 months, when the corresponding antigen is not present on their cells. o Maximum titre is reached at 8 to 10 years of age, after which it starts declining. |
qasc_1879 | qasc | <ctx>
the ramsar convention. in addition, gotjawal forest qualifies as a ramsar site because of its unique forest ecology. the ramsar convention requires that a wetland meet just one of the criteria suggested by the convention in order to be considered an internationally important wetland. according to the above - mentioned guidelines, gotjawal forest qualifies on the basis of all of the following criteria. criterion 1 : a wetland should be considered internationally important if it contains a representative, rare, or unique example of a natural or near - natural wetland type found within the appropriate biogeographic region. ( about criterion 1, the convention further explains : criterion 2 : a wetland should be considered internationally important if it supports vulnerable, endangered, or critically endangered species or threatened ecological communities. hydrological importance. as indicated by article 2 of the ramsar convention, wetlands can be selected for their hydrological importance which, inter alia, may include the following attributes. they may : i ) play a major role in the natural control, amelioration or prevention of flooding ; ii ) be important for seasonal water retention for wetlands or other areas of conservation importance downstream ; iii ) be important for the recharge of aquifer ; iv ) form part of karst or underground hydrological or spring systems that supply major surface wetlands ; gotjawal forest in jeju - do qualifies criterion because : it is an important area of groundwater recharge, as is explained by paragraph " features of gotjawal forest " of this article. it qualifies the criterion 2, because it provides habitat for endangered species of fern, as is explained in paragraph 2 of this article. # in addition, gotjawal forest is also a regular breeding site for endangered bird species, including the fairy pitta, pitta brachyura nympha, and japanese paradise flycatcher, terpsiphone atrocaudata. see also forest ecology temperate broadleaf and mixed forest - biome ramsar classification system for wetland types wetland classification footnotes references han, sang - cheol, 2002, study on the grouting method of construction of jeju island groundwater, cheju national university. jang, yong - chang and chanwon lee
wetland indicator status denotes the probability of individual species of vascular plants occurring in freshwater, brackish and saltwater wetlands in the united states. the wetland status of 7, 000 plants is determined upon information contained in a list compiled in the national wetland inventory undertaken by the u. s. fish and wildlife service and developed in cooperation with a federal inter - agency review panel ( reed, 1988 ). the national list was compiled in 1988 with subsequent revisions in 1996 and 1998. the wetland indicator status of a species is based upon the individual species occurrence in wetlands in 13 separate regions within the united states. in some instances the specified regions contain all or part of different floristic provinces and the tension zones which occur between them. while many obligate wetland ( obl ) species do occur in permanently or semi - permanently flooded wetlands, there are also a number of obligates that occur in temporary or seasonally flooded wetlands. a few species are restricted entirely to these transient - type wetland environments. plant species are general indicators of various degrees of environmental factors ; they are however not precise. the presence of a plant species at a specific site depends on a variety of climatic, edaphic and biotic factors, and the effect of individual factors such as degree of substrate saturation and depth and duration of standing water is impossible to isolate. a plant's indicator status is applied to the species as a whole however individual variations may exist within the species, referred to as " ecotypes " ; individual plants which may have adapted to specific environments as may occur in a microhabitat, which isn't indicative of the species as a whole. indicator categories obligate wetland ( obl ) - almost always occurs in wetlands under natural conditions ( estimated probability > 99 % ). facultative wetland ( facw ) - usually occurs in wetlands ( estimated probability 67 % - 99 % ), but occasionally found in non - wetlands ( estimated probability 1 % - 33 % ). facultative ( fac ) - equally likely to occur in wetlands and non - wetlands ( estimated probability 34 % - 66 % ).
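The indicator bands quoted above map onto a small lookup. A sketch only; the excerpt truncates the category list after FAC, so everything below 34 % is left as a catch-all rather than guessing the remaining categories:

```python
# Wetland indicator status from the estimated probability (%) of a species
# occurring in wetlands, using the bands quoted in the passage above.
def indicator_status(p_wetland: float) -> str:
    if p_wetland > 99:
        return "OBL (obligate wetland)"
    if p_wetland >= 67:
        return "FACW (facultative wetland)"
    if p_wetland >= 34:
        return "FAC (facultative)"
    return "below FAC (categories truncated in the excerpt)"

for p in (99.5, 80.0, 50.0, 10.0):
    print(p, indicator_status(p))
```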
of the wetland. many of the world's wetlands are in the temperate zones, midway between the north or south poles and the equator. in these zones, summers are warm and winters are cold, but temperatures are not extreme. in subtropical zone wetlands, such as along the gulf of mexico, average temperatures might be. wetlands in the tropics are subjected to much higher temperatures for a large portion of the year. temperatures for wetlands on the arabian peninsula can exceed and these habitats would therefore be subject to rapid evaporation. in northeastern siberia, which has a polar climate, wetland temperatures can be as low as. peatlands in arctic and subarctic regions insulate the permafrost, thus delaying or preventing its thawing during summer, as well as inducing its formation. by precipitation amount the amount of precipitation a wetland receives varies widely according to its area. wetlands in wales, scotland, and western ireland typically receive about per year. in some places in southeast asia, where heavy rains occur, they can receive up to. in some drier regions, wetlands exist where as little as precipitation occurs each year. temporal variation : perennial systems seasonal systems episodic ( periodic or intermittent ) systems ephemeral ( short - lived ) systems surface flow may occur in some segments, with subsurface flow in other segments. processes wetlands vary widely due to local and regional differences in topography, hydrology, vegetation, and other factors, including human involvement. other important factors include fertility, natural disturbance, competition, herbivory, burial and salinity. when peat accumulates, bogs and fens arise. hydrology the most important factor producing wetlands is hydrology, or flooding. the duration of flooding or prolonged soil saturation by groundwater determines whether the resulting wetland has aquatic, marsh or swamp vegetation. other important factors include soil fertility, natural disturbance, competition, herbivory, burial, and salinity. when peat from dead plants accumulates, bogs and fens develop. wetland hydrology is associated with the spatial and temporal dispersion, flow, and physio - chemical attributes of surface and ground waters. sources of hydrological flows into wetlands are predominantly precipitation, surface water ( saltwater or freshwater
water bodies. wetlands exist "... at the interface between truly terrestrial ecosystems and aquatic systems, making them inherently different from each other, yet highly dependent on both. " in environmental decision - making, there are subsets of definitions that are agreed upon to make regulatory and policy decisions. under the ramsar international wetland conservation treaty, wetlands are defined as follows : article 1. 1 : "... wetlands are areas of marsh, fen, peatland or water, whether natural or artificial, permanent or temporary, with water that is static or flowing, fresh, brackish or salt, including areas of marine water the depth of which at low tide does not exceed six meters. " article 2. 1 : " [ wetlands ] may incorporate riparian and coastal zones adjacent to the wetlands, and islands or bodies of marine water deeper than six meters at low tide lying within the wetlands. " an ecological definition of a wetland is " an ecosystem that arises when inundation by water produces soils dominated by anaerobic and aerobic processes, which, in turn, forces the biota, particularly rooted plants, to adapt to flooding ". sometimes a precise legal definition of a wetland is required. the definition used for regulation by the united states government is :'the term " wetlands " means those areas that are inundated or saturated by surface or ground water at a frequency and duration to support, and that under normal circumstances do support, a prevalence of vegetation typically adapted for life in saturated soil conditions. wetlands generally included swamps, marshes, bogs, and similar areas.'for each of these definitions and others, regardless of the purpose, hydrology is emphasized ( shallow waters, water - logged soils ). the soil characteristics and the plants and animals controlled by the wetland hydrology are often additional components of the definitions. types wetlands can be tidal ( inundated by tides ) or non - tidal. the water in wetlands is either freshwater, brackish, saline, or alkaline. there are four main kinds of wetlands - marsh, swamp, bog, and fen ( bogs and fens being types of peatlands or mires ). some experts also recognize wet meadows and aquatic ecosystems as additional wetland types. sub
</ctx>
The following are multiple choice questions (with answers) about STEM.
Wetlands are extremely important:
A. environments
B. food and shelter
C. animals
D. Plants growth
Answer: | [
"environments",
"food and shelter",
"animals",
"Plants growth"
] | A | |
openbookqa_2086 | openbookqa | <ctx>
environmental conditions. another group of plants employ " cam - cycling ", in which their stomata do not open at night ; the plants instead recycle co2 produced by respiration as well as storing some co2 during the day. plants showing inducible cam and cam - cycling are typically found in conditions where periods of water shortage alternate with periods when water is freely available. periodic drought - a feature of semi - arid regions - is one cause of water shortage. plants which grow on trees or rocks ( as epiphytes or lithophytes ) also experience variations in water availability. salinity, high light levels and nutrient availability are other factors which have been shown to induce cam. since cam is an adaptation to arid conditions, plants using cam often display other xerophytic characters, such as thick, reduced leaves with a low surface - area - to - volume ratio ; thick cuticle ; and stomata sunken into pits. some shed their leaves during the dry season ; others ( the succulents ) store water in vacuoles. cam also causes taste differences : plants may have an increasingly sour taste during the night yet become sweeter - tasting during the day. this is due to malic acid being stored in the vacuoles of the plants'cells during the night and then being used up during the day. aquatic cam cam photosynthesis is also found in aquatic species in at least 4 genera, including : isoetes, crassula, littorella, sagittaria, and possibly vallisneria, being found in a variety of species e. g. isoetes howellii, crassula aquatica. these plants follow the same nocturnal acid accumulation and daytime deacidification as terrestrial cam species. however, the reason for cam in aquatic plants is not due to a lack of available water, but a limited supply of co2. co2 is limited due to slow diffusion in water, 10000x slower than in air. the problem is especially acute under acid ph, where the only inorganic carbon species present is co2, with no available bicarbonate or carbonate supply. aquatic cam plants capture carbon at night when it is abundant due to a lack of competition from other photosynthetic organisms
desert dunes. barchan dunes are formed as winds continuously blow in the same direction, and are characterized by a crescent - shape atop the dune. seif dunes are long and narrow, featuring a sharp crest, and are more common in the sahara desert. analysis of geological features in desert environments can reveal a lot about the geologic history of the area. through observation and identification of rock deposits, geologists are able to interpret the order of events that occurred during desert formation. for example, research conducted on the surface geology of the namib desert allowed geologists to interpret ancient movements of the kuiseb river based on rock ages and features identified in the area. organism adaptation animals deserts support diverse communities of plants and animals that have evolved resistance to, and methods of circumventing, extreme temperatures and arid conditions. for example, desert grasslands are more humid and slightly cooler than their surrounding ecosystems. many animals obtain energy by eating the surrounding vegetation ; however, desert plants are much more difficult for organisms to consume. to avoid intense temperatures, the majority of small desert mammals are nocturnal, living in burrows to avoid the intense desert sun during the daytime. these burrows prevent overheating and dehydration as they maintain an optimal temperature for the mammal. desert ecology is characterized by dry, alkaline soils, low net production and opportunistic feeding patterns by herbivores and carnivores. other organisms'survival tactics are physiologically based. such tactics include the completion of life cycles ahead of anticipated drought seasons, and storing water with the help of specialized organs. desert climates are particularly demanding on endothermic organisms. however, endothermic organisms have adapted mechanisms to aid in water retention in habitats such as desert ecosystems which are commonly affected by drought. in environments where the external temperature is less than their body temperature, most endotherms are able to balance heat production and heat loss to maintain a comfortable temperature. however, in deserts where air and ground temperatures exceed body temperature, endotherms must be able to dissipate the large amounts of heat being absorbed in these environments. in order to cope with extreme conditions, desert endotherms have adapted through the means of avoidance, relaxation of homeostasis, and
, called puquios, was developed. most archaeologists believe that the puquios are indigenous and date to about 500 ce, but a few believe they are of spanish origin, brought to the americas in the 16th century. puquios were still in use in the nazca region in the 21st century. technical features qanats are constructed as a series of well - like vertical shafts, connected by a gently sloping tunnel which carries a water channel. qanats efficiently deliver large amounts of subterranean water to the surface without need for pumping. the water drains by gravity, typically from an upland aquifer, with the destination lower than the source. qanats allow water to be transported over long distances in hot dry climates without much water loss to evaporation. it is very common for a qanat to start below the foothills of mountains, where the water table is closest to the surface. from this source, the qanat tunnel slopes gently downward, slowly converging with the steeper slope of the land surface above, and the water finally flows out above ground where the two levels meet. to connect a populated or agricultural area with an aquifer, qanats must often extend for long distances. qanats are sometimes split into an underground distribution network of smaller canals called kariz. like qanats, these smaller canals are below ground to avoid contamination and evaporation. in some cases water from a qanat is stored in a reservoir, typically with night flow stored for daytime use. an ab anbar is an example of a traditional persian qanat - fed reservoir for drinking water. the qanat system has the advantage of being resistant to natural disasters such as earthquakes and floods, and to deliberate destruction in war. furthermore, it is almost insensitive to the levels of precipitation, delivering a flow with only gradual variations from wet to dry years. from a sustainability perspective, qanats are powered only by gravity, and thus have low operation & maintenance costs once built. qanats transfer freshwater from the mountain plateau to the lower - lying plains with saltier soil. this helps to control soil salinity and prevent desertification. qanat vs. spring - flow
surface water is water located on top of land, forming terrestrial ( surrounded by land on all sides ) waterbodies, and may also be referred to as blue water, as opposed to seawater and waterbodies like the ocean. the vast majority of surface water is produced by precipitation. as the climate warms in the spring, snowmelt runs off towards nearby streams and rivers contributing towards a large portion of human drinking water. levels of surface water lessen as a result of evaporation as well as water moving into the ground becoming ground - water. alongside being used for drinking water, surface water is also used for irrigation, wastewater treatment, livestock, industrial uses, hydropower, and recreation. for usgs water - use reports, surface water is considered freshwater when it contains less than 1, 000 milligrams per liter ( mg / l ) of dissolved solids. there are three major types of surface water. permanent ( perennial ) surface waters are present year round, and include lakes, rivers and wetlands ( marshes and swamps ). semi - permanent ( ephemeral ) surface water refers to bodies of water that are only present at certain times of the year including seasonally dry channels such as creeks, lagoons and waterholes. human - made surface water is water that can be contained by infrastructure that humans have assembled. this would be dammed artificial lakes, canals and artificial ponds ( e. g. garden ponds ) or swamps. the surface water held by dams can be used for renewable energy in the form of hydropower. hydropower is the forcing of surface water sourced from rivers and streams to produce energy. measurement surface water can be measured as annual runoff. this includes the amount of rain and snowmelt drainage left after the uptake of nature, evaporation from land, and transpiration from vegetation. in areas such as california, the california water science center records the flow of surface water and annual runoff by utilizing a network of approximately 500 stream gages collecting real time data from all across the state. this then contributes to the 8, 000 stream gage stations that are overseen by the usgs national stream gage record. this in turn has provided up - to - date records and documents of water data over the years.
</ctx>
The following are multiple choice questions (with answers) about STEM.
If a person is dehydrated in a desert without water sources like rivers or ponds, they can hydrate by
A. cracking open peyote
B. staring at sand
C. eating a pebble
D. licking a rock
Answer: | [
"cracking open peyote",
"staring at sand",
"eating a pebble",
"licking a rock"
] | A | |
qasc_1727 | qasc | <ctx>
, and often the thyroid gland ( depending on local referral patterns ). they also deal with diseases involving the skin, breast, soft tissue, trauma, peripheral artery disease and hernias and perform endoscopic procedures such as gastroscopy and colonoscopy. genetics – is a branch of biology concerned with the study of genes, genetic variation, and heredity in organisms. genitourinary system – the genitourinary system, or urogenital system, are the organs of the reproductive system and the urinary system. these are grouped together because of their proximity to each other, their common embryological origin and the use of common pathways, like the male urethra. also, because of their proximity, the systems are sometimes imaged together. geriatrics – or geriatric medicine, is a specialty that focuses on health care of elderly people. it aims to promote health by preventing and treating diseases and disabilities in older adults. there is no set age at which patients may be under the care of a geriatrician, or geriatric physician, a physician who specializes in the care of elderly people. rather, this decision is determined by the individual patient's needs, and the availability of a specialist. it is important to note the difference between geriatrics, the care of aged people, and gerontology, which is the study of the aging process itself. the term geriatrics comes from the greek γέρων geron meaning " old man ", and ἰατρός iatros meaning " healer ". however, geriatrics is sometimes called medical gerontology. gonad – a gonad, sex gland, or reproductive gland is a mixed gland that produces the gametes ( sex cells ) and sex hormones of an organism. in the female of the species the reproductive cells are the egg cells, and in the male the reproductive cells are the sperm. the male gonad, the testicle, produces sperm in the form of spermatozoa. the female gonad, the ovary, produces egg cells. both of these gametes are haploid cells. some hermaphroditic animals have a type of go
##phism, usually one of the sexes contributes more to the survival of their offspring ( usually the mother ). in such cases, the only adaptive benefit of having a second sex is the possibility of sexual selection, by which organisms can improve their genotype. evidence for this explanation for the evolution of sex is provided by the comparison of the rate of molecular evolution of genes for kinases and immunoglobulins in the immune system with genes coding other proteins. the genes coding for immune system proteins evolve considerably faster. further evidence for the red queen hypothesis was provided by observing long - term dynamics and parasite coevolution in a mixed sexual and asexual population of snails ( potamopyrgus antipodarum ). the number of sexuals, the number of asexuals, and the rates of parasitic infection for both were monitored. it was found that clones that were plentiful at the beginning of the study became more susceptible to parasites over time. as parasite infections increased, the once - plentiful clones dwindled dramatically in number. some clonal types disappeared entirely. meanwhile, sexual snail populations remained much more stable over time. in 2011, researchers used the microscopic roundworm caenorhabditis elegans as a host and the pathogenic bacterium serratia marcescens to generate a host – parasite coevolutionary system in a controlled environment, allowing them to conduct more than 70 evolution experiments testing the red queen hypothesis. they genetically manipulated the mating system of c. elegans, causing populations to mate either sexually, by self - fertilization, or a mixture of both within the same population. then they exposed those populations to the s. marcescens parasite. it was found that the self - fertilizing populations of c. elegans were rapidly driven extinct by the coevolving parasites, while sex allowed populations to keep pace with their parasites, a result consistent with the red queen hypothesis. currently, there is no consensus among biologists on the main selective forces maintaining sex. the competing models to explain the adaptive function of sex have been reviewed by birdsell and wills. evolution of aging the red queen hypothesis has been invoked by some authors to explain evolution
and colleagues have studied the molecular basis and evolution of life - cycle transitions in the ciliated green alga, chlamydomonas reinhardtii. they have identified genes in the mating - type ( mt ) locus and genes regulated by mt that control the transition between vegetative growth and gametic differentiation and zygote development. these include genes responsible for mate recognition, uniparental inheritance of chloroplast dna, and gametic differentiation, allowing analysis of their function and their evolution during speciation. they have also elucidated structural features of ciliary motility and explored the potential for producing algal biodiesel as a transportation fuel. honors founder's day distinguished faculty award, washington university, 1995 ; senior career recognition award, american society for cell biology, 1999 ; fellow, american women in science, 2009 ; fellow, american academy of arts and sciences, 2009 ; fellow, american academy of microbiology, 2013 ; fellow, american society for cell biology, 2017 ; doctor of letters honoris causa, meadville school of theology, 2022 ; member, national academy of sciences, 2023
in genetics, paternal mtdna transmission and paternal mtdna inheritance refer to the incidence of mitochondrial dna ( mtdna ) being passed from a father to his offspring. paternal mtdna inheritance is observed in a small proportion of species ; in general, mtdna is passed unchanged from a mother to her offspring, making it an example of non - mendelian inheritance. in contrast, mtdna transmission from both parents occurs regularly in certain bivalves. in animals paternal mtdna inheritance in animals varies. for example, in mytilidae mussels, paternal mtdna " is transmitted through the sperm and establishes itself only in the male gonad. " in testing 172 sheep, " the mitochondrial dna from three lambs in two half - sib families were found to show paternal inheritance. " an instance of paternal leakage resulted in a study on chickens. there is evidence that paternal leakage is an integral part of mitochondrial inheritance in drosophila simulans. in humans in human mitochondrial genetics, there is debate over whether or not paternal mtdna transmission is possible. many studies hold that paternal mtdna is never transmitted to offspring. this thought is central to mtdna genealogical dna testing and to the theory of mitochondrial eve. the fact that mitochondrial dna is maternally inherited enables researchers to trace maternal lineage far back in time. y chromosomal dna, paternally inherited, is used in an analogous way to trace the agnate lineage. in sexual reproduction, paternal mitochondria found in the sperm are actively decomposed, thus preventing " paternal leakage ". mitochondria in mammalian sperm are usually destroyed by the egg cell after fertilization. in 1999 it was reported that paternal sperm mitochondria ( containing mtdna ) are marked with ubiquitin to select them for later destruction inside the embryo. some in vitro fertilization ( ivf ) techniques, particularly intracytoplasmic sperm injection ( icsi ) of a sperm into an oocyte, may interfere with this. it is now understood that the tail of the sperm, which contains additional mtdna, may also enter the egg. this has led to increased controversy about the fate of paternal mtdna. some sources state that so
</ctx>
The following are multiple choice questions (with answers) about STEM.
Why do male gametes swim from male to female reproductive organs?
A. temperature changes
B. to leave the egg
C. fertilization
D. competition
Answer: | [
"temperature changes",
"to leave the egg",
"fertilization",
"competition"
] | C | |
arc_challenge_1 | arc_challenge | <ctx>
22. 2 ferromagnets and electromagnets learning objectives by the end of this section, you will be able to : • define ferromagnet. • describe the role of magnetic domains in magnetization. • explain the significance of the curie temperature. • describe the relationship between electricity and magnetism. ferromagnets only certain materials, such as iron, cobalt, nickel, and gadolinium, exhibit strong magnetic effects. such materials are called ferromagnetic, after the latin word for iron, ferrum. a group of materials made from the alloys of the rare earth elements are also used as strong and permanent magnets ; a popular one is neodymium. other materials exhibit weak magnetic effects, which are detectable only with sensitive instruments. not only do ferromagnetic materials respond strongly to magnets ( the way iron is attracted to magnets ), they can also be magnetized themselves – that is, they can be induced to be magnetic or made into permanent magnets. making connections : take - home experiment – refrigerator magnets we know that like magnetic poles repel and unlike poles attract. see if you can show this for two refrigerator magnets. will the magnets stick if you turn them over? why do they stick to the door anyway? what can you say about the magnetic properties of the door next to the magnet? do refrigerator magnets stick to metal or plastic spoons? do they stick to all types of metal? figure 22. 7 an unmagnetized piece of iron is placed between two magnets, heated, and then cooled, or simply tapped when cold. the iron becomes a permanent magnet with the poles aligned as shown : its south pole is adjacent to the north pole of the original magnet, and its north pole is adjacent to the south pole of the original magnet. note that there are attractive forces between the magnets. when a magnet is brought near a previously unmagnetized ferromagnetic material, it causes local magnetization of the material with unlike poles closest, as in figure 22. 7. ( this results in the attraction
the door frame or flush mount with wire moulding. installation is straightforward. with in - swinging applications, the electromagnet is typically installed in the opening corner of the door at the door's header. maglocks can also be installed vertically in the door opening when they are furnished with a full length housing. with this configuration the armature is through - bolted through the door and oriented to mate with the face of the electromagnet. the armature plate and electromagnet must touch in order to provide locking holding force. with out - swinging applications, the electromagnet is typically installed on the side of the door header. in this configuration, the armature is mounted on a z shaped bracket that orients the armature to mate with the electromagnet. magnetic locks are almost always part of a complete electronic security system. such a system may simply consist of an attached keycard reader or may be more complex, involving connection to a central computer that monitors the building's security. whatever the choice of locking system, fire safety is an important consideration. other variations and improvements on the electromagnetic locks have been developed. the most remarkable is the shear lock, where the armature does not directly pull off the face, but the load is instead in shear, like a mechanical stop. the shear magnetic lock allows a door to swing in both directions, as opposed to the original ( and now ubiquitous ) direct pull type, which normally works either in an in - swing or out - swing configuration. in order to provide the shear magnetic lock with the appropriate holding force, then two pins lock the armature onto the magnet itself, and ensure that the magnet locks into place. an improved " shear " electromagnetic lock was patented on may 2, 1989, by arthur, richard and david geringer of security door controls, an access control hardware manufacturing firm. the device outlined in their designs was the same in principle as the modern magnetic lock consisting of an electromagnet and an armature plate. the patent did not make any reference to the manufacturing methods of the electromagnet and detailed several variations on the design, including one that used a spring - loaded arm
22. 1 magnets learning objectives by the end of this section, you will be able to : • describe the difference between the north and south poles of a magnet. • describe how magnetic poles interact with each other. figure 22. 3 magnets come in various shapes, sizes, and strengths. all have both a north pole and a south pole. there is never an isolated pole ( a monopole ). all magnets attract iron, such as that in a refrigerator door. however, magnets may attract or repel other magnets. experimentation shows that all magnets have two poles. if freely suspended, one pole will point toward the north. the two poles are thus named the north magnetic pole and the south magnetic pole ( or more properly, north - seeking and south - seeking poles, for the attractions in those directions ). figure 22. 4 one end of a bar magnet is suspended from a thread that points toward north. the magnet's two poles are labeled n and s for north - seeking and south - seeking poles, respectively. figure 22. 5 unlike poles attract, whereas like poles repel. universal characteristics of magnets and magnetic poles it is a universal characteristic of all magnets that like poles repel and unlike poles attract. ( note the similarity with electrostatics : unlike charges attract and like charges repel. ) further experimentation shows that it is impossible to separate north and south poles in the manner that + and − charges can be separated. misconception alert : earth's magnetic poles earth acts like a very large bar magnet with its south - seeking pole near the geographic north pole. that is why the north pole of your compass is attracted toward the geographic north pole of earth – because the magnetic pole that is near the geographic north pole is actually a south magnetic pole! confusion arises because the geographic term " north pole " has come to be used ( incorrectly ) for the magnetic pole that is near the north pole. thus, " north magnetic pole " is actually a misnomer – it should be called the south magnetic pole. figure 22. 6 north and south poles always occur in
and not every particle has the needed amount of energy to attach at that part of the surface ( for different parts, different energies are needed ). if one has a flux f of particles incoming, part of it will be adsorbed, given by the adsorption flux where s here is the sticking coefficient. not only does this variable depend on the surface and on the energy of the incoming atom, but also on the chemical nature of both the particle and the surface. if both the particle and surface are made of a substance that easily reacts with other particles, it is easier for the atoms to stick to the surface. surface thermodynamics taking a look at the thermodynamics at the surface of the film, it is seen that bonds are broken, releasing energy, and bonds are formed, confining energy. the thermodynamics involved were modeled by the two germans w. kossel and n. stranski in 1920. this model is called terrace ledge kink model ( tlk ). the adatom can create more than one bond with the crystal, depending on the structure of the crystal. if it is a simple cubic lattice, the adatom can have up to 6 bonds, whereas in a face - centered cubic lattice, it can have up to 12 nearest neighbors. the more bonds created, the more energy is confined, making it harder to desorb the adatom. a special site for an adatom is a kink, where exactly half of the bonds with the surface can be created, also called the " half - crystal position ". magnetic adatoms adatoms, due to having fewer bonds than the other atoms in the crystal, have unbound electrons. these electrons have spin and therefore a magnetic moment. this magnetic moment has no preference for orientation until an external influence, like a magnetic field, is present. the structure of the adatoms on a surface can be adjusted by changing the external magnetic field. through this method theoretical situations, such as the atomic chain, can be simulated. quantum mechanics needs to be taken into account when using adatoms due to the small scale. the magnetic field created by an atom is caused mostly by the orbit and spin
</ctx>
The following are multiple choice questions (with answers) about STEM.
Which of the following statements best explains why magnets usually stick to a refrigerator door?
A. The refrigerator door is smooth.
B. The refrigerator door contains iron.
C. The refrigerator door is a good conductor.
D. The refrigerator door has electric wires in it.
Answer: | [
"The refrigerator door is smooth.",
"The refrigerator door contains iron.",
"The refrigerator door is a good conductor.",
"The refrigerator door has electric wires in it."
] | B | |
mmlu_aux_1576 | mmlu_auxiliary | <ctx>
##pidopterous insects collected at sarawak, in borneo, by mr. a. r. wallace, with descriptions of new species. ( continued from page 145 ). journal of the proceedings of the linnean society : zoology. 6 : 171 – 198. 1863. list of the specimens of lepidopterous insects in the collection of the british museum. part xxvii. crambites & tortricites. british museum ( natural history ), london. 1 – 286 pp. 1863. list of the specimens of lepidopterous insects in the collection of the british museum. part xxviii. tortricites & tineites. british museum ( natural history ), london. 287 – 561 pp. 1864. list of the specimens of lepidopterous insects in the collection of the british museum. part xxix. tineites. british museum ( natural history ), london. 533 – 835 pp. 1864. list of the specimens of lepidopterous insects in the collection of the british museum. part xxx. tineites. british museum ( natural history ), london. 837 – 1096 pp. 1865. descriptions of new species of the dipterous insects of new guinea. journal of the proceedings of the linnean society : zoology. 8 : 102 – 130. 1865. descriptions of some new species of dipterous insects from the island of salwatty, near new guinea. journal of the proceedings of the linnean society : zoology. 8 : 130 – 136. 1865. list of the specimens of lepidopterous insects in the collection of the british museum. part xxxii. supplement. - - part 2. british museum ( natural history ), london. 323 – 706 pp. 1866. list of the specimens of lepidopterous insects in the collection of the british museum. part xxxiv. supplement. - - part 4. british museum ( natural history ), london. 1121 – 1533 pp. 1866. appendix. a list of mammals, birds, insects, reptiles, fishes, shells, annelides, and diatomaceae, collected by myself in british columbia
##yla. he explains, with detailed diagrams of arthropod and chordate development and a brief, richly - cited but conversational text, how that symmetry is produced. the next four chapters are on the fly, the butterfly, the snake, and the cheetah. each consists of three to eight sections named in the style of rudyard kipling's just so stories, with titles like " how the butterfly got its spots " or " how the snake elongated its body ", though a couple of sections use " why? " rather than " how? ", as in " why the fly twirls his penis " ( it rotates during embryonic development under the control of different genes ). in these chapters, held explains the mechanics of evolutionary developmental biology, complete with accounts of what genes such as hox, hedgehog, and engrailed do to shape bodies. the third part is a single chapter providing " an evo - devo bestiary, " a long list of stories, such as " how the turtle got its shell ", " how the vampire bat reinvented running ", " how the quetzal got its crest ", and " how the firefly got its flashlight ". these are just so only in name, since each one is reliably cited to recent research rather than an author's whimsy. since by this point the reader has been introduced to the core elements of the evo - devo gene toolkit, held makes each section brief, 50 stories in 32 pages, and minimally technical : he discusses what the evo - devo system achieves in terms of each animal's structures and organs, ecology and behaviour. the main text is supported by an accurate glossary and thorough index. glossary terms are printed in boldface in the text, a helpful feature, while the glossary, like the text, is cited to the key research papers on which the book is based. the book thus provides a wide overview of evo - devo, with guidance on how to read more deeply on any chosen aspect. publication the book was published by cambridge university press as a paperback in 2014 ( ). the main text is 148 pages
, zoology, 19 pt. 1 pp. 57 – 58. hirst, a. stanley, 1911a. " on a collection of arachnida and chilopoda made by mr. s. a. neave in rhodesia, north of the zambesi ". manchester mem. lit. phil. soc. 2 : 56. 11 pages. hirst, a. stanley, 1911b. " on some new opiliones from japan and the loo - choo islands ". annals and magazine of natural history, london, ser. 8, 8 : 625 – 636. hirst, a. stanley, 1911c. " percy sladen trust expedition to the indian ocean in 1905 under the leadership of mr. j. stanley gardiner ". no. xviii. the araneae, opiliones and pseudoscorpiones. transactions of the linnean society of london, zoology, series 2, 14 : 379 – 395. hirst, [ a. ] stanley, 1912. " descriptions of new harvest - men of the family phalangodidae ". the annals and magazine of natural history, including zoology, botany, and geology, london, 8th series, 10 ( 55 ) : 63 – 84, plate i. hirst, a. stanley, 1913. " the percy sladen trust expedition to the indian ocean in 1905 under the leadership of mr. j. stanley gardiner ". no. ii. second report on the arachnida – the scorpions, pedipalpi, and supplementary notes on the opiliones and pseudoscorpions. transactions of the linnean society of london, zoology, series 2, 16 : 31 – 37. hirst, a. stanley, 1914. " report on the arachnida and myriapoda collected by the british ornithologists'union expedition and the wollaston expedition in dutch new guinea ". transactions of the linnean society of london, zoology, 20 : 325 – 334. hirst, a. stanley, 1919. " studies on acari no. 1. " the genus demodex, owen. trustees of the british museum, london.
##s des seances de l'academie des sciences, paris 91 : 1034 – 1038 ( in french ). olsson, a., emmett, d., henson, d. and fanning, e. 2006. activity patterns and abundance of microchiropteran bats at a cave roost in southwest madagascar ( subscription required ). african journal of ecology 44 : 401 – 403. peterson, r. l., eger, j. l. and mitchell, l. 1995. chiropteres. faune de madagascar 84 : 1 – 204 ( in french ). racey, p. a., goodman, s. m. and jenkins, r. k. b. 2010. the ecology and conservation of malagasy bats. pp. 369 – 404 in fleming, t. h. and racey, p. a. ( eds. ). island bats : evolution, ecology, and conservation. university of chicago press, 549 pp. rakotoarivelo, a. a., ranaivoson, n., ramilijaona, o. r., kofoky, a. f., racey, p. a. and jenkins, r. k. b. 2007. seasonal food habits of five sympatric forest microchiropterans in western madagascar ( subscription required ). journal of mammalogy 88 ( 4 ) : 959 – 966. ranivo, j. and goodman s. m. 2006. revision taxinomique des triaenops malgaches ( mammalia, chiroptera, hipposideridae ). zoosystema 28 ( 4 ) : 963 – 985 ( in french ). russell, a. l., ranivo, j., palkovacs, e. p., goodman, s. m. and yoder, a. d. 2007. working at the interface of phylogenetics and population genetics : a biogeographical analysis of triaenops spp. ( chiroptera : hipposideridae ). molecular ecology 16 : 839 – 851. russell, a. l., goodman, s. m.
</ctx>
The following are multiple choice questions (with answers) about STEM.
Just when scientists think they have discovered and recorded every species of insect that crawls on the earth they find something new. Earlier this month, Peter Jager, the head of Frankfurt's Senckenberg Research Institute reported the discovery of a unique eyeless spider! The creature that has a leg span of six centimeters and a body size of just twelve millimeters was found in a cave inside the tiny nation of Laos, in Southeast Asia. It was identified as a member of the Huntsman spider, a large family that consists of 1100 species. However, the other members of the family all have eyes. The spider is the first one that has no eyes. The scientists believe that the reason for the lack of eyes can be that these spiders spend their entire life inside caves. In the past they have seen other members of the Huntsman spiders living in caves with six or less eyes. They believe that the kind of spider has adapted to the fact that they do not really need the eyes to such an extreme measure that they have _ them altogether. But they are not the only creature to have done that. Over the years, scientists have discovered completely blind cave fish, and even a freshwater crab, which happens to live in the same cave as this eyeless spider! Surrounded by Burma and China on the northwest, Vietnam on the east, Cambodia on the South and Thailand on the west, the landlocked nation of Laos has been a treasure trove of an incredible number of new and exotic species. In the last few years researchers have discovered the Saola -- never-before seen antelope-like creature, a small deer species called muntjacs, a tiny striped rabbit and a rock rat that was thought to have been extinct 11 million years ago. In addition to that it is also home to over one hundred species of large mammals ranging from tigers to macaques and over 165 species of amphibians and reptiles like the terrible King cobra and the noisy Tokay gecko, a permanent resident inside most Laos households. What can we know about the newly discovered species of spider?
A. They are smaller in size.
B. They only live in Laos.
C. They belong to the Huntsman spider.
D. About 1100 such spiders were discovered.
Answer: | [
"They are smaller in size.",
"They only live in Laos.",
"They belong to the Huntsman spider.",
"About 1100 such spiders were discovered."
] | C | |
medmcqa_6003 | medmcqa | <ctx>
, o'donnell cj, ganesh sk ; charge hematology working group, larson eb, carlson cs, jarvik gp ; the electronic medical records and genomics ( emerge ) network. genetic variation associated with circulating monocyte count in the emerge network. hum mol genet. jan 12, 2013 [ epub ahead of print ] ( ). kim ds, burt aa, crosslin dr, robertson pd, ranchalis je, boyko ej, nickerson da, furlong ce, jarvik gp. novel common and rare genetic determinants of paraoxonase activity : fto, serpina12, and itgal. j lipid res. nov 15, 2012.. guo y, lanktree mb, taylor kc, hakonarson h, lange la, keating bj ; ibc 50k snp array bmi consortium. gene - centric meta - analyses of 108 912 individuals confirm known body mass index loci and reveal three novel signals. hum mol genet. 2013 jan 1 ; 22 ( 1 ) : 184 - 201. doi : 10. 1093 / hmg / dds396. epub 2012 sep 21. ( ; pmc3522401 ). kim ds, burt aa, ranchalis je, richter rj, marshall jk, nakayama ks, jarvik er, eintracht jf, rosenthal ea, furlong ce, jarvik gp. dietary cholesterol increases paraoxonase 1 enzyme activity. j lipid res. 2012 aug 15. [ epub ahead of print ] ( ). fullerton, sm, wolf, wa, brothers, kb, clayton, ew, crawford, dc, denny, jc, greenland, p, koenig, ba, leppig, ka, lindor, nm, mccarty, ca, mcguire, al, mcpeek hinz er, mirel, db, ramos, em, ritchie, md, smith, me, waudby, cj, burke, w, & jarvik, gp. return of individual research results from genome - wide association studies : experience of the electronic medical records & genomics
##hosis. proteomics clin appl. 2007 jun ; 1 ( 6 ) : 545 - 54. faria lc, gigou m, roque - afonso am, sebagh m, roche b, fallot g, ferrari tc, guettier c, dussaix e, castaing d, brechot c, samuel d. hepatocellular carcinoma is associated with an increased risk of hepatitis b virus recurrence after liver transplantation. gastroenterology. 2008 jun ; 134 ( 7 ) : 1890 - 9 ; quiz 2155. herve, j., sa cunha, a., liu, b., valogne, y., longuet, m., boisgard, r., bregerie, o., roux, j., guettier, c., cales, p., tavitian, b., samuel, d., clerc, j., brechot, c., faivre, j. internal radiotherapy of liver cancer with rat hepatocarcinoma - intestine - pancreas gene as a liver tumor - specific promoter. hum gene hum gene ther. 2008 sep ; 19 ( 9 ) : 915 - 26 battaglia s, benzoubir n, nobilet s, charneau p, samuel d, zignego al, atfi a, brechot c, bourgeade mf liver cancer - derived hepatitis c virus core proteins shift tgf - beta responses from tumor suppression to epithelial - mesenchymal transition plos one. 2009 ; 4 ( 2 ) : e4355. dos santos a, court m, thiers v, sar s, guettier c, samuel d, brechot c, garin j, demaugre f, masselon cd. identification of cellular targets in human intrahepatic cholangiocarcinoma using laser microdissection and accurate mass and time tag proteomics. mol cell proteomics. 2010 sep ; 9 ( 9 ) : 1991 - 2004. moniaux, n.
##ugliani, r., munoz rojas, v., martins, a., valadares, e., clarke, j., goes, j., kakkis, e., worden, m., sidman, m., cox, g. : a dose - optimization trial of laronidase ( aldurazyme ) in patients with mucopolysaccharidosis i. molecular genetics and metabolism, 96 ( 1 ) ; 13 - 19, 2009. clarke, l. a., wraith, j. e., beck, m., kolodny, e. h., pastores, g. m., muenzer, j., rapoport, d. m., berger, k. i., sidman, m., kakkis, e. d., cox, g. f. : long - term efficacy and safety of laronidase in the treatment of mucopolysaccharidosis i. pediatrics, 123 ; 229 - 240, 2009. trefz fk, burton bk, longo n, casanova mm, gruskin dj, dorenbaum a, kakkis ed, crombez ea, grange dk, harmatz p, lipson mh, milanowski a, randolph lm, vockley j, whitley cb, wolff ja, bebchuk j, christ - schmidt h, hennermann jb, sapropterin study group efficacy of sapropterin dihydrochloride in increasing phenylalanine tolerance in children with phenylketonuria : a phase iii, randomized, double - blind, placebo - controlled study. j pediatr, 2009 mar 2. kakkis, emil ( 2022 ) saving ryan, impositivity media,
##elon cd. identification of cellular targets in human intrahepatic cholangiocarcinoma using laser microdissection and accurate mass and time tag proteomics. mol cell proteomics. 2010 sep ; 9 ( 9 ) : 1991 - 2004. moniaux, n., song, h., darnaud, m., garbin, k., gigou, m., mitchell, c., samuel, d., jamot, l., amouyal, p., amouyal, g., brechot, c., faivre, j. human hepatocarcinoma - intestine - pancreas / pancreatitis - associated protein cures fas - induced acute liver failure in mice by attenuating free - radical damage in injured livers. hepatology. 2011 feb ; 53 ( 2 ) : 618 - 27. doi : 10. 1002 / hep. 24087 arumugam m, raes j, brechot c, merieux a, weissenbach j, ehrlich sd, bork p. enterotypes of the human gut microbiome. nature. 2011 may 12 ; 473 ( 7346 ) : 174 - 80. claire lacoste, julie herve, myriam bou nader, alexandre dos santos, nicolas moniaux, yannick valogne, rodrick montjean, olivier dorseuil, didier samuel, doris cassio, carla portulano, nancy carrasco, christian brechot, and jamila faivre. the iodide transporter nis regulates cancer cell motility and invasiveness by interacting with the rhogef larg. cancer research. 2012 nov 1 ; 72 ( 21 ) : 5505 - 15 n. benzoubir, c. lejamtel, b. testoni, s. battaglia, b. benassi, c. desterke, d. samuel, m. levrero, c. brechot and mf bourgeade. hcv core - mediated activation of latent tgf - β via th
</ctx>
The following are multiple choice questions (with answers) about STEM.
30 year male with chronic diarrhoea, anemia, raised liver enzymes. Most likely associated with
A. Antimitochondrial antibody
B. Anti-endomysial antibody
C. Anti-smooth muscle antibody
D. Antinuclear antibody
Answer: | [
"Antimitochondrial antibody",
"Anti-endomysial antibody",
"Anti-smooth muscle antibody",
"Antinuclear antibody"
] | B | Anti-endomysial antibody. The clinical features are suggestive of celiac sprue. The typical symptoms of celiac sprue are weight loss, chronic diarrhoea, abdominal distension, growth retardation and anemia. Raised liver enzymes are not mentioned in many textbooks, including Harrison; but according to CMDT, 'Mild elevations of aminotransferases are seen in up to 40% of patients.' Important points about the diagnosis of celiac sprue: small intestinal biopsy demonstrates characteristic features, but these are not specific for celiac sprue and can be seen in tropical sprue also. Serological tests helpful in the diagnosis are anti-endomysial antibody (anti-EMA) and anti-tissue transglutaminase antibody (anti-tTGA); a negative test excludes the diagnosis of celiac sprue. The diagnosis of celiac sprue still rests upon clinical demonstration of malabsorption, demonstration of the intestinal lesion by small bowel biopsy, and unequivocal improvement in both symptoms and mucosal histology on gluten withdrawal from the diet. Antimitochondrial antibody testing is done for primary biliary cirrhosis; the points against that diagnosis here are that it typically presents in middle-aged females, and its initial clinical manifestations are pruritus, fatigue and a characteristic elevation of alkaline phosphatase. Anti-smooth muscle antibody testing is done for autoimmune hepatitis, which is common in young women and usually presents as an acute attack of hepatitis.
medmcqa_5733 | medmcqa | <ctx>
would it be easier if monochromatic light were used?
source used. recessed lighting ( often called " pot lights " in canada, " can lights " or'high hats " in the us ) is popular, with fixtures mounted into the ceiling structure so as to appear flush with it. these downlights can use narrow beam spotlights, or wider - angle floodlights, both of which are bulbs having their own reflectors. there are also downlights with internal reflectors designed to accept common'a'lamps ( light bulbs ) which are generally less costly than reflector lamps. downlights can be incandescent, fluorescent, hid ( high intensity discharge ) or led. track lighting, invented by lightolier, was popular at one period of time because it was much easier to install than recessed lighting, and individual fixtures are decorative and can be easily aimed at a wall. it has regained some popularity recently in low - voltage tracks, which often look nothing like their predecessors because they do not have the safety issues that line - voltage systems have, and are therefore less bulky and more ornamental in themselves. a master transformer feeds all of the fixtures on the track or rod with 12 or 24 volts, instead of each light fixture having its own line - to - low voltage transformer. there are traditional spots and floods, as well as other small hanging fixtures. a modified version of this is cable lighting, where lights are hung from or clipped to bare metal cables under tension. a sconce is a wall - mounted fixture, particularly one that shines up and sometimes down as well. a torchere is an uplight intended for ambient lighting. it is typically a floor lamp but may be wall - mounted like a sconce. further interior light fixtures include chandeliers, pendant lights, ceiling fans with lights, close - to - ceiling or flush lights, and various types of lamps the portable or table lamp is probably the most common fixture, found in many homes and offices. the standard lamp and shade that sits on a table is general lighting, while the desk lamp is considered task lighting. magnifier lamps are also task lighting. the illuminated ceiling was once popular in the 1960s and 1970s but fell out of favor after the 1980s. this uses diffuser panels hung
chloride, and xylene. annual monitoring is required for parameters such as turbidity, ph, and residual free chlorine. legionella management must be determined in the building. nourishment well requires the project to operate food outlets offering at least two varieties of fruits and non - fried vegetables that can clearly be seen by users. other types of food such as packaged foods and beverages must display the total calories per serving, macronutrient content, and sugar content. the owner must communicate with users on food allergy issues. high - sugar foods, those with more than 25 grams of sugar per serving, are banned from the menu in dining spaces or at least identified in the items for users to make a decision. light daylighting design is integrated into the project through daylight simulation techniques such as spatial daylight analysis, which shows how much daylight illuminates the space throughout working hours. adequate daylighting levels can inform the interior layout or the building design, such as the distance from windows. for projects where daylight access is difficult, circadian lighting design can replace daylight, with an intrinsically photosensitive retinal ganglion cell ( iprgc ) receiving enough light, at least 150 melanopic lux ( eml ) or 275 eml. well still keeps visual lighting design, the conventional lighting method for users'visual comfort and acuity. lighting design standard in well follows illuminating engineering society lighting library or en standard 12464 - 1 & 2 or iso 8995 - 1 or chinese standard gb 50034 or cibse sll code for lighting. movement this subject is the summary of optimization points which well requires the project to achieve, at least one point from four optimization features, specifically stair circulation, cycling network with bike parking or showers, lockers and changing rooms, pedestrian - friendly environment or mass transit within walking distance, and free sport facilities or green space for outdoor activities. ergonomic workstation design is intended for users to adjust freely, such as the monitor, work surface, chair, and foot support. thermal comfort well ensures the thermal indoor environment is controlled. for hvac control system ( mechanically conditioned space ), acceptable thermal
the battery fits quite well in the base of the fixture. modern installations in the united states and europe, modern emergency lighting is installed in virtually every commercial and high occupancy residential building. the lights consist of one or more incandescent bulbs or one or more clusters of high - intensity light - emitting diodes ( led ). the emergency lighting heads have usually been either incandescent par 36 sealed beams or wedge base lamps, but led illumination is increasingly common. all units have some sort of a device to focus and intensify the light they produce. this can either be in the form of a plastic cover over the fixture, or a reflector placed behind the light source. most individual light sources can be rotated and aimed for where light is needed most in an emergency, such as toward fire exits. modern fixtures usually have a test button of some sort which simulates a power failure and causes the unit to switch on the lights and operate from battery power, even if the main power is still on. modern systems are operated with relatively low voltage, usually from 6 - 12 vdc. this both reduces the size of the batteries required and reduces the load on the circuit to which the emergency light is wired. modern fixtures include a small transformer in the base of the fixture which steps - down the voltage from main current to the low voltage required by the lights. batteries are commonly made of lead - calcium, and can last for 10 years or more on continuous charge. us fire safety codes require a minimum of 90 minutes on battery power during a power outage along the path of egress. compliance codes new york city requires emergency lights to carry a calendar number signifying approval for local installation, chicago requires emergency lighting to have a metal face plate, and los angeles requires additional exit signs be installed within of the floor around doors to mark exits during a fire, as smoke rises and tends to block out higher installed units. as there are strict requirements to provide an average of one foot candle of light along the path of egress, emergency lighting should be selected carefully to ensure codes are met. in recent years, emergency lighting has made less use of the traditional two - head unit - with manufacturers stretching the concept of emergency lighting
</ctx>
The following are multiple choice questions (with answers) about STEM.
What is the recommended illumination for general office work?
A. 100 lux
B. 400 lux
C. 900 lux
D. 1000 lux
Answer: | [
"100 lux",
"400 lux",
"900 lux",
"1000 lux"
] | B | The recommended illumination for casual reading is 100 lux, 400 lux for office work, 900 lux for fine assembly and for watch making 2000 - 3000 lux is recommended. Ref: Park's Textbook of Preventive Medicine, 17th Edition, Page 519. |
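The figures in this explanation lend themselves to a simple lookup. A minimal Python sketch, using only the values quoted in the explanation above (the dictionary name is illustrative):

    # Recommended illumination (lux) as quoted in the explanation above.
    RECOMMENDED_LUX = {
        "casual reading": 100,
        "office work": 400,
        "fine assembly": 900,
        "watch making": (2000, 3000),   # quoted as a range
    }
    print(RECOMMENDED_LUX["office work"])   # 400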
aquarat_49494 | aquarat | <ctx>
of these interest and principal payments must equal the cumulative fixed payments at time t i. e. m_a t. evaluating the first integral on the right we obtain an expression for i ( t ), the interest paid : unsurprisingly the second integral evaluates to p0 − p ( t ) and therefore : the reader may easily verify that this expression is algebraically identical to the one above. loan cost factor the cost of a loan is simply the annual rate multiplied by loan period : let s = rt. then we may define loan cost factor c ( s ) such that c = p0 c ( s ) i. e. : c ( s ) is the cost per unit of currency loaned. the function c ( s ) is characterised by having a limiting value of 1 when s is close to zero since for small values of s, exp ( −s ) ≈ 1 − s and the denominator simplifies to s. also when s is very large, exp ( −s ) is small so c ( s ) ≈ s and thus loan cost c ≈ p0 rt ( rt >> 0 ). by way of example, consider a loan of 1000000 at 10 % repaid over 20 years. then s = 0. 1 × 20 = 2. the product rt is an easily obtained but important parameter in determining loan cost according to the equation c = p0 × c ( s ). this is best illustrated by plotting the cost factor function for s values in domain [ 0 ; 5 ]. the linear behaviour of the function for higher values of s is clear. equivalent simple interest cost factor for a fixed term loan of t years, we may compare the above loan cost factor against an equivalent simple interest cost factor 1 + s_e where s_e = r_e t and r_e is the equivalent simple interest rate : it is straightforward to determine s_e in terms of s. dividing by loan time period t will then give the equivalent simple interest rate. more challenging is the reverse determination of s given s_e. in his book problem solving with true basic, dr b. d. hahn has a short section on certain'hire purchase'schemes in which interest is calculated in advance in one lump sum, which is added to the
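The cost-factor arithmetic in this excerpt can be checked numerically. A minimal Python sketch follows; the closed form c(s) = s / (1 − exp(−s)) is an assumption inferred from the stated limits (c(s) → 1 as s → 0, c(s) ≈ s for large s), since the equation itself was lost in extraction, and the equivalence c(s) = 1 + s_e is likewise an assumed reading of the comparison described:

    import math

    def loan_cost_factor(s: float) -> float:
        # Cost per unit of currency loaned, with s = r * t.
        # Assumed form s / (1 - exp(-s)); it reproduces both stated limits.
        return 1.0 if s == 0 else s / (1.0 - math.exp(-s))

    # Worked example from the text: 1,000,000 at 10% over 20 years, so s = 2.
    p0, r, t = 1_000_000, 0.10, 20
    s = r * t
    print(loan_cost_factor(s))          # ~2.313
    print(p0 * loan_cost_factor(s))     # ~2,313,000 repaid in total over the term

    # Equivalent simple-interest cost factor, assuming 1 + s_e = c(s):
    s_e = loan_cost_factor(s) - 1
    print(s_e / t)                      # equivalent simple rate r_e, ~6.6% here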
is also called the logarithmic return, continuously compounded return, or force of interest. the e - folding time τ is the time it takes to grow by a factor e. the doubling time t is the time it takes to double. the percent increase r ( a dimensionless number ) in a period p. the quantities k, τ, and t, and for a given p also r, have a one - to - one connection given by the following equation ( which can be derived by taking the natural logarithm of the above ) : where k = 0 corresponds to r = 0 and to τ and t being infinite. if p is the unit of time the quotient t / p is simply the number of units of time. using the notation t for the ( dimensionless ) number of units of time rather than the time itself, t / p can be replaced by t, but for uniformity this has been avoided here. in this case the division by p in the last formula is not a numerical division either, but converts a dimensionless number to the correct quantity including unit. a popular approximated method for calculating the doubling time from the growth rate is the rule of 70, that is, t ≈ 70 / r. reformulation as log - linear growth if a variable x exhibits exponential growth, then the log ( to any base ) of x grows linearly over time, as can be seen by taking logarithms of both sides of the exponential growth equation : this allows an exponentially growing variable to be modeled with a log - linear model. for example, if one wishes to empirically estimate the growth rate from intertemporal data on x, one can linearly regress log x on t. differential equation the exponential function satisfies the linear differential equation : saying that the change per instant of time of x at time t is proportional to the value of x ( t ), and x has the initial value x ( 0 ). the differential equation is solved by direct integration : so that in the above differential equation, if k < 0, then the quantity x experiences exponential decay. for a nonlinear variation of this growth model see logistic function. other growth rates in the long run, exponential growth of any kind will overtake linear growth of any kind ( that is the basis of the malthusian catastrophe ) as well as any polynomial growth, that is, for all α : there is a whole hierarchy of conceiva
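The rule of 70 quoted above is easy to compare against the exact doubling time. A short Python check, assuming annual compounding at r percent:

    import math

    def doubling_time_exact(r_percent: float) -> float:
        # Exact doubling time under annual compounding at r percent.
        return math.log(2) / math.log(1 + r_percent / 100)

    def doubling_time_rule_of_70(r_percent: float) -> float:
        # The approximation quoted in the text: t ~ 70 / r.
        return 70 / r_percent

    for r in (2, 5, 8, 10):
        print(r, round(doubling_time_exact(r), 2), doubling_time_rule_of_70(r))
    # e.g. at r = 8: exact ~9.01 years vs 70/8 = 8.75 years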
a mortgage ( 1 million for 20 years @ r = 10 % ) calculated firstly according to the above time continuous model and secondly using the excel pv function. as may be seen the curves are virtually indistinguishable – calculations effected using the model differ from those effected using the excel pv function by a mere 0. 3 % ( max ). the data from which the graph ( s ) were derived can be viewed comparison with similar physical systems define the " reverse time " variable z = T − t. ( at t = 0, z = T and at t = T, z = 0 ). then : this may be recognized as a solution to the " reverse time " differential equation : electrical / electronic engineers and physicists will be familiar with an equation of this nature : it is an exact analogue of the type of differential equation which governs ( for example ) the charging of a capacitor in an rc circuit. the key characteristics of such equations are explained in detail at rc circuits. for home owners with mortgages the important parameter to keep in mind is the time constant of the equation which is simply the reciprocal of the annual interest rate r. so ( for example ) the time constant when the interest rate is 10 % is 10 years and the period of a home loan should be determined – within the bounds of affordability – as a minimum multiple of this if the objective is to minimise interest paid on the loan. mortgage difference and differential equation the conventional difference equation for a mortgage loan is relatively straightforward to derive - balance due in each successive period is the previous balance plus per period interest less the per period fixed payment. given an annual interest rate r and a borrower with an annual payment capability M_N ( divided into N equal payments made at time intervals δt where δt = 1 / N years ), we may write : if N is increased indefinitely so that δt → 0, we obtain the continuous time differential equation : note that for there to be a continually diminishing mortgage balance, the following inequality must hold : p0 is the same as p ( 0 ) – the original loan amount or loan balance at time t = 0. solving the difference equation we begin by re - writing the difference equation in rec
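The difference equation described in this excerpt can be iterated directly and compared with the continuous model. A sketch in Python; the payment rate M = p0·r / (1 − exp(−r·T)) is taken from the continuous annuity solution the excerpt discusses, and the closed-form balance is an assumption consistent with dp/dt = r·p − M:

    import math

    p0, r, T, N = 1_000_000, 0.10, 20, 12   # the 1 million, 20-year, 10% example
    dt = 1.0 / N
    # Continuous-model payment rate that retires the loan exactly at t = T:
    M = p0 * r / (1 - math.exp(-r * T))

    # Difference equation: next balance = balance + per-period interest - payment.
    P = p0
    for _ in range(N * T):
        P = P * (1 + r * dt) - M * dt
    print(round(P, 2))      # near zero; the small residual is discretisation error

    # Continuous-time balance for comparison (zero at t = T by construction):
    def balance(t):
        return p0 * math.exp(r * t) - (M / r) * (math.exp(r * t) - 1)
    print(round(balance(T), 6))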
continuous compounding rates are sometimes converted into the continuous compound interest rate equivalent because the continuous equivalent is more convenient ( for example, more easily differentiated ). each of the formulæ above may be restated in their continuous equivalents. for example, the present value at time 0 of a future payment at time t can be restated in the following way, where e is the base of the natural logarithm and r is the continuously compounded rate : this can be generalized to discount rates that vary over time : instead of a constant discount rate r, one uses a function of time r ( t ). in that case the discount factor, and thus the present value, of a cash flow at time t is given by the integral of the continuously compounded rate r ( t ) : indeed, a key reason for using continuous compounding is to simplify the analysis of varying discount rates and to allow one to use the tools of calculus. further, for interest accrued and capitalized overnight ( hence compounded daily ), continuous compounding is a close approximation for the actual daily compounding. more sophisticated analysis includes the use of differential equations, as detailed below. examples using continuous compounding yields the following formulas for various instruments : annuity perpetuity growing annuity growing perpetuity annuity with continuous payments these formulas assume that payment a is made in the first payment period and annuity ends at time t. differential equations ordinary and partial differential equations ( odes and pdes ) – equations involving derivatives and one ( respectively, multiple ) variables – are ubiquitous in more advanced treatments of financial mathematics. while time value of money can be understood without using the framework of differential equations, the added sophistication sheds additional light on time value, and provides a simple introduction before considering more complicated and less familiar situations. this exposition follows. the fundamental change that the differential equation perspective brings is that, rather than computing a number ( the present value now ), one computes a function ( the present value now or at any point in future ). this function may then be analysed – how does its value change over time – or compared with other functions. formally, the statement that " value decreases over time " is given by defining the linear differential operator as : this states that value
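The continuous-discounting identities above translate directly into code. A small Python sketch; the trapezoidal sum stands in for the integral of r(u) in the time-varying case, and the example rate and horizon are arbitrary illustrations:

    import math

    def pv_constant(fv: float, r: float, t: float) -> float:
        # Present value of a payment fv at time t, constant rate r: fv * e^(-r t).
        return fv * math.exp(-r * t)

    def pv_varying(fv: float, r_of_t, t: float, steps: int = 10_000) -> float:
        # Discount factor exp(-integral of r(u) du), integral by trapezoid rule.
        h = t / steps
        area = sum((r_of_t(i * h) + r_of_t((i + 1) * h)) / 2 * h
                   for i in range(steps))
        return fv * math.exp(-area)

    print(pv_constant(100.0, 0.05, 10))            # ~60.65
    print(pv_varying(100.0, lambda u: 0.05, 10))   # agrees with the constant case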
</ctx>
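The close numerical agreement quoted above (continuous model vs. the Excel PV function, at most about 0.3 %) can be reproduced with a few lines of code. A minimal Python/NumPy sketch follows; the parameter values are the ones given in the passage, while the closed-form balance expressions are the standard solutions of the difference and differential equations the passage refers to, written out here as an assumption rather than copied from the source.

```python
# Sketch only: continuous-time mortgage balance vs. monthly amortization
# (the latter is what a spreadsheet PV/PMT calculation produces).
import numpy as np

P0, r, T = 1_000_000.0, 0.10, 20             # principal, annual rate, years

# Continuous model: dP/dt = r P - M, with terminal condition P(T) = 0
M = r * P0 / (1.0 - np.exp(-r * T))          # continuous annual payment rate
t = np.arange(0, 12 * T + 1) / 12.0          # month boundaries, in years
P_cont = P0 * np.exp(r * t) - (M / r) * (np.exp(r * t) - 1.0)

# Discrete model: n = 12 T equal monthly payments at per-period rate i = r/12
i, n = r / 12.0, 12 * T
m = P0 * i / (1.0 - (1.0 + i) ** -n)         # fixed monthly payment
k = np.arange(n + 1)                         # month index
P_disc = P0 * (1 + i) ** k - m * ((1 + i) ** k - 1.0) / i

print(f"max balance difference: {np.abs(P_cont - P_disc).max() / P0:.3%} of P0")
# prints a fraction of a percent, in line with the close agreement noted above
```

With monthly payments the two balance curves stay within a few tenths of a percent of the principal, which matches the "virtually indistinguishable" description in the passage.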
The following are multiple choice questions (with answers) about STEM.
If money is invested at r percent interest, compounded annually, the amount of investment will double in approximately 70/r years. If Pat's parents invested $ 10000 in a long term bond that pays 8 percent interest, compounded annually, what will be the approximate total amount of investment 18 years later, when Pat is ready for college?
A. A)$40000
B. B)$15000
C. C)$12000
D. D)$10000
Answer: | [
"A)$40000",
"B)$15000",
"C)$12000",
"D)$10000"
] | A | Since the investment doubles every 70/r years, for r=8 it doubles in 70/8 ≈ 9 years (we are not asked for the exact amount, so this approximation will do). Thus in 18 years the investment doubles twice and becomes ($10,000*2)*2=$40,000 (after 9 years it grows to $10,000*2=$20,000, and after another 9 years to $20,000*2=$40,000).
Answer: A. |
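As a quick sanity check, the rule-of-70 estimate in the explanation above can be compared with exact annual compounding; a minimal Python sketch (illustrative only):

```python
# Rule-of-70 estimate vs. exact compound interest for the problem above.
principal, rate_pct, years = 10_000, 8, 18

doubling_time = 70 / rate_pct            # ~8.75 years per doubling
doublings = years / doubling_time        # ~2.06 doublings in 18 years
estimate = principal * 2 ** round(doublings)

exact = principal * (1 + rate_pct / 100) ** years
print(estimate, round(exact))            # 40000 vs. 39960 -- answer A holds
```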
medmcqa_5169 | medmcqa | <ctx>
that of the ambient atmosphere is a function of the altitude, and increases with an increase in altitude in direct proportion to pressure drop. the amount of supplementary oxygen actually used is also proportional to respiratory minute volume, which depends on the level of exertion. drop tanks to avoid carrying the weight of all the cylinders all the way up and down a peak, one or more cylinders can be cached along the route for retrieval on the return. this implies the use of two or more cylinders to carry the oxygen supply for the summit day, which may last up to 18 hours, making an endurance of about 6 hours for a full cylinder a practical choice. this strategy is not possible if a single cylinder is used. types of apparatus high altitude supplemental oxygen mountaineering breathing apparatus provides oxygen at a higher concentration than available from atmospheric air in a naturally hypoxic environment. it needs to be lightweight and to be reliable in severe cold, including not getting choked with deposited frost from the exhaled gas, which is saturated with water vapour at body temperature. both chemically generated and compressed gas oxygen have been used in experimental closed - circuit oxygen mountaineering systems, but open circuit has mostly been used, although relatively wasteful, as the equipment is more reliable. for mountaineering at high altitudes where the user has to carry the stored oxygen, open circuit demand or closed circuit may be used to maximise endurance of the set. breathing pure oxygen results in an elevated partial pressure of oxygen in the blood : a climber breathing pure oxygen at the summit of mt. everest has a greater arterial oxygen partial pressure than breathing air at sea level. this results in being able to exert greater physical effort at altitude. the exothermic carbon dioxide absorption reaction of a rebreather helps keep the scrubber contents from freezing while it is in use, and helps reduce heat loss from the user, but is vulnerable to freezing during periods when it is not in active use. unpressurised aircraft and high altitude parachuting have similar requirements and working environment to mountaineering, but weight is less of a problem. closed circuit oxygen rebreather in a closed circuit system, any unused oxygen is retained and rebreathed, so the utilisation
) 0. 100 l of co2 at 307 torr and 26 °c ( b ) 8. 75 l of c2h4, at 378. 3 kpa and 483 k ( c ) 221 ml of ar at 0. 23 torr and −54 °c 36. a high altitude balloon is filled with 1. 41 × 10^4 l of hydrogen at a temperature of 21 °c and a pressure of 745 torr. what is the volume of the balloon at a height of 20 km, where the temperature is −48 °c and the pressure is 63. 1 torr? 37. a cylinder of medical oxygen has a volume of 35. 4 l, and contains o2 at a pressure of 151 atm and a temperature of 25 °c. what volume of o2 does this correspond to at normal body conditions, that is, 1 atm and 37 °c? 38. a large scuba tank ( figure 9. 16 ) with a volume of 18 l is rated for a pressure of 220 bar. the tank is filled at 20 °c and contains enough air to supply 1860 l of air to a diver at a pressure of 2. 37 atm ( a depth of 45 feet ). was the tank filled to capacity at 20 °c? 39. a 20. 0 - l cylinder containing 11. 34 kg of butane, c4h10, was opened to the atmosphere. calculate the mass of the gas remaining in the cylinder if it were opened and the gas escaped until the pressure in the cylinder was equal to the atmospheric pressure, 0. 983 atm, and a temperature of 27 °c. 40. while resting, the average 70 - kg human male consumes 14 l of pure o2 per hour at 25 °c and 100 kpa. how many moles of o2 are consumed by a 70 kg man while resting for 1. 0 h? 41. for a given amount of gas showing ideal behavior, draw labeled graphs of : ( a ) the variation of p with v ( b ) the variation of v with t ( c ) the variation of p with t ( d ) the variation of with v 42. a liter of methane gas, ch4, at stp contains more atoms of hydrogen than does a liter of pure hydrogen gas
13. 6 humidity, evaporation, and boiling 49. dry air is 78. 1 % nitrogen. what is the partial pressure of nitrogen when the atmospheric pressure is? 50. ( a ) what is the vapor pressure of water at? ( b ) what percentage of atmospheric pressure does this correspond to? ( c ) what percent of air is water vapor if it has 100 % relative humidity? ( the density of dry air at is. ) 51. pressure cookers increase cooking speed by raising the boiling temperature of water above its value at atmospheric pressure. ( a ) what pressure is necessary to raise the boiling point to? ( b ) what gauge pressure does this correspond to? 52. ( a ) at what temperature does water boil at an altitude of 1500 m ( about 5000 ft ) on a day when atmospheric pressure is ( b ) what about at an altitude of 3000 m ( about 10, 000 ft ) when atmospheric pressure is 53. what is the atmospheric pressure on top of mt. everest on a day when water boils there at a temperature of 54. at a spot in the high andes, water boils at, greatly reducing the cooking speed of potatoes, for example. what is atmospheric pressure at this location? 55. what is the relative humidity on a day when the air contains of water vapor? 56. what is the density of water vapor in on a hot dry day in the desert when the temperature is and the relative humidity is 6. 00 %? 57. a deep - sea diver should breathe a gas mixture that has the same oxygen partial pressure as at sea level, where dry air contains 20. 9 % oxygen and has a total pressure of. ( a ) what is the partial pressure of oxygen at sea level? ( b ) if the diver breathes a gas mixture at a pressure of, what percent oxygen should it be to have the same oxygen partial pressure as at sea level? 58. the vapor pressure of water at is. using the ideal gas law, calculate the density of water vapor in that creates a partial pressure equal to this vapor pressure. the result should be the same as the saturation vapor density at that temperature 59. air in human lungs has a temperature of and a saturation vapor density
system may be used to ensure a minimum fresh gas supply, with a demand valve to add gas if the loop is emptied. gas supply and absorbent duration would usually be matched for efficiency. insulation may be needed for the breathing loop for use in sub - zero temperatures where freezing of the absorbent would reduce its effectiveness. a water trap may be provided to collect condensed water from the absorbent reaction and from metabolic products. monitoring functions may include : cylinder pressure ( analog and digital ) estimated remaining duration of use until residual low pressure warning is activated temperature indication automatic recording of warning signals optical and / or acoustic warning signals may be provided for : low gas pressure in the cylinder alarm triggered by lack of user movement manually triggered alarm. mountaineering rebreathers mountaineering rebreathers provide oxygen at a higher concentration than available from atmospheric air in a naturally hypoxic environment. they need to be lightweight and to be reliable in severe cold including not getting choked with deposited frost. a high rate of system failures due to extreme cold has not been solved. breathing pure oxygen results in an elevated partial pressure of oxygen in the blood : a climber breathing pure oxygen at the summit of mt. everest has a greater oxygen partial pressure than breathing air at sea level. this results in being able to exert greater physical effort at altitude. the exothermic reaction helps keep the scrubber contents from freezing, and helps reduce heat loss from the user. both chemical and compressed gas oxygen have been used in experimental closed - circuit oxygen systems – the first on mount everest in 1938. the 1953 expedition used closed - circuit oxygen equipment developed by tom bourdillon and his father for the first assault team of bourdillon and evans ; with one " dural " 800l compressed oxygen cylinder and soda lime canister ( the second ( successful ) assault team of hillary and tenzing used open - circuit equipment ). rebreathers for unpressurised aircraft and high altitude parachuting have similar requirements and working environment to mountaineering, but weight is less of a problem. the soviet ida71 rebreather was also manufactured in a high altitude version, which was operated as an oxygen rebreather. anaesthesia systems
</ctx>
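Several of the numbered exercises in the passage above are direct applications of the combined gas law, P1V1/T1 = P2V2/T2. As an illustration, exercise 36 (the high-altitude balloon) can be worked in a few lines of Python; the numbers are the ones given in the exercise, and temperatures must first be converted to kelvin:

```python
# Combined gas law applied to exercise 36: V2 = V1 * (P1/P2) * (T2/T1)
V1, P1, T1 = 1.41e4, 745.0, 21.0 + 273.15    # L, torr, K at launch
P2, T2     = 63.1, -48.0 + 273.15            # torr, K at 20 km altitude

V2 = V1 * (P1 / P2) * (T2 / T1)
print(f"{V2:.3g} L")                         # ~1.27e+05 L
```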
The following are multiple choice questions (with answers) about STEM.
At an altitude of 6500 meters, the atmospheric pressure is 347 mmHg. What is the inspired PO2?
A. 73 mm Hg
B. 63 mm Hg
C. 53 mm Hg
D. 83 mm Hg
Answer: | [
"73 mm Hg",
"63 mm Hg",
"53 mm Hg",
"83 mm Hg"
] | B | According to the law of partial pressures, the pressure exerted by a gas in a mixture is directly proportional to its concentration in the mixture. In inspired gas: O2 ≈ 21%, N2 ≈ 78%, and PH2O = 47 mmHg (air is fully humidified on inspiration). The pressure available to the dry gases is therefore 347 - 47 = 300 mmHg, so the inspired PO2 = 0.21 x 300 ≈ 63 mmHg. |
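The arithmetic in this explanation reduces to one line: remove the water-vapour pressure from the barometric pressure, then take the oxygen fraction of what remains. A minimal Python sketch, using the values from the question and the usual 21 % oxygen fraction for dry air:

```python
# Inspired PO2 at 6500 m, following the explanation above.
Pb   = 347.0   # barometric pressure, mmHg
PH2O = 47.0    # water vapour pressure of fully humidified air at 37 C, mmHg
FiO2 = 0.21    # oxygen fraction of dry air (~21 %)

PiO2 = FiO2 * (Pb - PH2O)   # Dalton's law applied to the humidified mixture
print(round(PiO2))          # 63 mmHg -- answer B
```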
m1_2773 | m1 | <ctx>
we may prove that : this inequality implies that the amount by which we can be sure the function is decreased depends on a trade - off between the two terms in square brackets. the first term in square brackets measures the angle between the descent direction and the negative gradient. the second term measures how quickly the gradient changes along the descent direction. in principle inequality ( ) could be optimized over and to choose an optimal step size and direction. the problem is that evaluating the second term in square brackets requires evaluating, and extra gradient evaluations are generally expensive and undesirable. some ways around this problem are : forgo the benefits of a clever descent direction by setting, and use line search to find a suitable step - size, such as one that satisfies the wolfe conditions. a more economic way of choosing learning rates is backtracking line search, a method that has both good theoretical guarantees and experimental results. note that one does not need to choose to be the gradient ; any direction that has positive inner product with the gradient will result in a reduction of the function value ( for a sufficiently small value of ). assuming that is twice - differentiable, use its hessian to estimate then choose and by optimising inequality ( ). assuming that is lipschitz, use its lipschitz constant to bound then choose and by optimising inequality ( ). build a custom model of for. then choose and by optimising inequality ( ). under stronger assumptions on the function such as convexity, more advanced techniques may be possible. usually by following one of the recipes above, convergence to a local minimum can be guaranteed. when the function is convex, all local minima are also global minima, so in this case gradient descent can converge to the global solution. solution of a linear system gradient descent can be used to solve a system of linear equations reformulated as a quadratic minimization problem. if the system matrix is real symmetric and positive - definite, an objective function is defined as the quadratic function, with minimization of so that for a general real matrix, linear least squares define in traditional linear least squares for real and the euclidean norm is used, in which case the line search minimization
matrix is real symmetric and positive - definite, an objective function is defined as the quadratic function, with minimization of so that for a general real matrix, linear least squares define in traditional linear least squares for real and the euclidean norm is used, in which case the line search minimization, finding the locally optimal step size on every iteration, can be performed analytically for quadratic functions, and explicit formulas for the locally optimal are known. for example, for real symmetric and positive - definite matrix, a simple algorithm can be as follows, to avoid multiplying by twice per iteration, we note that implies, which gives the traditional algorithm, the method is rarely used for solving linear equations, with the conjugate gradient method being one of the most popular alternatives. the number of gradient descent iterations is commonly proportional to the spectral condition number of the system matrix ( the ratio of the maximum to minimum eigenvalues ), while the convergence of the conjugate gradient method is typically determined by a square root of the condition number, i. e., it is much faster. both methods can benefit from preconditioning, where gradient descent may require fewer assumptions on the preconditioner. solution of a non - linear system gradient descent can also be used to solve a system of nonlinear equations. below is an example that shows how to use the gradient descent to solve for three unknown variables, x1, x2, and x3. this example shows one iteration of the gradient descent. consider the nonlinear system of equations let us introduce the associated function where one might now define the objective function which we will attempt to minimize. as an initial guess, let us use we know that where the jacobian matrix is given by we calculate : thus and now, a suitable must be found such that this can be done with any of a variety of line search algorithms. one might also simply guess which gives evaluating the objective function at this value, yields the decrease from to the next step's value of is a sizable decrease in the objective function. further steps would reduce its value further until an approximate solution to the system was found. comments gradient descent works in spaces of any number of dimensions, even in infinite - dimensional ones.
as backfitting, which entails reconsidering a previous pair, recalculating the residual given how other pairs have changed, refitting to account for that new information, and then cycling through all fit - pairs this way until parameters converge. this process typically results in a model that performs better with fewer fit - pairs, though it takes longer to train, and it is usually possible to achieve the same performance by skipping backfitting and simply adding more fits to the model ( increasing r ). solving the simplified error function to determine an pair can be done with alternating optimization, where first a random is used to project into 1d space, and then the optimal is found to describe the relationship between that projection and the residuals via your favorite scatter plot regression method. then if is held constant, assuming is once differentiable, the optimal updated weights can be found via the gauss - newton method – a quasi - newton method in which the part of the hessian involving the second derivative is discarded. to derive this, first taylor expand, then plug the expansion back into the simplified error function and do some algebraic manipulation to put it in the form this is a weighted least squares problem. if we solve for all weights and put them in a diagonal matrix, stack all the new targets into a vector, and use the full data matrix instead of a single example, then the optimal is given by the closed - form use this updated to find a new projection of and refit to the new scatter plot. then use that new to update by resolving the above, and continue this alternating process until converges. it has been shown that the convergence rate, the bias and the variance are affected by the estimation of and. discussion the ppr model takes the form of a basic additive model but with the additional component, so each fits a scatter plot of vs the residual ( unexplained variance ) during training rather than using the raw inputs themselves. this constrains the problem of finding each to low dimension, making it solvable with common least squares or spline fitting methods and sidestepping the curse of dimensionality during training. because is taken of a projection of, the result looks like a "
high. one major criterion for optimizers is just the number of required function evaluations as this often is already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the n variables. the derivatives provide detailed information for such optimizers, but are even harder to calculate, e. g. approximating the gradient takes at least n + 1 function evaluations. for approximations of the 2nd derivatives ( collected in the hessian matrix ), the number of function evaluations is in the order of n². newton's method requires the 2nd - order derivatives, so for each iteration, the number of function calls is in the order of n², but for a simpler pure gradient optimizer it is only n. however, gradient optimizers usually need more iterations than newton's algorithm. which one is best with respect to the number of function calls depends on the problem itself. methods that evaluate hessians ( or approximate hessians, using finite differences ) : newton's method sequential quadratic programming : a newton - based method for small - medium scale constrained problems. some versions can handle large - dimensional problems. interior point methods : this is a large class of methods for constrained optimization, some of which use only ( sub ) gradient information and others of which require the evaluation of hessians. methods that evaluate gradients, or approximate gradients in some way ( or even subgradients ) : coordinate descent methods : algorithms which update a single coordinate in each iteration conjugate gradient methods : iterative methods for large problems. ( in theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite - precision computers. ) gradient descent ( alternatively, " steepest descent " or " steepest ascent " ) : a ( slow ) method of historical and theoretical interest, which has had renewed interest for finding approximate solutions of enormous problems. subgradient methods : an iterative method for large locally lipschitz functions using generalized gradients. following boris t. polyak, subgradient - projection methods are similar to conjugate - gradient methods. bundle method of descent
</ctx>
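The steepest-descent solver for a symmetric positive-definite linear system outlined in the passage, including its trick of updating the residual rather than multiplying by the matrix a second time per iteration, fits in a few lines. A minimal NumPy sketch; the function name and the small test system are illustrative assumptions, not taken from the source:

```python
import numpy as np

def gd_spd_solve(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Steepest descent with exact line search for A x = b, A symmetric PD."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, float)
    r = b - A @ x                       # residual = negative gradient of
    for _ in range(max_iter):           # f(x) = 0.5 x.T A x - b.T x
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r                      # the only matrix-vector product per step
        gamma = (r @ r) / (r @ Ar)      # optimal step size for a quadratic
        x += gamma * r
        r -= gamma * Ar                 # update residual; avoids a second A @ x
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(gd_spd_solve(A, b), np.linalg.solve(A, b))   # the two should agree
```

Exact line search is what makes the step size gamma available in closed form here; for non-quadratic objectives one would fall back on the line-search conditions discussed in the passage.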
The following are multiple choice questions (with answers) about STEM.
You write a Python code to optimize the weights of your linear regression with 10 features \textbf{using gradient descent} for 500 epochs. What is the minimum number of for-loops you need to perform your optimization?
A. Two for-loops, one to iterate over the weights and the other to iterate over the epochs
B. Only one for-loop to iterate over the epochs.
C. Only one for-loop to iterate over the weights.
D. No for-loop is really necessary. Everything can be vectorized
Answer: | [
"Two for-loops, one to iterate over the weights and the other to iterate over the epochs",
"Only one for-loop to iterate over the epochs.",
"Only one for-loop to iterate over the weights.",
"No for-loop is really necessary. Everything can be vectorized"
] | B | To analyze the optimization of weights in a linear regression model using gradient descent, we first need to understand how gradient descent works and the structure of the algorithm.
Gradient descent is an iterative optimization algorithm used to minimize a function. In the context of linear regression, we are trying to minimize the cost function, typically the mean squared error (MSE) between the predicted and actual values. The weights of the model are updated in each iteration based on the gradient of the cost function with respect to the weights.
The general form of the gradient descent update rule can be expressed mathematically as follows:
\[
w_j^{(t+1)} = w_j^{(t)} - \eta \frac{\partial J(w)}{\partial w_j}
\]
Where:
- \(w_j^{(t)}\) is the weight for feature \(j\) at iteration \(t\).
- \(\eta\) is the learning rate.
- \(\frac{\partial J(w)}{\partial w_j}\) is the partial derivative of the cost function \(J(w)\) with respect to the weight \(w_j\).
In a typical implementation of gradient descent, we loop through a predefined number of epochs (iterations) to update the weights. In each epoch, we compute the gradients for all weights simultaneously. This means we need to update all weights based on the gradients computed from the entire training set.
Now, letβs analyze the options presented in the multiple-choice question regarding the number of for-loops required for the optimization:
1. **Two for-loops, one to iterate over the weights and the other to iterate over the epochs:** This option suggests that we separately loop over each weight and then loop over the epochs. However, in practice, we compute the gradients for all weights in one go using vectorized operations (matrix computations), eliminating the need for a separate loop over the weights.
2. **Only one for-loop to iterate over the epochs:** This option indicates that we would only need a single loop to iterate through the epochs while updating all weights simultaneously within that loop. This is indeed how gradient descent is commonly implemented, as the weight updates can be computed using vectorized operations that apply to all weights in a single step.
3. **Only one for-loop to iterate over the weights:** This option implies that we would iterate through the weights, updating each one individually over epochs. This is inefficient and unnecessary because we can compute all updates in a single vectorized step.
4. **No for-loop is really necessary. Everything can be vectorized:** While it's true that the weight updates can be vectorized, we still need at least one loop to iterate over the epochs. Thus, this statement is partially incorrect as we still require an outer loop for epochs.
Given this analysis, the correct understanding is that during each epoch, we compute all weight updates simultaneously through vectorized operations, which means we only need one outer loop to iterate through the epochs.
Therefore, the correct answer to the question is: **Only one for-loop to iterate over the epochs.** |
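A small sketch of the scenario in the question makes the conclusion concrete: with the data held in a matrix, the gradient for all 10 weights is a single matrix expression, so the only Python for-loop left is the one over epochs. The synthetic data and hyperparameters below are illustrative assumptions, not part of the question:

```python
# Batch gradient descent for 10-feature linear regression, 500 epochs.
# All weight updates are vectorized; one for-loop (over epochs) remains.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))               # 200 samples, 10 features
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=200)  # noisy targets

w, eta = np.zeros(10), 0.05
for epoch in range(500):                     # the single necessary loop
    grad = (2.0 / len(y)) * X.T @ (X @ w - y)  # MSE gradient, all 10 weights
    w -= eta * grad                            # simultaneous vectorized update

print(np.max(np.abs(w - w_true)))            # small: w converged near w_true
```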
openbookqa_2054 | openbookqa | <ctx>
water between the friction surfaces and the drum can act as a lubricant and reduce braking efficiency. the water tends to stay until heated sufficiently to vaporize, at which point braking efficiency returns. all friction braking systems have a maximum theoretical rate of energy conversion. once that rate is reached, applying greater pedal pressure doesn't change it β in fact, the effects mentioned can substantially reduce it. ultimately, this is what brake fade is, regardless of the mechanisms of its causes. disc brakes are not immune to any of these processes, but they deal with heat and water more effectively than drums. drum brakes can be grabby if the drum surface gets light rust or if the brake is cold and damp, giving the pad material greater friction. grabbing can be so severe that the tires skid and continue to skid even when the pedal is released. grab is the opposite of fade : when the pad friction goes up, the self - assisting nature of drum brakes causes application force to go up. if the pad friction and self - amplification are high enough, the brake stays engaged due to self - application, even when the external application force is released. while disc brake rotors can be machined to clean the friction surface ( i. e.,'turning'), the same generally cannot be done with brake drums. machining the friction surface of a brake drum increases the diameter, which might require oversized shoes to maintain proper contact with the drum. however, since oversized shoes are generally unavailable for most applications, worn or damaged drums generally must be replaced. it is quite simple to machine brake drums if one has a slow running lathe ( one rule of thumb is that cast iron should not be machined faster than fifty feet per minute ). usually it is only necessary to machine away the ridge that forms that makes brake drum removal difficult, especially if the brakes are self - adjusting. in severe cases the ridge can make the brake drum captive, however most drum brake designs provide a way to externally release the self - adjusting mechanism in order to ease drum removal and service. another disadvantage of drum brakes is their relative complexity. a person must have a general understanding of how drum brakes work and take several simple steps to ensure the brakes are
wears away. because the motion of most brakes is not perfectly horizontal, the pads may lose their centering as they wear, causing the pads to wear unevenly. over longer time and use, rims can become worn. rims should be checked for wear periodically as they can fail catastrophically if the braking surface becomes too worn. wear is accelerated by wet and muddy conditions. rim brakes require that the rims be straight ( not out - of - round or warped ). if a rim has a pronounced wobble, then the braking force may be intermittent or uneven, and the pads may rub the rims even when the brake is not applied. during braking, the friction surfaces ( brake pads and rims ) will experience thermal heating. in normal use this is not a problem, as the brakes are applied with limited force and for a short time, so the heat quickly dissipates to the surrounding air. however, on a heavily laden bike on a long descent, heat energy may be added more quickly than it can dissipate causing heat build - up, which may damage components and cause brake failure. a ceramic coating for the rims is available which may reduce wear and can also improve both wet and dry braking. it may also slightly reduce heat transfer to the inside of the rims because it is a thermal insulator. brake pads brake pads are available with numerous shapes and materials. many consist of a replaceable rubber pad held on a mounting, or brake shoe, with a post or bolt on the back to attach to the brake. some are made as one piece with the attachment directly molded in the pad for lower production costs ; brake pads of the cartridge type are held in place by a metal split pin or threaded grub screw and can be replaced without moving the brake shoe from its alignment to the rim. the rubber can be softer for more braking force with less lever effort, or harder for longer life. many pad designs have a rectangular shape ; others are longer and curved to match the radius of the rim. larger pads do not necessarily provide more braking force, but will wear more slowly ( in relation to thickness ), so can usually be thinner. in general, a brake can be fitted with a variety of pads
are not particularly prone to sudden ( rather than progressive ) failure, and are in any case necessarily duplicated to allow even and balanced application of brake force to all wheels. it would also be prohibitively costly to further double - up the main components and they would add considerable weight. however, the similarly critical systems for actuating the brakes under driver control are inherently less robust, generally using a cable ( can rust, stretch, jam, snap ) or hydraulic fluid ( can leak, boil and develop bubbles, absorb water and thus lose effectiveness ). thus in most modern cars the footbrake hydraulic brake circuit is diagonally divided to give two smaller points of failure, the loss of either only reducing brake power by 50 % and not causing as much dangerous brakeforce imbalance as a straight front - back or left - right split, and should the hydraulic circuit fail completely ( a relatively very rare occurrence ), there is a failsafe in the form of the cable - actuated parking brake that operates the otherwise relatively weak rear brakes, but can still bring the vehicle to a safe halt in conjunction with transmission / engine braking so long as the demands on it are in line with normal traffic flow. the cumulatively unlikely combination of total foot brake failure with the need for harsh braking in an emergency will likely result in a collision, but still one at lower speed than would otherwise have been the case. in comparison with the foot pedal activated service brake, the parking brake itself is a less critical item, and unless it is being used as a one - time backup for the footbrake, will not cause immediate danger if it is found to be nonfunctional at the moment of application. therefore, no redundancy is built into it per se ( and it typically uses a cheaper, lighter, but less hardwearing cable actuation system ), and it can suffice, if this happens on a hill, to use the footbrake to momentarily hold the vehicle still, before driving off to find a flat piece of road on which to stop. alternatively, on shallow gradients, the transmission can be shifted into park, reverse or first gear, and the transmission lock / engine compression used to hold it stationary, as there is no
while the tread is separating, the vehicle will oversteer when the tread - separating tire is on the outside of a turn. once the tread separation has completed, the vehicle will cease pulling in any direction. tread separations for firestone atx, atx ii, and wilderness at tires were significantly worse than for comparable tires manufactured by other companies including the goodyear wrangler rt / s which was also installed on new ford explorers. the tread - separating tire was most frequently the left rear wheel of the vehicle. reasons suggested include additional heat from the exhaust, higher weight on the driver's side because vehicles are often driven without any passengers or higher temperatures in the middle of the road than on the shoulders. causes there were several primary causes for the tread separations : tire age, manufacturing facility, operating temperature, tire design, as well as labor and management problems in bridgestone / firestone and at the decatur, illinois factory. tire age the belt wedge and skim rubber compounds change as they age. skim rubber is a thin strip of rubber sandwiched between the inner and outer belts and the belt wedge is a thin strip of rubber installed along the outer edge of the tire between the two steel belts and in the area of highest stress on the tire. these changes reduce the ability of the tire to resist crack growth and belt separation. most of the tread separations occurred on tires that had been in service for at least three years. atx and wilderness at tires manufactured in decatur started to fail between one and two years after production, but between two and three years for tires from wilson and between three and four years for tires from joliette. tire aging for wilderness at tires, regardless of the factory that produced them, was similar to other tires manufactured at decatur and was worse than comparable tires manufactured by goodyear. manufacturing facility the failure rates for atx and atx ii tires manufactured at the decatur, illinois, factory were higher than for the other factories that also manufactured these same tire models, wilson, north carolina, and joliette, quebec. tires manufactured in decatur had " reduced fracture resistance " compared to tires manufactured at joliette or wilson. that is, once a crack formed within the tire, the crack
</ctx>
The following are multiple choice questions (with answers) about STEM.
Hitting the brakes hard leaves rubber on the road due to
A. the sun
B. abrasion
C. plastic tires
D. metal roadways
Answer: | [
"the sun",
"abrasion",
"plastic tires",
"metal roadways"
] | B | |
mmlu_aux_3092 | mmlu_auxiliary | <ctx>
hours at 600 myr ago to the current 24 - hour value. by counting the microscopic lamina that form at higher tides, tidal frequencies ( and thus day lengths ) can be estimated, much like counting tree rings, though these estimates can be increasingly unreliable at older ages. resonant stabilization the current rate of tidal deceleration is anomalously high, implying earth's rotational velocity must have decreased more slowly in the past. empirical data tentatively shows a sharp increase in rotational deceleration about 600 myr ago. some models suggest that earth maintained a constant day length of 21 hours throughout much of the precambrian. this day length corresponds to the semidiurnal resonant period of the thermally driven atmospheric tide ; at this day length, the decelerative lunar torque could have been canceled by an accelerative torque from the atmospheric tide, resulting in no net torque and a constant rotational period. this stabilizing effect could have been broken by a sudden change in global temperature. recent computational simulations support this hypothesis and suggest the marinoan or sturtian glaciations broke this stable configuration about 600 myr ago ; the simulated results agree quite closely with existing paleorotational data. global events some recent large - scale events, such as the 2004 indian ocean earthquake, have caused the length of a day to shorten by 3 microseconds by reducing earth's moment of inertia. post - glacial rebound, ongoing since the last ice age, is also changing the distribution of earth's mass, thus affecting the moment of inertia of earth and, by the conservation of angular momentum, earth's rotation period. the length of the day can also be influenced by man - made structures. for example, nasa scientists calculated that the water stored in the three gorges dam has increased the length of earth's day by 0. 06 microseconds due to the shift in mass. measurement the primary monitoring of earth's rotation is performed by very - long - baseline interferometry coordinated with the global positioning system, satellite laser ranging, and other satellite geodesy techniques. this provides an absolute reference for the determination of universal time, precession
utc ) and a 7. 7 mw at 15 : 26 jst ( 6 : 26 utc ). over eight hundred aftershocks of magnitude 4. 5 mw or greater have occurred since the initial quake, including one on 26 october 2013 ( local time ) of magnitude 7. 1 mw. aftershocks follow omori's law, which states that the rate of aftershocks declines with the reciprocal of the time since the main quake. the aftershocks will thus taper off in time, but could continue for years. the earthquake moved honshu east, shifted the earth on its axis by estimates of between and, increased earth's rotational speed by 1. 8 µs per day, and generated infrasound waves detected in perturbations of the low - orbiting gravity field and steady - state ocean circulation explorer satellite. initially, the earthquake caused sinking of part of honshu's pacific coast by up to roughly a metre, but after about three years, the coast rose back and then kept on rising to exceed its original height. geology this megathrust earthquake was a recurrence of the mechanism of the earlier 869 sanriku earthquake, which has been estimated as having a magnitude of at least 8. 4 mw, which also created a large tsunami that inundated the sendai plain. three tsunami deposits have been identified within the holocene sequence of the plain, all formed within the last 3, 000 years, suggesting an 800 to 1, 100 year recurrence interval for large tsunamigenic earthquakes. in 2001 it was reckoned that there was a high likelihood of a large tsunami hitting the sendai plain as more than 1, 100 years had then elapsed. in 2007, the probability of an earthquake with a magnitude of mw 8. 1 – 8. 3 was estimated as 99 % within the following 30 years. this earthquake occurred where the pacific plate is subducting under the plate beneath northern honshu. the pacific plate, which moves at a rate of 8 to 9 cm ( 3. 1 to 3. 5 in ) per year, dips under honshu's underlying plate, building large amounts of elastic energy. this motion pushes the upper plate down until
seismologists discovered that the san andreas fault near parkfield in central california consistently produces a magnitude 6. 0 earthquake approximately once every 22 years. following recorded seismic events in 1857, 1881, 1901, 1922, 1934, and 1966, scientists predicted that another earthquake should occur in parkfield in 1993. it eventually occurred in 2004. due to the frequency of predictable activity, parkfield has become one of the most important areas in the world for large earthquake research. in 2004, work began just north of parkfield on the san andreas fault observatory at depth ( safod ). the goal of safod is to drill a hole nearly into the earth's crust and into the san andreas fault. an array of sensors will be installed to record earthquakes that happen near this area. a 2023 study found a link between the water level in lake cahuilla ( now the salton sea ) and seismic activity along the southern san andreas fault. the study suggests that major earthquakes along this section of the fault coincided with high water levels in the lake. the hydrological load caused by high water levels can more than double the stress on the southern san andreas fault, which is likely sufficient for triggering earthquakes. this may explain the abnormally long period of time since the last major earthquake in the region since the lake has dried up. the san andreas fault system has been the subject of a flood of studies. in particular, scientific research performed during the last 23 years has given rise to about 3, 400 publications. the next " big one " a study published in 2006 in the journal nature by yuri fialko, an associate professor at the cecil h. and ida m. green institute of geophysics and planetary physics at the scripps institution of oceanography, found that the san andreas fault has reached a sufficient stress level for an earthquake of magnitude greater than 7. 0 on the moment magnitude scale to occur. this study also found that the risk of a large earthquake may be increasing more rapidly than scientists had previously believed. moreover, the risk is currently concentrated on the southern section of the fault, i. e. the region around los angeles, because strong earthquakes have occurred relatively recently on the central ( 1857 ) and northern ( 1906 ) segments of
large seismic events ( m ≥ 6. 0 ) occurring globally from 2000 to 2014 was based on the presence of solar weather. when the solar data are removed from the time series, the correlation is no longer statistically significant. a subsequent article in physics of the earth and planetary interiors in 2020 shows that solar weather and ionospheric disturbances are a potential cause to trigger large earthquakes based on this statistical relationship. the proposed mechanism is electromagnetic induction from the ionosphere to the fault zone. fault fluids are conductive, and can produce telluric currents at depth. the resulting change in the local magnetic field in the fault triggers dissolution of minerals and weakens the rock, while also potentially changing the groundwater chemistry and level. after the seismic event, different minerals may be precipitated thus changing groundwater chemistry and level again. this process of mineral dissolution and precipitation before and after an earthquake has been observed in iceland. this model makes sense of the ionospheric, seismic and groundwater data. satellite observation of the expected ground temperature declination one way of detecting the mobility of tectonic stresses is to detect locally elevated temperatures on the surface of the crust measured by satellites. during the evaluation process, the background of daily variation and noise due to atmospheric disturbances and human activities are removed before visualizing the concentration of trends in the wider area of a fault. this method has been experimentally applied since 1995. in a newer approach to explain the phenomenon, nasa's friedmann freund has proposed that the infrared radiation captured by the satellites is not due to a real increase in the surface temperature of the crust. according to this version the emission is a result of the quantum excitation that occurs at the chemical re - bonding of positive charge carriers ( holes ) which are traveling from the deepest layers to the surface of the crust at a speed of 200 meters per second. the electric charge arises as a result of increasing tectonic stresses as the time of the earthquake approaches. this emission extends superficially up to 500 x 500 square kilometers for very large events and stops almost immediately after the earthquake. trends instead of watching for anomalous phenomena that might be precursory signs of an impending earthquake, other approaches to predicting earthquakes look for trends or
</ctx>
The following are multiple choice questions (with answers) about STEM.
A new study revealed that the gravitational interplay of the Earth, Sun and Moon that causes changes in Earth's tides can also trigger a special kind of earthquake along California's San Andreas Fault. The study, published in the journal Proceedings of the National Academy of Sciences, is based on previous research suggesting that the gravitational tug of the sun and moon not only affects the sea levels of Earth, but can also stretch and compress the Earth's crust, producing tremors and quakes. Furthermore, the new study showed that small tremors deep underground were more likely to occur during certain periods of the tidal cycle. For the study, the researchers looked at about 81,000 low-frequency earthquakes from 2008 to 2015 along the Parkfield section of the San Andreas Fault and compared them to the two-week tidal cycle known as the "fortnightly tide." Low-frequency earthquakes are named for the rumbling sound they make, not for the rate of their occurrence. These kinds of earthquakes only occur about 15 to 30 kilometers below ground and are no larger than magnitude 1 on the Richter scale. The researchers discovered that low-frequency earthquakes tend to occur during the waxing period of the tidal cycle, when the tides are getting bigger little by little. It is already known that the strongest Earth tides occur when the sun and moon are in a straight line, while the tide is at its weakest state when the sun and moon are 90 degrees apart. According to the researchers, low-frequency earthquakes show that some part of the San Andreas Fault is creeping, or slowly moving, almost all the time. With the help of the tidal forces, low-frequency earthquakes could provide more information about the deeper part of the fault. "They tell us that the fault continues down below where the regular or typical earthquakes stop on the San Andreas, about 10 or 12 km (about 6 to 7 miles)," said David Shelly, a seismologist at USGS who helped write the study, in a statement. "And they tell us a lot of things about that deep part of the fault that we had no idea existed at all before." "It's almost like having a lot of little creep meters embedded in the fault," Shelly said. "We can use these low-frequency earthquakes as measurements of, at least in a sense, how much slip is happening at each little spot on the deep part of the fault where we see these events. When we don't see them, we don't know what's happening; we don't know whether it's slipping silently or not slipping at all." The information is incredibly useful, he added. Whenever the deep part of the fault slips, the stress gets transferred to the shallow part of the fault. "So if all of a sudden, we saw that the deep part of the fault was slipping a huge amount, it might be an indication that there was an increased chance of having an earthquake come at the shallower part of the fault," he said. According to Shelly, the most important function of the study is _ .
A. to find out where the typical earthquakes occur
B. to indicate when the regular earthquakes occur
C. to uncover how low-frequency earthquakes occur
D. to offer more information about the deeper part of the fault
Answer: | [
"to find out where the typical earthquakes occur",
"to indicate when the regular earthquakes occur",
"to uncover how low-frequency earthquakes occur",
"to offer more information about the deeper part of the fault"
] | D | null |
medmcqa_8284 | medmcqa | <ctx>
aeruginosa produces the pigments pyocyanin and pyoverdin, which give the colonies a greenish sheen. some specimens of serratia marcescens produce an orange - red pigment called prodigiosin. organisms with swarming motility, like proteus species, exhibit concentric waves of growth extending from the inoculation point. interpretation colonial morphology serves as the first step in the identification of microbial species from clinical samples. based on the visual appearance of the colonies, microbiologists can narrow down the list of possible organisms, allowing them to select appropriate tests to provide a definitive diagnosis. for example, if a microbiologist observes colonies that resemble a staphylococcus species, they may perform a catalase test to confirm that it belongs to the genus staphylococcus, and a coagulase test to determine whether it is a coagulase - negative staphylococcus or a more pathogenic species, such as s. aureus. observation of hemolysis is useful in the presumptive identification of bacteria, especially streptococci, which are classified on the basis of their hemolytic reactions. for example, streptococcus pyogenes, which causes strep throat and scarlet fever, displays beta - hemolysis, while streptococcus pneumoniae, which can cause pneumonia and meningitis, displays alpha - hemolysis. the highly pathogenic s. aureus classically displays beta - hemolysis, while staphylococcus epidermidis, part of the normal skin flora and an occasional opportunistic pathogen, does so weakly or not at all. although automated techniques like maldi - tof are increasingly used to identify microorganisms in clinical laboratories, colonial morphology remains useful to distinguish potential pathogens, which must be identified, from normal flora, for which definitive identification is unnecessary, and to confirm identification when automated techniques give inconclusive results.
instead of a negative one. the negatively charged cell wall of many microorganisms attracts the positively charged chromophore which causes the specimen to absorb the stain giving it the color of the stain being used. positive staining is more commonly used than negative staining in microbiology. the different types of positive staining are listed below. simple versus differential simple staining is a technique that only uses one type of stain on a slide at a time. because only one stain is being used, the specimens ( for positive stains ) or background ( for negative stains ) will be one color. therefore, simple stains are typically used for viewing only one organism per slide. differential staining uses multiple stains per slide. based on the stains being used, organisms with different properties will appear different colors allowing for categorization of multiple specimens. differential staining can also be used to color different organelles within one organism which can be seen in endospore staining. types techniques gram gram staining is used to determine gram status, classifying bacteria broadly based on the composition of their cell wall. gram staining uses crystal violet to stain cell walls, iodine ( as a mordant ), and a fuchsin or safranin counterstain to mark all bacteria. gram status helps divide specimens of bacteria into two groups, generally representative of their underlying phylogeny. this characteristic, in combination with other techniques, makes it a useful tool in clinical microbiology laboratories, where it can be important in early selection of appropriate antibiotics. on most gram - stained preparations, gram - negative organisms appear red or pink due to their counterstain. due to the presence of higher lipid content, after alcohol - treatment, the porosity of the cell wall increases, hence the cvi complex ( crystal violet - iodine ) can pass through. thus, the primary stain is not retained. in addition, in contrast to most gram - positive bacteria, gram - negative bacteria have only a few layers of peptidoglycan and a secondary cell membrane made primarily of lipopolysaccharide. endospore endospore staining is used to identify the presence or absence of endospores, which make bacteria very difficult
diagnostic microbiology is the study of microbial identification. since the discovery of the germ theory of disease, scientists have been finding ways to harvest specific organisms. using methods such as differential media or genome sequencing, physicians and scientists can observe novel functions in organisms for more effective and accurate diagnosis of organisms. methods used in diagnostic microbiology are often used to take advantage of a particular difference in organisms and attain information about what species it can be identified as, which is often through a reference of previous studies. new studies provide information that others can reference so that scientists can attain a basic understanding of the organism they are examining. aerobic vs anaerobic anaerobic organisms require an oxygen - free environment. when culturing anaerobic microbes, broths are often flushed with nitrogen gas to extinguish oxygen present, and growth can also occur on media in a chamber without oxygen present. sodium resazurin can be added to indicate redox potential. cultures are to be incubated in an oxygen - free environment for 48 hours at 35 °c before growth is examined. anaerobic bacteria collection can come from a variety of sources in patient samples, including blood, bile, bone marrow, cerebrospinal fluid, direct lung aspirate, tissue biopsies from a normally sterile site, fluid from a normally sterile site ( like a joint ), dental, abscess, abdominal or pelvic abscess, knife, gunshot, or surgical wound, or severe burn. incubation length incubation times vary based upon the microbe that requires culturing. traditional culturing techniques, for example, require less than 24 hours culture time for escherichia coli but 6 – 8 weeks for successful culturing of mycobacterium tuberculosis before definitive results are expressed. a benefit of non - culture tests is that physicians and microbiologists are not handicapped by waiting periods. incubation follows a growth curve variable for every microorganism. cultures follow a lag, log, stationary, and finally death phase. the lag phase is not well known in microbiology, but it is speculated that this phase consists of the microorganism adjusting to its environment by synthesizing
</ctx>
The following are multiple choice questions (with answers) about STEM.
How to differentiate between staphylococci and micrococci?
A. Catalase test
B. Modified Oxidase test
C. Gram staining
D. AFS
Answer: | [
"Catalase test",
"Modified Oxidase test",
"Gram staining",
"AFS"
] | B | Micrococci: Micrococci are gram-positive cocci which occur mostly in pairs, tetrads or irregular clusters. They are catalase and modified oxidase positive and aerobic. They are ordinarily non-pathogenic. They resemble Staphylococci, but in general they are larger in size and occur in tetrads. The common laboratory test used to differentiate between micrococci and staphylococci is the modified oxidase test. Ref: Ananthanarayan 9th edition, p206 |
qasc_6653 | qasc | <ctx>
converting the fusion energy to electrical energy.
converted into thermal energy through metabolism ; light energy is converted into chemical energy through photosynthesis. in a larger example, the chemical energy contained in coal is converted into thermal energy as it burns to turn water into steam in a boiler. this thermal energy in the steam in turn is converted to mechanical energy as it spins a turbine, which is connected to a generator to produce electrical energy. ( in all of these examples, not all of the initial energy is converted into the forms mentioned. this important point is discussed later in this section. ) another example of energy conversion occurs in a solar cell. sunlight impinging on a solar cell ( see figure 7. 19 ) produces electricity, which in turn can be used to run an electric motor. energy is converted from the primary source of solar energy into electrical energy and then into mechanical energy. figure 7. 19 solar energy is converted into electrical energy by solar cells, which is used to run a motor in this solar - power aircraft. ( credit : nasa ) table 7. 1, energy of various objects and phenomena in joules, lists : the big bang ; energy released in a supernova ; fusion of all the hydrogen in earth's oceans ; annual world energy use ; a large fusion bomb ( 9 megaton ) ; 1 kg hydrogen ( fusion to helium ) ; 1 kg uranium ( nuclear fission ) ; a hiroshima - size fission bomb ( 10 kiloton ) ; a 90, 000 - metric ton aircraft carrier at 30 knots ; 1 barrel crude oil ; 1 ton tnt
5. 3 enthalpy chemical reactions, such as those that occur when you light a match, involve changes in energy as well as matter. societies at all levels of development could not function without the energy released by chemical reactions. in 2012, about 85 % of us energy consumption came from the combustion of petroleum products, coal, wood, and garbage. we use this energy to produce electricity ( 38 % ) ; to transport food, raw materials, manufactured goods, and people ( 27 % ) ; for industrial production ( 21 % ) ; and to heat and power our homes and businesses ( 10 % ). 1 while these combustion reactions help us meet our essential energy needs, they are also recognized by the majority of the scientific community as a major contributor to global climate change. useful forms of energy are also available from a variety of chemical reactions other than combustion. for example, the energy produced by the batteries in a cell phone, car, or flashlight results from chemical reactions. this chapter introduces many of the basic ideas necessary to explore the relationships between chemical changes and energy, with a focus on thermal energy. figure 5. 1 sliding a match head along a rough surface initiates a combustion reaction that produces energy in the form of heat and light. ( credit : modification of work by laszlo ilyes ) 1 us energy information administration, primary energy consumption by source and sector, 2012, http://www.eia.gov/totalenergy/data/monthly/pdf/flow/css_2012_energy.pdf. data derived from us energy information administration, monthly energy review ( january 2014 ).
of energy quanta emitted from one substance. however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. for example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. the existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. different kinds of spectra are often used in chemical spectroscopy, e. g. ir, microwave, nmr, esr, etc. spectroscopy is also used to identify the composition of remote objects — like stars and distant galaxies — by analyzing their radiation spectra. the term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. reaction when a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. a chemical reaction is therefore a concept related to the " reaction " of a substance when it comes in close contact with another, whether as a mixture or a solution ; exposure to some form of energy, or both. it results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels — often laboratory glassware. chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. chemical reactions usually involve the making or breaking of chemical bonds. oxidation, reduction, dissociation, acid–base neutralization and molecular rearrangement are some examples of common chemical reactions. a chemical reaction can be symbolically depicted through a chemical equation. while in a non - nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons.
</ctx>
The following are multiple choice questions (with answers) about STEM.
What converts chemical energy into electrical energy?
A. sensory neurons
B. AA lithium cell
C. rotten food
D. Firecrackers
Answer: | [
"sensory neurons",
"AA lithium cell",
"rotten food",
"Firecrackers"
] | B | |
arc_challenge_978 | arc_challenge | <ctx>
##aging. the mass and volume of packaging ( per unit of contents ) can be measured and used as criteria for minimizing the package in the design process. usually " reduced " packaging also helps minimize costs. packaging engineers continue to work toward reduced packaging. reuse – reusable packaging is encouraged. returnable packaging has long been useful ( and economically viable ) for closed - loop logistics systems. inspection, cleaning, repair, and recouperage are often needed. some manufacturers re - use the packaging of the incoming parts for a product, either as packaging for the outgoing product or as part of the product itself. recycling – recycling is the reprocessing of materials ( pre - and post - consumer ) into new products. emphasis is focused on recycling the largest primary components of a package : steel, aluminum, papers, plastics, etc. small components can be chosen which are not difficult to separate and do not contaminate recycling operations. packages can sometimes be designed to separate components to better facilitate recycling. energy recovery – waste - to - energy and refuse - derived fuel in approved facilities make use of the heat available from incinerating the packaging components. disposal – incineration, and placement in a sanitary landfill are undertaken for some materials. certain us states regulate packages for toxic contents, which have the potential to contaminate emissions and ash from incineration and leachate from landfill. packages should not be littered. development of sustainable packaging is an area of considerable interest to standards organizations, governments, consumers, packagers, and retailers. sustainability is the fastest - growing driver for packaging development, particularly for packaging manufacturers that work with the world's leading brands, as their csr ( corporate social responsibility ) targets often exceed those of the eu directive. packaging machinery choosing packaging machinery includes an assessment of technical capabilities, labor requirements, worker safety, maintainability, serviceability, reliability, ability to integrate into the packaging line, capital cost, floorspace, flexibility ( change - over, materials, multiple products, etc. ), energy requirements, quality of outgoing packages, qualifications ( for food, pharmaceuticals, etc. ), throughput, efficiency, productivity, ergonomics, return on investment, etc. packaging machinery can be
plastic " due to the common sourcing of chitosan from discarded shrimp shells. shrilk and the concept of bioinspired materials using unmodified biomolecules are regarded as having the potential to produce a global paradigm change towards sustainable manufacturing. in 2015 he was hired again by mit as an associate professor and member of the singapore university of technology and design ( sutd ). at sutd fernandez developed the fungus - like adhesive materials ( flam ), the first example of large - scale 3d printing using unmodified chitin and cellulose. with a focus on low cost and manufacturability, flam enabled the industrial production of 3d - printed large biological objects, bringing the field of bioinspired manufacturing to an industrially relevant level. he also demonstrated the integration of flam and bioinspired manufacturing with ecological cycles and urban waste management. fernandez β s approach to manufacturing based on unmodified biomolecules and their integration within ecological cycles, in addition to being a critical factor for sustainable development, has been demonstrated to be an essential aspect of long - term extraplanetary colonization. honors and awards 2014 : bayer early excellence in science award 2015 : awarded innovator under 35 by mit technology review 2016 : awarded top innovator in close - loop technologies by the launch organization 2018 : formnext purmundus innovation award in additive manufacturing 2018 : sg mark for the invention of shrilk and bioinspired materials 2019 : a β design award 2019 : sg mark for the developing large - scale printing of biomimetic materials 2019 : good design award for the development of large - scale additive manufacturing with biological composites references external links the fermart lab at sutd spanish physicists 1981 births living people
recycling is the process of converting waste materials into new materials and objects. this concept often includes the recovery of energy from waste materials. the recyclability of a material depends on its ability to reacquire the properties it had in its original state. it is an alternative to " conventional " waste disposal that can save material and help lower greenhouse gas emissions. it can also prevent the waste of potentially useful materials and reduce the consumption of fresh raw materials, reducing energy use, air pollution ( from incineration ) and water pollution ( from landfilling ). recycling is a key component of modern waste reduction and is the third component of the " reduce, reuse, and recycle " waste hierarchy. it promotes environmental sustainability by removing raw material input and redirecting waste output in the economic system. there are some iso standards related to recycling, such as iso 15270 : 2008 for plastics waste and iso 14001 : 2015 for environmental management control of recycling practice. recyclable materials include many kinds of glass, paper, cardboard, metal, plastic, tires, textiles, batteries, and electronics. the composting and other reuse of biodegradable waste — such as food and garden waste — is also a form of recycling. materials for recycling are either delivered to a household recycling center or picked up from curbside bins, then sorted, cleaned, and reprocessed into new materials for manufacturing new products. in ideal implementations, recycling a material produces a fresh supply of the same material — for example, used office paper would be converted into new office paper, and used polystyrene foam into new polystyrene. some types of materials, such as metal cans, can be remanufactured repeatedly without losing their purity. with other materials, this is often difficult or too expensive ( compared with producing the same product from raw materials or other sources ), so " recycling " of many products and materials involves their reuse in producing different materials ( for example, paperboard ). another form of recycling is the salvage of constituent materials from complex products, due to either their intrinsic value ( such as lead from car batteries and gold from printed circuit boards ), or their hazardous nature ( e
organic materials can be treated by composting and anaerobic digestion and turned into energy, compost or fertilizer. similarly, wastes currently stored in industrial landfills and around old mines can be treated with bioleaching and engineered nanoparticles to recover metals such as lithium, cobalt and vanadium for use in low - carbon technologies such as electric vehicles and wind turbines. a limiting factor of resource recovery is the irrevocable loss of raw materials due to their increase in entropy in our current linear business model. starting with the production of waste in manufacturing, the entropy increases further by mixing and diluting materials in their manufacturing assembly, followed by corrosion and wear and tear during the usage period. at the end of the life cycle, there is an exponential increase in disorder arising from the mixing of materials in landfills. as a result of this directionality of the entropy law, the potentials of resource recovery are diminishing. this further motivates a circular economy infrastructure and business model. solid waste recycling is a resource recovery practice that refers to the collection and reuse of disposed materials such as empty beverage containers. the materials from which the items are made can be reprocessed into new products. material for recycling may be collected separately from general waste using dedicated bins and collection vehicles, or sorted directly from mixed waste streams. the most common consumer products recycled include aluminium such as beverage cans, copper such as wire, steel food and aerosol cans, old steel furnishings or equipment, polyethylene and pet bottles, glass bottles and jars, paperboard cartons, newspapers, magazines and light paper, and corrugated fiberboard boxes. pvc, ldpe, pp, and ps ( see resin identification code ) are also recyclable. these items are usually composed of a single type of material, making them relatively easy to recycle into new products. the recycling of complex products ( such as computers and electronic equipment ) is more difficult, due to the additional dismantling and separation required. the type of recycling material accepted varies by city and country. each city and country has different recycling programs in place that can handle the various types of recyclable materials
</ctx>
The following are multiple choice questions (with answers) about STEM.
A scientist working on a new package design wants to use a material that is highly recyclable, biodegradable, and inexpensive. The best material for the package design is
A. aluminum.
B. cardboard.
C. plastic.
D. glass.
Answer: | [
"aluminum.",
"cardboard.",
"plastic.",
"glass."
] | B | |
aquarat_26987 | aquarat | <ctx>
a random message being accepted. cut and join attacks do not appear to be possible. if it is not acceptable to have very long messages, they can be broken into chunks say of 60 words and chained analogously to the methods used for des. however, due to the incomplete nature of the round function, two large ciphertexts of 53 or more 32 - bit words identical in all but 12 words can be found by a simple brute - force collision search requiring 2^(96-n) memory, 2^n time and 2^n + 2^(96-n) chosen plaintexts, in other words with a total time * memory complexity of 2^96, which is actually 2^(wordsize * fullcycles / 2) for any such cipher. it is currently unknown if such partial collisions pose any threat to the security of the cipher. eight full cycles would raise the bar for such collision search above the complexity of parallel brute - force attacks. the unusually small size of the xxtea algorithm would make it a viable option in situations where there are extreme constraints, e. g. legacy hardware systems ( perhaps embedded ) where the amount of available ram is minimal, or alternatively single - board computers such as the raspberry pi, banana pi or arduino. cryptanalysis an attack published in 2010 by e. yarrkov presents a chosen - plaintext attack against full - round xxtea with wide block, requiring 2^59 queries for a block size of 212 bytes or more, and negligible work. it is based on differential cryptanalysis. to cipher " 212 bytes or more " the algorithm performs just 6 rounds, and carefully chosen bit patterns allow the avalanche effect to be detected and analyzed. reference code the original formulation of the corrected block tea algorithm, published by david wheeler and roger needham, is as follows :

#define DELTA 0x9e3779b9
#define MX (((z >> 5 ^ y << 2) + (y >> 3 ^ z << 4)) ^ ((sum ^ y) + (k[p & 3 ^ e] ^ z)))

uint32_t btea(uint32_t *v, ssize_t n, uint32_t *k)
{
    uint32
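The excerpt cuts the reference implementation off mid-function. As a stop-gap, here is a rough Python transcription of the btea encryption loop as published by Wheeler and Needham (decryption omitted; names like `btea_encrypt` are ours, and the sketch is untested against official vectors):

```python
DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF  # all arithmetic is modulo 2**32

def mx(z, y, total, k, p, e):
    # the MX mixing macro from the reference code above
    # (C precedence makes k[p & 3 ^ e] mean k[(p & 3) ^ e])
    return (((z >> 5 ^ (y << 2)) + (y >> 3 ^ (z << 4))) ^
            ((total ^ y) + (k[(p & 3) ^ e] ^ z))) & MASK

def btea_encrypt(v, k):
    """Encrypt a list v of n 32-bit words in place with a 128-bit key k (4 words)."""
    n = len(v)
    rounds = 6 + 52 // n          # fewer full cycles as the block grows
    total, z = 0, v[n - 1]
    for _ in range(rounds):
        total = (total + DELTA) & MASK
        e = (total >> 2) & 3
        for p in range(n):
            y = v[(p + 1) % n]    # wraps to v[0] on the last word
            z = v[p] = (v[p] + mx(z, y, total, k, p, e)) & MASK
    return v
```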
prescribed size is obtained ( in case of gpt - 3, the size is 50257 ). a token vocabulary consists of integers, spanning from zero up to the size of the token vocabulary. new words can always be interpreted as combinations of the tokens and the initial - set uni - grams. a token vocabulary based on the frequencies extracted from mainly english corpora uses as few tokens as possible for an average english word. an average word in another language encoded by such an english - optimized tokenizer is however split into a suboptimal number of tokens. tokenizer : texts -> series of numerical " tokens ". probabilistic tokenization also compresses the datasets, which is the reason for using the byte pair encoding algorithm as a tokenizer. because llms generally require input to be an array that is not jagged, the shorter texts must be " padded " until they match the length of the longest one. how many tokens are, on average, needed per word depends on the language of the dataset. dataset cleaning removal of toxic passages from the dataset, discarding low - quality data, and de - duplication are examples of dataset cleaning. the resulting cleaned ( high - quality ) datasets contain up to 17 trillion words in 2022, rising from 985 million words used in 2018 for gpt - 1, and 3. 3 billion words used for bert. future data is, however, expected to be increasingly " contaminated " by llm - generated contents themselves.
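To make the byte-pair-encoding idea concrete, here is a toy sketch of the greedy merge loop (our own invented corpus and names; real tokenizers operate on bytes with a trained merge table and many more merges):

```python
from collections import Counter

def most_frequent_pair(tokens):
    # count adjacent token pairs; the most frequent pair becomes the next merge
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge(tokens, pair, new_token):
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(new_token)   # replace the pair with one merged token
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

toks = list("low lower lowest")      # start from single characters
for _ in range(3):                   # grow the vocabulary by three merged tokens
    pair = most_frequent_pair(toks)
    toks = merge(toks, pair, pair[0] + pair[1])
print(toks)                          # frequent substrings like "low" fuse first
```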
ching, queneau's cent mille milliards de poemes and nabokov's pale fire to computer - generated poems like christopher strachey's love letter generator ( 1952 ) and more contemporary examples. jhave describes the process of working with the output from the neural network as " carving ". in his book my life as an artificial creative intelligence, mark amerika writes that the " method of carving the digital outputs provided by the language model as part of a collaborative remix jam session with gpt - 2, where the language artist and the language model play off each other β s unexpected outputs as if caught in a live postproduction set, is one i share with electronic literature composer david jhave johnston, whose ai poetry experiments precede my own investigations. " references 2010s electronic literature works new media 21st - century poetry canadian poetry applications of artificial intelligence natural language processing generative literature
iruvf oreei oaxhh asmoo ulrss tknro rueni nremv qtswt aneat sdjie rmohe xotea in most uses of code poems, this process of selecting an indicator group and transposing the text would be repeated once ( double transposition ) to further scramble the letters. as an additional security measure, the agent would add prearranged errors into the text as security checks. for example, there might be an intentional error in every 18th letter, so that if the agent was captured or the poem was found, transmissions made by the enemy without the security checks could be recognized as fake. analysis the code's advantage is to provide relatively strong security while not requiring any codebook. however, the encryption process is error - prone when done by hand, and for security reasons, messages should be at least 200 words long. the security check was usually not effective : if a code was used after being intercepted and decoded by the enemy, any security checks were revealed. further, the security check could often be tortured out of the agent. there are a number of other weaknesses. because the poem is re - used, if one message is broken by any means ( including threat, torture, or even cryptanalysis ), past and future messages will be readable. if the agent used the same poem code words to send a number of similar messages, these words could be discovered easily by enemy cryptographers. if the words could be identified as coming from a famous poem or quotation, then all of the future traffic submitted in that poem code could be read. the german cryptologic units were successful in decoding many of the poems by searching through collections of poems. since the poems used must be memorable for ease of use by an agent, there is a temptation to use well - known poems or poems from well - known poets, further weakening the encryption ( e. g., soe agents often used verses by shakespeare, racine, tennyson, moliere, keats, etc. ). development when leo marks was appointed codes officer of the special operations executive ( soe ) in london during world war ii, he very quickly recognized the weakness of
</ctx>
The following are multiple choice questions (with answers) about STEM.
Jessica can text 85 words per minute, while Maurice can text 55 words per minute. An abridged English translation of Tolstoy's War and Peace contains 410,400 words. Working together, how many days will it take Jessica and Maurice to text the entire abridged translation of War and Peace?
A. A)0.3
B. B)1.9
C. C)3
D. D)9
Answer: | [
"A)0.3",
"B)1.9",
"C)3",
"D)9"
] | B | Since the answer choices are reasonably spaced out, we can approximate and save time
Combined rate = 85 + 55 = 140 words per minute, so combined words per day = 140*60*24 = 201,600
410,400/201,600 ≈ 2.0 days, which is closest to choice 1.9,
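A quick Python check of the same arithmetic (our own addition, not part of the original solution):
```python
combined_wpm = 85 + 55                  # words per minute, working together
words_per_day = combined_wpm * 60 * 24  # 201,600 words per day
days = 410_400 / words_per_day
print(days)                             # ≈ 2.04, closest to choice B) 1.9
```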
So, the answer should be B. |
mmlu_aux_5246 | mmlu_auxiliary | <ctx>
, m33 ), according to a procedure similar to that of the first scenario. the consequences for humanity are the same as in the first scenario. the scenario of a complete colonization of space has no probability of being realized according to kardashev because if it were realizable then " they " would already be on earth ; yet this is not the case. however, in the case of a contact, the consequences on humanity are the same as in the first scenario. this scenario assumes that all civilizations would have destroyed themselves before any contact. kardashev estimates the probability of this to be 10 %. humanity should be able to detect ancient megastructures in the vicinity of the nearest stars. as a result, no contact with humanity can take place. the last scenario suggests that we are the first or the only ones in the universe. kardashev estimates its probability at 10 %. only exobiology can confirm or falsify such a scenario. we can imagine a potential contact in the distant future, and then the consequences would be similar to those of the other five scenarios. fourth publication ( 1997 ) in the article cosmology and civilizations published in 1997, kardashev reiterates the need to carefully observe astronomical objects with strong radiation in order to detect supercivilizations. however, the discovery of a civilization at a stage of development similar to ours is unlikely. the existence of such supercivilizations is made possible by the fact that life on earth is recent compared to the age of the universe ( 8 × 10^9 years before the formation of the solar system ). he then examines the conditions for the appearance of life on cosmological time scales. assuming the rate of evolution of life on earth and considering the age of the universe, it is reasonable to assume that a civilization could have reached our level of technological development in 6 × 10^9 years. such civilizations can be observed in nearby regions, since the farther away we observe, the younger the objects are. recent discoveries of sources of intense radiation deadly to life show that life could have flourished under cover for the time necessary for its appearance and maintenance. another argument for the possibility of a very old supercivilization is that most
and thus lacks a center. because of this, one could not imagine where an ethereal soul would go, due to the universe's infinite nature. they believe that infinity exists in time as well as in space, for all levels of life. raelians believe that humanity would be able to create life on other planets only if humanity is peaceful enough to stop war. in that case, humanity could travel the distances between stars and create life on another planet. progress in terraforming, molecular biology, and cloning would enable these teams to create continents and life from scratch. progress in social engineering would ensure that this creation would have a better chance of both surviving and having the potential to understand its creators. research on how civilization would occur on another planet would allow scientists to decide what traces of their origin should be left behind so that their role in life creation would someday be revealed. the progress achieved by the science teams would ultimately sustain a perpetual chain of life. intelligent design creation of life on earth by extraterrestrials in his book the message given to me by extraterrestrials ( now republished as intelligent design : message from the designers 2006 ), vorilhon claims that on 13 december 1973, he found a spacecraft shaped like a flattened bell that landed inside puy de lassolas, a volcano near the capital city of auvergne. a 25, 000 - year - old human - like extraterrestrial inside the spacecraft named yahweh said that elohim was the name that primitive people of earth called members of his extraterrestrial race, who were seen as " those who came from the sky ". yahweh explained that earth was originally void of life, with thick clouds and shallow seas, but the elohim came, broke apart the clouds, exposed the seas to sunlight, built a continent, and synthesized a global ecosystem. solar astronomy, terraformation, nanotechnology, and genetic engineering allowed elohim to adapt life to earth's thermal and chemical makeup. yahweh gave materialistic explanations of the garden of eden, a large laboratory that was based on an artificially constructed continent ; noah's ark, a spaceship that preserved dna that
observed around other star systems, though in far less detail. astrobiology earth is the only body known to contain life ; this results in geologic and atmospheric life signatures apart from the organisms themselves. methane observed on mars has been postulated but cannot be definitively ascribed as a biosignature. multiple processes of non - biological methane generation are seen on earth as well. the detection of biomarkers or biosignatures on other worlds is an active area of research. although oxygen and / or ozone are generally considered strong signs of life, these too have alternate, non - biological explanations. the galileo mission, while performing a gravity assist flyby of earth, treated the planet as an extraterrestrial one, in a test of life detection techniques. conversely, the deep impact mission's high resolution imager, intended for examining comets starting from great distances, could be repurposed for exoplanet observations in its epoxi extended mission. conversely, detection of life entails identification of those processes favoring or preventing life. this occurs primarily via study of earth life and earth processes, though this is in effect a sample size of one. care must be taken to avoid observation and selection biases. astrobiologists consider alternative chemistries for life, and study on earth extremophile organisms that expand the potential definitions of habitable worlds. see also europlanet list of mars analogs lunar crater national natural landmark terrestrial analogue sites bibliography murray, b. earthlike planets ( 1981 ) w. h. freeman and company references external links nasa astrobiology astrobiology magazine - comparative planetology laboratory for comparative planetology, vernadsky institute planetary science space science
than the sun. however, the lives of the stars which the planets orbit are so short that intelligent extraterrestrial life is not very likely. a and b type stars were surveyed for the research which involved nasa's spitzer space telescope, the two micron all - sky survey, and astronomers from the center for astrophysics harvard & smithsonian ( cfa ) and the national optical astronomy observatory ( noao ). gravitational wave detection in a span of three months seventeen pulsars - millisecond pulsars - were discovered in the milky way galaxy. unknown high - energy sources detected by nasa's fermi gamma - ray space telescope revealed the existence and location of the pulsars. this is an accelerated pace for discovering such objects, which could be used as a " galactic gps " to detect gravitational waves passing near earth. although the pulsars are relatively old they have not slowed, because these millisecond pulsars have been kept rapidly rotating and renewed with material by accretion of matter from a companion star. the combined total of 60 known millisecond pulsars creates an all - sky array. precise monitoring of timing changes, utilizing this array, may allow the first direct detection of gravitational waves. temperature, gravity, and planet migration according to the classical model of planet migration, the earth should have been drawn into the sun as a planetoid, along with other planets. however, a new theoretical model was presented at the annual meeting. it shows that the assumption that a proto - planetary disk around a star has constant temperature across its whole span is erroneous. portions of the disk are actually opaque and so cannot cool quickly by radiating heat out to space. this creates temperature differences across the disk, and these differences have not been accounted for before in models that were applied. the differences in temperature counteract the natural gravitational pull of the sun ( or proto - sun ), at a crucial time during planet formation. kepler space telescope on 4 january 2010, the kepler space telescope announced the discovery of its first five new exoplanets, named kepler - 4b, 5b, 6b, 7b and 8b. these exoplanets had sizes comparable to that of neptune to
</ctx>
The following are multiple choice questions (with answers) about STEM.
Is there intelligent life on other planets? For years, scientists said "no" or "we don't know." But today this is changing. Seth Shostak and Alexandra Barnett are astronomers. They believe intelligent life exists somewhere in the universe. They also think we will soon contact these beings. Why do Shostak and Barnett think intelligent life exists on other planets? The first reason is time. Scientists believe the universe is about 12 billion years old. "This is long enough for other planets to have intelligent life," say Shostak and Barnett. The second reason is size--the universe is huge. "Tools like the Hubble Telescope have shown that there are at least 100 billion galaxies," says Shostak. "And our galaxy, the Milky Way, has at least 100 billion stars. Some planets going around these stars might be similar to Earth." In the past, it was hard to look for signs of intelligent life in the universe. But now, powerful telescopes allow scientists to discover smaller planets--the size of Mars or Earth--in other solar systems. These planets might have intelligent life. Have beings from space already visited Earth? "Probably not," says Shostak. "It's a long way away. However, intelligent beings may contact us in other ways, such as radio signals. In fact, they may be trying to communicate with us now, but we don't have the right tools to receive their messages. However, this is changing. By 2025, we could make contact with other life forms in our universe and we might help each other." Why haven't beings from space visited us yet, according to Shostak?
A. They're afraid of us.
B. It's a long way away.
C. They don't want to see us.
D. They don't know how to use radio signals.
Answer: | [
"They're afraid of us.",
"It's a long way away.",
"They don't want to see us.",
"They don't know how to use radio signals."
] | B | null |
m1_2748 | m1 | <ctx>
k - means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean ( cluster centers or cluster centroid ), serving as a prototype of the cluster. this results in a partitioning of the data space into voronoi cells. k - means clustering minimizes within - cluster variances ( squared euclidean distances ), but not regular euclidean distances, which would be the more difficult weber problem : the mean optimizes squared errors, whereas only the geometric median minimizes euclidean distances. for instance, better euclidean solutions can be found using k - medians and k - medoids. the problem is computationally difficult ( np - hard ) ; however, efficient heuristic algorithms converge quickly to a local optimum. these are usually similar to the expectation - maximization algorithm for mixtures of gaussian distributions via an iterative refinement approach employed by both k - means and gaussian mixture modeling. they both use cluster centers to model the data ; however, k - means clustering tends to find clusters of comparable spatial extent, while the gaussian mixture model allows clusters to have different shapes. the unsupervised k - means algorithm has a loose relationship to the k - nearest neighbor classifier, a popular supervised machine learning technique for classification that is often confused with k - means due to the name. applying the 1 - nearest neighbor classifier to the cluster centers obtained by k - means classifies new data into the existing clusters. this is known as nearest centroid classifier or rocchio algorithm. description given a set of observations ( x1, x2, ..., xn ), where each observation is a d - dimensional real vector, k - means clustering aims to partition the n observations into k ( ≤ n ) sets s = { s1, s2, ..., sk } so as to minimize the within - cluster sum of squares ( wcss ) ( i. e. variance ). formally, the objective is to find : arg min over s of Σ_{i=1}^{k} Σ_{x ∈ s_i} ‖ x − μ_i ‖^2, where μ_i is the mean ( also called centroid ) of points in s_i, i. e. μ_i = ( 1 / | s_i | ) Σ_{x ∈ s_i} x, | s_i | is the size of s_i, and ‖ · ‖ is the usual l2 norm. this is equivalent to minimizing the pairwise squared deviations of points in the same
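The passage names the heuristic but stops short of showing it. A minimal sketch of the standard Lloyd iteration in Python (assuming numpy; names like `lloyd_kmeans` are our own) makes the two alternating steps concrete:

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centroids
    for _ in range(n_iter):
        # assignment step: each point joins the cluster with the nearest center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update step: each center moves to the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break  # assignments are stable; the WCSS objective cannot decrease further
        centers = new_centers
    return centers, labels
```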
in data mining, k - means + + is an algorithm for choosing the initial values ( or " seeds " ) for the k - means clustering algorithm. it was proposed in 2007 by david arthur and sergei vassilvitskii, as an approximation algorithm for the np - hard k - means problem β a way of avoiding the sometimes poor clusterings found by the standard k - means algorithm. it is similar to the first of three seeding methods proposed, in independent work, in 2006 by rafail ostrovsky, yuval rabani, leonard schulman and chaitanya swamy. ( the distribution of the first seed is different. ) background the k - means problem is to find cluster centers that minimize the intra - class variance, i. e. the sum of squared distances from each data point being clustered to its cluster center ( the center that is closest to it ). although finding an exact solution to the k - means problem for arbitrary input is np - hard, the standard approach to finding an approximate solution ( often called lloyd's algorithm or the k - means algorithm ) is used widely and frequently finds reasonable solutions quickly. however, the k - means algorithm has at least two major theoretic shortcomings : first, it has been shown that the worst case running time of the algorithm is super - polynomial in the input size. second, the approximation found can be arbitrarily bad with respect to the objective function compared to the optimal clustering. the k - means + + algorithm addresses the second of these obstacles by specifying a procedure to initialize the cluster centers before proceeding with the standard k - means optimization iterations. with the k - means + + initialization, the algorithm is guaranteed to find a solution that is o ( log k ) competitive to the optimal k - means solution. example of a suboptimal clustering to illustrate the potential of the k - means algorithm to perform arbitrarily poorly with respect to the objective function of minimizing the sum of squared distances of cluster points to the centroid of their assigned clusters, consider the example of four points in r2 that form an axis - aligned rectangle whose width is greater than its height.
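A sketch of the seeding procedure itself (Python with numpy; the helper name is ours, and the D²-weighting detail comes from the published algorithm rather than this excerpt):

```python
import numpy as np

def kmeanspp_init(X, k, seed=0):
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]  # first seed: uniform at random
    for _ in range(k - 1):
        # squared distance from every point to its nearest already-chosen center
        d2 = ((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(axis=2).min(axis=1)
        # next seed: sampled with probability proportional to that squared distance
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```

The cumulative-distance selection described for k-medoids++ further down is the same idea: drawing a random number between zero and the summed distance is exactly sampling proportionally to d².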
a constant and we have a characterization of distance between points which may not reflect our data set in meaningful ways. one way to mitigate the effects of the curse of dimensionality is by using spectral clustering. spectral clustering achieves a more appropriate analysis by reducing the dimensionality of the data using principal component analysis, projecting the data points into the lower dimensional subspace, and then running the chosen clustering algorithm as before. one thing to note, however, is that as with any dimension reduction we lose information, so one must weigh in advance how much reduction is necessary before too much data is lost. high dimensionality doesn't only affect distance metrics however, as the time complexity also increases with the number of features. k - medoids is sensitive to the initial choice of medoids, as they are usually selected randomly. depending on how such medoids are initialized, k - medoids may converge to different local optima, resulting in different clusters and quality measures, meaning k - medoids might need to run multiple times with different initializations, resulting in a much higher run time. one way to counterbalance this is to use k - medoids + +, an alternative to k - medoids similar to its k - means counterpart, k - means + +, which chooses the initial medoids based on a probability distribution, as a sort of " informed randomness " or educated guess if you will. if such medoids are chosen with this rationale, the result is an improved runtime and better performance in clustering. the k - medoids + + algorithm is described as follows : ( 1 ) the initial medoid is chosen randomly among all of the spatial points. ( 2 ) for each spatial point x, compute the distance between x and the nearest chosen medoid, termed d ( x ), and sum all of these distances. ( 3 ) the next medoid is determined using a weighted probability distribution : specifically, a random number between zero and the summed distance is chosen and the corresponding spatial point is the next medoid. ( 4 ) step ( 2 ) and step ( 3 ) are repeated until k medoids have been chosen. now that we have appropriate first selections for medoids, the normal variation of k
. better bounds are proven for simple cases. for example, it is shown that the running time of the k - means algorithm is polynomially bounded for points in an integer lattice. lloyd's algorithm is the standard approach for this problem. however, it spends a lot of processing time computing the distances between each of the k cluster centers and the n data points. since points usually stay in the same clusters after a few iterations, much of this work is unnecessary, making the naive implementation very inefficient. some implementations use caching and the triangle inequality in order to create bounds and accelerate lloyd's algorithm. variations jenks natural breaks optimization : k - means applied to univariate data k - medians clustering uses the median in each dimension instead of the mean, and this way minimizes the l1 norm ( taxicab geometry ). k - medoids ( also : partitioning around medoids, pam ) uses the medoid instead of the mean, and this way minimizes the sum of distances for arbitrary distance functions. fuzzy c - means clustering is a soft version of k - means, where each data point has a fuzzy degree of belonging to each cluster. gaussian mixture models trained with the expectation - maximization algorithm ( em algorithm ) maintain probabilistic assignments to clusters, instead of deterministic assignments, and multivariate gaussian distributions instead of means. k - means + + chooses initial centers in a way that gives a provable upper bound on the wcss objective. the filtering algorithm uses kd - trees to speed up each k - means step. some methods attempt to speed up each k - means step using the triangle inequality. escape local optima by swapping points between clusters. the spherical k - means clustering algorithm is suitable for textual data. hierarchical variants such as bisecting k - means, x - means clustering and g - means clustering repeatedly split clusters to build a hierarchy, and can also try to automatically determine the optimal number of clusters in a dataset. internal cluster evaluation measures such as cluster silhouette can be helpful at determining the number of clusters. minkowski weighted k - means automatically calculates cluster specific feature weights, supporting the intuitive idea that a feature may have
</ctx>
The following are multiple choice questions (with answers) about STEM.
K-Means:
A. always converges to the same solution, no matter the initialization
B. always converges, but not always to the same solution
C. doesn't always converge
D. can never converge
Answer: | [
"always converges to the same solution, no matter the initialization",
"always converges, but not always to the same solution",
"doesn't always converge",
"can never converge"
] | B | Let's break down the options for K-Means:
1. **Always converges to the same solution, no matter the initialization**:
- **Incorrect**. K-Means can end up with different clusters based on where you start the centroids.
2. **Always converges, but not always to the same solution**:
- **Correct**. K-Means will always reach a point where the centroids stabilize, but the final clustering can vary because of initial centroid placement (a short demo follows this list).
3. **Doesn't always converge**:
- **Incorrect**. K-Means is guaranteed to converge eventually, as it keeps refining the centroids.
4. **Can never converge**:
- **Incorrect**. This is not true; K-Means will always converge to a solution, though it might not be the best one.
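The initialization sensitivity is easy to see empirically. A tiny sketch (assuming scikit-learn is available; the wide-rectangle data set mirrors the suboptimal-clustering example from the k-means++ literature):

```python
import numpy as np
from sklearn.cluster import KMeans

# four corners of a wide rectangle: k=2 has a good and a bad local optimum
X = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 1.0], [10.0, 1.0]])
for seed in range(4):
    km = KMeans(n_clusters=2, init="random", n_init=1, random_state=seed).fit(X)
    # every run converges, but different seeds can yield different partitions:
    # splitting left/right gives inertia 1.0, splitting top/bottom gives 100.0
    print(seed, km.labels_, round(km.inertia_, 2))
```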
### So, the only correct answer is:
- **Always converges, but not always to the same solution.**
This captures the essence of K-Means well! |
mmlu_aux_3819 | mmlu_auxiliary | <ctx>
smoking was clearly correlated with lung cancer, some, such as fisher and yerushalmy, believed that the two variables were confounded and argued against the hypothesis that cigarettes caused the cancer. the authors then explain how causal reasoning ( as developed in the rest of the book ) can be used to argue that cigarettes do indeed cause cancer. chapter 6 : paradoxes galore! this chapter examines several paradoxes, including the monty hall problem, simpson's paradox, berkson's paradox and lord's paradox. the authors show how these paradoxes can be resolved using causal reasoning. chapter 7 : beyond adjustment : the conquest of mount intervention this chapter looks at the 'second rung' of the ladder of causation introduced in chapter 1. the authors describe how to use causal diagrams to ascertain the causal effect of performing interventions ( e. g. smoking ) on outcomes ( such as lung cancer ). the 'front - door criterion' and the 'do - calculus' are introduced as tools for doing this. the chapter finishes with two examples, used to introduce the use of instrumental variables to estimate causal relationships. the first is john snow's discovery that cholera is caused by unsanitary water supplies. the second is the relationship between cholesterol levels and likelihood of a heart attack. chapter 8 : counterfactuals : mining worlds that could have been this chapter examines the third rung of the ladder of causation : counterfactuals. the chapter introduces 'structural causal models', which allow reasoning about counterfactuals in a way that traditional ( non - causal ) statistics does not. then, the applications of counterfactual reasoning are explored in the areas of climate science and the law. chapter 9 : mediation : the search for mechanism this chapter discusses mediation : the mechanism by which a cause leads to an effect. the authors discuss the work of barbara stoddard burks on the causes of intelligence of children, the 'algebra for all' policy by chicago public schools, and the use of tourniquets to treat combat wounds. chapter 10 : big data, artificial intelligence and the big questions the final chapter discusses the use of causal reasoning in big data and
, as well as its many variations and nuances, fails to capture the intuition behind " probability raising ", which is inherently a manipulative or counterfactual notion. the correct formulation, according to pearl, should read : pr ( effect | do ( cause ) ) > pr ( effect | do ( ~ cause ) ) where do ( c ) stands for an external intervention that compels the truth of c. the conditional probability pr ( e | c ), in contrast, represents a probability resulting from a passive observation of c, and rarely coincides with pr ( e | do ( c ) ). indeed, observing the barometer falling increases the probability of a storm coming, but does not " cause " the storm ; were the act of manipulating the barometer to change the probability of storms, the falling barometer would qualify as a cause of storms. in general, formulating the notion of " probability raising " within the calculus of do - operators resolves the difficulties that probabilistic causation has encountered in the past half - century ( cartwright, n., 1989, nature's capacities and their measurement, clarendon press, oxford ), among them the infamous simpson's paradox, and clarifies precisely what relationships exist between probabilities and causation. establishing cause and effect, even with this relaxed reading, is notoriously difficult, expressed by the widely accepted statement " correlation does not imply causation ". for instance, the observation that smokers have a dramatically increased lung cancer rate does not establish that smoking must be a cause of that increased cancer rate : maybe there exists a certain genetic defect which both causes cancer and a yearning for nicotine ; or even perhaps nicotine craving is a symptom of very early - stage lung cancer which is not otherwise detectable. scientists are always seeking the exact mechanisms by which event a produces event b. but scientists also are comfortable making a statement like, " smoking probably causes cancer, " when the statistical correlation between the two, according to probability theory, is far greater than chance. in this dual approach, scientists accept both deterministic and probabilistic causation in their terminology. in statistics, it
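The gap between pr(e | c) and pr(e | do(c)) can be shown with a small simulation of the barometer example. In this sketch, the structural model and all probabilities are our own toy assumptions:

```python
import random

# toy structural model: low pressure causes both a falling barometer and storms
def sample(do_barometer=None):
    low_pressure = random.random() < 0.3
    # observation: the barometer tracks pressure; intervention: we force its value
    barometer = low_pressure if do_barometer is None else do_barometer
    storm = random.random() < (0.8 if low_pressure else 0.1)
    return barometer, storm

def p_storm(do=None, n=200_000):
    hits = total = 0
    for _ in range(n):
        b, s = sample(do_barometer=do)
        if do is not None or b:  # condition on b=True when merely observing
            total += 1
            hits += s
    return hits / total

print("pr(storm | barometer falls)     ~", round(p_storm(), 3))         # ~0.80
print("pr(storm | do(barometer falls)) ~", round(p_storm(do=True), 3))  # ~0.31
```

Conditioning inherits the confounder's influence; intervening severs it, so forcing the barometer leaves the storm rate at its baseline.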
##trolled for potentially equally or unequally probable but still possible causes ( some non - 100 % degree of confidence, known as a confidence interval in inferential statistics ). incidentally, this problem of hidden variables forms the foundation for the scientific method — which is a solution to this problem of hidden variables. only via the scientific method can one be absolutely sure that some true antecedent causes a conclusion to also be true. we say a cause causes an effect if and only if there exists a 100 % perfect correlation ( positive or negative ) between the cause and the effect when and only when all other possible variables are controlled for ( a 100 % degree of confidence ). in these cases these effects are called dependent variables, and causes are called independent variables ( so named because the dependent variable ( s ) depend on the independent variable ( s ), and the independent variable ( s ) do not depend on any other variable ). indeed, when possible hidden variables cannot be controlled for, such as is the case of quantum physics, causality is indeterminate ( quantum indeterminacy ). this constitutes one of the biggest open problems in physics today, the interpretation of quantum physics and its reconciliation with the causal structure of special relativity ( einstein - podolsky - rosen paradox, rietdijk–putnam argument ). theories of causality also play important roles in debates of both free will and determinism. the empirical observations predicted by special relativity suggest that anything that could happen, already has happened. depending on your philosophy of mathematics, since special relativity is a continuous mathematical model, the experimental confirmation of predicted effects described by the possibly fictional and conceptually - reliable - and - informative theory have implications for an ontology of time, which touches on the metaphysics of time which is intimately tied up with notions of causality and reasoning about cause and effect. and the ( for now ) indeterminism of quantum physics suggests the possibility of free will in a deterministic reality. to quote hameroff, " quantum state reductions seem to involve temporal non - locality, able to refer quantum information both forward and backward in what we perceive as time, enabling real - time conscious causal action. quantum brain biology and orch
causal inference is the process of determining the independent, actual effect of a particular phenomenon that is a component of a larger system. the main difference between causal inference and inference of association is that causal inference analyzes the response of an effect variable when a cause of the effect variable is changed. the study of why things occur is called etiology, and can be described using the language of scientific causal notation. causal inference is said to provide the evidence of causality theorized by causal reasoning. causal inference is widely studied across all sciences. several innovations in the development and implementation of methodology designed to determine causality have proliferated in recent decades. causal inference remains especially difficult where experimentation is difficult or impossible, which is common throughout most sciences. the approaches to causal inference are broadly applicable across all types of scientific disciplines, and many methods of causal inference that were designed for certain disciplines have found use in other disciplines. this article outlines the basic process behind causal inference and details some of the more conventional tests used across different disciplines ; however, this should not be mistaken as a suggestion that these methods apply only to those disciplines, merely that they are the most commonly used in that discipline. causal inference is difficult to perform and there is significant debate amongst scientists about the proper way to determine causality. despite other innovations, there remain concerns of misattribution by scientists of correlative results as causal, of the usage of incorrect methodologies by scientists, and of deliberate manipulation by scientists of analytical results in order to obtain statistically significant estimates. particular concern is raised in the use of regression models, especially linear regression models. definition inferring the cause of something has been described as : "... reason [ ing ] to the conclusion that something is, or is likely to be, the cause of something else ". " identification of the cause or causes of a phenomenon, by establishing covariation of cause and effect, a time - order relationship with the cause preceding the effect, and the elimination of plausible alternative causes. " methodology general causal inference is conducted via the study of systems where the measure of one variable is suspected to affect the measure of another. causal inference is conducted with regard to the scientific method. the first step of causal inference is to
</ctx>
The following are multiple choice questions (with answers) about STEM.
Another common type of reasoning is the search for causes and results.We want to know whether cigarettes really do cause lung cancer, what causes malnutrition, the decay of cities, or the decay of teeth.We are equally interested in effects: what is the effect of sulphur or lead in the atmosphere, of oil spills and raw sewage in rivers and the sea, of staying up late on the night before an examination? Causal reasoning may go from cause to effect or from effect to cause.Either way, we reason from what we know to what we want to find out.Sometimes we reason from an effect to a cause and then on to another effect.Thus, if we reason that because the lights have gone out, the refrigerator won't work, we first relate the effect (lights out) to the cause (power off) and then relate that cause to another effect (refrigerator not working).This kind of reasoning is called, for short, effect to effect.It is quite common to reason through an extensive chain of causal relations.When the lights go out we might reason in the following causal chain: lights out--power off--refrigerator not working--temperature will rise--milk will sour.In other words, we diagnose a succession of effects from the power failure, each becoming the cause of the next. Causes are classified as necessary, sufficient, or contributory.A necessary cause is one which must be present for the effect to occur, as combustion is necessary to drive a gasoline engine.A sufficient cause is one which can produce an effect unaided, though there may be more than one sufficient cause; a dead battery is enough to keep a car from starting, but faulty spark plugs or an empty gas tank will have the same effect.A contributory cause is one which helps to produce an effect but cannot do so by itself, as running through a red light may help cause an accident, though other factors -- pedestrians or other cars at the crossroads -- must also be present. In establishing or refuting a causal relation it is usually necessary to show the process by which the alleged cause produces the effect.Such an explanation is called a causal process. A necessary cause is _ .
A. one without which it is impossible for the effect to occur
B. one of the causes that can produce the effect
C. one that is enough to make the effect occur
D. none of them
Answer: | [
"one without which it is impossible for the effect to occur",
"one of the causes that can produce the effect",
"one that is enough to make the effect occur",
"none of them"
] | A | null |
aquarat_43236 | aquarat | <ctx>
a profitable trading system. if he sets his average win at around $ 400 per trade ( this can be done using a proper exit strategy ) and manages / limits the losses to around $ 100 per trade, the expectancy is around : expectancy = ( trading system winning probability * average win ) − ( trading system losing probability * average loss ) = ( 0. 4 x 400 ) − ( 0. 6 x 100 ) = $ 160 − $ 60 = $ 100 net average profit per trade ( of course commissions are not included in the computations ). therefore the key to successful money management is maximizing every winning trade and minimizing losses ( regardless of whether you have a winning or losing trading system, i. e. even when % loss probability > % win probability ). see also active management alpha capture system asset management company corporate governance exchange fund exchange - traded fund factor investing financial management fund governance investment investment style list of asset management firms low - volatility investing momentum investing passive management pension fund performance attribution portfolio private equity quantitative investing securities lending separately managed account sovereign wealth fund stockbroker style investing style drift returns - based style analysis tracking error transition management value investing outline of management
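The expectancy formula above fits in a few lines of Python (a sketch; the function name is our own):

```python
def expectancy(p_win, avg_win, avg_loss):
    # expected net profit per trade; commissions are ignored, as in the text
    return p_win * avg_win - (1 - p_win) * avg_loss

print(expectancy(0.4, 400, 100))  # 100.0 -- matches the $100 worked example
```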
. 11 ( plus the $ 100 initial wager ). see the article odds for conversion equations.
##n. when all bets lose, the total loss is b ( 2^n - 1 ). the probability the gambler does not lose all n bets is 1 - q^n. in all other cases, the gambler wins the initial bet ( b ). thus, the expected profit per round is ( 1 - q^n ) b - q^n b ( 2^n - 1 ) = b ( 1 - ( 2q )^n ). whenever q > 1 / 2, the expression 1 - ( 2q )^n < 0 for all n > 0. thus, for all games where a gambler is more likely to lose than to win any given bet, that gambler is expected to lose money, on average, each round. increasing the size of wager for each round per the martingale system only serves to increase the average loss. suppose a gambler has a 63 - unit gambling bankroll. the gambler might bet 1 unit on the first spin. on each loss, the bet is doubled. thus, taking k as the number of preceding consecutive losses, the player will always bet 2^k units. with a win on any given spin, the gambler will net 1 unit over the total amount wagered to that point. once this win is achieved, the gambler restarts the system with a 1 unit bet. with losses on all of the first six spins, the gambler loses a total of 63 units. this exhausts the bankroll and the martingale cannot be continued. in this example, the probability of losing the entire bankroll and being unable to continue the martingale is equal to the probability of 6 consecutive losses : ( 10 / 19 )^6 = 2. 1256 %. the probability of winning is equal to 1 minus the probability of losing 6 times : 1 - ( 10 / 19 )^6 = 97. 8744 %. the expected amount won is ( 1 x 0. 978744 ) = 0. 978744. the expected amount lost is ( 63 x 0. 021256 ) = 1. 339118. thus, the total expected value for each application of the betting system is ( 0. 978744 - 1. 339118 ) = -0. 360374. in a unique circumstance, this strategy can make sense. suppose the gambler possesses exactly 63 units but desperately needs a total of 64. assuming q
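The bankroll arithmetic is easy to verify exactly (a short Python sketch of our own; variable names are ours):

```python
from fractions import Fraction

q = Fraction(10, 19)                 # probability of losing one even-money roulette bet
p_ruin = q ** 6                      # six straight losses exhaust the 63-unit bankroll
ev = (1 - p_ruin) * 1 - p_ruin * 63  # win 1 unit, or lose all 63
print(float(p_ruin))                 # 0.02125..., i.e. ~2.1256%
print(float(ev))                     # ~-0.36037, matching the text's expected value
```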
be preferred ). the break - even algorithm the break - even algorithm instructs one to rent for 9 days and buy skis on the morning of day 10 if one is still up for skiing. if one has to stop skiing during the first 9 days, it costs the same as what one would pay if one had known the number of days one would go skiing. if one has to stop skiing after day 10, one's cost is $ 19 which is 90 % more than what one would pay if one had known the number of days one would go skiing in advance. this is the worst case for the break - even algorithm. the break - even algorithm is known to be the best deterministic algorithm for this problem. the randomized algorithm a person can flip a coin. if it comes up heads, she buys skis on day eight ; otherwise, she buys skis on day 10. this is an instance of a randomized algorithm. the expected cost is at most 80 % more than what the person would pay if she had known the number of days she would go skiing, regardless of how many days she skis. in particular, if the person skis for 10 days, her expected cost is 1 / 2 [ 7 + 10 ] + 1 / 2 [ 9 + 10 ] = 18 dollars, only 80 % excess instead of 90 %. a randomized algorithm can be understood as a composition of different algorithms, each one of which occurs with a given probability. we define the expected competitive ratio on a given instance i as the expected cost of the algorithm on i divided by the optimal offline cost on i. consequently, the competitive ratio of a randomized algorithm is given by the worst value of this expected ratio over all given instances. in the case of the coin flipping ski - rental, we note that the randomized algorithm has 2 possible branches : if the coin comes up heads, we buy on day 8, otherwise we buy on day 10. we may call the branches a1 and a2, respectively. the cost of a1 is i for i ≤ 7 and 17 for i ≥ 8 ; the cost of a2 is i for i ≤ 9 and 19 for i ≥ 10. therefore, the competitive ratio of the randomized ski - rental coin flipping algorithm is 1. 8. the best randomized algorithm against an oblivious adversary is to choose some day i at random according to the following distribution p
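The 1.8 ratio can be reproduced mechanically (a Python sketch with our own helper names; rental is $1/day and skis cost $10, as in the example):

```python
BUY = 10  # purchase price; renting costs $1 per day

def cost(buy_day, ski_days):
    # rent until buy_day, then buy on the morning of day buy_day
    return ski_days if ski_days < buy_day else (buy_day - 1) + BUY

worst = 0.0
for days in range(1, 1000):
    expected = 0.5 * cost(8, days) + 0.5 * cost(10, days)  # fair-coin mix of branches
    opt = min(days, BUY)                                   # optimal offline cost
    worst = max(worst, expected / opt)
print(worst)  # 1.8, attained once the skier lasts 10 or more days
```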
</ctx>
The following are multiple choice questions (with answers) about STEM.
A cycle is bought for Rs.800 and sold for Rs.1080, find the gain percent?
A. A)22
B. B)20
C. C)35
D. D)88
Answer: | [
"A)22",
"B)20",
"C)35",
"D)88"
] | C | Gain = 1080 − 800 = 280
800 ---- 280
100 ---- ? => (280/800) × 100 = 35%
Answer: C |
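As a quick Python check of the gain-percent arithmetic:

```python
cost, sale = 800, 1080
print((sale - cost) / cost * 100)  # 35.0 -> option C
```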
m1_5819 | m1 | <ctx>
```c
#include <stdint.h>

/* 64-bit rotates; the rotate amounts 8 and 3 are those described below */
#define ROR(x, r) ((x >> (r)) | (x << (64 - (r))))
#define ROL(x, r) ((x << (r)) | (x >> (64 - (r))))
/* one speck round: rotate-add-xor on x, then rotate-xor on y */
#define R(x, y, k) (x = ROR(x, 8), x += y, x ^= k, y = ROL(y, 3), y ^= x)
#define ROUNDS 32

void encrypt(uint64_t ct[2], uint64_t const pt[2], uint64_t const K[2])
{
    uint64_t y = pt[0], x = pt[1], b = K[0], a = K[1];

    R(x, y, b);
    for (int i = 0; i < ROUNDS - 1; i++) {
        R(a, b, i);   /* key schedule step: derives the next round key in b */
        R(x, y, b);   /* state update under the fresh round key */
    }

    ct[0] = y;
    ct[1] = x;
}
```
note that this code computes the round keys ( key schedule ) on - demand. in practice, as with other block ciphers it is common for implementations to compute the round keys just once and cache them, rather than recomputing them for every block encrypted or decrypted. although, as the authors point out, " given that small code size was a major goal of the design, it made sense to reuse the round function for round key generation. this approach enables on - the - fly round key generation for microcontroller implementations, using just the round function code, very little rom, and no ram beyond what is required to hold the key and plaintext. " for 16 - bit words ( speck32 ), the rotates are 7 bits right and 2 bits left ; for all other word sizes, they are 8 and 3 as shown here. if the key is more than 2 words long, there are 2 or 3 a values, which are used in rotation. endianness the original speck paper does not explicitly state the endianness of bytes when the plaintext block is interpreted as the two words used in the cipher algorithm. the test vectors given in the paper suggest big - endian order. however, the authors of the algorithm have advised some implementers that little - endian byte order is to be used for keys, plaintext, and ciphertext, and the practice was accepted by others. performance according to ecry
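a word - level python model of the same speck128 / 128 routine can be handy for experimenting. the sketch below is my own port of the c code above ( the function names and the sample key / block values are mine, not from the paper ), with a decrypt that precomputes the round keys and runs the inverse round :

```python
MASK = (1 << 64) - 1

def ror(x, r): return ((x >> r) | (x << (64 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (64 - r))) & MASK

def round_keys(k, rounds=32):
    """expand a 2-word key on the fly, as the c code does; k = (K[0], K[1])."""
    b, a = k
    keys = [b]
    for i in range(rounds - 1):
        a = (ror(a, 8) + b) & MASK   # R(a, b, i): key-schedule half
        a ^= i
        b = rol(b, 3) ^ a
        keys.append(b)
    return keys

def encrypt(pt, keys):
    y, x = pt                        # pt = (pt[0], pt[1]) as in the c code
    for rk in keys:
        x = ((ror(x, 8) + y) & MASK) ^ rk
        y = rol(y, 3) ^ x
    return y, x

def decrypt(ct, keys):
    y, x = ct
    for rk in reversed(keys):        # undo each round, last round key first
        y = ror(y ^ x, 3)
        x = rol(((x ^ rk) - y) & MASK, 8)
    return y, x

keys = round_keys((0x0706050403020100, 0x0F0E0D0C0B0A0908))  # sample key words
pt = (0x7469206564616D20, 0x6C61766975716520)                # arbitrary sample block
assert decrypt(encrypt(pt, keys), keys) == pt                # round-trip check
```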
if ( c _ f07b! = 2 ) display * setswall 1 : 1 : 1 : 80 : * bgcolor = 2, * color = 15 : * p49 : 1, " 7 - find " else display * setswall 1 : 1 : 1 : 80 : * bgcolor = 7, * color = 0 : * p49 : 1, " 7 - find " endif staterest mystate trap get _ pro noreset if f7 endif if ( show _ filter and this _ filter and c _ custno < > " mag " ) loadmod " filter " pack pass _ id with " qed ", qed _ id1, blanks move " filter display ( f6 ) " to pass _ desc set c _ bigflt call run _ filt using pass _ id, pass _ desc, " 432 " unload " filter " clear this _ filter endif references external links sunbelt implementation of pl / b ansi pl / b standards committee mmcc pl / b programming notebook db / c dx, databus, and pl / b overview databus simplified user guide procedural programming languages cross - platform software programming languages created in 1972 structured programming languages
```python
              >> 0) & 0x1f   # tail of a truncated expression
    a = (aek >> 64) & m
    b = (aek >> 32) & m
    c = (aek >> 0) & m
    kr = (k >> (32 + 64 * (3 - ri % 4))) & m
    kl = (k >> (0 + 64 * (3 - ri % 4))) & m
    x = op[0](l, kl)
    y = op[2](op[1](rol(x, s1), x), a)
    z = op[5](op[4](op[3](rol(y, s2), y), b), kr)
    return op[8](op[7](op[6](rol(z, s3), z), c), r), l

def m8_keyexpand(dk, kek, adks, aeks):
    """key expansion.
    dk: 64-bit data key
    kek: 256-bit key expansion key
    adks: algorithm decision keys
    aeks: algorithm expansion keys
    """
    l = (dk >> 32) & m
    r = (dk >> 0) & m
    k = 0
    for i in range(8):
        l, r = m8_round(l, r, i, kek, adks[i], aeks[i])
        k |= (l << (32 * (7 - i)))
    return k

def m8_encrypt(data, n, dk, kek, adks, aeks):
    """encrypt one block with m8.
    data: 64-bit input block
    n: number of rounds (must be >= 8)
    dk: 64-bit data key
    kek: 256-bit key expansion key
    adks: a list of n 24-bit algorithm decision keys
    aeks: a list of n 96-bit algorithm expansion keys
    """
    ek = m8_keyexpan
```
in cryptography, a round or round function is a basic transformation that is repeated ( iterated ) multiple times inside the algorithm. splitting a large algorithmic function into rounds simplifies both implementation and cryptanalysis. for example, encryption using an oversimplified three - round cipher can be written as c = r3 ( r2 ( r1 ( p ) ) ), where c is the ciphertext and p is the plaintext. typically, rounds are implemented using the same function, parameterized by the round constant and, for block ciphers, the round key from the key schedule. parameterization is essential to reduce the self - similarity of the cipher, which could lead to slide attacks. increasing the number of rounds " almost always " protects against differential and linear cryptanalysis, as for these tools the effort grows exponentially with the number of rounds. however, increasing the number of rounds does not always make weak ciphers into strong ones, as some attacks do not depend on the number of rounds. the idea of an iterative cipher using repeated application of simple non - commuting operations producing diffusion and confusion goes as far back as 1945, to the then - secret version of c. e. shannon's work " communication theory of secrecy systems " ; shannon was inspired by mixing transformations used in the field of dynamical systems theory ( cf. horseshoe map ). most of the modern ciphers use iterative design with a number of rounds usually chosen between 8 and 32 ( with 64 and even 80 used in cryptographic hashes ). for some feistel - like cipher descriptions, notably the one of the rc5, a term " half - round " is used to define the transformation of part of the data ( a distinguishing feature of the feistel design ). this operation corresponds to a full round in traditional descriptions of feistel ciphers ( like des ). round constants inserting round - dependent constants into the encryption process breaks the symmetry between rounds and thus thwarts the most obvious slide attacks. the technique is a standard feature of most modern block ciphers. however, a poor choice of round constants or unintended interrelations between the constants and other cipher components could still allow slide attacks ( e. g., attacking the initial version of the format -
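as an illustration of round constants breaking the symmetry between rounds, here is a deliberately toy 32 - bit construction ( my own, not any standardized cipher ) : with a fixed rc every round would be identical and the composition would invite slide attacks, so a distinct constant is injected per round :

```python
def toy_round(state, key, rc):
    """one round of a toy 32-bit iterated cipher: add key, inject the
    round constant, then a fixed rotation for diffusion."""
    state = (state + key) & 0xFFFFFFFF
    state ^= rc                          # round constant: no two rounds identical
    return ((state << 5) | (state >> 27)) & 0xFFFFFFFF

def toy_encrypt(block, key, rounds=8):
    for i in range(rounds):
        block = toy_round(block, key, rc=0x9E3779B9 ^ i)  # distinct per-round constant
    return block

print(hex(toy_encrypt(0x01234567, key=0xDEADBEEF)))
```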
</ctx>
The following are multiple choice questions (with answers) about STEM.
Tick the \textbf{false} statement regarding the DES round function.
A. There is an expansion operation $E$ from 32 to 48 bits.
B. A round key is XORed to an internal register.
C. There are $8$ identical S-boxes (substitution boxes) of size $6 \times 4$.
D. There is a permutation $P$ on 32-bits.
Answer: | [
"There is an expansion operation $E$ from 32 to 48 bits.",
"A round key is XORed to an internal register.",
"There are $8$ identical S-boxes (substitution boxes) of size $6 \\times 4$.",
"There is a permutation $P$ on 32-bits."
] | C | To determine the false statement regarding the DES (Data Encryption Standard) round function, let's analyze each statement step by step:
1. **Expansion Operation \( E \) from 32 to 48 bits**:
- In the DES round function, the 32-bit half-block is indeed expanded to 48 bits before the round key is XORed. This is done through a specific expansion permutation.
- **This statement is true.**
2. **A round key is XORed to an internal register**:
- In the DES algorithm, after the expansion of the 32-bit input, the 48-bit round key is XORed with the expanded data. This is a crucial step in the DES encryption process.
- **This statement is true.**
3. **There are 8 identical S-boxes (substitution boxes) of size \( 6 \times 4 \)**:
- DES employs 8 different S-boxes, each of which takes a 6-bit input and produces a 4-bit output. However, these S-boxes are not identical; they have different mappings. Therefore, the statement claiming they are identical is false.
- **This statement is false.**
4. **There is a permutation \( P \) on 32 bits**:
- After the S-box substitution, the 32-bit output is subjected to a permutation known as the P permutation. This step is part of the DES round function.
- **This statement is true.**
Based on the above analysis, the false statement is:
**3. There are 8 identical S-boxes (substitution boxes) of size \( 6 \times 4 \).** |
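The dataflow being tested above (expansion E to 48 bits, XOR with the round key, eight *different* 6-to-4-bit S-boxes, then the 32-bit permutation P) can be sketched in Python. The tables below are tiny stand-ins supplied by the caller; the real DES E, S-box, and P constants are published tables and are not reproduced here:

```python
def des_f_schematic(half_block_32, round_key_48, E, SBOXES, P):
    """Dataflow of the DES round function f(R, K).
    E: 48 source-bit positions (expansion); SBOXES: 8 *different* flat
    64-entry 6->4-bit lookup tables; P: 32 bit positions. Real DES
    tables must be supplied by the caller; none are hardcoded here."""
    bits = [(half_block_32 >> (31 - i)) & 1 for i in range(32)]
    expanded = [bits[e] for e in E]                        # 32 -> 48 bits
    keyed = [b ^ ((round_key_48 >> (47 - i)) & 1)          # XOR the round key
             for i, b in enumerate(expanded)]
    out_bits = []
    for box in range(8):                                   # 8 S-boxes, 6 bits each
        idx = int("".join(map(str, keyed[6 * box:6 * box + 6])), 2)
        val = SBOXES[box][idx]                             # 6 -> 4 bits
        out_bits += [(val >> (3 - j)) & 1 for j in range(4)]
    permuted = [out_bits[p] for p in P]                    # permutation P on 32 bits
    return sum(b << (31 - i) for i, b in enumerate(permuted))

# toy demo tables only (NOT the real DES constants)
E = list(range(32)) + list(range(16))                                  # a 32->48 expansion
SBOXES = [[(i * (b + 2)) % 16 for i in range(64)] for b in range(8)]   # 8 distinct boxes
P = list(range(31, -1, -1))                                            # a 32-bit permutation
print(hex(des_f_schematic(0x12345678, 0xABCDEF012345, E, SBOXES, P)))
```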
mmlu_aux_3326 | mmlu_auxiliary | <ctx>
##ft and gust fronts happening, but since this is a small scale feature, detection algorithms have been developed to point out convergence and divergence areas under a thunderstorm on the radar display. satellite imagery most populated areas of the earth are now well covered by weather satellites, which aid in the nowcasting of severe convective and tornadic storms. these images are available in the visible and infrared domains. the infrared ( ir : 10 - 13 µm ) images permit estimation of the top height of the clouds, according to the air mass soundings of the day, and the visible ( vis : 0. 5 - 1. 1 µm ) ones will show the shape of the storms by their brightness and shadow produced. meteorologists can extract information about the development stage and subsequent traits of thunderstorms by recognizing specific signatures in both domains. visible imagery permits the most detailed imagery whereas infrared imagery has the advantage of availability at night. sensors on satellites can also detect emissions from water vapor ( wv : 6 - 7 µm ), but mostly in the middle to upper levels of the troposphere, so thunderstorms are only seen after being well developed. it is, however, useful in convective storm prediction, as it illustrates the placement and movement of air masses and of moisture, as well as shortwaves and areas of vorticity and lifts. severe storms have a very strong updraft. the rising air parcels in that column accelerate and will overshoot the equilibrium level ( el ) before being pulled back by negative buoyancy. this means the cloud tops will reach higher levels than the surrounding cloud in the updraft region. this overshooting top will be noticeable by a colder temperature region in the thunderstorm on infrared images. another signature associated with this situation is the enhanced - v feature where the cold cloud tops forming at the overshooting top fan out in a v shape as cloud matter is blown downwind at that level. both features can be seen on visible satellite imagery, during daytime, by the shadows they cast on surrounding clouds. in multicellular storms and squall lines, the mid - level jet stream is often intersecting the line and its dry air introduced into the cloud is negatively buoyant.
because of this, forecasters watch developing storms closely for signs of eye formation. for storms with a clear eye, detection of the eye is as simple as looking at pictures from a weather satellite. however, for storms with a filled eye, or an eye completely covered by the central dense overcast, other detection methods must be used. observations from ships and hurricane hunters can pinpoint an eye visually, by looking for a drop in wind speed or lack of rainfall in the storm's center. in the united states, south korea, and a few other countries, a network of nexrad doppler weather radar stations can detect eyes near the coast. weather satellites also carry equipment for measuring atmospheric water vapor and cloud temperatures, which can be used to spot a forming eye. in addition, scientists have recently discovered that the amount of ozone in the eye is much higher than the amount in the eyewall, due to air sinking from the ozone - rich stratosphere. instruments sensitive to ozone perform measurements, which are used to observe rising and sinking columns of air, and provide indication of the formation of an eye, even before satellite imagery can determine its formation. one satellite study found eyes detected on average for 30 hours per storm. associated phenomena eyewall replacement cycles eyewall replacement cycles, also called concentric eyewall cycles, naturally occur in intense tropical cyclones, generally with winds greater than 185 km / h ( 115 mph ), or major hurricanes ( category 3 or higher on the saffir - simpson hurricane scale ). when tropical cyclones reach this intensity, and the eyewall contracts or is already sufficiently small ( see above ), some of the outer rainbands may strengthen and organize into a ring of thunderstorms, an outer eyewall, that slowly moves inward and robs the inner eyewall of its needed moisture and angular momentum. since the strongest winds are located in a cyclone's eyewall, the tropical cyclone usually weakens during this phase, as the inner wall is " choked " by the outer wall. eventually the outer eyewall replaces the inner one completely, and the storm can re - intensify. the discovery of this process was partially responsible for the end of the u. s. government's hurricane modification
winds, and tornadoes, as well as damage itself and flash flooding. the program was called skywarn, and the spotters were local sheriff's deputies, state troopers, firefighters, ambulance drivers, amateur radio operators, civil defense ( now emergency management ) spotters, storm chasers, and ordinary citizens. when severe weather is anticipated, local weather service offices request that these spotters look out for severe weather, and report any tornadoes immediately, so that the office can issue a timely warning. usually, spotters are trained by the nws on behalf of their respective organizations, and they report to them. the organizations activate public warning systems such as sirens and the emergency alert system, and forward the reports to the nws, which does directly disseminate information and warnings through its noaa weather radio all hazards network. there are more than 230, 000 trained skywarn weather spotters across the united states. in canada, a similar network of volunteer weather watchers, called canwarn, helps spot severe weather, with more than 1, 000 volunteers. in europe, several nations are organizing spotter networks under the auspices of skywarn europe and the tornado and storm research organisation ( torro ) has maintained a network of spotters in the united kingdom since the 1970s. storm spotters are needed because radar systems such as nexrad, and satellite images, do not detect tornadoes or hail, only indications that the storm has the potential. radar and satellite data interpretation will usually give a warning before there is any visual evidence of such events, but ground truth from an observer can either verify the threat or determine it is not imminent. the spotter's ability to see what these remote sensing devices cannot is especially important as distance from a radar site increases, because the radar beam becomes progressively higher in altitude further away from the radar, due to curvature of earth and the spread of the beam with distance. therefore, when far from a radar, only precipitations and velocities high in the storm are observed. the important areas might not then be sampled or the resolution of the data might be poor. also, some meteorological situations leading to tornadogenesis are not readily detectable by radar and on occasion tornado development may occur more
in the absence of moon and city lights ) absolutely dark, one would not be able to see the silhouette of an object against the sky. the night sky and studies of it have a historical place in both ancient and modern cultures. in the past, for instance, farmers have used the state of the night sky as a calendar to determine when to plant crops. the ancient belief in astrology is generally based on the belief that relationships between heavenly bodies influence or convey information about events on earth. the scientific study of the night sky and bodies observed within it, meanwhile, takes place in the science of astronomy. within visible - light astronomy, the visibility of celestial objects in the night sky is affected by light pollution. the presence of the moon in the night sky has historically hindered astronomical observation by increasing the amount of ambient lighting. with the advent of artificial light sources, however, light pollution has been a growing problem for viewing the night sky. special filters and modifications to light fixtures can help to alleviate this problem, but for the best views, both professional and amateur optical astronomers seek viewing sites located far from major urban areas. use in weather forecasting along with pressure tendency, the condition of the sky is one of the more important parameters used to forecast weather in mountainous areas. thickening of cloud cover or the invasion of a higher cloud deck is indicative of rain in the near future. at night, high thin cirrostratus clouds can lead to halos around the moon, which indicate the approach of a warm front and its associated rain. morning fog portends fair conditions and can be associated with a marine layer, an indication of a stable atmosphere. rainy conditions are preceded by wind or clouds which prevent fog formation. the approach of a line of thunderstorms could indicate the approach of a cold front. cloud - free skies are indicative of fair weather for the near future. the use of sky cover in weather prediction has led to various weather lore over the centuries. tropical cyclones within 36 hours of the passage of a tropical cyclone's center, the pressure begins to fall and a veil of white cirrus clouds approaches from the cyclone's direction. within 24 hours of the closest approach to the center, low clouds begin to move in, also
</ctx>
The following are multiple choice questions (with answers) about STEM.
Wouldn't it be great if you could just look up at the sky and read the weather forecast right away? Well, you can. The forecast is written in clouds. If you can read that writing, you can tell something about the atmosphere. With some practice, you can become a pretty good weather forecaster. Who knows, you might even do as well as meteorologists. Meteorologists use much more information than just the appearance of the clouds to make their forecast. They collect data from all over the world. Then they put it into powerful, high-speed computers. This does give the meteorologists an advantage, because they can track weather patterns as they move from west to east across the country. But you have an advantage, too. You can look at the sky and get your data directly. A meteorologist uses a computer forecast that's several hours old to make a local forecast. What are you seeing when you look at a cloud? "A picture of what moisture is doing in the atmosphere," says meteorologist Peter Leavitt. There's moisture throughout the atmosphere. Most of the time you don't see it, because it's in the form of an invisible gas called water vapor. Sometimes, the temperature of the air gets cold enough to cause the water vapor to change to liquid water. It's called condensation, and we see it happen all the time (for example, when humid air from the shower hits the cold glass of a mirror). When enough water vapor condenses, droplets form in the air. These droplets scatter light. A cloud is seen. Watching clouds over a day or two tells you a lot more than a single cloud about the weather to come. Changes in clouds show changes in the atmosphere. You should begin to notice patterns. Certain clouds, following each other in order, can signal an approaching storm. But don't take our word for it; see for yourself. According to the passage, your advantage in weather forecasting is that _ .
A. you have a high-speed computer
B. you observe the sky and get your data directly
C. you have more instruments at home
D. you can track weather patterns as they move from west to east across the country
Answer: | [
"you have a high-speed computer",
"you observe the sky and get your data directly",
"you have more instruments at home",
"you can track weather patterns as they move from west to east across the country"
] | B | null |
medmcqa_12 | medmcqa | <ctx>
application of pharmacometabolomics is the analysis of a patient's metabolic profile following the administration of a specific therapy. this process is often secondary to a pre - treatment metabolic analysis, allowing for the comparison of pre - and post - treatment metabolite concentrations. this allows for the identification of the metabolic processes and pathways that are being altered by the treatment either intentionally as a designated target of the compound, or unintentionally as a side effect. furthermore, the concentration and variety of metabolites produced from the compound itself can also be identified, providing information on the rate of metabolism and potentially leading to development of a related compound with increased efficacy or decreased side effects. an example of this approach was used to investigate the effect of several antipsychotic drugs on lipid metabolism in patients treated for schizophrenia. it was hypothesized that these antipsychotic drugs may be altering lipid metabolism in treated patients with schizophrenia, contributing to the weight gain and hypertriglyceridemia. the study monitored lipid metabolites in patients both before and after treatment with antipsychotics. the compiled pre - and post - treatment profiles were then be compared to examine the effect of these compounds on lipid metabolism. the researchers found correlations between treatment with antipsychotic drugs and lipid metabolism, in both a lipid - class - specific and drug - specific manner, establishing new foundations around the concept that pharmacometabolomics provides powerful tools for enabling detailed mapping of drug effects. additional studies by the pharmacometabolomics research network enabled mapping in ways not possible before effects of statins, atenolol and aspirin. totally new insights were gained about effect of these drugs on metabolism and they highlighted pathways implicated in response and side effects. metabolite quantification and analysis in order to identify and quantify metabolites produced by the body, various detection methods have been employed. most often, these involve the use of nuclear magnetic resonance ( nmr ) spectroscopy or mass spectrometry ( ms ), providing universal detection, identification and quantification of metabolites in individual patient samples. although both processes are used in pharmaco
physiologically based pharmacokinetic ( pbpk ) modeling is a mathematical modeling technique for predicting the absorption, distribution, metabolism and excretion ( adme ) of synthetic or natural chemical substances in humans and other animal species. pbpk modeling is used in pharmaceutical research and drug development, and in health risk assessment for cosmetics or general chemicals. pbpk models strive to be mechanistic by mathematically transcribing anatomical, physiological, physical, and chemical descriptions of the phenomena involved in the complex adme processes. a large degree of residual simplification and empiricism is still present in those models, but they have an extended domain of applicability compared to that of classical, empirical function based, pharmacokinetic models. pbpk models may have purely predictive uses, but other uses, such as statistical inference, have been made possible by the development of bayesian statistical tools able to deal with complex models. that is true for both toxicity risk assessment and therapeutic drug development. pbpk models try to rely a priori on the anatomical and physiological structure of the body, and to a certain extent, on biochemistry. they are usually multi - compartment models, with compartments corresponding to predefined organs or tissues, with interconnections corresponding to blood or lymph flows ( more rarely to diffusions ). a system of differential equations for concentration or quantity of substance on each compartment can be written, and its parameters represent blood flows, pulmonary ventilation rate, organ volumes etc., for which information is available in scientific publications. indeed, the description they make of the body is simplified and a balance needs to be struck between complexity and simplicity. besides the advantage of allowing the recruitment of a priori information about parameter values, these models also facilitate inter - species transpositions or extrapolation from one mode of administration to another ( e. g., inhalation to oral ). an example of a 7 - compartment pbpk model, suitable to describe the fate of many solvents in the mammalian body, is given in the figure on the right. history the first pharmacokinetic model described in the scientific literature was
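the compartmental differential equations mentioned above are straightforward to write down. below is a deliberately minimal two - compartment sketch ( all rate constants are made - up values for illustration, not from any real drug ), integrated with a plain euler loop so it needs no external libraries :

```python
def simulate_two_compartment(dose_mg, hours, dt=0.01):
    """Toy PBPK-style model with a central and a tissue compartment.
    dC/dt = -(k_el + k_ct) * C + k_tc * T
    dT/dt =  k_ct * C - k_tc * T
    Rate constants (per hour) are illustrative only."""
    k_el, k_ct, k_tc = 0.3, 0.8, 0.5   # elimination, central->tissue, tissue->central
    central, tissue = dose_mg, 0.0      # bolus dose starts in the central compartment
    t = 0.0
    while t < hours:
        dc = (-(k_el + k_ct) * central + k_tc * tissue) * dt
        dtis = (k_ct * central - k_tc * tissue) * dt
        central, tissue, t = central + dc, tissue + dtis, t + dt
    return central, tissue

print(simulate_two_compartment(100.0, 6.0))  # amounts remaining after 6 hours
```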
fibrillar forms. misprocessing of app results in the accumulation of soluble aβ that, according to rowan's hypothesis, impairs hippocampal ltp and may lead to the cognitive decline seen early in ad. ad may also impair ltp through mechanisms distinct from aβ. for example, one study demonstrated that the enzyme pkmζ accumulates in neurofibrillary tangles, which are a pathologic marker of ad. pkmζ is an enzyme with critical importance in the maintenance of late ltp. drug addiction research in the field of addiction medicine has also recently turned its focus to ltp, owing to the hypothesis that drug addiction represents a powerful form of learning and memory. addiction is a complex neurobehavioral phenomenon involving various parts of the brain, such as the ventral tegmental area ( vta ) and nucleus accumbens ( nac ). studies have demonstrated that vta and nac synapses are capable of undergoing ltp and that this ltp may be responsible for the behaviors that characterize addiction. see also neuroplasticity actin remodeling of neurons transcranial direct - current stimulation post - tetanic potentiation references further reading external links researchers provide first evidence for learning mechanism, a physorg. com report on 2006 study by bear and colleagues. short video documentary about the doogie mice. ( realplayer format ) " smart mouse ", a quantum abc tv episode about the doogie mice. neurophysiology neuroscience of memory behavioral neuroscience neuroplasticity neuroscience
which look at the abnormalities that present themselves during an attack. the researchers found interictally decreased cerebral blood flow in the posterior parts of the bilateral caudate nucleus. however, the literature does state that although this could be a cause of pkd, it could also be a result of pkd. another spect study showed an increase in the cerebral blood flow in the left posterior thalamus in a pkd patient during an attack. the researchers also subtracted the ictal from the postictal scans, and saw increased blood flow in the thalamus. they ultimately suggested that hyperactive blood flow in this area could be causing the pathophysiology of pkd. this study, however, was only performed on one patient, and would need to be replicated many more times in order to be generalized to the population of pkd patients. other spect studies have been cited showing hyperactivity in the basal ganglia. fmri studies in a study by zhou et al., the researchers performed fmri studies on pkd patients, and analyzed the differences between the amplitude low frequency fluctuations ( alff ) of the patients. they found that the left postcentral gyrus and the bilateral putamen had increased alff in pkd patients. the researchers concluded that the hyperactivity in these regions suggested that there is a dysfunction in the basal ganglia - thalamo - cortical circuit in pkd. this circuit is part of the motor control circuit in the brain, making it a reasonable place for abnormality in a movement disorder, but again, researchers are still unsure of the role these differences they found play in the disease pathology. diffusion tensor imaging diffusion tensor imaging ( dti ) displays physical alterations in the brain that may not be seen on regular mri. in one study researchers found that some of the patients had abnormalities in their thalamus. however, this does not prove that all patients have abnormalities in their thalamus. other cases are cited, including a patient who developed a similar paroxysmal dyskinesia after a thalamic infarction, implicating that an abnormality in the thalamus of
</ctx>
The following are multiple choice questions (with answers) about STEM.
The pharmacokinetic change occurring in geriatric patients is due to
A. Gastric absorption
B. Liver metabolism
C. Renal clearance
D. Hypersensitivity
Answer: | [
"Gastric absorption",
"Liver metabolism",
"Renal clearance",
"Hypersensitivity"
] | C | One of the most important pharmacokinetic changes associated with aging is decreased renal elimination of drugs. After age 40, creatinine clearance decreases an average of 8 mL/min/1.73 m2/decade; however, the age-related decrease varies substantially from person to person. Serum creatinine levels often remain within normal limits despite a decrease in GFR because older adults generally have less muscle mass and are generally less physically active than younger adults and thus produce less creatinine. Maintenance of normal serum creatinine levels can mislead clinicians who assume those levels reflect normal kidney function. Decreases in tubular function with age parallel those in glomerular function. Refer Katzung 11/e p1039 |
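The age effect on renal clearance described above is what bedside creatinine-clearance estimates capture. A minimal sketch of the published Cockcroft-Gault estimate (variable names are mine) shows how the same serum creatinine maps to a much lower clearance at a higher age:

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimated creatinine clearance in mL/min (Cockcroft-Gault)."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# same serum creatinine of 1.0 mg/dL in a 70 kg male, at age 40 vs 80
print(cockcroft_gault(40, 70, 1.0))  # ~97 mL/min
print(cockcroft_gault(80, 70, 1.0))  # ~58 mL/min
```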
aquarat_38067 | aquarat | <ctx>
a column. let's assume the data values are placed in cells a2 through a11. in any cell, type the excel command = average ( a2 : a11 ) and press enter. excel will calculate the arithmetic mean in this cell. then, in any other cell, type the excel command = stdev. s ( a2 : a11 ) and press enter. excel will calculate the sample standard deviation in this cell. figure a25 shows the mean and standard deviation for the 10 ages. figure a25 : mean and standard deviation in ms excel ( used with permission from microsoft ). here is a more comprehensive listing of various statistical functions available within excel. the reader is encouraged to go to the formulas menu in excel and select any of the submenus such as " financial ", " math & trig ", " lookup & reference ", etc. to see various excel formulas. under the option for " more functions ", select statistical to see a menu of various statistical functions available within excel. table a2 shows a sample of some statistical functions available within excel ( function : purpose ) :
= average ( a1 : a10 ) : find the mean of a set of numbers.
= median ( a1 : a10 ) : find the median of a set of numbers.
= stdev. s ( a1 : a10 ) : find the standard deviation for a set of numbers representing a sample.
= stdev. p ( a1 : a10 ) : find the standard deviation for a set of numbers representing a population.
= var. s ( a1 : a10 ) : find the variance for a set of numbers representing a sample.
= var. p ( a1 : a10 ) : find the variance for a set of numbers representing a population.
= min ( a1 : a10 ) : find the minimum of a set of numbers.
= max ( a1 : a10 ) : find the maximum of a set of numbers.
= max ( a1 : a10 ) − min ( a1 : a10 ) : find the range of a set of numbers.
= correl ( a1 : a10, b1 : b10 ) : find the correlation coefficient " r " for ( x, y )
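the same summary statistics can be computed outside excel. here is a small python sketch using the standard library's statistics module ( the ages list is hypothetical stand - in data for cells a2 : a11 ), mirroring the sample / population distinction of stdev. s and stdev. p :

```python
import statistics

ages = [25, 32, 41, 29, 35, 38, 27, 30, 44, 33]  # hypothetical data for a2:a11

print(statistics.mean(ages))    # like =AVERAGE(A2:A11)
print(statistics.stdev(ages))   # sample sd, like =STDEV.S(A2:A11)
print(statistics.pstdev(ages))  # population sd, like =STDEV.P(A2:A11)
print(max(ages) - min(ages))    # range, like =MAX(A2:A11)-MIN(A2:A11)
```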
formula till gate 2012 ( and year 2013 - 14 of the 2 - year validity period of gate 2013 score ), the score was calculated using the formula : gate score = where, m = marks obtained by the candidate, a = average of marks of all candidates who appeared in that subject, in that year, with marks less than zero converted to zero, s = standard deviation of marks of all candidates who appeared in that subject, in that year, with marks less than zero converted to zero, ag = global average of marks of all candidates who appeared across all subjects in current and past 5 years ( i. e. 2010 to 2013 for gate 2013 ), with marks less than zero converted to zero, sg = global standard deviation of marks of all candidates who appeared across all subjects in current and past 5 years ( i. e. 2010 to 2013 for gate 2013 ), with marks less than zero converted to zero. qualifying marks the rules for qualifying marks have varied from year to year. the qualifying marks ( out of 100 ) are different for different subjects as well as categories. here μ is the average ( i. e., arithmetic mean ) of marks of all candidates in the subject ( with negative marks converted to zero ) and σ is the standard deviation of all marks in that subject. usually, the general category's qualifying mark is in the 25 to 50 range. the government of india implemented reservations for other backward classes in college admissions and public sector job recruitment in the year 2008. before that, all obc candidates were included in the " general " category. there was no separate obc category then. statistics the following line chart shows the number of candidates registered, appeared, and qualified ( total of all subjects ). * precise figures unavailable right now. the following line chart shows the variation of the number of candidates appeared in the 5 subjects with the largest numbers of appeared candidates, since gate 2010 : * precise figures unavailable right now. gate statistics by years difficulty level gate, for long, has been known to test the engineering basics in a smart way. complaints of " lengthy " problems have been rare. but the task of mastering an entire course of engineering ( around 30 undergraduate subjects ) for a three - hour test, itself gives the
each answer with the correct number of significant figures. ( a ) 62. 8 34 ( b ) 0. 147 + 0. 0066 + 0. 012 ( c ) 38 95
</ctx>
The following are multiple choice questions (with answers) about STEM.
Calculate Ruth's average score in an exam if she obtained the following marks 69, 89, 65, 82 and 58 out of 100 in different subjects.
A. A)75.6
B. B)72.6
C. C)71.6
D. D)73.6
Answer: | [
"A)75.6",
"B)72.6",
"C)71.6",
"D)73.6"
] | B | Explanation:
(69 + 89 + 65 + 82 + 58) / 5 = 363 / 5 = 72.6
Answer: Option B |
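The average can be verified with a quick Python sketch:

```python
marks = [69, 89, 65, 82, 58]
print(sum(marks) / len(marks))  # 72.6 -> option B
```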
medmcqa_2514 | medmcqa | <ctx>
cycloartenol, a pentacyclic triterpene, a product of the cyclisation of squalene epoxide, is a major biosynthetic intermediate, whereas the same role is played by lanosterol, a tetracyclic triterpene, in non - photosynthetic eukaryotes ( fungi, vertebrates ). the particularities of this biosynthetic pathway could influence plant - insect relationships. the team joined ibmp in 1989. using genetics and molecular biology in support of previous methods, cdna clones encoding biosynthetic enzymes were isolated and characterized for the first time. mutants affected in sterol biosynthesis, the site1 mutant defective in δ7 sterol - c5 - desaturase and the sterov mutant overproducing sterols, were isolated, and transformed plants with gain or loss of function were selected and identified. advances in the knowledge of the regulation of the biosynthesis of sterols and their derivatives ( esters, glucosides ) as well as their functions have been obtained. work focused on three enzyme systems : squalene epoxide triterpene synthase, sterol c24 and c241 methyltransferases, and δ7 sterol - c5 - desaturase. the results show for the first time that in arabidopsis and all higher plants there are two gene subfamilies : smt1 and smt2, coding for methyltransferases involved in the methylation reactions leading to the formation of 24 - methyl and 24 - ethyl cholesterols. the overexpression or cosuppression of smt2 in tobacco or arabidopsis lines has a profound impact on the relative proportions of 24 - methyl and 24 - ethyl cholesterol. in particular smt2 plays a crucial role in adjusting the campesterol / sitosterol ratio to the value required for membrane integrity and balanced growth. an arabidopsis mutant has been isolated that contains mainly δ7 - sterols instead of the normal δ5 - sterols, and has a defect in the gene ( ste1 )
endocytosis β a now universally understood process in cell biology. goldstein and brown were awarded the nobel prize in physiology or medicine for the discovery in 1985. after finishing postdoctoral training, sudhof started his own laboratory at ut southwestern in 1986. briefly continuing work with goldstein and brown, sudhof helped identify a dna element in the ldl gene that produced sterol mediated end - product repression when inserted in a viral promoter. this domain, known as a sterol regulatory sequence, directly participates in the regulation of sterol biosynthesis. sterols are a major class of biomolecule and critical for life. important sterols in humans include cholesterol and steroid hormones. discovery of sterol regulatory elements and ldl receptor function led to the subsequent development of statin derived cholesterol medications such as atorvastatin ( lipitor ) β the top - selling branded pharmaceutical drug in the world in 2008. sudhof started his independent research career in neuroscience since 1986 and open the field of molecular neuroscience for synaptic transmission especially from the presynaptic nerve terminal perspective. until sudhof began his work, majority of neuroscience research was aimed at the postsynaptic neuron and its role in learning and memory. indeed, thomas sudhof is credited with discovering much of the machinery mediating neurotransmitter release and presynaptic plasticity in his 21 years at ut southwestern. sudhof began with the discovery of synaptotagmins and their role in neurotransmitter release from the presynaptic neuron. synaptotagmin, a transmembrane protein found in neurosecretory vesicles, functions as a calcium sensor triggering vesicle fusion and neurotransmitter release. stimulation of a neuron results in an increase in intracellular calcium concentration. after binding calcium ion to a region in its cytosolic domain, vesicular synaptotagmin promotes quick or slow neurotransmitter release from the presynaptic neuron via interaction with regulatory and fusion related proteins such as members of the snare complex. sudhof also discovered rims and muncs ( most notably mu
phosphoenolpyruvate carboxylase kinase turns the enzyme on, whereas phosphoenolpyruvate carboxylase phosphatase turns it back off. both the kinase and the phosphatase are regulated by transcription. it is further believed that malate acts as a feedback inhibitor of kinase expression levels, and as an activator for phosphatase expression ( transcription ). since oxaloacetate is converted to malate in cam and c4 organisms, high concentrations of malate activate phosphatase expression - the phosphatase subsequently de - phosphorylates and thus de - activates pep carboxylase, leading to no further accumulation of oxaloacetate and thus no further conversion of oxaloacetate to malate. hence malate production is down - regulated. the main allosteric inhibitors of pep carboxylase are the carboxylic acids malate ( weak ) and aspartate ( strong ). since malate is formed in the next step of the cam and c4 cycles after pep carboxylase catalyses the condensation of co2 and pep to oxaloacetate, this works as a feedback inhibition pathway. oxaloacetate and aspartate are easily inter - convertible through a transaminase mechanism ; thus high concentrations of aspartate are also a pathway of feedback inhibition of pep carboxylase. the main allosteric activators of pep carboxylase are acetyl - coa and fructose - 1, 6 - bisphosphate ( f - 1, 6 - bp ). both molecules are indicators of increased glycolysis levels, and thus positive feed - forward effectors of pep carboxylase. they signal the need to produce oxaloacetate to allow more flux through the citric acid cycle. additionally, increased glycolysis means a higher supply of pep is available, and thus more storage capacity for binding co2 in transport to the calvin cycle. it is also noteworthy that the negative effector aspartate competes with the positive effector acetyl - coa, suggesting that they share an allosteric binding site. studies have shown that energy equivalents such as amp
ornithine into citrulline. after these initial steps the urea cycle continues in the inner membrane space until ornithine once again enters the matrix through a transport channel to continue the first two steps within the matrix. transamination α - ketoglutarate and oxaloacetate can be converted into amino acids within the matrix through the process of transamination. these reactions are facilitated by transaminases in order to produce aspartate and asparagine from oxaloacetate. transamination of α - ketoglutarate produces glutamate, proline, and arginine. these amino acids are then used either within the matrix or transported into the cytosol to produce proteins. regulation regulation within the matrix is primarily controlled by ion concentration, metabolite concentration and energy charge. availability of ions such as ca2 + controls various functions of the citric acid cycle. increased ca2 + in the matrix activates pyruvate dehydrogenase, isocitrate dehydrogenase, and α - ketoglutarate dehydrogenase, which increases the reaction rate in the cycle. concentration of intermediates and coenzymes in the matrix also increase or decrease the rate of atp production due to anaplerotic and cataplerotic effects. nadh can act as an inhibitor for α - ketoglutarate dehydrogenase, isocitrate dehydrogenase, citrate synthase, and pyruvate dehydrogenase. the concentration of oxaloacetate in particular is kept low, so any fluctuations in this concentration serve to drive the citric acid cycle forward. the production of atp also serves as a means of regulation by acting as an inhibitor for isocitrate dehydrogenase, pyruvate dehydrogenase, the electron transport chain protein complexes, and atp synthase. adp acts as an activator. protein synthesis the mitochondria contains its own set of dna used to produce proteins found in the electron transport chain. the mitochondrial dna only codes for about thirteen proteins that are used in processing mitochondrial transcripts, ribosomal proteins, ribosomal rna, transfer rna, and protein subunits found in the protein complexes
</ctx>
The following are multiple choice questions (with answers) about STEM.
Which enzyme catalyzes the regulatory step in cholesterol synthesis?
A. HMG CoA synthase
B. HMG CoA reductase
C. Phosphomevalonate kinase
D. Diphosphomevalonokinase
Answer: | [
"HMG CoA synthase",
"HMG CoA reductase",
"Phosphomevalonate kinase",
"Diphosphomevalonokinase"
] | B | Ref: Textbook of Medical Biochemistry, Eighth Edition, Dr (Brig) MN Chatterjea Rana Shinde: page no: 431 |
medmcqa_2230 | medmcqa | <ctx>
young and do not have structural heart disease, lead to using catheter ablation of the aps with the elimination of the atrial fibrillation as well as the episodes of re - entrant ventricular tachycardia. this elimination of the atrial fibrillation with ablation implies aps have some pathophysiologic role in the development of a - fib in the wpw patient. functionally defined re - entry does not require the alternative anatomically defined circuit accessory pathways and it may not reside in just one location. ventricular fibrillation ( vf ) following ventricular tachycardia ( vt ) may be described as a functionally defined re - entry problem caused by multiple mini re - entrant circuits spontaneously created within the ventricular myocardium. the original re - entrant circuit breaks down into multiple mini reentrant circuits. ( vf becoming the grand finale of a single prolonged vt larger circus movement, propagating change in the " functional core " of the ventricular myocardium, dissipating mini reentrant circuits, exhibited as ventricular fibrillation. ) ischemia, electrolyte, ph abnormalities, or bradycardia are potential causes of functionally defined re - entry due to changes in the properties of the cardiac tissue's functional core. ( no accessory pathway required ). for reentry to occur, the path length of the circuit should be greater than the wave length ( erp × conduction velocity ) of the impulse. see also av reentrant tachycardia av nodal reentrant tachycardia references external links cardiac arrhythmia
re - entry ventricular arrhythmia is a type of paroxysmal tachycardia occurring in the ventricle where the cause of the arrhythmia is due to the electric signal not completing the normal circuit, but rather an alternative circuit looping back upon itself. there develops a self - perpetuating rapid and abnormal activation. ( " circus movement " is another term for this. ) conditions necessary for re - entry include a combination of unidirectional block and slowed conduction. circus movement may also occur on a smaller scale within the av node ( dual av nodal physiology ) ; a large bypass tract is not necessary. re - entry is divided into two major types : [ anatomically defined ] re - entry and [ functionally defined ] re - entry. the circus movement can occur around an anatomical or functional core. either type may occur alone, or together. anatomically defined re - entry has a fixed anatomic pathway. anomalous conduction via accessory pathways ( aps ), which are also called bypass tracts, creates the re - entry circuit that exists between the atria and ventricles. wolff - parkinson - white syndrome ( wpw ) is an example of anatomically defined re - entry. wpw syndrome is an atrioventricular re - entrant tachycardia ( avrt ), secondary to an accessory pathway that connects the epicardial surfaces of the atrium and ventricle along the av groove. the majority of the time, symptomatic wpw fits the definition of avrt ( supraventricular tachycardia ) ; however, avnrt ( dual av nodal physiology ) exists in ~ 10 % of patients with wpw syndrome, creating the possibility of spontaneous atrial fibrillation degenerating into ventricular fibrillation ( vf ). the fact that wpw patients are young and do not have structural heart disease led to using catheter ablation of the aps with the elimination of the atrial fibrillation as well as the episodes of re - entrant ventricular tachycardia. this elimination of the atrial fibrillation with
lead ii potential as an indicator of heart - muscle function and see that it is coordinated with arterial blood pressure as well. heart function and its four - chamber action are explored in viscosity and laminar flow ; poiseuille β s law. basically, the right and left atria receive blood from the body and lungs, respectively, and pump the blood into the ventricles. the right and left ventricles, in turn, pump blood through the lungs and the rest of the body, respectively. depolarization of the heart muscle causes it to contract. after contraction it is repolarized to ready it for the next beat. the ecg measures components of depolarization and repolarization of the heart muscle and can yield significant information on the functioning and malfunctioning of the heart. figure 20. 31 shows an ecg of the lead ii potential and a graph of the corresponding arterial blood pressure. the major features are labeled p, q, r, s, and t. the p wave is generated by the depolarization and contraction of the atria as they pump blood into the ventricles. the qrs complex is created by the depolarization of the ventricles as they pump blood to the lungs and body. since the shape of the heart and the path of the depolarization wave are not simple, the qrs complex has this typical shape and time span. the lead ii qrs signal also masks the repolarization of the atria, which occur at the same time. finally, the t wave is generated by the repolarization of the ventricles and is followed by the next p wave in the next heartbeat. arterial blood pressure varies with each part of the heartbeat, with systolic ( maximum ) pressure occurring closely after the qrs complex, which signals contraction of the ventricles. figure 20. 31 a lead ii ecg with corresponding arterial blood pressure. the qrs complex is created by the depolarization and contraction of the ventricles and is followed shortly by the maximum or systolic blood pressure. see text for further description.
##rs complex onset. in that sense, the r wave from the ecg signal can be an important reference for detecting such events. detecting characteristic points of the icg signal : problems and solutions. the icg signal is extremely sensitive to artifacts due to movements and respiration. baseline variations can considerably alter the dz / dtmax point and the dependent hemodynamic parameters. high - frequency interferences also can affect the accuracy in estimating some characteristics points, essentially the b point, affecting the lvet dependent parameters. the qco's algorithm uses classical and advanced adaptive filtering techniques to minimize such problems with good results. the algorithm applies a band - pass filtering stage, which attenuates both the baseline drifts ( due to movement or respiration ) and the high - frequency signal. also, it features adaptive neuro - fuzzy inference system ( anfis ), for higher accuracy, which is able to obliterate even artifacts such as the electrosurgical noise. foundation and features of the different methods. since its discovery, the use of co monitoring has been restricted to certain kinds of patients because of the risk of the classical procedures. lately, new kinds of methods and devices have been developed. nowadays, in practice, the method election is determined mainly by the features of the patients and the possibilities of the hospital to accede to this technology. different co monitor methods can be characterized by their invasiveness or their ability to measure continuously. the most ideal should be reliable, continuous, non - invasive, easy to use, and cost - effective. the invasive methods like the still gold standard method swan - ganz catheter ( pulmonary artery catheter ), based on transcardiac thermodilution, are being replaced by its invasiveness ; hence these systems will not be inspected in this document. the minimally invasive methods also require catheterization, but less harmful. one of them is the thermodilution transpulmonary ( tdtp ), developed in the late 1990s, which presents risk of iatrogenic effect effects such as pneumothorax, bleeding, infection, thrombosis or vascular ischemia, inherent to the requirement of both arterial
</ctx>
The following are multiple choice questions (with answers) about STEM.
All the following features favour ventricular tachycardia as the cause of broad-complex tachycardia, except -
A. Fusion beats
B. Extreme left axis deviation
C. Very broad QRS complexes
D. Response to carotid sinus massage
Answer: | [
"Fusion beats",
"Extreme left axis detion",
"Very broad QRS complexes",
"Response to carotid sinus massage"
] | C | Ventricular tachycardia can be classified based on its morphology: Monomorphic ventricular tachycardia means that the appearance of all the beats matches in each lead of a surface electrocardiogram (ECG). Scar-related monomorphic ventricular tachycardia is the most common type and a frequent cause of death in patients having survived a heart attack, especially if they have weak heart muscle. RVOT tachycardia is a type of monomorphic ventricular tachycardia originating in the right ventricular outflow tract. RVOT morphology refers to the characteristic pattern of this type of tachycardia on an ECG. The source of the re-entry circuit can be identified by evaluating the morphology of the QRS complex in the V1 lead of a surface ECG. If the R wave is dominant (consistent with a right bundle branch block morphology), this indicates the origin of the VT is the left ventricle. Conversely, if the S wave is dominant (consistent with a left bundle branch block morphology), this is consistent with VT originating from the right ventricle or interventricular septum. Polymorphic ventricular tachycardia, on the other hand, has beat-to-beat variations in morphology. This may appear as a cyclical progressive change in cardiac axis, previously referred to by its French name torsades de pointes ("twisting of the spikes"). However, at the current time, the term torsades de pointes is reserved for polymorphic VT occurring in the context of a prolonged resting QT interval. Another way to classify ventricular tachycardias is by the duration of the episodes: three or more beats in a row on an ECG that originate from the ventricle at a rate of more than 120 beats per minute constitute a ventricular tachycardia. Ref: Harrison 20th edition pg 1423 |
arc_challenge_631 | arc_challenge | <ctx>
and water, an increase of 119 million ha since 1990. thus, forest ecological studies are sometimes closely aligned with meteorological and hydrological studies in regional ecosystem or resource planning studies. perhaps more importantly the duff or leaf litter can form a major repository of water storage. when this litter is removed or compacted ( through grazing or human overuse ), erosion and flooding are exacerbated as well as deprivation of dry season water for forest organisms. death and regeneration woody material, often referred to as coarse woody debris, decays relatively slowly in many forests in comparison to most other organic materials, due to a combination of environmental factors and wood chemistry ( see lignin ). trees growing in arid and / or cold environments do so especially slowly. thus, tree trunks and branches can remain on the forest floor for long periods, affecting such things as wildlife habitat, fire behaviour, and tree regeneration processes. some trees leave behind eerie skeletons after death. in reality these deaths are actually very few compared to the amount of tree deaths that go unnoticed. thousands of seedlings can be produced from a single tree but only a few can actually grow to maturity. most of those deaths are caused from competition for light, water, or soil nutrients, this is called natural thinning. singular deaths caused by natural thinning go unnoticed, but many deaths can help form forest ecosystems. there are four stages to forest regrowth after a disturbance, the establishment phase which is rapid increase in seedlings, the thinning phase which happens after a canopy is formed and the seedlings covered by it die, the transition phase which occurs when one tree from the canopy dies and creates a pocket of light giving new seedlings opportunity to grow, and lastly the steady - state phase which happens when the forest has different sizes and ages of trees. see also clear cutting close to nature forestry deforestation and climate change forest ecology and management ( journal ) forest principles intact forest landscapes mountain ecology old - growth forest plant ecology regeneration ( ecology ) references bibliography philip joseph burton. 2003. towards sustainable management of the boreal forest 1039 pages robert w. christopherson. 1996. geosystems : an introduction to physical geography. prentice hall inc. c. michael hogan. 2008. wild turkey : mel
forest ecology is the scientific study of the interrelated patterns, processes, flora, fauna and ecosystems in forests. the management of forests is known as forestry, silviculture, and forest management. a forest ecosystem is a natural woodland unit consisting of all plants, animals, and micro - organisms ( biotic components ) in that area functioning together with all of the non - living physical ( abiotic ) factors of the environment. importance forests have an enormously important role to play in the global ecosystem. forests produce approximately 28 % of the earth's oxygen ( the vast majority being created by oceanic plankton ), they also serve as homes for millions of people, and billions depend on forests in some way. likewise, a large proportion of the world's animal species live in forests. forests are also used for economic purposes such as fuel and wood products. forest ecology therefore has a great impact upon the whole biosphere and human activities that are sustained by it. approaches forests are studied at a number of organisational levels, from the individual organism to the ecosystem. however, as the term forest connotes an area inhabited by more than one organism, forest ecology most often concentrates on the level of the population, community or ecosystem. logically, trees are an important component of forest research, but the wide variety of other life forms and abiotic components in most forests means that other elements, such as wildlife or soil nutrients, are also crucial components. forest ecology shares characteristics and methodological approaches with other areas of terrestrial plant ecology, however, the presence of trees makes forest ecosystems and their study unique in numerous ways due to the potential for a wide variety of forest structures created by the uniquely large size and height of trees compared with other terrestrial plants. community diversity and complexity since trees can grow larger than other plant life - forms, there is the potential for a wide variety of forest structures ( or physiognomies ). the infinite number of possible spatial arrangements of trees of varying size and species makes for a highly intricate and diverse micro - environment in which environmental variables such as solar radiation, temperature, relative humidity, and wind speed can vary considerably over large and small distances. in addition, an important proportion of a forest
forms the basis of all food webs. further, it generates oxygen ( o2 ), a molecule necessary to sustain animals and humans. on average, a human consumes about 550 liter of oxygen per day, whereas plants produce 1, 5 liter of oxygen per 10 grams of growth. ecology understanding of ecosystem services requires a strong foundation in ecology, which describes the underlying principles and interactions of organisms and the environment. since the scales at which these entities interact can vary from microbes to landscapes, milliseconds to millions of years, one of the greatest remaining challenges is the descriptive characterization of energy and material flow between them. for example, the area of a forest floor, the detritus upon it, the micro organisms in the soil, the soil biodiversity, and characteristics of the soil itself will all contribute to the abilities of that forest for providing ecosystem services like carbon sequestration, water purification, and erosion prevention to other areas within the watershed. note that it is often possible for multiple services to be bundled together and when benefits of targeted objectives are secured, there may also be ancillary benefits β the same forest may provide habitat for other organisms as well as human recreation, which are also ecosystem services. the complexity of earth's ecosystems poses a challenge for scientists as they try to understand how relationships are interwoven among organisms, processes and their surroundings. as it relates to human ecology, a suggested research agenda for the study of ecosystem services includes the following steps : identification of ecosystem service providers ( esps ) β species or populations that provide specific ecosystem services β and characterization of their functional roles and relationships ; determination of community structure aspects that influence how esps function in their natural landscape, such as compensatory responses that stabilize function and non - random extinction sequences which can erode it ; assessment of key environmental ( abiotic ) factors influencing the provision of services ; measurement of the spatial and temporal scales esps and their services operate on. recently, a technique has been developed to improve and standardize the evaluation of esp functionality by quantifying the relative importance of different species in terms of their efficiency and abundance. such parameters provide indications of how species respond to changes in the environment ( i. e. predators,
microbiology of decomposition is the study of all microorganisms involved in decomposition, the chemical and physical processes during which organic matter is broken down and reduced to its original elements. decomposition microbiology can be divided into two fields of interest, namely the decomposition of plant materials and the decomposition of cadavers and carcasses. the decomposition of plant materials is commonly studied in order to understand the cycling of carbon within a given environment and to understand the subsequent impacts on soil quality. plant material decomposition is also often referred to as composting. the decomposition of cadavers and carcasses has become an important field of study within forensic taphonomy. decomposition microbiology of plant materials the breakdown of vegetation is highly dependent on oxygen and moisture levels. during decomposition, microorganisms require oxygen for their respiration. if anaerobic conditions dominate the decomposition environment, microbial activity will be slow and thus decomposition will be slow. appropriate moisture levels are required for microorganisms to proliferate and to actively decompose organic matter. in arid environments, bacteria and fungi dry out and are unable to take part in decomposition. in wet environments, anaerobic conditions will develop and decomposition can also be considerably slowed down. decomposing microorganisms also require the appropriate plant substrates in order to achieve good levels of decomposition. this usually translates to having appropriate carbon to nitrogen ratios ( c : n ). the ideal composting carbon - to - nitrogen ratio is thought to be approximately 30 : 1. as in any microbial process, the decomposition of plant litter by microorganisms will also be dependent on temperature. for example, leaves on the ground will not undergo decomposition during the winter months where snow cover occurs as temperatures are too low to sustain microbial activities. decomposition microbiology of cadavers and carcasses the decomposition processes of cadavers and carcasses are studied within the field of forensic taphonomy in order to : aid in the estimation of post - mortem interval ( pmi ) or time since death ; aid in the location of potential clandestine graves. decomposition microbiology as applied to forensic taphonomy can be divided into 2 groups of studies : microorganisms from within the
</ctx>
The following are multiple choice questions (with answers) about STEM.
In a healthy forest, dead trees and limbs fall to the ground and decompose. Which of these statements best describes why decomposition is important to a forest ecosystem?
A. Nutrients are released when wood is broken down.
B. Worms produce oxygen used by other organisms.
C. Dead trees provide nest sites for many different species of birds.
D. Water is stored in dead trees and limbs.
Answer: | [
"Nutrients are released when wood is broken down.",
"Worms produce oxygen used by other organisms.",
"Dead trees provide nest sites for many different species of birds.",
"Water is stored in dead trees and limbs."
] | A | |
aquarat_12801 | aquarat | <ctx>
in mathematics and statistics, the arithmetic mean ( ), arithmetic average, or just the mean or average ( when the context is clear ) is the sum of a collection of numbers divided by the count of numbers in the collection. the collection is often a set of results from an experiment, an observational study, or a survey. the term " arithmetic mean " is preferred in some mathematics and statistics contexts because it helps distinguish it from other types of means, such as geometric and harmonic. in addition to mathematics and statistics, the arithmetic mean is frequently used in economics, anthropology, history, and almost every academic field to some extent. for example, per capita income is the arithmetic average income of a nation's population. while the arithmetic mean is often used to report central tendencies, it is not a robust statistic : it is greatly influenced by outliers ( values much larger or smaller than most others ). for skewed distributions, such as the distribution of income for which a few people's incomes are substantially higher than most people's, the arithmetic mean may not coincide with one's notion of " middle ". in that case, robust statistics, such as the median, may provide a better description of central tendency. definition given a data set, the arithmetic mean ( also mean or average ), denoted ( read bar ), is the mean of the values. the arithmetic mean is a data set's most commonly used and readily understood measure of central tendency. in statistics, the term average refers to any measurement of central tendency. the arithmetic mean of a set of observed data is equal to the sum of the numerical values of each observation, divided by the total number of observations. symbolically, for a data set consisting of the values, the arithmetic mean is defined by the formula : ( for an explanation of the summation operator, see summation. ) for example, if the monthly salaries of employees are, then the arithmetic mean is : if the data set is a statistical population ( i. e., consists of every possible observation and not just a subset of them ), then the mean of that population is called the population mean and denoted by the greek letter. if the data set is a statistical sample (
this sum by the number of data values, which is 10. the result is :
a column. let β s assume the data values are placed in cells a2 through a11. in any cell, type the excel command = average ( a2 : a11 ) and press enter. excel will calculate the arithmetic mean in this cell. then, in any other cell, type the excel command = stdev. s ( a2 : a11 ) and press enter. excel will calculate the sample standard deviation in this cell. figure a25 shows the mean and standard deviation for the 10 ages. 512 a β’ appendix a : review of excel for data science access for free at openstax. org figure a25 mean and standard deviation in ms excel ( used with permission from microsoft ) here is a more comprehensive listing of various statistical functions available within excel. the reader is encouraged to go to the formulas menu in excel and select any of the submenus such as β financial, β β math & trig, β lookup & reference, β etc. to see various excel formulas. under the option for β more functions, β select statistical to see a menu of various statistical functions available within excel. table a2 shows a sample of some statistical functions available within excel : function purpose = average ( a1 : a10 ) find the mean of a set of numbers. = median ( a1 : a10 ) find the median of a set of numbers. = stdev. s ( a1 : a10 ) find the standard deviation for a set of numbers representing a sample. = stdev. p ( a1 : a10 ) find the standard deviation for a set of numbers representing a population. = var. s ( a1 : a10 ) find the variance for a set of numbers representing a sample. = var. p ( a1 : a10 ) find the variance for a set of numbers representing a population. = min ( a1 : a10 ) find the minimum of a set of numbers. = max ( a1 : a10 ) find the maximum of a set of numbers. = max ( a1 : a10 ) β min ( a1 : a10 ) find the range of a set of numbers. = correl ( a1 : a10, b1 : b10 ) find the correlation coefficient β r β for ( x, y )
standard deviation data rather than estimating pooled standard deviation, the following is the way to exactly aggregate standard deviation when more statistical information is available. population - based statistics the populations of sets, which may overlap, can be calculated simply as follows : the populations of sets, which do not overlap, can be calculated simply as follows : standard deviations of non - overlapping ( ) sub - populations can be aggregated as follows if the size ( actual or relative to one another ) and means of each are known : for example, suppose it is known that the average american man has a mean height of 70 inches with a standard deviation of three inches and that the average american woman has a mean height of 65 inches with a standard deviation of two inches. also assume that the number of men, n, is equal to the number of women. then the mean and standard deviation of heights of american adults could be calculated as for the more general case of m non - overlapping populations, x1 through xm, and the aggregate population,, where if the size ( actual or relative to one another ), mean, and standard deviation of two overlapping populations are known for the populations as well as their intersection, then the standard deviation of the overall population can still be calculated as follows : if two or more sets of data are being added together datapoint by datapoint, the standard deviation of the result can be calculated if the standard deviation of each data set and the covariance between each pair of data sets is known : for the special case where no correlation exists between any pair of data sets, then the relation reduces to the root sum of squares : sample - based statistics standard deviations of non - overlapping ( ) sub - samples can be aggregated as follows if the actual size and means of each are known : for the more general case of m non - overlapping data sets, x1 through xm, and the aggregate data set, where if the size, mean, and standard deviation of two overlapping samples are known for the samples as well as their intersection, then the standard deviation of the aggregated sample can still be calculated. in general, see also chi - squared distribution # asymptotic properties used for calculating cohen's d ( effect size
</ctx>
The following are multiple choice questions (with answers) about STEM.
Set S contains exactly 10 numbers and has an average (arithmetic mean) of 6.2. If one of the numbers in Set S is increased by 3, while all other numbers remain the same, what is the new average of Set S?
A. A)6.5
B. B)6.7
C. C)6.8
D. D)6.85
Answer: | [
"A)6.5",
"B)6.7",
"C)6.8",
"D)6.85"
] | A | Old Set S: total = average * number of elements = 6.2 * 10 = 62
If one of the numbers is increased by 3, the total increases to 62 + 3 = 65
New average = 65/10 = 6.5
Hence the answer is A. |
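A minimal Python sketch verifying the arithmetic above (the variable names are illustrative, not part of the source problem):

total = 6.2 * 10           # original sum = average * count = 62.0
new_total = total + 3      # one number is increased by 3, so the sum grows by 3
new_average = new_total / 10
print(new_average)         # 6.5, matching option A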
medmcqa_38 | medmcqa | <ctx>
vertex of the framework and is the index of one of the cartesian coordinates of - dimensional space ; that is, the dimension of the gradient is the same as the width of the rigidity matrix. if the edges of the framework are assumed to be rigid bars that can neither expand nor contract ( but can freely rotate ) then any motion respecting this rigidity must preserve the lengths of the edges : the derivative of length, as a function of the time over which the motion occurs, must remain zero. this condition may be expressed in linear algebra as a constraint that the gradient vector of the motion of the vertices must have zero inner product with the row of the rigidity matrix that represents the given edge. thus, the family of gradients of ( infinitesimally ) rigid motions is given by the nullspace of the rigidity matrix. for frameworks that are not in generic position, it is possible that some infinitesimally rigid motions ( vectors in the nullspace of the rigidity matrix ) are not the gradients of any continuous motion, but this cannot happen for generic frameworks. a rigid motion of the framework is a motion such that, at each point in time, the framework is congruent to its original configuration. rigid motions include translations and rotations of euclidean space ; the gradients of rigid motions form a linear space having the translations and rotations as bases, of dimension, which must always be a subspace of the nullspace of the rigidity matrix. because the nullspace always has at least this dimension, the rigidity matroid can have rank at most, and when it does have this rank the only motions that preserve the lengths of the edges of the framework are the rigid motions. in this case the framework is said to be first - order ( or infinitesimally ) rigid. more generally, an edge belongs to the matroid closure operation of a set if and only if there does not exist a continuous motion of the framework that changes the length of but leaves the lengths of the edges in unchanged. although defined in different terms ( column vectors versus row vectors, or forces versus motions ) static rigidity and first - order rigidity reduce to the same properties of the underlying matrix and therefore coincide with
vectors ). a stress is a special case of a load, in which equal and opposite forces are applied to the two endpoints of each edge ( which may be imagined as a spring ) and the forces formed in this way are added at each vertex. every stress is an equilibrium load, a load that does not impose any translational force on the whole system ( the sum of its force vectors is zero ) nor any rotational force. a linear dependence among the rows of the rigidity matrix may be represented as a self - stress, an assignment of equal and opposite forces to the endpoints of each edge that is not identically zero but that adds to zero at every vertex. thus, a set of edges forms an independent set in the rigidity matroid if and only if it has no self - stress. the vector space of all possible loads, on a system of n vertices, has dimension dn, among which the equilibrium loads form a subspace of dimension. an independent set in the rigidity matroid has a system of equilibrium loads whose dimension equals the cardinality of the set, so the maximum rank that any set in the matroid can have is. if a set has this rank, it follows that its set of stresses is the same as the space of equilibrium loads. alternatively and equivalently, in this case every equilibrium load on the framework may be resolved by a stress that generates an equal and opposite set of forces, and the framework is said to be statically rigid. kinematics if the vertices of a framework are in a motion, then that motion may be described over small scales of distance by its gradient, a vector for each vertex specifying its speed and direction. the gradient describes a linearized approximation to the actual motion of the points, in which each point moves at constant velocity in a straight line. the gradient may be described as a row vector that has one real number coordinate for each pair where is a vertex of the framework and is the index of one of the cartesian coordinates of - dimensional space ; that is, the dimension of the gradient is the same as the width of the rigidity matrix. if the edges of the framework are assumed to be rigid bars that can neither expand nor contract ( but
equation above ; or for each non - trivial infinitesimal motion, there is some equilibrium stress such that. the third statement shows that for each such, is not in the column span of, i. e., it is not an edge distortion resulting from. this follows from the fredholm alternative : since the column span of is orthogonal to the kernel of, i. e., the set of equilibrium stresses, either for some acceleration or there is an equilibrium stress satisfying the third condition. the third condition can be written in terms of the stress matrix :. solving for is a non - linear problem in with no known efficient algorithm. prestress stability prestress stability is weaker than infinitesimal and static rigidity but stronger than second - order rigidity. consider the third sufficient condition for second - order rigidity. a - dimensional framework is prestress stable if there exists an equilibrium stress such that for all non - trivial velocities,. prestress stability can be verified via semidefinite programming techniques. global rigidity a - dimensional framework of a linkage is globally rigid in - dimensions if all frameworks in the configuration space are equivalent up to trivial motions, i. e., factoring out the trivial motions, there is only one framework of. theorem. global rigidity is a generic property of graphs. minimal rigidity a - dimensional framework is minimally rigid in - dimensions if is rigid and removing any edge from results in a framework that is not rigid. redundant rigidity there are two types of redundant rigidity : vertex - redundant and edge - redundant rigidity. a - dimensional framework is edge - redundantly rigid in - dimensions if is rigid and removing any edge from results in another rigid framework. vertex - redundant rigidity is defined analogously. rigidity for various types of frameworks polyhedra this section concerns the rigidity of polyhedra in - dimensions, see polyhedral systems for a definition of this type of gcs. a polyhedron is rigid if its underlying bar - joint framework is rigid. one of the earliest results for rigidity was a conjecture by euler in 1766. conjecture. a closed spatial figure allows no changes, as long as it is not
motions. a continuous motion is a continuous path in that describes the physical motion between two frameworks of that preserves all constraints. a trivial motion is a continuous motion resulting from the euclidean isometries, i. e. translations and rotations. in general, any metric space has a set of trivial motions coming from the isometric group of the space. local rigidity. a framework of a gcs is locally rigid, or just rigid, if all its continuous motions are trivial. testing for local rigidity is co - np hard. rigidity map. the rigidity map takes a framework and outputs the squared - distances between all pairs of points that are connected by an edge. rigidity matrix. the jacobian, or derivative, of the rigidity map yields a system of linear equations of the form for all edges of. the rigidity matrix is an matrix that encodes the information in these equations. each edge of corresponds to a row of and each vertex corresponds to columns of. the row corresponding to the edge is defined as follows. infinitesimal motion. an infinitesimal motion is an assignment of velocities to the vertices of a framework such that. hence, the kernel of the rigidity matrix is the space of infinitesimal motions. a trivial infinitesimal motion is defined analogously to a trivial continuous motion. stress. a stress is an assignment to the edges of a framework. a stress is proper if its entries are nonnegative and is a self stress if it satisfies. a stress satisfying this equation is also called a resolvable stress, equilibrium stress, prestress, or sometimes just a stress. stress matrix. for a stress applied to the edges of a framework with the constraint graph, define the stress matrix as. it is easily verified that for any two and any stress, the rigidity matrix as a linear transformation the information in this section can be found in. the rigidity matrix can be viewed as a linear transformation from to. the domain of this transformation is the set of column vectors, called velocity or displacements vectors, denoted by, and the image is the set of edge distortion vectors, denoted by. the entries of the vector are velocities assigned to
</ctx>
The following are multiple choice questions (with answers) about STEM.
If the solid line represents the ridge height, then the following diagram, with excessive inter-arch space, is classified as?
A. Class - 1
B. Class - 2
C. Class - 3
D. Class - 4
Answer: | [
"Class - 1",
"Class - 2",
"Class - 3",
"Class - 4"
] | B | |
aquarat_29247 | aquarat | <ctx>
of these interest and principal payments must equal the cumulative fixed payments at time t i. e. mat. evaluating the first integral on the right we obtain an expression for i ( t ), the interest paid : unsurprisingly the second integral evaluates to p0 β p ( t ) and therefore : the reader may easily verify that this expression is algebraically identical to the one above. loan cost factor the cost of a loan is simply the annual rate multiplied by loan period : let s = rt. then we may define loan cost factor c ( s ) such that c = p0c ( s ) i. e. : c ( s ) is the cost per unit of currency loaned. the function c ( s ) is characterised by having a limiting value of 1 when s is close to zero since for small values of s, exp ( βs ) β 1 β s and the denominator simplifies to s. also when s is very large, exp ( βs ) is small so c ( s ) β s and thus loan cost c β p0rt ( rt > > 0 ). by way of example, consider a loan of 1000000 at 10 % repaid over 20 years. then s = 0. 1 Γ 20 = 2. the product rt is an easily obtained but important parameter in determining loan cost according to the equation c = p0xc ( s ). this is best illustrated by plotting the cost factor function for s values in domain [ 0 ; 5 ]. the linear behaviour of the function for higher values of s is clear. equivalent simple interest cost factor for a fixed term loan of t years, we may compare the above loan cost factor against an equivalent simple interest cost factor 1 + se where se = ret and re is the equivalent simple interest rate : it is straightforward to determine se in terms of s. dividing by loan time period t will then give the equivalent simple interest rate. more challenging is the reverse determination of s given se. in his book problem solving with true basic, dr b. d. hahn has a short section on certain'hire purchase'schemes in which interest is calculated in advance in one lump sum, which is added to the
time is f ( t, st ), ito's lemma gives the term represents the change in value in time dt of the trading strategy consisting of holding an amount of the stock. if this trading strategy is followed, and any cash held is assumed to grow at the risk free rate r, then the total value v of this portfolio satisfies the sde this strategy replicates the option if v = f ( t, s ). combining these equations gives the celebrated black β scholes equation product rule for ito processes let be a two - dimensional ito process with sde : then we can use the multi - dimensional form of ito's lemma to find an expression for. we have and. we set and observe that and substituting these values in the multi - dimensional version of the lemma gives us : this is a generalisation of leibniz's product rule to ito processes, which are non - differentiable. further, using the second form of the multidimensional version above gives us so we see that the product is itself an ito drift - diffusion process. ito's formula for functions with finite quadratic variation an idea by hans follmer was to extend ito's formula to functions with finite quadratic variation. let be a real - valued function and a rcll function with finite quadratic variation. then infinite - dimensional formulas there exist a couple of extensions to infinite - dimensional spaces ( e. g. pardoux, gyongy - krylov, brzezniak - van neerven - veraar - weis ). see also wiener process ito calculus feynman β kac formula euler β maruyama method notes references kiyosi ito ( 1944 ). stochastic integral. proc. imperial acad. tokyo 20, 519 β 524. this is the paper with the ito formula ; online kiyosi ito ( 1951 ). on stochastic differential equations. memoirs, american mathematical society 4, 1 β 51. online bernt ΓΈksendal ( 2000 ). stochastic differential equations. an introduction with applications, 5th edition, corrected 2nd printing. springer.. sections 4. 1 and 4. 2.
a variance of. the payoff of an option ( or any derivative contingent to stock ) at maturity is known. to find its value at an earlier time we need to know how evolves as a function of and. by ito's lemma for two variables we have now consider a certain portfolio, called the delta - hedge portfolio, consisting of being short one option and long shares at time. the value of these holdings is over the time period, the total profit or loss from changes in the values of the holdings is ( but see note below ) : now discretize the equations for ds / s and dv by replacing differentials with deltas : and appropriately substitute them into the expression for : notice that the term has vanished. thus uncertainty has been eliminated and the portfolio is effectively riskless. the rate of return on this portfolio must be equal to the rate of return on any other riskless instrument ; otherwise, there would be opportunities for arbitrage. now assuming the risk - free rate of return is we must have over the time period if we now substitute our formulas for and we obtain : simplifying, we arrive at the celebrated black β scholes partial differential equation : with the assumptions of the black β scholes model, this second order partial differential equation holds for any type of option as long as its price function is twice differentiable with respect to and once with respect to. different pricing formulae for various options will arise from the choice of payoff function at expiry and appropriate boundary conditions. technical note : a subtlety obscured by the discretization approach above is that the infinitesimal change in the portfolio value was due to only the infinitesimal changes in the values of the assets being held, not changes in the positions in the assets. in other words, the portfolio was assumed to be self - financing. alternative derivation here is an alternative derivation that can be utilized in situations where it is initially unclear what the hedging portfolio should be. ( for a reference, see 6. 4 of shreve vol ii ). in the black β scholes model, assuming we have picked the risk - neutral probability measure, the underlying stock price s ( t ) is assumed to evolve as a
similar results as the original criterion. for single assets ( stock, index fund, etc. ), and a risk - free rate, it is easy to obtain the optimal fraction to invest through geometric brownian motion. the stochastic differential equation governing the evolution of a lognormally distributed asset at time ( ) is whose solution is where is a wiener process, and ( percentage drift ) and ( the percentage volatility ) are constants. taking expectations of the logarithm : then the expected log return is consider a portfolio made of an asset and a bond paying risk - free rate, with fraction invested in and in the bond. the aforementioned equation for must be modified by this fraction, ie, with associated solution the expected one - period return is given by for small,, and, the solution can be expanded to first order to yield an approximate increase in wealth solving we obtain is the fraction that maximizes the expected logarithmic return, and so, is the kelly fraction. thorp arrived at the same result but through a different derivation. remember that is different from the asset log return. confusing this is a common mistake made by websites and articles talking about the kelly criterion. for multiple assets, consider a market with correlated stocks with stochastic returns, and a riskless bond with return. an investor puts a fraction of their capital in and the rest is invested in the bond. without loss of generality, assume that investor's starting capital is equal to 1. according to the kelly criterion one should maximize expanding this with a taylor series around we obtain thus we reduce the optimization problem to quadratic programming and the unconstrained solution is where and are the vector of means and the matrix of second mixed noncentral moments of the excess returns. there is also a numerical algorithm for the fractional kelly strategies and for the optimal solution under no leverage and no short selling constraints. see also risk of ruin gambling and information theory proebsting's paradox merton's portfolio problem references external links optimal decisions gambling mathematics information theory wagering articles containing proofs 1956 introductions portfolio theories
</ctx>
The following are multiple choice questions (with answers) about STEM.
A, B and C enter into a partnership. A invests some money at the beginning, B invests double the amount after 6 months, and C invests thrice the amount after 8 months. If the annual profit is Rs 39,000/-, C's share is:
A. A)Rs. 12655/-
B. B)Rs. 13,000/-
C. C)Rs. 9000/-
D. D)Rs. 10800/-
Answer: | [
"A)Rs. 12655/-",
"B)Rs. 13,000/-",
"C)Rs. 9000/-",
"D)Rs. 10800/-"
] | B | Explanation:
Suppose A, B and C invest Rs. x, Rs. 2x and Rs. 3x respectively. Then A's money is invested for 12 months, B's for 6 months and C's for 4 months.
A : B : C = (x * 12) : (2x * 6) : (3x * 4)
= 12x : 12x : 12x = 1 : 1 : 1
C's share = Rs. (39000 * 1/3) = Rs. 13,000/-
Answer: Option B |
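As a cross-check, a minimal Python sketch of the time-weighted capital ratio; the base investment x = 1.0 is an arbitrary assumption, since any positive value yields the same 1 : 1 : 1 ratio:

x = 1.0                                 # A's investment (arbitrary unit)
capital_months = {
    "A": x * 12,                        # invested for all 12 months
    "B": 2 * x * 6,                     # double the amount, for the last 6 months
    "C": 3 * x * 4,                     # triple the amount, for the last 4 months
}
total = sum(capital_months.values())
profit = 39000
c_share = profit * capital_months["C"] / total
print(c_share)                          # 13000.0, matching option B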
aquarat_34214 | aquarat | <ctx>
has been conjectured to be the best possible more generally, but this remains unproven. the perimeter - halving lower bound of 2 for the square, already proven by jones, can be improved slightly, to, for any barrier that consists of at most countably many rectifiable curves, improving similar previous bounds that constrained the barrier to be placed only near to the given square. the case of the unit circle was described in a 1995 scientific american column by ian stewart, with a solution of length, optimal for a single curve or connected barrier but not for an opaque forest with multiple curves. vance faber and jan mycielski credit this single - curve solution to menachem magidor in 1974. by 1980, e. makai had already provided a better three - component solution, with length approximately, rediscovered by john day in a followup to stewart's column. the unknown length of the optimal solution has been called the beam detection constant. algorithms two published algorithms claim to generate the optimal opaque forest for arbitrary polygons, based on the idea that the optimal solution has a special structure : a steiner tree for one triangle in a triangulation of the polygon, and a segment in each remaining triangle from one vertex to the opposite side, of length equal to the height of the triangle. this structure matches the conjectured structure of the optimal solution for a square. although the optimal triangulation for a solution of this form is not part of the input to these algorithms, it can be found by the algorithms in polynomial time using dynamic programming. however, these algorithms do not correctly solve the problem for all polygons, because some polygons have shorter solutions with a different structure than the ones they find. in particular, for a long thin rectangle, the minimum steiner tree of all four vertices is shorter than the triangulation - based solution that these algorithms find. no known algorithm has been guaranteed to find a correct solution to the problem, regardless of its running time. despite this setback, the shortest single - curve barrier of a convex polygon, which is the traveling salesperson path of its vertices, can be computed exactly in polynomial time for convex polygons by a dynamic programming algorithm
thousand miles param freight ; # transport cost in thousands of dollars per case param transportcost { p in plants, m in markets } : = freight * distance [ p, m ] / 1000 ; # shipment quantities in cases var shipment { plants, markets } > = 0 ; # total transportation costs in thousands of dollars minimize cost : sum { p in plants, m in markets } transportcost [ p, m ] * shipment [ p, m ] ; # observe supply limit at plant p s. t. supply { p in plants } : sum { m in markets } shipment [ p, m ] < = capacity [ p ] ; # satisfy demand at market m s. t. demand { m in markets } : sum { p in plants } shipment [ p, m ] > = demand [ m ] ; data ; set plants : = seattle san - diego ; set markets : = new - york chicago topeka ; param capacity : = seattle 350 san - diego 600 ; param demand : = new - york 325 chicago 300 topeka 275 ; param distance : new - york chicago topeka : = seattle 2. 5 1. 7 1. 8 san - diego 2. 5 1. 8 1. 4 ; param freight : = 90 ; solvers here is a partial list of solvers supported by ampl : see also sol ( format ) gnu mathprog ( previously known as gmpl ) is a subset of ampl supported by the gnu linear programming kit references external links prof. fourer's home page at northwestern university 1990 software computer algebra systems mathematical modeling mathematical optimization software numerical programming languages scripting languages text - oriented programming languages programming languages created in 1985 proprietary cross - platform software
4. 8 β’ lagrange multipliers 411 383. find the point on the surface closest to the point 384. show that, of all the triangles inscribed in a circle of radius ( see diagram ), the equilateral triangle has the largest perimeter. 385. find the minimum distance from point to the parabola 386. find the minimum distance from the parabola to point 387. find the minimum distance from the plane to point 388. a large container in the shape of a rectangular solid must have a volume of m3. the bottom of the container costs $ 5 / m2 to construct whereas the top and sides cost $ 3 / m2 to construct. use lagrange multipliers to find the dimensions of the container of this size that has the minimum cost. 389. find the point on the line that is closest to point 390. find the point on the plane that is closest to the point 391. find the maximum value of where denote the acute angles of a right triangle. draw the contours of the function using a cas. 392. a rectangular solid is contained within a tetrahedron with vertices at and the origin. the base of the box has dimensions and the height of the box is if the sum of is 1. 0, find the dimensions that maximizes the volume of the rectangular solid. 393. [ t ] by investing x units of labor and y units of capital, a watch manufacturer can produce watches. find the maximum number of watches that can be produced on a budget of if labor costs $ 100 / unit and capital costs $ 200 / unit. use a cas to sketch a contour plot of the function. 412 4 β’ differentiation of functions of several variables access for free at openstax. org chapter review key terms boundary point a point of is a boundary point if every disk centered around contains points both inside and outside closed set a set that contains all its boundary points connected set an open set that cannot be represented as the union of two or more disjoint, nonempty open subsets constraint an inequality or equation involving one or more variables that is used in an optimization problem ; the constraint enforces a limit on the possible solutions for
matching chvatal's upper bound. proved that a placement for these guards may be computed in o ( n log n ) time in the worst case, via a divide and conquer algorithm. gave a linear time algorithm by using fisk's short proof and bernard chazelle's linear time plane triangulation algorithm. for simple polygons that do not contain holes, the existence of a constant factor approximation algorithm for vertex and edge guards was conjectured by ghosh. ghosh's conjecture was initially shown to be true for vertex guards in two special sub - classes of simple polygons, viz. monotone polygons and polygons weakly visible from an edge. presented an approximation algorithm that computes in polynomial time a vertex guard set for a monotone polygon such that the size of the guard set is at most 30 times the optimal number of vertex guards. presented an approximation algorithm that computes in o ( n2 ) time a vertex guard set for a simple polygon that is weakly visible from an edge such that the size of the guard set is at most 6 times the optimal number of vertex guards. subsequently, claimed to have settled the conjecture completely by presenting constant factor approximation algorithms for guarding general simple polygons using vertex guards and edge guards. for vertex guarding the subclass of simple polygons that are weakly visible from an edge, a polynomial - time approximation scheme was proposed by. an exact algorithm was proposed by for vertex guards. the authors conducted extensive computational experiments with several classes of polygons showing that optimal solutions can be found in relatively small computation times even for instances associated to thousands of vertices. the input data and the optimal solutions for these instances are available for download. three dimensions if a museum is represented in three dimensions as a polyhedron, then putting a guard at each vertex will not ensure that all of the museum is under observation. although all of the surface of the polyhedron would be surveyed, for some polyhedra there are points in the interior that might not be under surveillance. possible usage scenarios of the art gallery problem the art gallery problem can be used to position multiple cameras when the objective is to cover a large area in view of cameras, using the minimum number
</ctx>
The following are multiple choice questions (with answers) about STEM.
Find the cost of fencing around a circular field of diameter 28 m at the rate of Rs.1.50 a meter?
A. A)Rs.932
B. B)Rs.132
C. C)Rs.108
D. D)Rs.109
Answer: | [
"A)Rs.932",
"B)Rs.132",
"C)Rs.108",
"D)Rs.109"
] | B | Circumference = 2 * (22/7) * 14 = 88 m
Cost = 88 * Rs. 1.50 = Rs. 132
Answer: B |
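The same computation as a short Python sketch, kept exact with fractions and using the 22/7 value for pi that the worked answer assumes:

from fractions import Fraction

radius = 28 // 2                                  # diameter 28 m, so r = 14 m
circumference = 2 * Fraction(22, 7) * radius      # 2 * (22/7) * 14 = 88 m
cost = circumference * Fraction(3, 2)             # Rs. 1.50 per metre
print(cost)                                       # 132, i.e. Rs. 132 (option B)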
medmcqa_4106 | medmcqa | <ctx>
state of georgia announced they would move their primary election for president from 24 march to 19 may, becoming the second state to do so after louisiana. us president donald trump had his physician release a memo which suggested he tested negative for the virus. panama banned all flights from and to europe and asia for 30 days. the panamanian government also began to regulate activities involving over 50 people. cirque du soleil stops all its shows, including in las vegas. 15 march the united nations secretary - general announced that the un was putting in places measures to protect staff while affirming that it would continue normal operations. all schools in banten province of indonesia are closed until 28 march. this comes after the governor of banten wahidin halim declared the virus an " extraordinary event ". other provinces such as jakarta, central java, west kalimantan, and west java also did the same thing. brunei banned all citizens and foreign residents from leaving in response to the coronavirus pandemic. the ministry of health has also banned mass gatherings including weddings and sporting events. in addition the national football association of brunei darussalam, the tutong district amateur football association league, and the brunei basketball association have suspended all matches and games. finland has ceased testing for people returning from trips abroad, and all of the people suffering flu symptoms in the country. the tests are now reserved for health professionals only. indian prime minister narendra modi proposed a saarc fund called saarc covid - 19 emergency fund to tackle coronavirus via video conference. he proposed setting up a volunteer basis covid - 19 emergency fund with india committing usd 10 million initially for it. south african president cyril ramaphosa declared a national state of disaster in terms of the disaster management act and declared measures to be put in place that comprised imposing a travel ban on foreign nationals from high risk countries, including italy, iran, south korea, spain, germany, the united states, the united kingdom, and china ; performing high - intensity screening on travellers from medium - risk countries, such as portugal, hong kong, and singapore, as well as testing and isolation for south african citizens returning from high - risk countries ; closing 35 of the 72 land
communist party general secretary xi jinping has also announced that the chinese government will invest in various sectors including 5g, artificial intelligence, energy, and the employment of graduates. the vietnamese government eases social distancing restrictions, citing the success of its policy involving mass quarantines and expansive social distancing. 24 april australian prime minister scott morrison has confirmed that only authorised health personnel will have access to data in government smartphone software to trace the contacts and movements of people with the coronavirus. the cruise ship ruby princess has departed australian waters. the ruby princess was a major cluster for the covid - 19 outbreak in australia, accounting for ten percent of the country's 6, 000 deaths and a third of the country's 77 deaths, triggering a criminal investigation by new south wales authorities. chinese foreign ministry spokesperson geng shuang has issued a statement condemning cyber attacks on institutions fighting the coronavirus pandemic in response to a report by the us cybersecurity firm fireeye that vietnamese government - linked hackers had tried to hack into the personal and professional email accounts of staff at the chinese ministry of emergency management and the wuhan government. vietnam's foreign ministry denounced the report as " baseless. " the financial times has reported that gilead sciences's experimental anti - viral drug remdesivir has failed during its first test trial in china, based on leaked world health organization documents. president of the philippines rodrigo duterte extends metro manila's lockdown until 15 may. the extended lockdown covers several provinces on the island of luzon. three airlines including philippines airlines, cebu pacific, and the filipino unit of malaysian airliner airasia extend flight suspensions until mid - may. cargo and special recovery flights are unaffected. 25 april japan's minister of economy, trade and industry yasutoshi nishimura self - isolates after he came into contact with a staff member who had contracted the coronavirus. vanuatu, which has reported no cases of covid - 19, has announced that they will be holding their women's super league final and streaming it live on social media. 26 april the australian government has launched a new coronavirus tracing app based on singapore's tracetogether
the united kingdom secretary of state for health and social care reported the random detection of 11 cases of the variant where there was no connection to international travel. the same day, the canadian province of ontario reported the first case of the variant in the peel region, with a similar absence of travel history and no contact with anyone who had recently been abroad. on 8 february, the republic of ireland reported the detection of 2 further cases. on 8 february 2021, austria detected the greatest outbreak of beta variant in europe so far. a total of 293 confirmed cases and 200 suspected cases have been identified through sequencing, most of them will be confirmed in all probability. all of the cases were found in the tirol region, where nearly 9 % of the positive pcr tests were identified as the beta variant by sequencing. the active cases were estimated at around 140. after a week of public discussion and political pressure about a possible quarantine of tirol, the government of austria abstained from isolating the areas of concern, instead making a formal plea to reduce movement in and out of the region and go for testing after visiting tirol. tirolean officials stated their intention to relax the lockdown rules in tirol in keeping with the rest of austria. on 22 february, the israeli health ministry stated that the variant had been genetically sequenced in just under 1 % of 3, 000 community samples. later on, israel reported a total of 444 cases of the variant, making it the highest infection rate in the world outside south africa. on 25 february, the republic of ireland reported the detection of 4 further cases. by late february, turkey had 49 cases of the beta variant. on 3 march 2021, the philippines confirmed its first 6 new cases of the south african variant, with 3 patients from pasay with no travel history, and 3 with travel histories from qatar and uae. on 5 march 2021, romania reported its first two cases of the south african variant, coming from two patients in bucharest and pitesti. on 23 march 2021, lithuania confirmed its first 2 new cases of the south african strain, 1 in kaunas county and 1 in vilnius. there are 10 more suspected cases of it. the infected people said, that they didn't
. participants must be new zealand residents. in addition, ardern confirmed that the quarantine - free travel for pacific recognised seasonal employer workers from vanuatu, samoa, and tonga would resume in early october to address the agricultural and horticultural sectors'demand for migrant workers over the summer period. in october 2021, the delta variant began spreading widely in new zealand with community cases quickly outpacing cases detected at the border. by 22 october, the country had recorded a record of 129 new community cases. on 4 october, the government abandoned new zealand's elimination strategy, citing its failure to contained the delta variant outbreak that began in auckland in august 2021. on 21 october, the government confirmed that repatriation flights carrying new zealand citizens who had been deported from australia under section 501 ( 3a ) of the australian migration act 1958 would resume in november 2021 following a three - month hiatus. the government had contracted a designated miq facility to host these returnees. on 28 october, covid - 19 response minister hipkins announced that international arrivals would only have to isolate for seven days from 14 november in an effort to free up about 1, 500 rooms a month. from 8 november, fully vaccinated travellers from low - risk pacific island countries such as the cook islands would be eligible for quarantine - free travel. on 24 november, hipkins announced that miq border restrictions would be eased in a three - stage process over 2022 : from 17 january, all fully vaccinated new zealanders and other eligible travellers could travel to new zealand from australia without having to go through managed isolation and quarantine. from 14 february, all fully vaccinated new zealanders and other eligible travellers could travel to new zealand from all other countries without having to go through miq. from 30 april, all fully vaccinated foreign travellers could travel to new zealand without having to go through miq. on 21 december 2021, hipkins announced that the government would be delaying the planned reopening of the country's border to the end of february 2022 in order to combat the spread of the sars - cov - 2 omicron variant. in addition, the length of stay at miq facilities for all travellers was raised to
</ctx>
The following are multiple choice questions (with answers) about STEM.
Which among the following countries is a 'rabies-free' country?
A. USA
B. Russia
C. Australia
D. France
Answer: | [
"USA",
"Russia",
"Australia",
"France"
] | C | A "Rabies-free" area has been defined as one in which no case of indigenously acquired rabies has occurred in man or any animal species for 2 years. Australia, China (Taiwan), Cyprus, Iceland, Ireland, Japan, Malta, New Zealand, the U.K. and the islands of Western Pacific are all free of the disease. The Liberian Peninsula and Finland, Norway and Sweden are also rabies free. In India, Union Territory of Lakshadweep and Andaman and Nicobar islands are free of the disease. Ref: Textbook of Preventive and Social Medicine by K Park, 19th edition, Page 226. |
aquarat_4821 | aquarat | <ctx>
12. 2 simple math recursion learning objectives by the end of this section you should be able to β’ identify a recursive case and a base case in a recursive algorithm. β’ demonstrate how to compute a recursive solution for the factorial function. calculating a factorial the factorial of a positive integer is defined as the product of the integer and the positive integers less than the integer. ex : 5! = 5 * 4 * 3 * 2 * 1 written as a general equation for a positive integer n : n! = n * ( n - 1 ) * ( n - 2 ) *... * 1 the above formula for the factorial of n results in a recursive formula : n! = n * ( n - 1 )! thus, the factorial of n depends upon the value of the factorial at n - 1. the factorial of n can be found by repeating the factorial of n - 1 until ( n - 1 )! = 1! ( we know that 1! = 1 ). this result can be used to build the overall solution as seen in the animation below. checkpoint finding the factorial of 5 access multimedia content ( https : / / openstax. org / books / introduction - python - programming / pages / 12 - 2 - simple - math - recursion ) concepts in practice recognizing recursion can the following algorithms be written recursively? 1. the summation of 1 + 2 + 3 +... + ( n - 1 ) + n where n is a positive integer. 2. listing the odd numbers greater than 0 and less than a given number n.
as follows. sort ( s ) ; for i = 0 to n - 2 do a = s [ i ] ; start = i + 1 ; end = n - 1 ; while ( start < end ) do b = s [ start ] c = s [ end ] ; if ( a + b + c = = 0 ) then output a, b, c ; / / continue search for all triplet combinations summing to zero. / / we need to update both end and start together since the array values are distinct. start = start + 1 ; end = end - 1 ; else if ( a + b + c > 0 ) then end = end - 1 ; else start = start + 1 ; end end the following example shows this algorithm's execution on a small sorted array. current values of a are shown in red, values of b and c are shown in magenta. - 25 - 10 - 7 - 3 2 4 8 10 ( a + b + c = = - 25 ) - 25 - 10 - 7 - 3 2 4 8 10 ( a + b + c = = - 22 )... - 25 - 10 - 7 - 3 2 4 8 10 ( a + b + c = = - 7 ) - 25 - 10 - 7 - 3 2 4 8 10 ( a + b + c = = - 7 ) - 25 - 10 - 7 - 3 2 4 8 10 ( a + b + c = = - 3 ) - 25 - 10 - 7 - 3 2 4 8 10 ( a + b + c = = 2 ) - 25 - 10 - 7 - 3 2 4 8 10 ( a + b + c = = 0 ) the correctness of the algorithm can be seen as follows. suppose we have a solution a + b + c = 0. since the pointers only move in one direction, we can run the algorithm until the leftmost pointer points to a. run the algorithm until either one of the remaining pointers points to b or c, whichever occurs first. then the algorithm will run until the last pointer points to the remaining term, giving the affirmative solution. variants non - zero sum instead of looking for numbers whose sum is 0, it is
- ( 1 + 2 + 4 + 8 + 16 + 32 + 64 + 128 ) | 1 + 2 + 4... = formula for geometric sequence sn = a1 * ( q ^ i - 1 ) / ( n - 1 ), n is number of items, a1 is first item on = 8 * n - 1 * ( 2 ^ 8 - 1 ) / ( 2 - 1 ) on = 8 * n - ( 2 ^ 8 - 1 ) | 2 ^ 8 = n on = 8 * n - ( n - 1 ) on = ( 8 - 1 ) * n + 1 | 8 = ln ( n ) / ln ( 2 ) = ln ( 256 ) / ln ( 2 ) on = ( ln ( n ) / ln ( 2 ) - 1 ) * n + 1 example : n = 2 ^ 4 = 16, on ~ = 3 * n n = 2 ^ 8 = 256, on ~ = 7 * n n = 2 ^ 10 = 1. 024, on ~ = 9 * n n = 2 ^ 20 = 1. 048. 576, on ~ = 19 * n sorting a pre - sorted list if a list is already close to sorted, according to some measure of sortedness, the number of comparisons required to sort it can be smaller. an adaptive sort takes advantage of this " presortedness " and runs more quickly on nearly - sorted inputs, often while still maintaining an worst case time bound. an example is adaptive heap sort, a sorting algorithm based on cartesian trees. it takes time, where is the average, over all values in the sequence, of the number of times the sequence jumps from below to above or vice versa. notes references donald knuth. the art of computer programming, volume 3 : sorting and searching, second edition. addison - wesley, 1997.. section 5. 3. 1 : minimum - comparison sorting, pp. 180 β 197. comparison _ sorts sorting algorithms
sum of the digits of 6721 is 16. hint : there are 10 base cases, which can be checked easily with the right condition. access multimedia content ( https : / / openstax. org / books / introduction - python - programming / pages / 12 - 2 - simple - math - recursion )
</ctx>
The following are multiple choice questions (with answers) about STEM.
Each term of a certain sequence is 4 less than the previous term. The first term of this sequence is 19. If the sum of the first n terms of the sequence is n, what is the value of positive integer n?
A. A)1
B. B)13
C. C)15
D. D)19
Answer: | [
"A)1",
"B)13",
"C)15",
"D)19"
] | C | We know this is a decreasing sequence: the terms keep decreasing past 0, after which they are negative.
What does this imply? If the sum of the first n terms of the sequence is n, then, since the number of terms n is positive, we are looking for a positive sum.
19 + 16 + 13 + 10 + 7 + 4 + 1 - 2 - 5 - 8 - 11 ... and so on
Note that the first 7 terms are positive and all the rest are negative. Every negative term has a greater absolute value than the corresponding positive term, i.e. |-2| > |1|, |-5| > |4|, and so on.
Since we have 7 positive terms, we must have fewer than 7 negative terms for the sum to remain positive. If we have 6 negative terms, we have a total of 13 terms. Of the given options, only 13 is possible, and hence it must be the answer.
Answer (C) |
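A brute-force check of the cumulative sums, written as a sketch. The step size is left as a parameter because the explanation's own listing (19, 16, 13, ...) decreases by 3 even though the question text says 4; both runs are shown for comparison:

def sums_equal_to_n(first_term, step, limit=100):
    # collect every n (up to limit) for which the sum of the first n terms equals n
    total, matches = 0, []
    for n in range(1, limit + 1):
        total += first_term - step * (n - 1)   # n-th term of the sequence
        if total == n:
            matches.append(n)
    return matches

print(sums_equal_to_n(19, 3))   # [13], the count the worked explanation derives
print(sums_equal_to_n(19, 4))   # [10], which is not among the answer options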
m1_6636 | m1 | <ctx>
##de the incorrectly transmitted codeword ). now suppose that a codeword is sent over the channel and the error pattern occurs. then is received. ordinary minimum distance decoding would lookup the vector in a table of size for the nearest match - i. e. an element ( not necessarily unique ) with for all. syndrome decoding takes advantage of the property of the parity matrix that : for all. the syndrome of the received is defined to be : to perform ml decoding in a binary symmetric channel, one has to look - up a precomputed table of size, mapping to. note that this is already of significantly less complexity than that of a standard array decoding. however, under the assumption that no more than errors were made during transmission, the receiver can look up the value in a further reduced table of size list decoding information set decoding this is a family of las vegas - probabilistic methods all based on the observation that it is easier to guess enough error - free positions, than it is to guess all the error - positions. the simplest form is due to prange : let be the generator matrix of used for encoding. select columns of at random, and denote by the corresponding submatrix of. with reasonable probability will have full rank, which means that if we let be the sub - vector for the corresponding positions of any codeword of for a message, we can recover as. hence, if we were lucky that these positions of the received word contained no errors, and hence equalled the positions of the sent codeword, then we may decode. if errors occurred, the probability of such a fortunate selection of columns is given by. this method has been improved in various ways, e. g. by stern and canteaut and sendrier. partial response maximum likelihood partial response maximum likelihood ( prml ) is a method for converting the weak analog signal from the head of a magnetic disk or tape drive into a digital signal. viterbi decoder a viterbi decoder uses the viterbi algorithm for decoding a bitstream that has been encoded using forward error correction based on a convolutional code. the hamming distance is used as a metric for hard
y = ( y1,..., yn ) ∈ ( an ) n. then the decoding algorithm is a two - step process : use the mld of the inner code cin to reconstruct a set of inner code words y' = ( y'1,..., y'n ), with y'i = mldcin ( yi ), 1 ≤ i ≤ n. run the unique decoding algorithm for cout on y'. now, the time complexity of the first step is o ( N · exp ( n ) ), where n = o ( log ( N ) ) is the inner block length. in other words, it is N^O(1) ( i. e., polynomial - time ) in terms of the outer block length N. as the outer decoding algorithm in step two is assumed to run in polynomial time, the complexity of the overall decoding algorithm is polynomial - time as well. remarks: the decoding algorithm described above can be used to correct all errors up to less than dD / 4 in number ( d and D being the minimum distances of the inner and outer codes, respectively ). using minimum distance decoding, the outer decoder can correct all inputs y' with less than D / 2 symbols y'i in error. similarly, the inner code can reliably correct an input yi if less than d / 2 inner symbols are erroneous. thus, for an outer symbol y'i to be incorrect after inner decoding at least d / 2 inner symbols must have been in error, and for the outer code to fail this must have happened for at least D / 2 outer symbols. consequently, the total number of inner symbols that must be received incorrectly for the concatenated code to fail must be at least d / 2 · D / 2 = dD / 4. the algorithm also works if the inner codes are different, e. g., for justesen codes. the generalized minimum distance algorithm, developed by forney, can be used to correct up to dD / 2 errors. it uses erasure information from the inner code to improve performance of the outer code, and was the first example of an algorithm using soft - decision decoding. applications: although a simple concatenation scheme was implemented already for the 1971 mariner
[dec, ~, ~, ~, ~] = rsdecoder(enc_msg, 8, 301, 12, numel(msg))
max_errors = floor((n - k) / 2);
orig_vals = encoded.x;
% initialize the error vector
errors = zeros(1, n);
g = [];
s = [];
% get the alpha
alpha = gf(2, m, prim_poly);
% find the syndromes (check if dividing the message by the generator
% polynomial the result is zero)
synd = polyval(encoded, alpha .^ (1:n - k));
syndromes = trim(synd);
% if all syndromes are zeros (perfectly divisible) there are no errors
if isempty(syndromes.x)
    decoded = orig_vals(1:k);
    error_pos = [];
    error_mag = [];
    g = [];
    s = synd;
    return;
end
% prepare for the euclidean algorithm (used to find the error locating
% polynomials)
r0 = [1, zeros(1, 2 * max_errors)];
r0 = gf(r0, m, prim_poly);
r0 = trim(r0);
size_r0 = length(r0);
r1 = syndromes;
f0 = gf([zeros(1, size_r0 - 1) 1], m, prim_poly);
f1 = gf(zeros(1, size_r0), m, prim_poly);
g0 = f1;
g1 = f0;
% do the euclidean algorithm on the polynomials r0(x) and syndromes(x) in
% order to find the error locating polynomial
while true
    % do a long division
    [quotient, remainder] = deconv(r0, r1);
    % add some zeros
    quotient = pad(quotient, length(g1));
    % find quotie
( \ ref { eq : 1st - typ - prop } ). also, the projectors are " good detectors " for the states ( on average ) because the following condition holds from conditional quantum typicality : error analysis. the probability of detecting the codeword correctly under our sequential decoding scheme is equal to where we make the abbreviation. ( observe that we project into the average typical subspace just once. ) thus, the probability of an incorrect detection for the codeword is given by and the average error probability of this scheme is equal to instead of analyzing the average error probability, we analyze the expectation of the average error probability, where the expectation is with respect to the random choice of code : our first step is to apply sen's bound to the above quantity. but before doing so, we should rewrite the above expression just slightly, by observing that substituting into ( ) ( and forgetting about the small term for now ) gives an upper bound of we then apply sen's bound to this expression with and the sequential projectors as,,...,. this gives the upper bound due to concavity of the square root, we can bound this expression from above by where the second bound follows by summing over all of the codewords not equal to the codeword ( this sum can only be larger ). we now focus exclusively on showing that the term inside the square root can be made small. consider the first term : where the first inequality follows from ( ) and the second inequality follows from the gentle operator lemma and the properties of unconditional and conditional typicality. consider now the second term and the following chain of inequalities : the first equality follows because the codewords and are independent since they are different. the second equality follows from ( ). the first inequality follows from ( \ ref { eq : 3rd - typ - prop } ). continuing, we have the first inequality follows from and exchanging the trace with the expectation. the second inequality follows from ( \ ref { eq : 2nd - cond - typ } ). the next two are straightforward. putting everything together, we get our final bound on the expectation of the average error probability : thus,
</ctx>
The following are multiple choice questions (with answers) about STEM.
Check the correct statement concerning a standard-array-based decoder.
A. For the same input, the decoder output depends on the choice of the coset leader.
B. The syndrome of a specific coset depends on the choice of the coset leader.
C. All the other statements are incorrect.
Answer: | [
"For the same input, the decoder output depends on the choice of the coset leader.",
"The syndrome of a specific coset depends on the choice of the coset leader.",
"All the other statements are incorrect."
] | A | The correct statement concerning a standard-array-based decoder is: **For the same input, the decoder output depends on the choice of the coset leader.** A standard-array decoder maps a received word r to r + e, where e is the leader of the coset containing r; when a coset has more than one minimum-weight vector, a different choice of leader yields a different decoded word for the same r. Statement B is false because the syndrome is a property of the coset itself: every vector in a coset has the same syndrome, regardless of which vector is designated the leader.
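To see this concretely, here is a small Python sketch (an editorial illustration; the toy [4,2] code and its matrices are our assumptions, not from the item), using a coset that has two minimum-weight leaders:

import numpy as np

G = np.array([[1, 0, 1, 0], [0, 1, 0, 1]])    # generator of a toy [4,2] binary code
H = np.array([[1, 0, 1, 0], [0, 1, 0, 1]])    # a parity-check matrix: G @ H.T = 0 (mod 2)

def syndrome(v):
    return tuple(H @ v % 2)                   # identical for every vector in a coset

r = np.array([1, 0, 0, 0])                    # received word
for leader in (np.array([1, 0, 0, 0]), np.array([0, 0, 1, 0])):
    assert syndrome(leader) == syndrome(r)    # both weight-1 patterns lie in r's coset
    print(r ^ leader)                         # decoded word r + e: [0 0 0 0] vs [1 0 1 0]

Both leaders carry the same syndrome, yet the same received word r decodes to [0 0 0 0] or [1 0 1 0] depending on which leader is chosen, which is exactly statement A. |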
qasc_6016 | qasc | <ctx>
resolving weight into components. figure 4.13: an object rests on an incline that makes an angle θ with the horizontal. when an object rests on an incline that makes an angle θ with the horizontal, the force of gravity acting on the object is divided into two components : a force acting perpendicular to the plane, w⊥, and a force acting parallel to the plane, w∥. the perpendicular force of weight, w⊥, is typically equal in magnitude and opposite in direction to the normal force, n. the force acting parallel to the plane, w∥, causes the object to accelerate down the incline. the force of friction, f, opposes the motion of the object, so it acts upward along the plane. it is important to be careful when resolving the weight of the object into components. if the incline is at an angle θ to the horizontal, then the magnitudes of the weight components are w∥ = w sin θ and w⊥ = w cos θ. instead of memorizing these equations, it is helpful to be able to determine them from reason. to do this, draw the right triangle formed by the three weight vectors. notice that the angle θ of the incline is the same as the angle formed between w and w⊥. knowing this property, you can use trigonometry to determine the magnitude of the weight components : w∥ = w sin θ and w⊥ = w cos θ.
attached ) if the weights are also pushed to the side with a pencil? newton's first law of motion: a body at rest remains at rest, or, if in motion, remains in motion at a constant velocity unless acted on by a net external force. mass: the property of a body to remain at rest or to remain in motion with constant velocity is called inertia. newton's first law is often called the law of inertia. as we know from experience, some objects have more inertia than others. it is obviously more difficult to change the motion of a large boulder than that of a basketball, for example. the inertia of an object is measured by its mass. roughly speaking, mass is a measure of the amount of " stuff " ( or matter ) in something. the quantity or amount of matter in an object is determined by the numbers of atoms and molecules of various types it contains. unlike weight, mass does not vary with location. the mass of an object is the same on earth, in orbit, or on the surface of the moon. in practice, it is very difficult to count and identify all of the atoms and molecules in an object, so masses are not often determined in this manner. operationally, the masses of objects are determined by comparison with the standard kilogram. check your understanding: which has more mass, a kilogram of cotton balls or a kilogram of gold? solution: they are equal. a kilogram of one substance is equal in mass to a kilogram of another substance. the quantities that might differ between them are volume and density.
4.8 common misconceptions : mass vs. weight. mass and weight are often used interchangeably in everyday language. however, in science, these terms are distinctly different from one another. mass is a measure of how much matter is in an object. the typical measure of mass is the kilogram ( or the " slug " in english units ). weight, on the other hand, is a measure of the force of gravity acting on an object. weight is equal to the mass of an object ( m ) multiplied by the acceleration due to gravity ( g ), that is, w = mg. like any other force, weight is measured in terms of newtons ( or pounds in english units ). assuming the mass of an object is kept intact, it will remain the same, regardless of its location. example 4.1: what acceleration can a person produce when pushing a lawn mower? suppose that the net external force ( push minus friction ) exerted on a lawn mower is 51 n ( about 11 lb ) parallel to the ground. the mass of the mower is 24 kg. what is its acceleration? figure 4.7: the net force on a lawn mower is 51 n to the right. at what rate does the lawn mower accelerate to the right? strategy: since f_net and m are given, the acceleration can be calculated directly from newton's second law, a = f_net / m. solution: the magnitude of the acceleration is a = f_net / m. entering known values gives a = 51 n / 24 kg ; substituting the units kg · m/s2 for n yields a = 2.1 m/s2. because weight depends on the acceleration due to gravity, the weight of an object can change when the object enters into a region with stronger or weaker gravity. for example, the acceleration due to gravity on the moon is 1.67 m/s2 ( which is much less than the acceleration due to gravity on earth, 9.80 m/s2 ). if you measured your weight on earth and then measured your weight on the moon, you would find that you " weigh " much less, even though you do not look any skinnier. this is because the force of gravity is weaker on the moon. in fact, when people say that they are " losing weight, " they really mean that they are losing " mass " ( which in
and specialized analytical instruments used to detect trace amounts of metals. figure 1.7: a plasma torch can be used to cut metal. ( credit : " hypertherm " / wikimedia commons ) link to learning: in a tiny cell in a plasma television, the plasma emits ultraviolet light, which in turn causes the display at that location to appear a specific color. the composite of these tiny dots of color makes up the image that you see. watch this video ( http://openstax.org/l/16plasma ) to learn more about plasma and the places you encounter it. some samples of matter appear to have properties of solids, liquids, and / or gases at the same time. this can occur when the sample is composed of many small pieces. for example, we can pour sand as if it were a liquid because it is composed of many small grains of solid sand. matter can also have properties of more than one state when it is a mixture, such as with clouds. clouds appear to behave somewhat like gases, but they are actually mixtures of air ( gas ) and tiny particles of water ( liquid or solid ). the mass of an object is a measure of the amount of matter in it. one way to measure an object's mass is to measure the force it takes to accelerate the object. it takes much more force to accelerate a car than a bicycle because the car has much more mass. a more common way to determine the mass of an object is to use a balance to compare its mass with a standard mass. although weight is related to mass, it is not the same thing. weight refers to the force that gravity exerts on an object. this force is directly proportional to the mass of the object. the weight of an object changes as the force of gravity changes, but its mass does not. an astronaut's mass does not change just because she goes to the moon. but her weight on the moon is only one - sixth her earth - bound weight because the moon's gravity is only one - sixth that of the earth's. she may feel " weightless " during her trip
</ctx>
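The lawn-mower numbers in the context above reduce to one line of arithmetic; a minimal Python sketch (an editorial addition using the values quoted in the passage; the moon value g = 1.67 m/s^2 is the usual textbook figure and an assumption here):

F_net, m = 51.0, 24.0          # net force (n) and mass (kg) quoted in the passage
print(round(F_net / m, 1))     # newton's second law: a = f_net / m = 2.1 m/s^2
print(m * 9.80, m * 1.67)      # weight w = m*g: 235.2 n on earth, 40.08 n on the moon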
The following are multiple choice questions (with answers) about STEM.
What determines the weight of an object?
A. Force of gravity pulling it
B. Inches or fractions of inches.
C. Burning fuel and air
D. a hemisphere is tilted away from the sun
Answer: | [
"Force of gravity pulling it",
"Inches or fractions of inches.",
"Burning fuel and air",
"a hemisphere is tilted away from the sun"
] | A | |
qasc_6408 | qasc | <ctx>
. the cells of the spongy layer are more branched and not so tightly packed, so that there are large intercellular air spaces between them. the pores or stomata of the epidermis open into substomatal chambers, which are connected to the intercellular air spaces between the spongy and palisade mesophyll cell, so that oxygen, carbon dioxide and water vapor can diffuse into and out of the leaf and access the mesophyll cells during respiration, photosynthesis and transpiration. leaves are normally green, due to chlorophyll in chloroplasts in the mesophyll cells. some plants have leaves of different colours due to the presence of accessory pigments such as carotenoids in their mesophyll cells. vascular tissue the veins are the vascular tissue of the leaf and are located in the spongy layer of the mesophyll. the pattern of the veins is called venation. in angiosperms the venation is typically parallel in monocotyledons and forms an interconnecting network in broad - leaved plants. they were once thought to be typical examples of pattern formation through ramification, but they may instead exemplify a pattern formed in a stress tensor field. a vein is made up of a vascular bundle. at the core of each bundle are clusters of two distinct types of conducting cells : xylem cells that bring water and minerals from the roots into the leaf. phloem cells that usually move sap, with dissolved sucrose ( glucose to sucrose ) produced by photosynthesis in the leaf, out of the leaf. the xylem typically lies on the adaxial side of the vascular bundle and the phloem typically lies on the abaxial side. both are embedded in a dense parenchyma tissue, called the sheath, which usually includes some structural collenchyma tissue. leaf development according to agnes arber's partial - shoot theory of the leaf, leaves are partial shoots, being derived from leaf primordia of the shoot apex. early in development they are dorsiventrally flattened with both dorsal and
in botany, a plant shoot consists of any plant stem together with its appendages like, leaves and lateral buds, flowering stems, and flower buds. the new growth from seed germination that grows upward is a shoot where leaves will develop. in the spring, perennial plant shoots are the new growth that grows from the ground in herbaceous plants or the new stem or flower growth that grows on woody plants. in everyday speech, shoots are often synonymous with stems. stems, which are an integral component of shoots, provide an axis for buds, fruits, and leaves. young shoots are often eaten by animals because the fibers in the new growth have not yet completed secondary cell wall development, making the young shoots softer and easier to chew and digest. as shoots grow and age, the cells develop secondary cell walls that have a hard and tough structure. some plants ( e. g. bracken ) produce toxins that make their shoots inedible or less palatable. shoot types of woody plants many woody plants have distinct short shoots and long shoots. in some angiosperms, the short shoots, also called spur shoots or fruit spurs, produce the majority of flowers and fruit. a similar pattern occurs in some conifers and in ginkgo, although the " short shoots " of some genera such as picea are so small that they can be mistaken for part of the leaf that they have produced. a related phenomenon is seasonal heterophylly, which involves visibly different leaves from spring growth and later lammas growth. whereas spring growth mostly comes from buds formed the previous season, and often includes flowers, lammas growth often involves long shoots. see also bud crown ( botany ) heteroblasty ( botany ), an abrupt change in the growth pattern of some plants as they mature lateral shoot phyllotaxis, the arrangement of leaves along a plant stem seedling sterigma, the " woody peg " below the leaf of some conifers thorn ( botany ), true thorns, as distinct from spines or prickles, are short shoots references plant morphology
mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed seeds. woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues : wood ( secondary xylem ) and bark ( secondary phloem and cork ). all gymnosperms and many angiosperms are woody plants. some plants reproduce sexually, some asexually, and some via both means. although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. furthermore, structures can be seen as processes, that is, process combinations. systematic botany systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. it involves, or is related to, biological classification, scientific taxonomy and phylogenetics. biological classification is the method by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent β grouping organisms by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. the dominant classification system is called linnaean taxonomy. it includes ranks and binomial nomenclature. the nomenclature of botanical
support and distribution network for leaves and are correlated with leaf shape. for instance, the parallel venation found in most monocots correlates with their elongated leaf shape and wide leaf base, while reticulate venation is seen in simple entire leaves, while digitate leaves typically have venation in which three or more primary veins diverge radially from a single point. in evolutionary terms, early emerging taxa tend to have dichotomous branching with reticulate systems emerging later. veins appeared in the permian period ( 299 β 252 mya ), prior to the appearance of angiosperms in the triassic ( 252 β 201 mya ), during which vein hierarchy appeared enabling higher function, larger leaf size and adaption to a wider variety of climatic conditions. although it is the more complex pattern, branching veins appear to be plesiomorphic and in some form were present in ancient seed plants as long as 250 million years ago. a pseudo - reticulate venation that is actually a highly modified penniparallel one is an autapomorphy of some melanthiaceae, which are monocots ; e. g., paris quadrifolia ( true - lover's knot ). in leaves with reticulate venation, veins form a scaffolding matrix imparting mechanical rigidity to leaves. morphology changes within a single plant homoblasty characteristic in which a plant has small changes in leaf size, shape, and growth habit between juvenile and adult stages, in contrast to ; heteroblasty characteristic in which a plant has marked changes in leaf size, shape, and growth habit between juvenile and adult stages. anatomy medium - scale features leaves are normally extensively vascularized and typically have networks of vascular bundles containing xylem, which supplies water for photosynthesis, and phloem, which transports the sugars produced by photosynthesis. many leaves are covered in trichomes ( small hairs ) which have diverse structures and functions. small - scale features the major tissue systems present are the epidermis, which covers the upper and lower surfaces the mesophyll tissue, which consists of photosynthetic cells rich in chloroplasts
</ctx>
The following are multiple choice questions (with answers) about STEM.
What forms secondary vascular tissues in trees?
A. allow growth
B. primary growth
C. secondary growth
D. In their leaves
Answer: | [
"allow growth",
"primary growth",
"secondary growth",
"In their leaves"
] | C | |
medmcqa_4723 | medmcqa | <ctx>
of infection. antibiotics are not necessary as the wound is left open to close by secondary intention. postoperative management involves soaking of the toe in warm water 3 times / day for 15 β 20 minutes. the wound is healed in 4 β 6 weeks. no cases of osteomyelitis have been reported. after healing, the nail fold skin remains low and tight at the side of the nail. the rationale is that the nail itself is usually healthy, but overgrown by skin ; when walking, the bilateral nail folds are pressed upwards, which is why narrowing the nail causes excessive recurrences, contrary to narrowing the nail fold. the syme procedure in difficult or recurrent cases of onychocryptosis ( ingrown toenail ) the patient's symptoms persist necessitating a permanent operative solution. the " avulsion procedure " is simple but the surgeon must be skilled enough to accomplish total destruction, and removal of, the nail matrix. another disadvantage is the long healing and recovery time ( > 2 months ). in these cases, the best method is the syme procedure, that means total nail matrix removal + skin flap transfer + phalanx partial osteotomy + stitching. phenolisation following injection of a local anaesthetic at the base of the toenail and perhaps application of a tourniquet, the surgeon will remove ( ablate ) the edge of the nail growing into the flesh and destroy the matrix area with phenol to permanently and selectively ablate the matrix that is producing the ingrown portion of the nail ( i. e., the nail margin ). this is known as a partial matrixectomy, phenolisation, phenol avulsion or partial nail avulsion with matrix phenolisation. also, any infection is surgically drained. after this procedure, other suggestions on aftercare will be made, such as salt water bathing of the toe. the purpose of the procedure is to prevent re - growth where the matrix was cauterized. after the procedure, the nail is slightly narrower ( usually one millimeter or so ) and is barely noticeable a year later. the surgery is advantageous because it can be performed in the
nail avulsion ( removal ) except under extreme circumstances. in most cases, these physicians will remove both sides of a toenail ( even if one side is not currently ingrown ) and coat the nail matrix on both sides with a chemical or acid ( usually phenol ) to prevent re - growth. this leaves most of the nail intact, but ensures that the problem of ingrowth will not recur. there are possible disadvantages if the nail matrix is not coated with the applicable chemical or acid ( phenol ) and is allowed to re - grow ; this method is prone to failure. also, the underlying condition can still become symptomatic if the nail grows back within a year : the nail matrix could be growing a nail that is too curved, thick, wide or otherwise irregular to allow normal growth. furthermore, the flesh can become injured by concussion, tight socks, quick twisting motions while walking, or simply because the nail is growing incorrectly ( likely too wide ). this hypersensitivity to continued injury can mean chronic ingrowth ; the solution is nearly always edge avulsion by the more effective procedure of phenolisation. avulsion procedure in case of recurrence after complete removal, and if the patient never felt any pain before inflammation occurred, the condition is more likely to be onychia which is often confused for an ingrown or ingrowing nail ( onychocryptosis ). complete removal of the whole nail is a simple procedure. anaesthetic is injected and the nail is removed quickly by pulling it outward from the toe. the patient can function normally right after the procedure and most of the discomfort goes away in a few days. the entire procedure can be performed in approximately 20 minutes and is less complex than the wedge resection. the nail often grows back, however, and in most cases it can cause more problems by becoming ingrown again. it can get injured by concussion and in some cases grows back too thick, too wide or deformed. this procedure can result in chronic ingrown nails causing more pain. accordingly, in some cases as determined by a doctor, the nail matrix is coated with a chemical ( usually phenol ) so none
their use of cosmetics. nearly half of these consumers felt that a product claiming to be " natural " should contain all natural ingredients. however, although the united states department of agriculture ( usda ) has designated within its certain requirements within its specific area of regulation for organic products, the u. s. food and drug administration ( fda ) does not recognize a definition for natural products. accordingly, there are no legal definitions in the u. s. for the advertising terms " natural " or " organic " in personal care products. the fda prohibits certain ingredients in cosmetics. some organic products which are designated organic may be intensely modified, sometimes considerably more so than conventional products. history plant extracts and herbs have been used by many cultures as cosmetics and perfumes since ancient times. research is scientifically assessing natural products, selected based on experience in the ancient era. validated use of these materials and products awaits further assessment. cosmetics consumer preference consumers often express a preference for skin products with organic and natural ingredients. the skin care market based on natural products has shown strong growth. clinical and laboratory studies have identified activities in many natural ingredients that have potential beneficial activities for the skin. dermatologists may feel that there is enough scientific evidence to assist in the selection of particular natural ingredients. consumers expected products to perform as advertised. industry response voluntary discontinuation of precursor substances that release small quantities of formaldehyde, which is a carcinogen, as well as reducing levels of the potentially carcinogenic impurity 1, 4 - dioxane. polycyclic musk fragrance ingredients, which have raised concerns as persistent and bioaccumulative endocrine disruptors, are being discontinued. alternative medicine there are significant reservations about complementary and alternative medicine ( cam ) including a " shortage of evidence supporting the efficacy and safety of cam " for skin problems. however, patients express a desire to utilize natural ingredients as treatment. a literature search found a growing prevalence of cam use for skin conditions. a number of textbooks address cam perspectives of skin care. the purpose of this section is to review botanical compounds in skin care ; a broader review the history and theory behind other cam modalities such as psychocutaneous therapies, acupuncture
surgical treatments of ingrown toenails include a number of different options. if conservative treatment of a minor ingrown toenail does not succeed or if the ingrown toenail is severe, surgical management by a podiatrist is recommended. the initial surgical approach is typically a partial avulsion of the nail plate known as a wedge resection or a complete removal of the toenail. if the ingrown toenail recurs despite this treatment, destruction of the germinal matrix with phenol is recommended. as an alternative, one may use 10 % sodium hydroxide which is less toxic or trichloroacetic acid which may give faster healing time. antibiotics are not needed if surgery is performed. wedge resection the physician will perform an onychectomy in which the nail along the edge that is growing into the skin is cut away ( ablated ) and the offending piece of nail is pulled out. any infection is surgically drained. this process is referred to as a " wedge resection " or simple surgical ablation and is not permanent ( i. e., the nail will re - grow from the matrix ). the entire procedure may be performed in a physician's office in approximately thirty to forty - five minutes depending on the extent of the problem. the patient is allowed to go home the same day and the recovery time is anywhere from two weeks to two months barring any complications such as infection. as a follow - up, a physician may prescribe an oral or topical antibiotic or a special soak to be used for about a week after the surgery. some use " lateral onychoplasty, " or " wedge resection, " as the method of choice for ingrown toenails. a wide wedge resection, with a total cleaning ( removal ) of nail matrix, has a nearly 100 % success rate. some physicians will not perform a complete nail avulsion ( removal ) except under extreme circumstances. in most cases, these physicians will remove both sides of a toenail ( even if one side is not currently ingrown ) and coat the nail matrix on both sides with a chemical or acid ( usually phenol ) to prevent re -
</ctx>
The following are multiple choice questions (with answers) about STEM.
Local anesthetic that is not used topically is?
A. Lignocaine
B. Dibucaine
C. Tetracaine
D. Bupivacaine
Answer: | [
"Lignocaine",
"Dibucaine",
"Tetracaine",
"Bupivacaine"
] | D | Bupivacaine REF: Miller 6th ed., p. 590. Surface (topical) anesthetics: * Lignocaine * Tetracaine * Dibucaine * Prilocaine * Cocaine. Not used as surface (topical) anesthetics: * Procaine * Bupivacaine * Mepivacaine |
m1_6742 | m1 | <ctx>
cannot reliably be made to equal. also argued that the condition caused acceptable - functions to be implausibly sparse and isolated from one another. one way to put the point : standardly, any weighted average of two probability function is itself a probability function, so that between any two - functions there will be a continuum of weighted - average - functions along which one of the original - functions gradually transforms into the other. but these continua disappear if the added condition is imposed. now an average of two acceptable - functions will in general not be an acceptable - function. possible rejoinders assuming that holds for a minimally nontrivial set of events and for any - function leads to a contradiction. thus can hold for any - function only for trivial sets of events β that is the triviality result. however, the proof relies on background assumptions that may be challenged. it may be proposed, for instance, that the referent event of an expression like β β is not fixed for a given and, but instead changes as the probability function changes. or it may be proposed that conditioning on should follow a rule other than. but the most common response, among proponents of the condition, has been to explore ways to model conditional events as something other than subsets of a universe set of outcomes. even before lewis published his result, had modeled conditional events as ordered pairs of sets of outcomes. with that approach and others in the same spirit, conditional events and their associated combination and complementation operations do not constitute the usual algebra of sets of standard probability theory, but rather a more exotic type of structure, known as a conditional event algebra. notes references probability articles needing expert attention logic conditional probability
corresponding event ). therefore, if we take the limit in this expression as n β β, the second term will go to zero since { ynβxn } converges to zero in probability ; and the third term will also converge to zero, by the portmanteau lemma and the fact that xn converges to x in distribution. thus since Ξ΅ was arbitrary, we conclude that the limit must in fact be equal to zero, and therefore e [ f ( yn ) ] β e [ f ( x ) ], which again by the portmanteau lemma implies that { yn } converges to x in distribution. qed. convergence of one sequence in distribution and another to a constant implies joint convergence in distribution provided c is a constant. proof : we will prove this statement using the portmanteau lemma, part a. first we want to show that ( xn, c ) converges in distribution to ( x, c ). by the portmanteau lemma this will be true if we can show that e [ f ( xn, c ) ] β e [ f ( x, c ) ] for any bounded continuous function f ( x, y ). so let f be such arbitrary bounded continuous function. now consider the function of a single variable g ( x ) : = f ( x, c ). this will obviously be also bounded and continuous, and therefore by the portmanteau lemma for sequence { xn } converging in distribution to x, we will have that e [ g ( xn ) ] β e [ g ( x ) ]. however the latter expression is equivalent to β e [ f ( xn, c ) ] β e [ f ( x, c ) ] β, and therefore we now know that ( xn, c ) converges in distribution to ( x, c ). secondly, consider | ( xn, yn ) β ( xn, c ) | = | yn β c |. this expression converges in probability to zero because yn converges in probability to c. thus we have demonstrated two facts : by the property proved earlier, these two facts imply that
of probabilities is monotone non decreasing and hence it converges to its supremum. the definition of events implies and ( 2. 2 ) implies. in the following it will always be assumed, so is well defined. ( 2. 3 ) in fact, by definition of it is, so. in a similar way, since by definition of it is, ( 2. 4 ) is valid : ( 2. 4 ) ( 2. 5 ) the above is explained by the fact that the random variable has a symmetric probability density compared to its mean which is zero. by applying in sequence relationships ( 2. 3 ), ( 2. 5 ) and ( 2. 4 ) we get ( 2. 6 ) : ( 2. 6 ) with the same procedure used to obtain ( 2. 3 ), ( 2. 4 ) and ( 2. 5 ) taking advantage this time by the relationship we get ( 2. 7 ) : ( 2. 7 ) by applying in sequence ( 2. 6 ) and ( 2. 7 ) we get : ( 2. 8 ) from, considering the continuity of and the intermediate value theorem we get, which implies. replacing the above in ( 2. 8 ) and passing to the limits : and for, event converges to ( 2. 9 ), by substituting with in ( 2. 9 ) we get the equivalent relationship : ( 2. 10 ) applying the bayes'theorem to the joint event ( 2. 11 ) let : from the above definitions it follows : ( 2. 12 ) substituting ( 2. 12 ) into ( 2. 11 ), we get the equivalent : ( 2. 13 ) substituting ( 2. 9 ) and ( 2. 10 ) into ( 2. 13 ) : ( 2. 14 ) it can be observed that in the second member of ( 2. 14 ) appears the probability distribution of the random variable, normal with mean e variance. the realizations and of the random variable match respectively the probability densities : ( 2. 15 ) ( 2. 16 ) substituting ( 2. 15 ) e ( 2. 16 ) into ( 2. 14 ) and taking the limit for the thesis is proved : bibliography a versatile
these binary operations are straightforward to evaluate. for example, if the probability of an event a is in the interval p ( a ) = a = [ 0. 2, 0. 25 ], and the probability of the event b is in p ( b ) = b = [ 0. 1, 0. 3 ], then the probability of the conjunction is surely in the interval p ( a & b ) = a Γ b = [ 0. 2, 0. 25 ] Γ [ 0. 1, 0. 3 ] = [ 0. 2 Γ 0. 1, 0. 25 Γ 0. 3 ] = [ 0. 02, 0. 075 ] so long as a and b can be assumed to be independent events. if they are not independent, we can still bound the conjunction using the classical frechet inequality. in this case, we can infer at least that the probability of the joint event a & b is surely within the interval p ( a & b ) = env ( max ( 0, a + bβ1 ), min ( a, b ) ) = env ( max ( 0, [ 0. 2, 0. 25 ] + [ 0. 1, 0. 3 ] β1 ), min ( [ 0. 2, 0. 25 ], [ 0. 1, 0. 3 ] ) ) = env ( [ max ( 0, 0. 2 + 0. 1 β 1 ), max ( 0, 0. 25 + 0. 3 β 1 ) ], [ min ( 0. 2, 0. 1 ), min ( 0. 25, 0. 3 ) ] ) = env ( [ 0, 0 ], [ 0. 1, 0. 25 ] ) = [ 0, 0. 25 ] where env ( [ x1, x2 ], [ y1, y2 ] ) is [ min ( x1, y1 ), max ( x2, y2 ) ]. likewise, the probability of the disjunction is surely in the interval p ( a v b ) = a + b β a Γ b = 1 β ( 1 β a ) Γ ( 1 β b ) = 1 β
</ctx>
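The interval arithmetic in the last passage of the context above can be replayed directly; a short Python sketch (an editorial addition; the variable names are ours):

a_lo, a_hi = 0.2, 0.25                                   # p(a) interval from the passage
b_lo, b_hi = 0.1, 0.3                                    # p(b) interval
print(round(a_lo * b_lo, 3), round(a_hi * b_hi, 3))      # under independence: 0.02 0.075
lo = (max(0, a_lo + b_lo - 1), max(0, a_hi + b_hi - 1))  # frechet lower bounds: (0, 0)
hi = (min(a_lo, b_lo), min(a_hi, b_hi))                  # frechet upper bounds: (0.1, 0.25)
print(min(lo[0], hi[0]), max(lo[1], hi[1]))              # env gives [0, 0.25], as in the text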
The following are multiple choice questions (with answers) about STEM.
Let $E$ and $F$ be two events. Suppose that they satisfy $p(E|F)=p(E) > 0.$ True or false: Then we must have $p(F|E)=p(F).$
A. False
B. True
Answer: | [
"False",
"True"
] | B | To determine whether the statement is true or false, let's analyze the relationship between the events \( E \) and \( F \) given the condition \( p(E|F) = p(E) > 0 \).
1. The expression \( p(E|F) \) represents the conditional probability of event \( E \) given that event \( F \) has occurred. According to the definition of conditional probability, we have:
\[
p(E|F) = \frac{p(E \cap F)}{p(F)}
\]
2. Given \( p(E|F) = p(E) \), we can set up the equation:
\[
\frac{p(E \cap F)}{p(F)} = p(E)
\]
3. Rearranging this equation gives us:
\[
p(E \cap F) = p(E) \cdot p(F)
\]
4. This result tells us that the events \( E \) and \( F \) are independent because the occurrence of \( E \) does not affect the probability of \( F \) and vice versa. In probability theory, two events \( E \) and \( F \) are independent if:
\[
p(E \cap F) = p(E) \cdot p(F)
\]
5. Since we have established that \( E \) and \( F \) are independent, we can now find \( p(F|E) \):
\[
p(F|E) = \frac{p(F \cap E)}{p(E)}
\]
Since \( p(F \cap E) = p(E) \cdot p(F) \), we substitute this into the expression:
\[
p(F|E) = \frac{p(E) \cdot p(F)}{p(E)} = p(F)
\]
Thus, we conclude that if \( p(E|F) = p(E) > 0 \), it indeed follows that \( p(F|E) = p(F) \).
Therefore, the statement is true.
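A quick numeric sanity check of the derivation (an editorial addition; the probabilities are made up to satisfy the hypothesis):

pE, pF, pEF = 0.5, 0.4, 0.2           # chosen so that p(E|F) = pEF/pF = 0.5 = p(E)
assert abs(pEF / pF - pE) < 1e-12     # hypothesis: p(E|F) = p(E)
print(pEF / pE)                       # 0.4 = p(F), i.e. p(F|E) = p(F), as derived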
The answer is:
B. True |
medmcqa_1866 | medmcqa | <ctx>
. this completes the stress response process. if the response has the desired effect, the situation should resolve, otherwise further thought and another response will be needed. the dive maxim, " stop, breathe, think, act " is generally a good response, but it is not appropriate for all diving emergencies. this response assumes that both time and an adequate supply of breathing gas are available, and though this is often true, some situations require immediate learned responses which must be habituated by education, training and repetitive practice to overcome inappropriate instinctive and natural reflexive responses. for example, a diver should exhale whenever ascending to prevent lung overexpansion injuries, and if the diver is subject to a collision or sudden upwelling underwater, the natural reaction may be to tense up and hold his breath, particularly if the breathing gas supply is interrupted at the same time. this reaction could prove fatal if the diver is lifted sufficiently to cause lung overexpansion. only through education training and practice, and perhaps proper selection, will the diver reflexively exhale as a response to a pressure reduction. other factors suggested by bachrach to prevent panic are listed below : physical fitness : having a reserve capacity. training which emphasizes in - water skills and comfort medical examinations to ensure no hidden contraindications to diving fatigue prevention or avoidance age limits panic the most frequently cited cause of diver injury or death is panic, or a loss of control analysis of the human factors associated with diving can identify the primary influences which lead to panic, and suggest methods to promote safety. dive safety is primarily controlled by the individual diver and their ability to cope with stress underwater. the development of a diving accident may begin with a diver in a normal psychological and physiological state. the presence of a stressor may alter the diver's psychological and physiological state, and if the stress becomes excessive the diver's skills will diminish. stressors may arise from human factors, the environment, equipment, procedures, organizational factors, or interactions between any of these, and these stress effects are cumulative. a diver is normally able to cope with applied stressors and perform the dive safely, and while the diver has sufficient capacity for coping, the stress is relieved or controlled and the operation
0. 1 mm in diameter. ( see figure 11. 28. ) you can exhale without muscle action by allowing surface tension to contract these sacs. medical patients whose breathing is aided by a positive pressure respirator have air blown into the lungs, but are generally allowed to exhale on their own. even if there is paralysis, surface tension in the alveoli will expel air from the lungs. since pressure increases as the radii of the alveoli decrease, an occasional deep cleansing breath is needed to fully reinflate the alveoli. respirators are programmed to do this and we find it natural, as do our companion dogs and cats, to take a cleansing breath before settling into a nap.
the property by which some neurons do not return to their basal conditions from a stimulated condition immediately after removal of the stimulus is an example of hysteresis. neuropsychology: neuropsychology, in exploring the neural correlates of consciousness, interfaces with neuroscience, although the complexity of the central nervous system is a challenge to its study ( that is, its operation resists easy reduction ). context - dependent memory and state - dependent memory show hysteretic aspects of neurocognition. respiratory physiology: lung hysteresis is evident when observing the compliance of a lung on inspiration versus expiration. the difference in compliance ( Δvolume / Δpressure ) is due to the additional energy required to overcome surface tension forces during inspiration to recruit and inflate additional alveoli. the transpulmonary pressure vs volume curve of inhalation is different from the pressure vs volume curve of exhalation, the difference being described as hysteresis. lung volume at any given pressure during inhalation is less than the lung volume at any given pressure during exhalation. voice and speech physiology: a hysteresis effect may be observed in voicing onset versus offset. the threshold value of the subglottal pressure required to start the vocal fold vibration is lower than the threshold value at which the vibration stops, when other parameters are kept constant. in utterances of vowel - voiceless consonant - vowel sequences during speech, the intraoral pressure is lower at the voice onset of the second vowel compared to the voice offset of the first vowel, the oral airflow is lower, the transglottal pressure is larger and the glottal width is smaller. ecology and epidemiology: hysteresis is a commonly encountered phenomenon in ecology and epidemiology, where the observed equilibrium of a system can not be predicted solely based on environmental variables, but also requires knowledge of the system's past history. notable examples include the theory of spruce budworm outbreaks and behavioral - effects on disease transmission. it is commonly examined in relation to critical transitions between ecosystem or community types in which dominant competitors or entire landscapes can change in a largely irreversible fashion
in keeping the demand valve mechanism free to move, but the ice still forms in the regulator and has to go somewhere when it breaks loose. if inhaled, a piece of ice can cause laryngospasm or a serious coughing spell. with most second stage scuba regulators, ice forms and builds up on internal components such as the valve actuating lever, valve housing tube, and the inlet valve poppet, the gap between the lever and fulcrum point is reduced and eventually filled by the build - up of ice that forms, preventing the inlet from fully closing during exhalation once the valve starts leaking, the second stage components get even colder due to the cooling effect of the continuous flow, creating more ice and an even greater free flow. with some regulators the refrigerating effect is so great, that water around the exhaust valve freezes, reducing exhaust flow, and increasing exhalation effort and producing positive pressure in the valve body, making it difficult to exhale through the regulator. this may cause the diver to loosen their grip on the mouthpiece and exhale around the mouthpiece. with some regulators, once the regulator starts free - flowing the flow escalates into a full free - flow, and delivers air to the diver at temperatures cold enough to freeze mouth tissue in a short time. the effect increases with depth, and the deeper the diver is, the faster the breathing gas will be lost. in some cold water fatalities, by the time the diver β s body is recovered there is no gas left in the cylinder, and the regulator has warmed up and melted the ice, destroying the evidence, and leading to a finding of death by drowning due to running out of gas. mechanism of icing when the high pressure gas passes through the regulator first stage, the pressure drop from cylinder pressure to inter - stage pressure causes a temperature drop as the gas expands. the higher the cylinder pressure the greater the drop in pressure and the colder the gas gets in the low pressure hose to the second stage. an increase in flow will increase the amount of heat lost and the gas will get colder, as heat transfer from the surrounding water is limited. if the breathing rate is low to moderate ( 15 to 30 lpm ) the risk
</ctx>
The following are multiple choice questions (with answers) about STEM.
More resistance in expiration is due to:
A. Increased compression of airway
B. Due to change from linear to turbulent flow
C. Saturation with moisture
D. Increased rate of flow during expiration
Answer: | [
"Increased compression of airway",
"Due to change from linear to turbulent flow",
"Saturation with moisture",
"Increased rate of flow during expiration"
] | A | A, i.e., increased compression of the airways. During expiration, the rise in intrapleural pressure compresses and narrows the airways (dynamic airway compression), so airway resistance is higher during expiration than during inspiration. |
arc_challenge_564 | arc_challenge | <ctx>
about freely. neutral atoms and their associated ions have very different physical and chemical properties. sodium atoms form sodium metal, a soft, silvery - white metal that burns vigorously in air and reacts explosively with water. chlorine atoms form chlorine gas, cl2, a yellow - green gas that is extremely corrosive to most metals and very poisonous to animals and plants. the vigorous reaction between the elements sodium and chlorine forms the white, crystalline compound sodium chloride, common table salt, which contains sodium cations and chloride anions ( figure 7. 2 ). the compound composed of these ions exhibits properties entirely different from the properties of the elements sodium and chlorine. chlorine is poisonous, but sodium chloride is essential to life ; sodium atoms react vigorously with water, but sodium chloride simply dissolves in water. figure 7. 2 ( a ) sodium is a soft metal that must be stored in mineral oil to prevent reaction with air or water. ( b ) chlorine is a pale yellow - green gas. ( c ) when combined, they form white crystals of sodium chloride ( table salt ). ( credit a : modification of work by β jurii β / wikimedia commons ) the formation of ionic compounds binary ionic compounds are composed of just two elements : a metal ( which forms the cations ) and a nonmetal ( which forms the anions ). for example, nacl is a binary ionic compound. we can think about the formation of such compounds in terms of the periodic properties of the elements. many metallic elements have relatively low ionization potentials and lose electrons easily. these elements lie to the left in a period or near the bottom of a group on the periodic table. nonmetal atoms have relatively high electron affinities and thus readily gain electrons lost by metal atoms, thereby filling their valence shells. nonmetallic elements are found in the upper - right corner of the periodic table. as all substances must be electrically neutral, the total number of positive charges on the cations of an ionic compound must equal the total number of negative charges on its anions. the formula of an ionic compound represents the simplest ratio of the numbers of ions necessary to give
a salt is an ionic compound consisting of cations and anions. figure 18.2: the location of the representative metals is shown in the periodic table. nonmetals are shown in green, metalloids in purple, and the transition metals and inner transition metals in blue. most of the representative metals do not occur naturally in an uncombined state because they readily react with water and oxygen in the air. however, it is possible to isolate elemental beryllium, magnesium, zinc, cadmium, mercury, aluminum, tin, and lead from their naturally occurring minerals and use them because they react very slowly with air. part of the reason why these elements react slowly is that these elements react with air to form a protective coating. the formation of this protective coating is passivation. the coating is a nonreactive film of oxide or some other compound. elemental magnesium, aluminum, zinc, and tin are important in the fabrication of many familiar items, including wire, cookware, foil, and many household and personal objects. although beryllium, cadmium, mercury, and lead are readily available, there are limitations in their use because of their toxicity. group 1 : the alkali metals. the alkali metals lithium, sodium, potassium, rubidium, cesium, and francium constitute group 1 of the periodic table. although hydrogen is in group 1 ( and also in group 17 ), it is a nonmetal and deserves separate consideration later in this chapter. the name alkali metal is in reference to the fact that these metals and their oxides react with water to form very basic ( alkaline ) solutions. the properties of the alkali metals are similar to each other as expected for elements in the same family. the alkali metals have the largest atomic radii and the lowest first ionization energy in their periods. this combination makes it very easy to remove the single electron in the outermost ( valence ) shell of each. the easy loss of this valence electron means that these metals readily form stable cations with a charge of 1 +. their
in the isolation of reactive metals in their pure forms. sodium, aluminum, and magnesium are typical examples. the preparation of sodium: the most important method for the production of sodium is the electrolysis of molten sodium chloride ; the set - up is a downs cell, shown in figure 18.10. the reaction involved in this process is : 2 nacl ( l ) → 2 na ( l ) + cl2 ( g ). the electrolysis cell contains molten sodium chloride ( melting point 801 °c ), to which calcium chloride has been added to lower the melting point to 600 °c ( a colligative effect ). the passage of a direct current through the cell causes the sodium ions to migrate to the negatively charged cathode and pick up electrons, reducing the ions to sodium metal. chloride ions migrate to the positively charged anode, lose electrons, and undergo oxidation to chlorine gas. the overall cell reaction comes from adding the following reactions : at the cathode, na+ + e− → na ( l ) ; at the anode, 2 cl− → cl2 ( g ) + 2 e−. separation of the molten sodium and chlorine prevents recombination. the liquid sodium, which is less dense than molten sodium chloride, floats to the surface and flows into a collector. the gaseous chlorine goes to storage tanks. chlorine is also a valuable product.
covalent molecular compounds. chlorine generally reacts with compounds that contain only carbon and hydrogen ( hydrocarbons ) by adding to multiple bonds or by substitution. in cold water, chlorine undergoes a disproportionation reaction : half the chlorine atoms oxidize to the 1+ oxidation state ( hypochlorous acid ), and the other half reduce to the 1− oxidation state ( chloride ion ). this disproportionation is incomplete, so chlorine water is an equilibrium mixture of chlorine molecules, hypochlorous acid molecules, hydronium ions, and chloride ions. when exposed to light, this solution undergoes a photochemical decomposition : the nonmetal chlorine is more electronegative than any other element except fluorine, oxygen, and nitrogen. in general, very electronegative elements are good oxidizing agents ; therefore, we would expect elemental chlorine to oxidize all of the other elements except for these three ( and the nonreactive noble gases ). its oxidizing property, in fact, is responsible for its principal use. for example, phosphorus ( v ) chloride, an important intermediate in the preparation of insecticides and chemical weapons, is manufactured by oxidizing the phosphorus with chlorine : pcl3 ( l ) + cl2 ( g ) → pcl5 ( s ). a great deal of chlorine is also used to oxidize, and thus to destroy, organic or biological materials in water purification and in bleaching. the chemical properties of bromine are similar to those of chlorine, although bromine is the weaker oxidizing agent and its reactivity is less than that of chlorine. iodine is the least reactive of the halogens. it is the weakest oxidizing agent, and the iodide ion is the most easily oxidized halide ion. iodine reacts with metals, but heating is often required. it does not oxidize other halide ions. compared with the other halogens, iodine reacts only slightly with water. traces of iodine in water react with a mixture of starch
</ctx>
The following are multiple choice questions (with answers) about STEM.
What compound is formed when the soft putty-like metal sodium (Na) reacts with the green-colored poisonous gas chlorine (Cl_{2})?
A. sugar
B. table salt
C. sulfuric acid
D. sodium hydroxide
Answer: | [
"sugar",
"table salt",
"sulfuric acid",
"sodium hydroxide"
] | B | |
m1_3640 | m1 | <ctx>
probabilistic latent semantic analysis ( plsa ), also known as probabilistic latent semantic indexing ( plsi, especially in information retrieval circles ) is a statistical technique for the analysis of two - mode and co - occurrence data. in effect, one can derive a low - dimensional representation of the observed variables in terms of their affinity to certain hidden variables, just as in latent semantic analysis, from which plsa evolved. compared to standard latent semantic analysis which stems from linear algebra and downsizes the occurrence tables ( usually via a singular value decomposition ), probabilistic latent semantic analysis is based on a mixture decomposition derived from a latent class model. model: considering observations in the form of co - occurrences of words and documents, plsa models the probability of each co - occurrence as a mixture of conditionally independent multinomial distributions : p ( w, d ) = ∑c p ( c ) p ( d | c ) p ( w | c ) = p ( d ) ∑c p ( c | d ) p ( w | c ), with c being the words' topic. note that the number of topics is a hyperparameter that must be chosen in advance and is not estimated from the data. the first formulation is the symmetric formulation, where w and d are both generated from the latent class c in similar ways ( using the conditional probabilities p ( d | c ) and p ( w | c ) ), whereas the second formulation is the asymmetric formulation, where, for each document d, a latent class is chosen conditionally to the document according to p ( c | d ), and a word is then generated from that class according to p ( w | c ). although we have used words and documents in this example, the co - occurrence of any couple of discrete variables may be modelled in exactly the same way. so, the number of parameters is equal to cd + wc. the number of parameters grows linearly with the number of documents. in addition, although plsa is a generative model of the documents in the collection it is estimated on, it is not a generative model of new documents. the parameters are learned using the em algorithm. application: plsa may be used in a discriminative setting, via fisher kernels. plsa has applications in information retrieval and filtering, natural language processing, machine learning from text, bioinformatics, and related areas. it is reported that the aspect model used in the probabilistic latent semantic analysis has severe overfitting problems
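The symmetric formulation and its em fit can be sketched in a few lines of numpy (an editorial illustration of the standard update equations, not the source's implementation; all names and the toy dimensions are our assumptions):

import numpy as np

def plsa_em(counts, k, iters=50, seed=0):
    # counts[d, w] = co-occurrence count of document d and word w
    rng = np.random.default_rng(seed)
    d, w = counts.shape
    p_c = rng.dirichlet(np.ones(k))              # p(c)
    p_d_c = rng.dirichlet(np.ones(d), size=k)    # p(d|c), one row per topic
    p_w_c = rng.dirichlet(np.ones(w), size=k)    # p(w|c)
    for _ in range(iters):
        # e-step: responsibilities p(c|d,w) proportional to p(c) p(d|c) p(w|c)
        joint = p_c[:, None, None] * p_d_c[:, :, None] * p_w_c[:, None, :]
        post = joint / joint.sum(0, keepdims=True)
        # m-step: re-estimate each factor from expected counts
        nc = (post * counts[None]).sum(axis=(1, 2))
        p_c = nc / nc.sum()
        p_d_c = (post * counts[None]).sum(2) / nc[:, None]
        p_w_c = (post * counts[None]).sum(1) / nc[:, None]
    return p_c, p_d_c, p_w_c

counts = np.array([[4, 0, 1], [0, 5, 1]])        # 2 documents x 3 words, toy data
print(plsa_em(counts, k=2)[0])                   # mixing weights p(c) of the 2 topics

Note that this learns the symmetric parameterization, whose free-parameter count grows as cd + wc, matching the passage above.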
has the following limitations : long documents are poorly represented because they have poor similarity values ( a small scalar product and a large dimensionality ) ; search keywords must precisely match document terms, and word substrings might result in a " false positive match " ; semantic sensitivity : documents with similar context but different term vocabulary won't be associated, resulting in a " false negative match " ; the order in which the terms appear in the document is lost in the vector space representation ; it theoretically assumes terms are statistically independent ; weighting is intuitive but not very formal. many of these difficulties can, however, be overcome by the integration of various tools, including mathematical techniques such as singular value decomposition and lexical databases such as wordnet. models based on and extending the vector space model include : generalized vector space model, latent semantic analysis, term discrimination, rocchio classification, random indexing. search engine optimization. software that implements the vector space model : the following software packages may be of interest to those wishing to experiment with vector models and implement search services based upon them. free open source software : apache lucene is a high - performance, open source, full - featured text search engine library written entirely in java. opensearch ( software ) and solr : the two best - known search engines based on lucene ( many smaller ones exist ). gensim is a python + numpy framework for vector space modelling. it contains incremental ( memory - efficient ) algorithms for term frequency - inverse document frequency, latent semantic indexing, random projections and latent dirichlet allocation. weka is a popular data mining package for java including wordvectors and bag - of - words models. word2vec uses vector spaces for word embeddings. further reading : g. salton ( 1962 ), " some experiments in the generation of word and document associations ", proceedings afips '62 ( fall ), fall joint computer conference, december 4–6, 1962, pages 234–250 ( early paper of salton using the term - document matrix formalization ). g. salton, a. wong, and c. s. yang ( 1975 )
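To make the "false negative match" limitation concrete, a small sketch using scikit-learn (assumed available; any tf-idf implementation would do): a query that is semantically close but lexically disjoint from a document scores zero.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat",
        "dogs and cats make good pets",
        "stock markets fell sharply today"]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)              # tf-idf weighted term-document matrix
q = vec.transform(["kitten on a mat"])   # "kitten" is out of vocabulary and is dropped
print(cosine_similarity(q, X).round(2))  # doc 2 scores 0.0 despite mentioning cats
```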
the lowest common subsumer ( lcs ) and the two compared concepts. maguitman, menczer, roinestad and vespignani generalize lin's similarity to arbitrary ontologies ( graphs ). jiang and conrath : based on resnik's similarity ; considers the information content of the lowest common subsumer ( lcs ) and the two compared concepts to calculate the distance between the two concepts, and the distance is later used in computing the similarity measure. align, disambiguate, and walk : random walks on semantic networks. node - and - relation - content - based : applicable to ontologies ; considers properties ( content ) of nodes and types ( content ) of relations ; based on etvsm ; based on resnik's similarity. pairwise : maximum of the pairwise similarities ; composite average in which only the best - matching pairs are considered ( best - match average ). groupwise : jaccard index. statistical similarity : statistical similarity approaches can be learned from data, or predefined. similarity learning can often outperform predefined similarity measures. broadly speaking, these approaches build a statistical model of documents, and use it to estimate similarity. lsa ( latent semantic analysis ) ( + ) vector - based, adds vectors to measure multi - word terms ; ( − ) non - incremental vocabulary, long pre - processing times. pmi ( pointwise mutual information ) ( + ) large vocab, because it uses any search engine ( like google ) ; ( − ) cannot measure relatedness between whole sentences or documents. soc - pmi ( second - order co - occurrence pointwise mutual information ) ( + ) sorts lists of important neighbor words from a large corpus ; ( − ) cannot measure relatedness between whole sentences or documents. glsa ( generalized latent semantic analysis ) ( + ) vector - based, adds vectors to measure multi - word terms ; ( − ) non - incremental vocabulary, long pre - processing times. ican ( incremental construction of an associative network ) ( + ) incremental, network - based measure, good for spreading activation, accounts for second - order relatedness ; ( − ) cannot measure relatedness between multi - word terms, long pre - processing times. ngd
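The excerpt names Lin's and Jiang-Conrath's measures without their formulas. The standard published forms, with $IC(c) = -\log p(c)$ the information content of concept $c$ and $\mathrm{lcs}(c_1, c_2)$ the lowest common subsumer:

$$\mathrm{sim}_{\mathrm{Lin}}(c_1, c_2) = \frac{2 \, IC(\mathrm{lcs}(c_1, c_2))}{IC(c_1) + IC(c_2)}$$

$$d_{\mathrm{JC}}(c_1, c_2) = IC(c_1) + IC(c_2) - 2 \, IC(\mathrm{lcs}(c_1, c_2))$$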
similar accuracy results. overall, accuracy increases with the number of words used and the number of dimensions. mikolov et al. report that doubling the amount of training data results in an increase in computational complexity equivalent to doubling the number of vector dimensions. altszyler and coauthors ( 2017 ) studied word2vec performance in two semantic tests for different corpus size. they found that word2vec has a steep learning curve, outperforming another word - embedding technique, latent semantic analysis ( lsa ), when it is trained with medium to large corpus size ( more than 10 million words ). however, with a small training corpus, lsa showed better performance. additionally they show that the best parameter setting depends on the task and the training corpus. nevertheless, for skip - gram models trained in medium size corpora, with 50 dimensions, a window size of 15 and 10 negative samples seems to be a good parameter setting. see also semantle autoencoder document - term matrix feature extraction feature learning neural network language models vector space model thought vector fasttext glove normalized compression distance references external links wikipedia2vec ( introduction ) implementations c c # python ( spark ) python ( tensorflow ) python ( gensim ) java / scala r free science software natural language processing toolkits artificial neural networks semantic relations
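The parameter setting quoted above maps directly onto gensim's Word2Vec class (this sketch assumes gensim >= 4.0, where the dimension argument is `vector_size`; older releases called it `size`):

```python
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat"], ["the", "dog", "barked"]]  # toy corpus
model = Word2Vec(
    sentences,
    sg=1,            # skip-gram, as in the recommendation above
    vector_size=50,  # 50 dimensions
    window=15,       # window size of 15
    negative=10,     # 10 negative samples
    min_count=1,     # keep every token in this tiny corpus
)
print(model.wv["cat"][:5])  # first few components of the learned vector
```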
</ctx>
The following are multiple choice questions (with answers) about STEM.
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is false?
A. The dimensions of LSI can be interpreted as concepts, whereas those of WE cannot
B. LSI does not depend on the order of words in the document, whereas WE does
C. LSI is deterministic (given the dimension), whereas WE is not
D. LSI does take into account the frequency of words in the documents, whereas WE with negative sampling does not
Answer: | [
"The dimensions of LSI can be interpreted as concepts, whereas those of WE cannot",
"LSI does not depend on the order of words in the document, whereas WE does",
"LSI is deterministic (given the dimension), whereas WE is not",
"LSI does take into account the frequency of words in the documents, whereas WE wit... | D | The false statement among the options provided is:
**D. LSI does take into account the frequency of words in the documents, whereas WE with negative sampling does not.**
### Justification:
1. **LSI and Word Frequencies**: Latent Semantic Indexing (LSI) is based on Singular Value Decomposition (SVD) of the term-document matrix, which inherently considers the frequency of terms (the term's weight in the matrix is often based on TF-IDF). So, LSI does take word frequency into account.
2. **Word Embeddings and Negative Sampling**: Word embeddings trained with negative sampling (as in Word2Vec) are still shaped by word frequency: positive training pairs are drawn in proportion to how often words co-occur in the corpus, and negative examples are drawn from a unigram frequency distribution (typically raised to the 3/4 power). Negative sampling is a technique to speed up training, not to remove frequency information, so the model remains influenced by frequency.
3. **Interpretability of Dimensions**: The first statement is generally considered true because LSI dimensions can often be interpreted as latent concepts, while word embeddings tend to have less direct interpretability of dimensions.
4. **Determinism**: The third statement is true as well; LSI is deterministic given the same input and dimensionality, whereas WE models may yield different results due to randomness in initialization or stochastic training processes.
5. **Order of Words**: The second statement is true; LSI treats documents as bags of words (ignoring order), while word embeddings can capture order through context in sequence models or n-grams.
### Reflection:
This analysis seems sound because it accurately assesses the nature of LSI and WE in relation to word frequency. The answer is confident due to the established principles of these techniques in natural language processing.
Improvements could be made in terms of providing more context or examples of how each method accounts for frequencies and concepts, which could enhance understanding for those less familiar with the topic. Nonetheless, the identification of the false statement is clear and justified. |
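To make the determinism point concrete, a toy sketch: the SVD of a fixed term-document matrix yields the same latent "concept" space on every run, whereas word2vec-style training varies with random initialization unless explicitly seeded. (The matrix here is invented purely for illustration.)

```python
import numpy as np

# term-document count matrix: 3 terms x 4 documents
A = np.array([[2., 0., 1., 0.],
              [0., 1., 1., 2.],
              [1., 1., 0., 0.]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
docs_2d = np.diag(s[:2]) @ Vt[:2]  # documents projected onto 2 latent dimensions
print(docs_2d)                     # identical on every run: LSI is deterministic
```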
openbookqa_836 | openbookqa | <ctx>
animalia ) and the protozoa, single - celled organisms no longer considered animals. in modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. humans make use of many animal species, such as for food ( including meat, milk, and eggs ), for materials ( such as leather and wool ), as pets, and as working animals including for transport. dogs have been used in hunting, as have birds of prey, while many terrestrial and aquatic animals were hunted for sport. nonhuman animals have appeared in art since the earliest times and are featured in mythology and religion. etymology : the word " animal " comes from the latin, meaning 'having breath', 'having soul' or 'living being'. the biological definition includes all members of the kingdom animalia. in colloquial usage, the term animal is often used to refer only to nonhuman animals. the term " metazoa " is derived from the ancient greek μετά ( meta, meaning " later " ) and ζῷα ( zoia, plural of ζῷον zoion, meaning animal ). characteristics : animals have several characteristics that set them apart from other living things. animals are eukaryotic and multicellular. unlike plants and algae, which produce their own nutrients, animals are heterotrophic, feeding on organic material and digesting it internally. with very few exceptions, animals respire aerobically. all animals are motile ( able to spontaneously move their bodies ) during at least part of their life cycle, but some animals, such as sponges, corals, mussels, and barnacles, later become sessile. the blastula is a stage in embryonic development that is unique to animals, allowing cells to be differentiated into specialised tissues and organs. structure : all animals are composed of cells, surrounded by a characteristic extracellular matrix composed of collagen and elastic glycoproteins. during development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised, making the formation of complex structures possible. this may
a feral ( ) animal or plant is one that lives in the wild but is descended from domesticated individuals. as with an introduced species, the introduction of feral animals or plants to non - native regions may disrupt ecosystems and has, in some cases, contributed to extinction of indigenous species. the removal of feral species is a major focus of island restoration. animals a feral animal is one that has escaped from a domestic or captive status and is living more or less as a wild animal, or one that is descended from such animals. other definitions include animals that have changed from being domesticated to being wild, natural, or untamed. some common examples of animals with feral populations are horses, dogs, goats, cats, rabbits, camels, and pigs. zoologists generally exclude from the feral category animals that were genuinely wild before they escaped from captivity : neither lions escaped from a zoo nor the white - tailed eagles re - introduced to the uk are regarded as feral. plants domesticated plants that revert to wild are referred to as escaped, introduced, naturalized, or sometimes as feral crops. individual plants are known as volunteers. large numbers of escaped plants may become a noxious weed. the adaptive and ecological variables seen in plants that go wild closely resemble those of animals. feral populations of crop plants, along with hybridization between crop plants and their wild relatives, brings a risk that genetically engineered characteristics such as pesticide resistance could be transferred to weed plants. the unintended presence of genetically modified crop plants or of the modified traits in other plants as a result of cross - breeding is known as " adventitious presence ( ap ) ". variables certain familiar animals go feral easily and successfully, while others are much less inclined to wander and usually fail promptly outside domestication. some species will detach readily from humans and pursue their own devices, but do not stray far or spread readily. others depart and are gone, seeking out new territory or range to exploit and displaying active invasiveness. whether they leave readily and venture far, the ultimate criterion for success is longevity. persistence depends on their ability to establish themselves and reproduce reliably in the new environment. neither the duration nor the intensity with which a species has been domesticated offers a useful
terrestrial animals are animals that live predominantly or entirely on land ( e. g. cats, chickens, ants, spiders ), as compared with aquatic animals, which live predominantly or entirely in the water ( e. g. fish, lobsters, octopuses ), and amphibians, which rely on aquatic and terrestrial habitats ( e. g. frogs and newts ). some groups of insects are terrestrial, such as ants, butterflies, earwigs, cockroaches, grasshoppers and many others, while other groups are partially aquatic, such as mosquitoes and dragonflies, which pass their larval stages in water. terrestrial classes the term " terrestrial " is typically applied to species that live primarily on the ground, in contrast to arboreal species, which live primarily in trees. there are other less common terms that apply to specific groups of terrestrial animals : saxicolous creatures are rock dwelling. " saxicolous " is derived from the latin word saxum, meaning a rock. arenicolous creatures live in the sand. troglofauna predominantly live in caves. taxonomy terrestrial invasion is one of the most important events in the history of life. terrestrial lineages evolved in several animal phyla, among which arthropods, vertebrates and mollusks are representatives of more successful groups of terrestrial animals. terrestrial animals do not form a unified clade ; rather, they share only the fact that they live on land. the transition from an aquatic to terrestrial life by various groups of animals has occurred independently and successfully many times. most terrestrial lineages originated under a mild or tropical climate during the paleozoic and mesozoic, whereas few animals became fully terrestrial during the cenozoic. if internal parasites are excluded, free living species in terrestrial environments are represented by the following eleven phyla : gastrotrichs ( hairy - backs ) live in transient terrestrial water and go dormant during desiccation rotifers ( wheel animals ) live in transient terrestrial water and go dormant during desiccation nematodes ( roundworms ) by going dormant during desiccation eutardigrades ( water bears ) live in transient terrestrial water and go dormant during desiccation flatworms ( land
in zoology, mammalogy is the study of mammals β a class of vertebrates with characteristics such as homeothermic metabolism, fur, four - chambered hearts, and complex nervous systems. mammalogy has also been known as " mastology, " " theriology, " and " therology. " the archive of number of mammals on earth is constantly growing, but is currently set at 6, 495 different mammal species including recently extinct. there are 5, 416 living mammals identified on earth and roughly 1, 251 have been newly discovered since 2006. the major branches of mammalogy include natural history, taxonomy and systematics, anatomy and physiology, ethology, ecology, and management and control. the approximate salary of a mammalogist varies from $ 20, 000 to $ 60, 000 a year, depending on their experience. mammalogists are typically involved in activities such as conducting research, managing personnel, and writing proposals. mammalogy branches off into other taxonomically - oriented disciplines such as primatology ( study of primates ), and cetology ( study of cetaceans ). like other studies, mammalogy is also a part of zoology which is also a part of biology, the study of all living things. research purposes mammalogists have stated that there are multiple reasons for the study and observation of mammals. knowing how mammals contribute or thrive in their ecosystems gives knowledge on the ecology behind it. mammals are often used in business industries, agriculture, and kept for pets. studying mammals habitats and source of energy has led to aiding in survival. the domestication of some small mammals has also helped discover several different diseases, viruses, and cures. mammalogist a mammalogist studies and observes mammals. in studying mammals, they can observe their habitats, contributions to the ecosystem, their interactions, and the anatomy and physiology. a mammalogist can do a broad variety of things within the realm of mammals. a mammalogist on average can make roughly $ 58, 000 a year. this depends on employer and state. history the first people recorded to have researched mammals were the ancient greeks with records on mammals that were not even native to greece and others that were. aristotle was one of the first to recognize whales and
</ctx>
The following are multiple choice questions (with answers) about STEM.
What animal is cold-blooded?
A. cats
B. dogs
C. tree hopper
D. rats
Answer: | [
"cats",
"dogs",
"tree hopper",
"rats"
] | C | |
aquarat_47433 | aquarat | <ctx>
data to be more carefully and fully analyzed : pixel by pixel or unit by unit, rather than on a larger scale. where it would not have been possible or practical to simulate the impact of nuclear detonation on every building, car, and their contents ( including furniture, structure strength, etc. ) because such a calculation would have taken more time than was available to provide an answer, the increase in computing power will prompt researchers to add more data to more fully simulate more variables, giving a more accurate result. application in everyday computer systems : amdahl's law reveals a limitation in, for example, the ability of multiple cores to reduce the time it takes for a computer to boot to its operating system and be ready for use. assuming the boot process was mostly parallel, quadrupling computing power on a system that took one minute to load might reduce the boot time to just over fifteen seconds. but greater and greater parallelization would eventually fail to make bootup go any faster, if any part of the boot process were inherently sequential. gustafson's law argues that a fourfold increase in computing power would instead lead to a similar increase in expectations of what the system will be capable of. if the one - minute load time is acceptable to most users, then that is a starting point from which to increase the features and functions of the system. the time taken to boot to the operating system will be the same, i. e. one minute, but the new system would include more graphical or user - friendly features. limitations : some problems do not have fundamentally larger datasets. as an example, processing one data point per world citizen gets larger at only a few percent per year. the principal point of gustafson's law is that such problems are not likely to be the most fruitful applications of parallelism. algorithms with nonlinear runtimes may find it hard to take advantage of parallelism " exposed " by gustafson's law. snyder points out that for an o ( n³ ) algorithm, doubling the concurrency gives only about a 26 % increase in problem size, since the feasible problem size scales with the cube root of the available work. thus, while it may be possible to occupy vast concurrency, doing so may bring little advantage over the original, less concurrent solution ; however, in practice there have still been
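A quick sketch of the arithmetic behind both claims (the parallel fractions below are hypothetical, chosen only to reproduce the boot-time example and Snyder's figure):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: p = parallelizable fraction of the work, n = processor count."""
    return 1.0 / ((1.0 - p) + p / n)

boot = 60.0  # one-minute boot time
for p in (1.0, 0.95, 0.90):  # assumed parallel fractions
    print(f"p={p:.2f}: {boot / amdahl_speedup(p, 4):.2f}s on 4 cores")
# p=1.00 gives exactly 15s; any serial remainder keeps it "just over fifteen seconds"

# Snyder's point for an O(N^3) algorithm: doubling capacity scales
# the feasible problem size only by the cube root of two
print(f"problem-size growth: {2 ** (1 / 3) - 1:.0%}")  # ~26%
```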
% said it was " likely ", 21 % said it was " about even ", 24 % said it was " unlikely " and 26 % said it was " quite unlikely ". speed improvements : both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. an analogy to moore's law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months ; or 9 external months, whereafter four months, two months, and so on towards a speed singularity. some upper limit on speed may eventually be reached. jeff hawkins has stated that a self - improving computer system would inevitably run into upper limits on computing power : " in the end there are limits to how big and fast computers can run. we would end up in the same place ; we'd just get there a bit faster. there would be no singularity. " it is difficult to directly compare silicon - based hardware with neurons. but it has been noted that computer speech recognition is approaching human capabilities, and that this capability seems to require 0. 01 % of the volume of the brain. this analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain. exponential growth : the exponential growth in computing technology suggested by moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of moore's law. computer scientist and futurist hans moravec proposed in a 1998 book that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit. ray kurzweil postulates a law of accelerating returns in which the speed of technological change ( and more generally, all evolutionary processes ) increases exponentially, generalizing moore's law in the same manner as moravec's proposal, and also including material technology ( especially as applied to nanotechnology ), medical technology and others. between 1986 and 2007, machines' application - specific capacity to compute information per capita roughly doubled every 14 months ; the per capita capacity of the world's general - purpose computers has doubled every 18 months ; the global telecommunication capacity per capita doubled every 34 months ; and the
constant. for m = n, after a sufficiently long time, with high probability the maximum load is similar to the finite version, both with random allocation and with partially random allocation. repeated balls - into - bins : in a repeated variant of the process, balls are initially distributed in bins in an arbitrary way and then, in every subsequent step of a discrete - time process, one ball is chosen from each non - empty bin and re - assigned to one of the bins uniformly at random. when the number of balls equals the number of bins, it has been shown that with high probability the process converges to a configuration with small maximum load. applications : online load balancing : consider a set of n identical computers. there are n users who need computing services. the users are not coordinated : each user comes on his own and selects which computer to use. each user would of course like to select the least loaded computer, but this requires checking the load on each computer, which might take a long time. another option is to select a computer at random ; this leads, with high probability, to a maximum load of about log n / log log n. a possible compromise is that the user will check only two computers, and use the less loaded of the two. this leads, with high probability, to a much smaller maximum load of about log log n / log 2. hashing : consider a hash table in which all keys mapped to the same location are stored in a linked list. the efficiency of accessing a key depends on the length of its list. if we use a single hash function which selects locations with uniform probability, with high probability the longest chain has o ( log n / log log n ) keys. a possible improvement is to use two hash functions, and put each new key in the shorter of the two lists. in this case, with high probability the longest chain has only o ( log log n ) elements. fair cake - cutting : consider the problem of creating a partially proportional division of a heterogeneous resource among n people, such that each person receives a part of the resource which that person values as at least 1 / ( a n ) of the total, where a is some sufficiently large constant. the edmonds–pruhs protocol is a randomized algorithm whose analysis makes use of balls - into - bins arguments. references probability problems
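The gap between one random choice and the "power of two choices" is easy to see empirically; a small simulation of our own, throwing n balls into n bins:

```python
import random

def max_load(n, choices, seed=0):
    """Throw n balls into n bins; each ball lands in the least-loaded of `choices` random bins."""
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(n):
        picks = [rng.randrange(n) for _ in range(choices)]
        bins[min(picks, key=bins.__getitem__)] += 1
    return max(bins)

n = 100_000
print(max_load(n, 1))  # grows like log n / log log n (roughly 8-10 at this n)
print(max_load(n, 2))  # grows like log log n (roughly 4 at this n)
```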
introduces several new classes of potential software bugs, of which race conditions are the most common. communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance. a theoretical upper bound on the speed - up of a single program as a result of parallelization is given by amdahl's law, which states that it is limited by the fraction of time for which the parallelization can be utilised. background : traditionally, computer software has been written for serial computation. to solve a problem, an algorithm is constructed and implemented as a serial stream of instructions. these instructions are executed on a central processing unit on one computer. only one instruction may execute at a time : after that instruction is finished, the next one is executed. parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem. this is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. the processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above. historically parallel computing was used for scientific computing and the simulation of scientific problems, particularly in the natural and engineering sciences, such as meteorology. this led to the design of parallel hardware and software, as well as high performance computing. frequency scaling was the dominant reason for improvements in computer performance from the mid - 1980s until 2004. the runtime of a program is equal to the number of instructions multiplied by the average time per instruction. maintaining everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction. an increase in frequency thus decreases runtime for all compute - bound programs. however, power consumption p by a chip is given by the equation p = c × v² × f, where c is the capacitance being switched per clock cycle ( proportional to the number of transistors whose inputs change ), v is voltage, and f is the processor frequency ( cycles per second ). increases in frequency increase the amount of power used in a processor. increasing processor power consumption led ultimately to intel's may 8, 2004 cancellation
</ctx>
The following are multiple choice questions (with answers) about STEM.
Six computers, each working at the same constant rate, together can process a certain amount of data in 12 days. How many additional computers, each working at the same constant rate, will be needed to process the same amount of data in 6 days?
A. A)3
B. B)5
C. C)6
D. D)9
Answer: | [
"A)3",
"B)5",
"C)6",
"D)9"
] | C | Explanation: If six computers require 12 days to process the data, that's
a total of 72 computer-days (the product of 6 and 12). If you change the number
of computers or the number of days, 72 will have to remain the product, whether
that means 72 days for one computer or one day with 72 computers.
In 6 days, the number of computers is:
6c = 72
c = 12
12 computers is 6 more than the 6 that it took to do the job in 12 days, so
the correct choice is (C). |
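The same computer-days bookkeeping as a one-line sanity check:

```python
computer_days = 6 * 12       # total work: 72 computer-days
needed = computer_days // 6  # computers required to finish in 6 days -> 12
print(needed - 6)            # additional computers: 6, i.e. choice (C)
```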
arc_easy_1669 | arc_easy | <ctx>
field of view, which is preferable for looking at some objects in the night sky. recent models of iphones have introduced a " night mode " option when taking pictures as well, which allows you to increase exposure, the length of time for which the picture is captured. this optimizes the capture of light in the frame, which is why it is used primarily at night. amateur astronomers also use star charts that, depending on experience and intentions, may range from simple planispheres through to detailed charts of very specific areas of the night sky. a range of astronomy software is available and used by amateur astronomers, including software that generates maps of the sky, software to assist with astrophotography, observation scheduling software, and software to perform various calculations pertaining to astronomical phenomena. amateur astronomers often like to keep records of their observations, which usually takes the form of an observing log. observing logs typically record details about which objects were observed and when, as well as describing the details that were seen. sketching is sometimes used within logs, and photographic records of observations have also been used in recent times. the information gathered is used to help studies and interactions between amateur astronomers in yearly gatherings. although not professionally vetted information, it is a way for the hobby lovers to share their new sightings and experiences. the popularity of imaging among amateurs has led to large numbers of web sites being written by individuals about their images and equipment. much of the social interaction of amateur astronomy occurs on mailing lists or discussion groups. discussion group servers host numerous astronomy lists. a great deal of the commerce of amateur astronomy, the buying and selling of equipment, occurs online. many amateurs use online tools to plan their nightly observing sessions, using tools such as the clear sky chart. common techniques : while a number of interesting celestial objects are readily identified by the naked eye, sometimes with the aid of a star chart, many others are so faint or inconspicuous that technical means are necessary to locate them. although many methods are used in amateur astronomy, most are variations of a few specific techniques. star hopping : star hopping is a method often used by amateur astronomers with low - tech equipment such as binoculars or a manually driven telescope. it involves the use of maps (
rohlfs, k., & wilson, t. l. ( 2004 ). tools of radio astronomy. astronomy and astrophysics library. berlin : springer. asimov, i. ( 1979 ). isaac asimov's book of facts ; sky watchers. new york : grosset & dunlap. pp. 390–399. american inventions astronomical imaging astronomical instruments
the public. notable examples include using the crowd to create speech and language databases, to conduct user studies, and to run behavioral science surveys and experiments. crowdsourcing systems provided researchers with the ability to gather large amounts of data, and helped researchers to collect data from populations and demographics they may not have access to locally. artists have also used crowdsourcing systems. in a project called the sheep market, aaron koblin used mechanical turk to collect 10, 000 drawings of sheep from contributors around the world. artist sam brown leveraged the crowd by asking visitors of his website explodingdog to send him sentences to use as inspirations for his paintings. art curator andrea grover argues that individuals tend to be more open in crowdsourced projects because they are not being physically judged or scrutinized. as with other types of uses, artists use crowdsourcing systems to generate and collect data. the crowd also can be used to provide inspiration and to collect financial support for an artist's work. in navigation systems, crowdsourcing from 100 million drivers was used by inrix to collect users' driving times to provide better gps routing and real - time traffic updates. in science : astronomy : crowdsourcing in astronomy was used in the early 19th century by astronomer denison olmsted. after being awakened in a late november night due to a meteor shower taking place, olmsted noticed a pattern in the shooting stars. olmsted wrote a brief report of this meteor shower in the local newspaper. " as the cause of 'falling stars' is not understood by meteorologists, it is desirable to collect all the facts attending this phenomenon, stated with as much precision as possible ", olmsted wrote to readers, in a report subsequently picked up and pooled to newspapers nationwide. responses came pouring in from many states, along with scientists' observations sent to the american journal of science and arts. these responses helped him to make a series of scientific breakthroughs including observing the fact that meteor showers are seen nationwide and fall from space under the influence of gravity. the responses also allowed him to approximate a velocity for the meteors. a more recent version of crowdsourcing in astronomy is nasa's photo organizing project, which asked internet users to browse
skymap online worldwide telescope xephem, for unix - like systems stellarmap. com : online map of the stars star walk and kepler explorer openlab : 2 celestial cartography apps for smartphones spaceengine free and printable from files the triatlas project toshimi taki star atlases deepsky hunter star atlas andrew johnson mag 7 see also star chart astrometry cosmography cheonsang yeolcha bunyajido history of cartography planetarium pp3 references external links star maps from ian ridpath's star tales website. the mag - 7 star atlas project historical celestial atlases on the web felice stoppa's atlas coelestis, an extensive collection of 51 star maps and other astronomy related books stored as a multitude of images. monthly star maps for every location on earth easy to use monthly star maps for northern and southern hemispheres. helpful target lists for naked eye, binocular, or telescope viewing. collection of rare star atlases, charts, and maps available in full digital facsimile at linda hall library. navigable online map of the stars, stellarmap. com. the digital collections of the linda hall library include : " astronomy : star atlases, charts, and maps ", a collection of more than 60 star atlas volumes. " astronomy : selected images ", a collection of high - resolution star map images. " history of cosmology : views of the stars ", high - resolution scans of prints relating to the study of the structure of the cosmos. works about astronomy
</ctx>
The following are multiple choice questions (with answers) about STEM.
For many years, people have been curious about the stars. In the past, people have shared what they knew about the stars through stories and drawings. Which pair of tools would be most helpful for people today to gather and share information about stars?
A. a notebook and a calculator
B. a telescope and a computer
C. a calculator and a microscope
D. a microscope and a telescope
Answer: | [
"a notebook and a calculator",
"a telescope and a computer",
"a calculator and a microscope",
"a microscope and a telescope"
] | B | |
medmcqa_6011 | medmcqa | <ctx>
events ( aes ) related to the treatment, and among them, 86 patients ( 51. 5 % ) encountered grade 3 or 4 treatment - related aes. the most prevalent nonhematologic grade 3 or 4 treatment - related aes included febrile neutropenia ( 9. 0 % ), elevated levels of alanine aminotransferase ( 7. 8 % ), and nausea ( 5. 4 % ). the authors have also claimed that in total, 21 patients ( 12. 6 % ) discontinued the use of jzp - 458 due to treatment - related aes. the reasons for discontinuation were pancreatitis ( 6. 0 % ), allergic reactions ( 5. 4 % ), including anaphylaxis ( 1. 8 % ), increased alanine aminotransferase ( 0. 6 % ), and hyperammonemia ( 0. 6 % ). it is important to note that the aes leading to patient deaths were sepsis ( cohort 1a, n = 1 ), aspiration pneumonia ( cohort 1b, n = 1 ), and multiorgan failure ( cohort 1b, n = 1 ). however, it was determined that none of these deaths were directly related to the administration of jzp - 458. the findings indicate that the administration of jzp - 458 exhibits effectiveness and a safety profile that aligns with other asparaginases. therefore, the fda considered the observed and simulated data as sufficient evidence to fulfill the required efficacy target, forming the basis for their decision. significantly, jzp - 458 offers a solution to one of the prominent challenges in patient care for individuals with acute lymphoblastic leukemia / lymphoblastic lymphoma, which is the scarcity of reliable drugs. its dependable manufacturing process, along with its proven efficacy and safety showcased in aall1931, makes jzp - 458 a promising candidate to address this critical issue. uses asparaginases can be used for different industrial and pharmaceutical purposes. medical e. coli strains are the main source of medical asparaginase. branded formulations ( with different
, india, the netherlands, germany, brazil, ivory coast, nigeria, indonesia, burkina faso, and the united states. in 2014, rapid growth of cashew cultivation in ivory coast made this country the top african exporter. fluctuations in world market prices, poor working conditions, and low pay for local harvesting have caused discontent in the cashew nut industry. toxicity some people are allergic to cashews, but they are a less frequent allergen than tree nuts or peanuts. for up to 6 % of children and 3 % of adults, consuming cashews may cause allergic reactions, ranging from mild discomfort to life - threatening anaphylaxis. these allergies are triggered by the proteins found in tree nuts, and cooking often does not remove or change these proteins. reactions to cashew and tree nuts can also occur as a consequence of hidden nut ingredients or traces of nuts that may inadvertently be introduced during food processing, handling, or manufacturing. the shell of the cashew nut contains oil compounds that can cause contact dermatitis similar to poison ivy, primarily resulting from the phenolic lipids, anacardic acid, and cardanol. due to the possible dermatitis, cashews are typically not sold in the shell to consumers. readily and inexpensively extracted from the waste shells, cardanol is under research for its potential applications in nanomaterials and biotechnology. uses nutrition raw cashews are 5 % water, 30 % carbohydrates, 44 % fat, and 18 % protein ( table ). in a 100 - gram reference amount, raw cashews provide 553 kilocalories, 67 % of the daily value ( dv ) in total fats, 36 % dv of protein, 13 % dv of dietary fiber and 11 % dv of carbohydrates. cashews are rich sources ( 20 % or more of the dv ) of dietary minerals, including particularly copper, manganese, phosphorus, and magnesium ( 79 β 110 % dv ), and of thiamin, vitamin b6 and vitamin k ( 32 β 37 % dv ). iron, potassium, zinc, and selenium are present in
the treatment - emergent aes reported, nausea was the most frequently observed in two or more healthy volunteers within each dosing cohort. a further study has examined the incidence and severity of aes in a cohort of 199 patients with acute lymphoblastic leukemia and treated with erwinase. the study found that the most common aes were allergic reactions, pancreatitis, hepatotoxicity, and coagulation disorders. however, the majority of these aes were manageable with appropriate monitoring and intervention strategies. furthermore, the incidence of treatment - emergent aes, particularly nausea, aligns with the expected side effect profile associated with asparaginase therapy. nausea has been reported as a common adverse event in previous studies investigating asparaginase - based treatments. on the other hand, a study claimed that in clinical trials, approximately 25 % of patients treated with asparaginase encountered hypersensitivity reactions, with 2 % experiencing severe reactions. the onset of the first hypersensitivity event was observed at a median time of 27 days after the initial administration of asparaginase ( erwinia chrysanthemi ) ( recombinant ) - rywn, ranging from 1 to 171 days. among the reported reactions, rash was the most frequently observed, occurring in 17 % of patients. notably, none of the patients experienced a severe rash. the median time from the first dose to the onset of rash was 33. 5 days, with a range of 1 to 127 days. these results provide valuable insights into the safety of jzp - 458 and support its potential as a well - tolerated treatment option for the targeted indication. further investigations, including larger - scale clinical trials, are warranted to confirm these findings and assess the overall efficacy and safety of jzp - 458 in a broader patient population. efficacy and safety in clinical trial phase ii and iii efficacy was evaluated in study jzp458 - 201 ( nct04145531 ), an open - label, multi - cohort, multicenter trial in 102 patients with acute lymphoblastic leukemia or lymphoblastic lymphoma with hypersensit
of drugs and enzyme inhibitors recommendations of the nomenclature committee of the international union of biochemistry ( nc - iub ) on enzyme inhibition terminology medicinal chemistry metabolism
</ctx>
The following are multiple choice questions (with answers) about STEM.
Drug of choice in anaphylactic shock is?
A. Subcutaneous Adrenalin
B. Intravenous Adrenaline
C. Steroids
D. Atropine
Answer: | [
"Subcutaneous Adrenalin",
"Intravenous Adrenaline",
"Steroids",
"Atropine"
] | B | Intravenous Adrenaline REF: Harrison's Internal Medicine 17th ed > Chapter 311. Allergies, Anaphylaxis, and Systemic Mastocytosis. Anaphylaxis: Treatment. Mild symptoms such as pruritus and urticaria can be controlled by administration of 0.3 to 0.5 mL of 1:1000 (1.0 mg/mL) epinephrine SC or IM, with repeated doses as required at 5- to 20-min intervals for a severe reaction. An IV infusion should be initiated to provide a route for administration of 2.5 mL epinephrine, diluted 1:10,000, at 5- to 10-min intervals, volume expanders such as normal saline, and vasopressor agents such as dopamine if intractable hypotension occurs. When epinephrine fails to control the anaphylactic reaction, hypoxia due to airway obstruction or related to a cardiac arrhythmia, or both, must be considered. Oxygen alone via a nasal catheter or with nebulized albuterol may be helpful, but either endotracheal intubation or a tracheostomy is mandatory for oxygen delivery if progressive hypoxia develops. Ancillary agents such as the antihistamine diphenhydramine, 50 to 100 mg IM or IV, and aminophylline, 0.25 to 0.5 g IV, are appropriate for urticaria-angioedema and bronchospasm, respectively. Intravenous glucocorticoids, 0.5-1.0 mg/kg of medrol, are not effective for the acute event but may alleviate later recurrence of bronchospasm, hypotension, or urticaria. |
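The 1:1000 versus 1:10,000 notation in the explanation is just dilution arithmetic (1 g of epinephrine per X mL); a tiny sketch of the conversion:

```python
def mg_per_ml(x):
    """A 1:X dilution is 1 g per X mL, i.e. 1000/X mg per mL."""
    return 1000.0 / x

print(mg_per_ml(1_000))   # 1.0 mg/mL, so 0.3-0.5 mL SC/IM delivers 0.3-0.5 mg
print(mg_per_ml(10_000))  # 0.1 mg/mL for the more dilute IV route
```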
mmlu_aux_8041 | mmlu_auxiliary | <ctx>
to attempt to unify the recently rediscovered laws of mendelian inheritance with darwin's theory of evolution. still, it would be almost thirty years until the theoretical framework for evolution of complex traits would be widely formalized. in an early summary of the theory of evolution of continuous variation, sewall wright, a graduate student who trained under castle, summarized contemporary thinking about the genetic basis of quantitative natural variation : " as genetic studies continued, ever smaller differences were found to mendelize, and any character, sufficiently investigated, turned out to be affected by many factors. " wright and others formalized population genetics theory that had been worked out over the preceding 30 years explaining how such traits can be inherited and create stably breeding populations with unique characteristics. quantitative trait genetics today leverages wright's observations about the statistical relationship between genotype and phenotype in families and populations to understand how certain genetic features can affect variation in natural and derived populations. quantitative traits polygenic inheritance refers to inheritance of a phenotypic characteristic ( trait ) that is attributable to two or more genes and can be measured quantitatively. multifactorial inheritance refers to polygenic inheritance that also includes interactions with the environment. unlike monogenic traits, polygenic traits do not follow patterns of mendelian inheritance ( discrete categories ). instead, their phenotypes typically vary along a continuous gradient depicted by a bell curve. an example of a polygenic trait is human skin color variation. several genes factor into determining a person's natural skin color, so modifying only one of those genes can change skin color slightly or in some cases, such as for slc24a5, moderately. many disorders with genetic components are polygenic, including autism, cancer, diabetes and numerous others. most phenotypic characteristics are the result of the interaction of multiple genes. examples of disease processes generally considered to be results of many contributing factors : congenital malformation cleft palate congenital dislocation of the hip congenital heart defects neural tube defects pyloric stenosis talipes adult onset diseases diabetes mellitus cancer glaucoma hypertension ischaemic heart disease bipolar disorder schizophrenia psoriasis thyroid diseases alzheimer's disease multifactorial
all gray horses must have at least one gray parent. if a gray horse is homozygous ( GG ), meaning that it has a gray allele from both parents, it will always produce gray offspring no matter the color genetics of the other parent. however, if a gray horse is heterozygous ( Gg ), meaning it inherits one copy of the recessive gene ( g ), that animal may produce offspring who are not gray, depending on the genetics of the other parent and mendelian inheritance principles. genetic testing is now possible to determine whether a horse is homozygous or heterozygous for gray ( " gray - horse coat color dna testing ", animal genetics, incorporated, http://www.animalgenetics.us/gray.htm, accessed august 29, 2008 ). the gray gene does not affect skin or eye color, so grays typically have dark skin and eyes, as opposed to the unpigmented pink skin of white horses. in 2008, researchers at uppsala university in sweden identified the genetic mutation that governs the graying process. the study revealed that all gray horses carry an identical mutation that can be traced back to a common ancestor that lived at least two thousand years ago. the discovery that gray can be linked to a single animal provides an example of how humans have " cherry - picked " attractive mutations in domestic animals. gray is controlled by a single dominant allele of a gene that regulates specific kinds of stem cells. homozygous grays turn white faster, are more likely to develop melanomas, and are less prone to develop the " fleabitten " speckling than heterozygous grays. researchers suggest the pigmented speckles of the " fleabitten " gray, as well as more intense reddish - brown colored areas called " blood " markings, may be caused by a loss or inactivation of the gray allele in some of the somatic cells, as that would explain why the speckles are more common on heterozygous grays than homozygotes. melanoma in gray horses : the identification of
genetics is the study of genes and tries to explain what they are and how they work. genes are how living organisms inherit features or traits from their ancestors ; for example, children usually look like their parents because they have inherited their parents'genes. genetics tries to identify which traits are inherited and to explain how these traits are passed from generation to generation. some traits are part of an organism's physical appearance, such as eye color, height or weight. other sorts of traits are not easily seen and include blood types or resistance to diseases. some traits are inherited through genes, which is the reason why tall and thin people tend to have tall and thin children. other traits come from interactions between genes and the environment, so a child who inherited the tendency of being tall will still be short if poorly nourished. the way our genes and environment interact to produce a trait can be complicated. for example, the chances of somebody dying of cancer or heart disease seems to depend on both their genes and their lifestyle. genes are made from a long molecule called dna, which is copied and inherited across generations. dna is made of simple units that line up in a particular order within it, carrying genetic information. the language used by dna is called genetic code, which lets organisms read the information in the genes. this information is the instructions for the construction and operation of a living organism. the information within a particular gene is not always exactly the same between one organism and another, so different copies of a gene do not always give exactly the same instructions. each unique form of a single gene is called an allele. as an example, one allele for the gene for hair color could instruct the body to produce much pigment, producing black hair, while a different allele of the same gene might give garbled instructions that fail to produce any pigment, giving white hair. mutations are random changes in genes and can create new alleles. mutations can also produce new traits, such as when mutations to an allele for black hair produce a new allele for white hair. this appearance of new traits is important in evolution. genes and inheritance genes are pieces of dna that contain information for the synthesis of ribonucleic acids ( rnas ) or
a fundamental part of dna that is aligned linearly on a eukaryotic chromosome. chemical information that is transported and encoded by each gene is referred to as a trait. many organisms possess two genes for each individual trait that is present within that particular individual. these paired genes that control the same trait are classified as alleles. in an individual, the allelic genes that are expressed can be either homozygous, meaning the same, or heterozygous, meaning different. many pairs of alleles have differing effects that are portrayed in an offspring's phenotype and genotype. the phenotype is a general term that defines an individual's visible, physical traits. the genotype of an offspring is known as its genetic makeup. the alleles of genes can either be dominant or recessive. a dominant allele needs only one copy to be expressed while a recessive allele needs two copies ( homozygous ) in a diploid organism to be expressed. dominant and recessive alleles help to determine the offspring's genotypes, and therefore phenotypes. history : classical genetics is often referred to as the oldest form of genetics, and began with gregor mendel's experiments that formulated and defined a fundamental biological concept known as mendelian inheritance. mendelian inheritance is the process in which genes and traits are passed from a set of parents to their offspring. these inherited traits are passed down mechanistically, with one gene from one parent and the second gene from the other parent in sexually reproducing organisms. this creates the pair of genes in diploid organisms. gregor mendel started his experimentation and study of inheritance with phenotypes of garden peas and continued the experiments with plants. he focused on the patterns of the traits that were being passed down from one generation to the next generation. this was assessed by test - crossing two peas of different colors and observing the resulting phenotypes. after determining how the traits were likely inherited, he began to expand the number of traits observed and tested and eventually expanded his experimentation by increasing the number of different organisms he tested. about 150 years ago, gregor mendel published his first experiments with the test crossing of
</ctx>
The following are multiple choice questions (with answers) about STEM.
The color of what is an inherited characteristic?
A. vision organ
B. wallpaper
C. blood
D. dirt
Answer: | [
"vision organ",
"wallpaper",
"blood",
"dirt"
] | A | null |
medmcqa_6441 | medmcqa | <ctx>
the retrograde menstruation theory to explain why cases of endometriosis show up in the brain and lungs. researchers are investigating the possibility that the immune system may not be able to cope with the cyclic onslaught of retrograde menstrual fluid. in this context there is interest in studying the relationship of endometriosis to autoimmune disease, allergic reactions, and the impact of toxic materials. it is still unclear what, if any, causal relationship exists between toxic materials or autoimmune disease and endometriosis. there are immune system changes in people with endometriosis, such as an increase of macrophage - derived secretion products, but it is unknown if these are contributing to the disorder or are reactions from it. endometriotic lesions differ in their biochemistry, hormonal response, immunology, inflammatory response when compared to endometrium. this is likely because the cells that give rise to endometriosis are a side population of cells. similarly, there are changes in, for example, the mesothelium of the peritoneum in people with endometriosis, such as loss of tight junctions, but it is unknown if these are causes or effects of the disorder. in rare cases where imperforate hymen does not resolve itself prior to the first menstrual cycle and goes undetected, blood and endometrium are trapped within the uterus until such time as the problem is resolved by surgical incision. many health care practitioners never encounter this defect, and due to the flu - like symptoms it is often misdiagnosed or overlooked until multiple menstrual cycles have passed. by the time a correct diagnosis has been made, endometrium and other fluids have filled the uterus and fallopian tubes with results similar to retrograde menstruation resulting in endometriosis. the initial stage of endometriosis may vary based on the time elapsed between onset and surgical procedure. the theory of retrograde menstruation as a cause of endometriosis was first proposed by john a. sampson. other theories stem cells : endometriosis may
overall. moreover, there is no one reason why some cp cases come from prenatal brain damage, and it is not known if those cases have a genetic basis. cerebellar hypoplasia is sometimes genetic and can cause ataxic cerebral palsy. early childhood after birth, other causes include toxins, severe jaundice, lead poisoning, physical brain injury, stroke, abusive head trauma, incidents involving hypoxia to the brain ( such as near drowning ), and encephalitis or meningitis. others infections in the mother, even those not easily detected, can triple the risk of the child developing cerebral palsy. infection of the fetal membranes known as chorioamnionitis increases the risk. intrauterine and neonatal insults ( many of which are infectious ) increase the risk. rh blood type incompatibility can cause the mother's immune system to attack the baby's red blood cells. it has been hypothesised that some cases of cerebral palsy are caused by the death in very early pregnancy of an identical twin. diagnosis the diagnosis of cerebral palsy has historically rested on the person's history and physical examination and is generally assessed at a young age. a general movements assessment, which involves measuring movements that occur spontaneously among those less than four months of age, appears most accurate. children who are more severely affected are more likely to be noticed and diagnosed earlier. abnormal muscle tone, delayed motor development and persistence of primitive reflexes are the main early symptoms of cp. symptoms and diagnosis typically occur by the age of two, although depending on factors like malformations and congenital issues, persons with milder forms of cerebral palsy may be over the age of five, if not in adulthood, when finally diagnosed. cognitive assessments and medical observations are also useful to help confirm a diagnosis. additionally, evaluations of the child's mobility, speech and language, hearing, vision, gait, feeding and digestion are also useful to determine the extent of the disorder. early diagnosis and intervention are seen as being a key part of managing cerebral palsy. machine learning algorithms facilitate automatic early diagnosis, with methods such as deep neural network and geometric feature fusion producing
. the us national library of medicine publishes " lactmed, " an up - to - date online database of information on drugs and lactation. geared to both healthcare practitioners and nursing mothers, lactmed contains over 450 drug records with information such as potential drug effects and alternative drugs to consider. some substances in the mother's food and drink are passed to the baby through breast milk, including mercury ( found in some carnivorous fish ), caffeine, and bisphenol a. medical conditions undiagnosed maternal celiac disease may cause a short duration of the breastfeeding period. treatment with the gluten - free diet can increase its duration and restore it to the average value seen in healthy women. mothers with all types of diabetes mellitus normally use insulin to control their blood sugar, as the safety of other antidiabetic drugs while breastfeeding is unknown. women with polycystic ovary syndrome, which is associated with some hormonal differences and obesity, may have greater difficulty with producing a sufficient supply to support exclusive breastfeeding, especially during the first weeks. ethnicity and socioeconomic status the rates of breastfeeding in the african - american community remain much lower than those of any other race, for a variety of proposed reasons. these include the legacy of wet nursing during slavery, higher rates of poor perinatal health, higher stress levels, less access to support, and less flexibility in the workplace. while for other races rates of breastfeeding go up as socio - economic class rises, for the african - american community breastfeeding rates remain consistently low regardless of socio - economic class. there are also racial disparities in access to maternity care practices that support breastfeeding. in the us, primarily african - american neighborhoods are more likely to have facilities ( such as hospitals and female healthcare clinics ) that do not support breastfeeding, contributing to the low rate of breastfeeding in the african - american community. comparing facilities in primarily african american neighborhoods to ones in primarily white neighborhoods, the rates of practices that support or discourage breastfeeding were : limited use of supplements ( 13. 1 % compared with 25. 8 % ) and rooming - in ( 27.
and the most common symptom reported by women of all races is shortness of breath. other symptoms more commonly reported by women than men are extreme fatigue, sleep disturbances, indigestion, and anxiety. however, some women do experience irregular heartbeat, dizziness, sweating, and nausea. burning, pain, or pressure in the chest or upper abdomen that can travel to the arm or jaw can also be experienced in women, but it is less commonly reported by women than men. on average, women experience symptoms 10 years later than men. women are less likely to recognize symptoms and seek treatment. risk factors coronary artery disease is characterized by heart problems that result from atherosclerosis. atherosclerosis is a type of arteriosclerosis which is the " chronic inflammation of the arteries which causes them to harden and accumulate cholesterol plaques ( atheromatous plaques ) on the artery walls ". cad has a number of well determined risk factors that contribute to atherosclerosis. these risk factors for cad include " smoking, diabetes, high blood pressure ( hypertension ), abnormal ( high ) amounts of cholesterol and other fat in the blood ( dyslipidemia ), type 2 diabetes and being overweight or obese ( having excess body fat ) " due to lack of exercise and a poor diet. some other risk factors include high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet, depression, family history, psychological stress and excessive alcohol. about half of cases are linked to genetics. smoking and obesity are associated with about 36 % and 20 % of cases, respectively. smoking just one cigarette per day about doubles the risk of cad. lack of exercise has been linked to 7 β 12 % of cases. exposure to the herbicide agent orange may increase risk. rheumatologic diseases such as rheumatoid arthritis, systemic lupus erythematosus, psoriasis, and psoriatic arthritis are independent risk factors as well. job stress appears to play a minor role accounting for about 3 % of cases. in one study, women who were free of stress from work life saw an
</ctx>
The following are multiple choice questions (with answers) about STEM.
A 4-year-old boy is admitted to the hospital with pneumonia and respiratory distress. The nurses report that the child's bowel movements are greasy and have a pungent odor. A sweat-chloride test is positive. Which of the following mechanisms of disease is the most likely cause of steatorrhea in this child?
A. Abnormal dietary intake
B. Bacterial overgrowth
C. Hyperbilirubinemia with kernicterus
D. Lack of pancreatic enzyme secretion
Answer: | [
"Abnormal dietary intake",
"Bacterial overgrowth",
"Hyperbilirubinemia with kernicterus",
"Lack of pancreatic enzyme secretion"
] | D | Cystic fibrosis (CF) is an autosomal recessive disorder affecting children, which is characterized by (1) chronic pulmonary disease, (2) deficient exocrine pancreatic function, and (3) other complications of inspissated mucus in a number of organs, including the small intestine, the liver, and the reproductive tract. The diagnosis of CF is most reliably made by the demonstration of increased concentrations of electrolytes in the sweat. The decreased chloride conductance characteristic of CF results in a failure of chloride reabsorption by the cells of the sweat gland ducts and, hence, in the accumulation of sodium chloride in the sweat. All of the pathologic consequences of CF can be attributed to the presence of abnormally thick mucus. Lack of pancreatic enzyme secretion in patients with CF causes malabsorption and foul-smelling fatty stools (steatorrhea). The other choices do not address the underlying cause of malabsorption in patients with CF. Diagnosis: Cystic fibrosis |
aquarat_38602 | aquarat | <ctx>
of these interest and principal payments must equal the cumulative fixed payments at time t i. e. mat. evaluating the first integral on the right we obtain an expression for i ( t ), the interest paid : unsurprisingly the second integral evaluates to p0 − p ( t ) and therefore : the reader may easily verify that this expression is algebraically identical to the one above. loan cost factor the cost of a loan is simply the annual rate multiplied by loan period : let s = rt. then we may define loan cost factor c ( s ) such that c = p0c ( s ), i. e. c ( s ) = s / ( 1 − exp ( −s ) ) ; c ( s ) is the cost per unit of currency loaned. the function c ( s ) is characterised by having a limiting value of 1 when s is close to zero since for small values of s, exp ( −s ) ≈ 1 − s and the denominator simplifies to s. also when s is very large, exp ( −s ) is small so c ( s ) ≈ s and thus loan cost c ≈ p0rt ( rt ≫ 0 ). by way of example, consider a loan of 1000000 at 10 % repaid over 20 years. then s = 0.1 × 20 = 2. the product rt is an easily obtained but important parameter in determining loan cost according to the equation c = p0 × c ( s ). this is best illustrated by plotting the cost factor function for s values in domain [ 0 ; 5 ]. the linear behaviour of the function for higher values of s is clear. equivalent simple interest cost factor for a fixed term loan of t years, we may compare the above loan cost factor against an equivalent simple interest cost factor 1 + se where se = ret and re is the equivalent simple interest rate : it is straightforward to determine se in terms of s. dividing by loan time period t will then give the equivalent simple interest rate. more challenging is the reverse determination of s given se. in his book problem solving with true basic, dr b. d. hahn has a short section on certain 'hire purchase' schemes in which interest is calculated in advance in one lump sum, which is added to the
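a minimal numeric sketch of the cost factor just described, assuming c(s) = s/(1 − exp(−s)) (the form consistent with the two stated limits); the helper name and printed values are illustrative only:

import math

def loan_cost_factor(s: float) -> float:
    # c(s) = s / (1 - exp(-s)): tends to 1 as s -> 0 and to s as s -> infinity
    return s / (1.0 - math.exp(-s))

# the worked example above: 1000000 at 10% over 20 years, so s = 0.1 * 20 = 2
print(loan_cost_factor(2.0))    # ~2.313, i.e. roughly 2313000 repaid in total
print(loan_cost_factor(1e-9))   # ~1.0, the small-s limit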
a mortgage ( 1 million for 20 years @ r = 10 % ) calculated firstly according to the above time continuous model and secondly using the excel pv function. as may be seen the curves are virtually indistinguishable – calculations effected using the model differ from those effected using the excel pv function by a mere 0.3 % ( max ). the data from which the graph ( s ) were derived can be viewed. comparison with similar physical systems define the " reverse time " variable z = T − t. ( at t = 0, z = T and at t = T, z = 0 ). then : this may be recognized as a solution to the " reverse time " differential equation : electrical / electronic engineers and physicists will be familiar with an equation of this nature : it is an exact analogue of the type of differential equation which governs ( for example ) the charging of a capacitor in an rc circuit. the key characteristics of such equations are explained in detail at rc circuits. for home owners with mortgages the important parameter to keep in mind is the time constant of the equation which is simply the reciprocal of the annual interest rate r. so ( for example ) the time constant when the interest rate is 10 % is 10 years and the period of a home loan should be determined – within the bounds of affordability – as a minimum multiple of this if the objective is to minimise interest paid on the loan. mortgage difference and differential equation the conventional difference equation for a mortgage loan is relatively straightforward to derive - balance due in each successive period is the previous balance plus per period interest less the per period fixed payment. given an annual interest rate r and a borrower with an annual payment capability mn ( divided into n equal payments made at time intervals δt where δt = 1 / n years ), we may write : if n is increased indefinitely so that δt → 0, we obtain the continuous time differential equation : note that for there to be a continually diminishing mortgage balance, the following inequality must hold : p0 is the same as p ( 0 ) – the original loan amount or loan balance at time t = 0. solving the difference equation we begin by re - writing the difference equation in rec
of r that makes this equation true is 0.2, or 20 %. this means that the total return over the 2 - year period is the same as if there had been 20 % growth each year. the order of the years makes no difference – the average percentage returns of + 60 % and −10 % is the same result as that for −10 % and + 60 %. this method can be generalized to examples in which the periods are not equal. for example, consider a period of half a year for which the return is −23 % and a period of two and a half years for which the return is + 13 %. the average percentage return for the combined period is the single year return, r, that is the solution of the following equation : ( 1 + r )^0.5 × ( 1 + r )^2.5 = ( 1 − 0.23 )^0.5 × ( 1 + 0.13 )^2.5, giving an average return r of 0.0600 or 6.00 %. moving average given a time series, such as daily stock market prices or yearly temperatures, people often want to create a smoother series. this helps to show underlying trends or perhaps periodic behavior. an easy way to do this is the moving average : one chooses a number n and creates a new series by taking the arithmetic mean of the first n values, then moving forward one place by dropping the oldest value and introducing a new value at the other end of the list, and so on. this is the simplest form of moving average. more complicated forms involve using a weighted average. the weighting can be used to enhance or suppress various periodic behavior and there is very extensive analysis of what weightings to use in the literature on filtering. in digital signal processing the term " moving average " is used even when the sum of the weights is not 1.0 ( so the output series is a scaled version of the averages ). the reason for this is that the analyst is usually interested only in the trend or the periodic behavior. history origin the first recorded time that the arithmetic mean was extended from 2 to n cases for the use of estimation was in the sixteenth century. from the late sixteenth century onwards, it gradually became a common method to use for reducing errors of measurement in various areas. at the time, astronomers wanted to know a real value from noisy measurement, such as the position of a
functions for simple and compound interest are a ( t ) = 1 + rt and a ( t ) = ( 1 + r / n )^( nt ). if nt = 1, then these two functions are the same. continuous compounding as n, the number of compounding periods per year, increases without limit, the case is known as continuous compounding, in which case the effective annual rate approaches an upper limit of e^r − 1, where e is a mathematical constant that is the base of the natural logarithm. continuous compounding can be thought of as making the compounding period infinitesimally small, achieved by taking the limit as n goes to infinity. see definitions of the exponential function for the mathematical proof of this limit. the amount after t periods of continuous compounding can be expressed in terms of the initial amount p0 as p ( t ) = p0 e^( rt ). force of interest as the number of compounding periods tends to infinity in continuous compounding, the continuous compound interest rate is referred to as the force of interest. in mathematics, the accumulation functions are often expressed in terms of e, the base of the natural logarithm. this facilitates the use of calculus to manipulate interest formulae. for any continuously differentiable accumulation function a ( t ), the force of interest, or more generally the logarithmic or continuously compounded return, is a function of time defined as follows : δ ( t ) = a ′ ( t ) / a ( t ). this is the logarithmic derivative of the accumulation function. conversely : a ( t ) = e^( ∫ δ ( s ) ds from 0 to t ) ( since a ( 0 ) = 1 ; this can be viewed as a particular case of a product integral. ) when the above formula is written in differential equation format, then the force of interest is simply the coefficient of amount of change : da ( t ) = δ ( t ) a ( t ) dt. for compound interest with a constant annual interest rate r, the force of interest is a constant δ = ln ( 1 + r ), and the accumulation function of compounding interest in terms of force of interest is a simple power of e : a ( t ) = e^( δt ), or p ( t ) = p0 e^( δt ). the force of interest is less than the annual effective interest rate, but more than the annual effective discount rate. it is the reciprocal of the e - folding time. see also notation of interest rates. a way of modeling the force of inflation is with stoodley's formula : δ ( t ) = p + s / ( 1 + r e^( st ) ), where p, r and s are estimated. compounding basis to convert an interest rate from one compounding basis to another compounding basis, so that ( 1 + r1 / n1 )^( n1 ) = ( 1 + r2 / n2 )^( n2 ), use r2 = n2 [ ( 1 + r1 / n1 )^( n1 / n2 ) − 1 ], where r1 is the interest rate with compounding frequency n1, and r2 is
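a small check, with illustrative numbers, that the effective annual rate (1 + r/n)^n − 1 climbs toward the continuous-compounding ceiling e^r − 1 as the number of periods n grows:

import math

r = 0.05  # nominal annual rate, chosen only for illustration
for n in (1, 4, 12, 365, 100000):
    print(n, (1 + r / n) ** n - 1)   # effective annual rate at n periods/year
print("limit:", math.exp(r) - 1)     # continuous compounding: e^r - 1 ~ 0.05127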
</ctx>
The following are multiple choice questions (with answers) about STEM.
At a certain rate of simple interest per annum, the amounts due in 3 & 1/2 yrs and 5 & 1/2 yrs are respectively Rs 4560 & Rs 4840. Find the rate of simple interest.
A. A)1.43
B. B)4.43
C. C)2.43
D. D)3.43
Answer: | [
"A)1.43",
"B)4.43",
"C)2.43",
"D)3.43"
] | D | let the amount be p and the rate of interest r
now 4560-p=(p*r*7/2)/100 .....(1)
4840-p=(p*r*11/2)/100 .....(2)
subtracting (1) from (2) gives 280=(p*r*2)/100, so p*r=14000; substituting back, p=4560-490=4070 and r=14000/4070 ≈ 3.43
so the ans is 3.43
ANSWER:D |
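a quick numeric check of the solution above (variable names are mine):

p_times_r = (4840 - 4560) * 100 / 2   # 14000, from subtracting (1) from (2)
p = 4560 - p_times_r * (7 / 2) / 100  # 4560 - 490 = 4070
r = p_times_r / p                     # 3.4398..., i.e. option D's 3.43
print(p, r)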
mmlu_aux_7933 | mmlu_auxiliary | <ctx>
following a major disturbance. following a large disturbance in the synchronous alternator the machine power ( load ) angle changes due to sudden acceleration of the rotor shaft. the objective of the transient stability study is to ascertain whether the load angle returns to a steady value following the clearance of the disturbance. the ability of a power system to maintain stability under continuous small disturbances is investigated under the name of dynamic stability ( also known as small - signal stability ). these small disturbances occur due to random fluctuations in loads and generation levels. in an interconnected power system, these random variations can lead to catastrophic failure as this may force the rotor angle to increase steadily. steady state determination is an important topic, because many design specifications of electronic systems are given in terms of the steady - state characteristics. periodic steady - state solution is also a prerequisite for small signal dynamic modeling. steady - state analysis is therefore an indispensable component of the design process. in some cases, it is useful to consider constant envelope vibration – vibration that never settles down to motionlessness, but continues to move at constant amplitude – a kind of steady - state condition. chemical engineering in chemistry, thermodynamics, and other chemical engineering, a steady state is a situation in which all state variables are constant in spite of ongoing processes that strive to change them. for an entire system to be at steady state, i. e. for all state variables of a system to be constant, there must be a flow through the system ( compare mass balance ). one of the simplest examples of such a system is the case of a bathtub with the tap open but without the bottom plug : after a certain time the water flows in and out at the same rate, so the water level ( the state variable being volume ) stabilizes and the system is at steady state. of course the volume stabilizing inside the tub depends on the size of the tub, the diameter of the exit hole and the flowrate of water in. since the tub can overflow, eventually a steady state can be reached where the water flowing in equals the overflow plus the water out through the drain. a steady state flow process requires conditions at all points in an apparatus remain constant as
over time. for example, a rock may wear away or have pieces broken off it. the object will be regarded as the same object after the addition or removal of material, if the system may be more simply described with the continued existence of the object, than in any other way. the addition or removal of material may discontinuously change the boundary of the object. the continuation of the object's identity is then based on the description of the system by continued identity being simpler than without continued identity. for example, a particular car might have all its wheels changed, and still be regarded as the same car. the identity of an object may not split. if an object is broken into two pieces at most one of the pieces has the same identity. an object's identity may also be destroyed if the simplest description of the system at a point in time changes from identifying the object to not identifying it. also an object's identity is created at the first point in time that the simplest model of the system consistent with perception identifies it. an object may be composed of components. a component is an object completely within the boundary of a containing object. a living thing may be an object, and is distinguished from non - living things by the designation of the latter as inanimate objects. inanimate objects generally lack the capacity or desire to undertake actions, although humans in some cultures may tend to attribute such characteristics to non - living things. classical mechanics in classical mechanics a physical body is collection of matter having properties including mass, velocity, momentum and energy. the matter exists in a volume of three - dimensional space. this space is its extension. interactions between objects are partly described by orientation and external shape. in continuum mechanics an object may be described as a collection of sub objects, down to an infinitesimal division, which interact with each other by forces that may be described internally by pressure and mechanical stress. quantum mechanics in quantum mechanics an object is a particle or collection of particles. until measured, a particle does not have a physical position. a particle is defined by a probability distribution of finding the particle at a particular position. there is a limit to the accuracy with which the position and velocity may be measured. a particle or collection
global values, valid for the system as a whole. when boundaries impose on the system different local conditions ( e. g. temperature differences ), there are intensive variables representing the average value and others representing gradients or higher moments. the latter are the thermodynamic forces driving fluxes of extensive properties through the system. it may be shown that the legendre transformation changes the maximum condition of the entropy ( valid at equilibrium ) into a minimum condition of the extended massieu function for stationary states, no matter whether at equilibrium or not. stationary states, fluctuations, and stability in thermodynamics one is often interested in a stationary state of a process, allowing that the stationary state includes the occurrence of unpredictable and experimentally unreproducible fluctuations in the state of the system. the fluctuations are due to the system's internal sub - processes and to exchange of matter or energy with the system's surroundings that create the constraints that define the process. if the stationary state of the process is stable, then the unreproducible fluctuations involve local transient decreases of entropy. the reproducible response of the system is then to increase the entropy back to its maximum by irreversible processes : the fluctuation cannot be reproduced with a significant level of probability. fluctuations about stable stationary states are extremely small except near critical points ( kondepudi and prigogine 1998, page 323 ). the stable stationary state has a local maximum of entropy and is locally the most reproducible state of the system. there are theorems about the irreversible dissipation of fluctuations. here 'local' means local with respect to the abstract space of thermodynamic coordinates of state of the system. if the stationary state is unstable, then any fluctuation will almost surely trigger the virtually explosive departure of the system from the unstable stationary state. this can be accompanied by increased export of entropy. local thermodynamic equilibrium the scope of present - day non - equilibrium thermodynamics does not cover all physical processes. a condition for the validity of many studies in non - equilibrium thermodynamics of matter is that they deal with what is known as
loses information about which state preceded it. thus, a complex system exhibits alternating information gain and loss as it evolves in time. the alternation or fluctuation of information is equivalent to remembering and forgetting – temporary information storage or memory – an essential feature of non - trivial computation. the gain or loss of information associated with transitions between states can be related to state information. the net information gain of a transition from state to state is the information gained when leaving state less the information lost when entering state : here is the forward conditional probability that if the present state is then the next state is and is the reverse conditional probability that if the present state is then the previous state was. the conditional probabilities are related to the transition probability, the probability that a transition from state to state occurs, by : eliminating the conditional probabilities : therefore the net information gained by the system as a result of the transition depends only on the increase in state information from the initial to the final state. it can be shown that this is true even for multiple consecutive transitions. is reminiscent of the relation between force and potential energy. is like potential and is like force in. external information " pushes " a system " uphill " to a state of higher information potential to accomplish memory storage, much like pushing a mass uphill to a state of higher gravitational potential stores energy. the amount of energy storage depends only on the final height, not the path up the hill. likewise, the amount of information storage does not depend on the transition path between two states in state space. once a system reaches a rare state with high information potential, it may " fall " to a more common state, losing the previously stored information. it may be useful to compute the standard deviation of about its mean ( which is zero ), namely the fluctuation of net information gain, but takes into account multi - transition memory loops in state space and therefore should be a better indicator of the computational power of a system. moreover, is easier to calculate because there can be many more transitions than states. chaos and order a dynamic system that is sensitive to external information ( unstable ) exhibits chaotic behavior whereas one that is insensitive to external information ( stable ) exhibits orderly behavior. a complex
</ctx>
The following are multiple choice questions (with answers) about STEM.
As a matter's state changes, what will remain constant?
A. acceleration resistance
B. appearance
C. temperature
D. mass amount
Answer: | [
"acceleration resistance",
"appearance",
"temperature",
"mass amount"
] | D | null |
aquarat_3037 | aquarat | <ctx>
as values. for example, the quadratic form x² + y² + z² + w² is universal, because every positive integer can be written as a sum of 4 squares, by lagrange's four - square theorem. by the 15 theorem, to verify this, it is sufficient to check that every positive integer up to 15 is a sum of 4 squares. ( this does not give an alternative proof of lagrange's theorem, because lagrange's theorem is used in the proof of the 15 theorem. ) on the other hand, there is a positive definite quadratic form with integral matrix that takes as values all positive integers other than 15. the 290 theorem says a positive definite integral quadratic form is universal if it takes the numbers from 1 to 290 as values. a more precise version states that, if an integer valued integral quadratic form represents all the numbers 1, 2, 3, 5, 6, 7, 10, 13, 14, 15, 17, 19, 21, 22, 23, 26, 29, 30, 31, 34, 35, 37, 42, 58, 93, 110, 145, 203, 290, then it represents all positive integers, and for each of these 29 numbers, there is such a quadratic form representing all other 28 positive integers with the exception of this one number. bhargava has found analogous criteria for a quadratic form with integral matrix to represent all primes ( the set { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 67, 73 } ) and for such a quadratic form to represent all positive odd integers ( the set { 1, 3, 5, 7, 11, 15, 33 } ). expository accounts of these results have been written by hahn and moon ( who provides proofs ). references additive number theory theorems in number theory quadratic forms
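a brute-force illustration of the criterion: by the 15 theorem it suffices to check that the sum-of-four-squares form represents 1 through 15 (the helper below is mine, not from the source):

from itertools import product

def four_square_form(n: int) -> bool:
    # does x^2 + y^2 + z^2 + w^2 represent n?
    bound = int(n ** 0.5) + 1
    return any(a*a + b*b + c*c + d*d == n
               for a, b, c, d in product(range(bound), repeat=4))

print(all(four_square_form(n) for n in range(1, 16)))  # True, so the form is universal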
48 not just 25 because the square of the next prime is 49, and below n = 25 just 2 and 3 are sufficient. should the square root of n be an integer, then it is a factor and n is a perfect square. an example of the trial division algorithm, using successive integers as trial factors, is as follows ( in python ) :

def trial_division(n: int) -> list[int]:
    """return a list of the prime factors for a natural number."""
    a = []              # prepare an empty list.
    f = 2               # the first possible factor.
    while n > 1:        # while n still has remaining factors...
        if n % f == 0:  # the remainder of n divided by f might be zero.
            a.append(f) # if so, it divides n. add f to the list.
            n //= f     # divide that factor out of n.
        else:           # but if f is not a factor of n,
            f += 1      # add one to f and try again.
    return a            # prime factors may be repeated: 12 factors to 2, 2, 3.

or 2x more efficient :

def trial_division(n: int) -> list[int]:
    a = []
    while n % 2 == 0:
        a.append(2)
        n //= 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            a.append(f)
            n //= f
        else:
            f += 2
    if n != 1:
        a.append(n)  # only odd number is possible
    return a

these versions of trial division are guaranteed to find a factor of n if there is one, since they check all possible factors of n, and if n is a prime number, this means trial factors all the way up to n. thus, if the algorithm finds one factor only, n, it is proof that n is a prime. if more than one factor is found, then n is a composite integer. a more computationally advantageous way of saying this is, if any prime whose square does not exceed n divides it without a remainder, then n
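a few sample calls (outputs shown as comments) to make the behaviour of either version concrete:

print(trial_division(12))   # [2, 2, 3] -- repeated prime factors
print(trial_division(45))   # [3, 3, 5]
print(trial_division(97))   # [97] -- a single factor: 97 is prime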
// all x's odd y's
if n mod 60 ∈ {1, 13, 17, 29, 37, 41, 49, 53}:
    is_prime(n) ← ¬is_prime(n)   // toggle state

// algorithm step 3.2: for n ≤ limit, n ← 3x² + y² where x ∈ {1, 3, ...} and y ∈ {2, 4, ...}
// only odd x's and even y's
if n mod 60 ∈ {7, 19, 31, 43}:
    is_prime(n) ← ¬is_prime(n)   // toggle state

// algorithm step 3.3: for n ≤ limit, n ← 3x² − y² where x ∈ {2, 3, ...} and y ∈ {x−1, x−3, ..., 1}
// all even/odd and odd/even combos
if n mod 60 ∈ {11, 23, 47, 59}:
    is_prime(n) ← ¬is_prime(n)   // toggle state

// eliminate composites by sieving, only for those occurrences on the wheel:
for n² ≤ limit, n ← 60 × w + x where w ∈ {0, 1, ...}, x ∈ s, n ≥ 7:
    if is_prime(n):
        // n is prime, omit multiples of its square; this is sufficient
        // because square-free composites can't get on this list
        for c ≤ limit, c ← n² × (60 × w + x) where w ∈ {0, 1, ...}, x ∈ s:
            is_prime(c) ← false

// one sweep to produce a sequential list of primes up to limit:
output 2, 3, 5
for 7 ≤ n ≤ limit, n ← 60 × w + x where w ∈ {0, 1, ...}, x ∈ s:
    if is_prime(n):
        output n

this pseudocode is written for clarity ; although some redundant computation
subset of the equations as a matrix yields : a solution to the equation is given by the left null space, simply thus the product of all 3 equations yields a square ( mod n ). and so the algorithm found testing the result yields gcd ( 3070860 - 22678, 15347 ) = 103, a nontrivial factor of 15347, the other being 149. this demonstration should also serve to show that the quadratic sieve is only appropriate when n is large. for a number as small as 15347, this algorithm is overkill. trial division or pollard rho could have found a factor with much less computation. multiple polynomials in practice, many different polynomials are used for y, since only one polynomial will not typically provide enough ( x, y ) pairs that are smooth over the factor base. the polynomials used must have a special form, since they need to be squares modulo n. the polynomials must all have a similar form to the original y ( x ) = x² − n : assuming is a multiple of a, so that the polynomial y ( x ) can be written as. if a is a square, then only the factor has to be considered. this approach ( called mpqs, multiple polynomial quadratic sieve ) is ideally suited for parallelization, since each processor involved in the factorization can be given n, the factor base and a collection of polynomials, and it will have no need to communicate with the central processor until it is finished with its polynomials. large primes one large prime if, after dividing by all the factors less than a, the remaining part of the number ( the cofactor ) is less than a², then this cofactor must be prime. in effect, it can be added to the factor base, by sorting the list of relations into order by cofactor. if y ( a ) = 7 * 11 * 23 * 137 and y ( b ) = 3 * 5 * 7 * 137, then y ( a ) y ( b ) = 3 * 5 * 11 * 23 * 7² * 137². this works by reducing the threshold of entries in the sieving array above which a full factorization is performed. more large prime
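the final gcd step of the worked example can be verified directly (the numbers are those quoted above):

from math import gcd

factor = gcd(3070860 - 22678, 15347)
print(factor, 15347 // factor)   # 103 149, the two nontrivial factors of 15347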
</ctx>
The following are multiple choice questions (with answers) about STEM.
If x, y, and z are all factors of 45, which of the following must be true?
A. A)The product xyz > 45
B. B)Either x = y, y = z, or x = z
C. C)x - y - z = even
D. D)The product xyz is odd
Answer: | [
"A)The product xyz > 45",
"B)Either x = y, y = z, or x = z",
"C)x β y β z = even",
"D)The product xyz is odd"
] | D | All the factors of 45 are odd numbers.
The product of odd numbers is odd.
The answer is D. |
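the explanation can be checked by enumeration (a throwaway snippet, not part of the source solution):

factors = [d for d in range(1, 46) if 45 % d == 0]
print(factors)                           # [1, 3, 5, 9, 15, 45]
print(all(d % 2 == 1 for d in factors))  # True: every factor is odd, so any product
                                         # x*y*z of factors is odd -- option D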
arc_easy_801 | arc_easy | <ctx>
##dulator in an fm radio receiver, the error feedback voltage serves as the demodulated output signal. if there is a frequency divider between the generated waveform and the phase comparator, the device acts as a frequency multiplier. in organisms, feedback enables various measures ( e. g. body temperature, or blood sugar level ) to be maintained within a desired range by homeostatic processes. detailed implementations error - controlled regulation one use of feedback is to make a system ( say t ) self - regulating to minimize the effect of a disturbance ( say d ). using a negative feedback loop, a measurement of some variable ( for example, a process variable, say e ) is subtracted from a required value ( the'set point') to estimate an operational error in system status, which is then used by a regulator ( say r ) to reduce the gap between the measurement and the required value. the regulator modifies the input to the system t according to its interpretation of the error in the status of the system. this error may be introduced by a variety of possible disturbances or'upsets ', some slow and some rapid. the regulation in such systems can range from a simple'on - off'control to a more complex processing of the error signal. in this framework, the physical form of a signal may undergo multiple transformations. for example, a change in weather may cause a disturbance to the heat input to a house ( as an example of the system t ) that is monitored by a thermometer as a change in temperature ( as an example of an'essential variable'e ). this quantity, then, is converted by the thermostat ( a'comparator') into an electrical error in status compared to the'set point's, and subsequently used by the regulator ( containing a'controller'that commands gas control valves and an ignitor ) ultimately to change the heat provided by a furnace ( an'effector') to counter the initial weather - related disturbance in heat input to the house. error controlled regulation is typically carried out using a proportional - integral - derivative controller ( pid controller ). the regulator signal is derived from a weighted sum of the error signal
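a minimal sketch of the proportional-integral-derivative regulator the passage above describes, assuming a simple discrete time step; the class, gains and numbers are illustrative, not from the source:

class PID:
    # u = kp*e + ki*sum(e*dt) + kd*de/dt: a weighted sum of the error signal
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, set_point: float, measurement: float, dt: float) -> float:
        error = set_point - measurement        # operational error vs. the 'set point'
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. a thermostat-like loop: the regulator output would drive the furnace ('effector')
pid = PID(kp=2.0, ki=0.5, kd=0.1)
print(pid.update(set_point=21.0, measurement=18.5, dt=1.0))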
, etc. consequently, it is possible to develop processes in order to install and foster changes in organizations. decision - making system ( ds ) the decision - making system describes where, how, to whom, on which level, at which spot and by means of which tools decisions are made. it furthermore describes which mechanisms, processes, rules etc. guide the decision processes. information system ( is ) the information system describes who receives or does not receive which information when, from whom and by which means. it also analyzes why this is so. reward and control system ( rcs ) amplifying and reduction systems of material and immaterial as well as formal and informal character are described through the reward and control system of an organization. as such it analyzes mechanisms and procedures which observe and guide human and technical behavior, results and processes. this includes amongst others remuneration structures and the unwritten rules of a company. development and renewal system ( drs ) by means of the development and renewal system the flexibility as well as the performance and adaption capacities of an organization are maintained and increased. this can also revolve around a group of employees within the company which have the mandate to develop the company internally and externally. generally, this can be subsumed under the term innovation management. side effects β " feedback loops " according to the model, feedback is of crucial importance for the survival of a system, i. e. of an organization that is confronted with turbulent environments. as systems are to be understood as open systems they rely on feedback loops in order to remain existent. the feedback consists of loops that have a guiding, stabilizing and renewing effect on the system. according to literature there are four types of feedback. quality feedback quality feedbacks are reactions to the quality of the output. usually, all types of quality management use quality feedback as a starting point of their methods. renewal feedback renewal feedbacks report reaction of the environment with regard to the reason for existing. this form of feedback is to analyze the demands of the environment ( e. g. development of new markets ). responsibility feedback responsibility feedbacks question the mission of the system. they refer to the long - term chances of survival of the system and its
negative feedback ( or balancing feedback ) occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances. a classic example of negative feedback is a heating system thermostat β when the temperature gets high enough, the heater is turned off. when the temperature gets too cold, the heat is turned back on. in each case the " feedback " generated by the thermostat " negates " the trend. the opposite tendency β called positive feedback β is when a trend is positively reinforced, creating amplification, such as the squealing " feedback " loop that can occur when a mic is brought too close to a speaker which is amplifying the very sounds the mic is picking up, or the runaway heating and ultimate meltdown of a nuclear reactor. whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations. negative feedback loops in which just the right amount of correction is applied with optimum timing, can be very stable, accurate, and responsive. negative feedback is widely used in mechanical and electronic engineering, and also within living organisms, and can be seen in many other fields from chemistry and economics to physical systems such as the climate. general negative feedback systems are studied in control systems engineering. negative feedback loops also play an integral role in maintaining the atmospheric balance in various systems on earth. one such feedback system is the interaction between solar radiation, cloud cover, and planet temperature. general description in many physical and biological systems, qualitatively different influences can oppose each other. for example, in biochemistry, one set of chemicals drives the system in a given direction, whereas another set of chemicals drives it in an opposing direction. if one or both of these opposing influences are non - linear, equilibrium point ( s ) result. in biology, this process ( in general, biochemical ) is often referred to as homeostasis ; whereas in mechanics, the more common term is equilibrium. in engineering, mathematics
the payload in the desired position based on knowing the deflections in the arm from the mathematical model's interpretation of the disturbance caused by the payload. systems that plan actions and then pass the plan to a different system for execution do not satisfy the above definition of feedforward control. unless the system includes a means to detect a disturbance or receive an input and process that input through the mathematical model to determine the required modification to the control action, it is not true feedforward control. open system in systems theory, an open system is a feed forward system that does not have any feedback loop to control its output. in contrast, a closed system uses on a feedback loop to control the operation of the system. in an open system, the output of the system is not fed back into the input to the system for control or operation. applications physiological feed - forward system in physiology, feed - forward control is exemplified by the normal anticipatory regulation of heartbeat in advance of actual physical exertion by the central autonomic network. feed - forward control can be likened to learned anticipatory responses to known cues ( predictive coding ). feedback regulation of the heartbeat provides further adaptiveness to the running eventualities of physical exertion. feedforward systems are also found in biological control of other variables by many regions of animals brains. even in the case of biological feedforward systems, such as in the human brain, knowledge or a mental model of the plant ( body ) can be considered to be mathematical as the model is characterized by limits, rhythms, mechanics and patterns. a pure feed - forward system is different from a homeostatic control system, which has the function of keeping the body's internal environment'steady'or in a'prolonged steady state of readiness.'a homeostatic control system relies mainly on feedback ( especially negative ), in addition to the feedforward elements of the system. gene regulation and feed - forward feed - forward loops ( ffls ), a three - node graph of the form a affects b and c and b affects c, are frequently observed in transcription networks in several organisms including e. coli and s. cerevisiae, suggesting that they perform functions
</ctx>
The following are multiple choice questions (with answers) about STEM.
Which function is a role of a feedback system?
A. stimulation of appetite
B. identification of gender
C. determination of blood type
D. regulation of growth in the skeletal system
Answer: | [
"stimulation of appetite",
"identification of gender",
"determination of blood type",
"regulation of growth in the skeletal system"
] | D | |
m1_1493 | m1 | <ctx>
who have not defaulted. statistically, estimation techniques such as logistic regression or probit are used to create estimates of the probability of default for observations based on this historical data. this model can be used to predict the probability of default for new clients using the same observation characteristics ( e. g. age, income, house owner ). the default probabilities are then scaled to a " credit score. " this score ranks clients by riskiness without explicitly identifying their probability of default. there are a number of credit scoring techniques such as hazard rate modeling, reduced form credit models, the weight of evidence models, linear or logistic regression. the primary differences involve the assumptions required about the explanatory variables and the ability to model continuous versus binary outcomes. some of these techniques are superior to others at indirectly estimating the probability of default. despite much research from academics and industry, no single technique has been proven superior for predicting default in all circumstances. a typical mistaken belief about credit scoring is that the only trait that matters is whether you have actually made payments on time and satisfied your monetary obligations in a prompt way. while payment background is essential, it still composes just over one - third of the credit rating score. furthermore, the repayment background is only shown in your credit history. one of the major areas of income for banks is the lending business and these loans can be secured or unsecured. banks would not want to give credit to customers or businesses that will not be able to repay the loan in the future. the process of scoring an applicant based on their creditworthiness determines who should get credit and by how much. this is where credit scorecards come into play, which helps banks and financial institutions minimize risk and lessen the delinquency rate. modelling methods the methodologies that are used to create a credit scorecard broadly fall under two categories, namely a statistical - based method and an artificial intelligence / machine learning method. statistical based credit scoring model models which are usually less complex and whose output can be easily interpreted fall under this category. simple techniques like logistic regression, linear regression, and decision trees are some examples of simple statistical techniques. many banks prefer this category because if a customer is denied a
easiest way would be to count the number of " positive " and " negative " words in each relevant tweet and construct a combined indicator based on this data. nasseri et al. ( 2014 ) reports the predictive power of stocktwits ( a twitter - like platform specialized in exchanging trading - related opinions ) data with respect to behavior of stock prices. an alternative, but more demanding, way is to engage human experts to annotate a large number of tweets with the expected stock moves, and then construct a machine learning model for prediction. the application of the event study methodology to twitter mood shows significant correlation to cumulative abnormal returns ( sprenger et al. ( 2014 ), ranco et al. ( 2015 ), gabrovsek et al. ( 2017 ) ). karabulut ( 2013 ) reports facebook to be a good source of information about investors' mood. overall, most popular social networks, finance - related media platforms, magazines, and journals can be a valuable source of sentiment data, summarized in peterson ( 2016 ). however, it is important to notice that it is relatively more difficult to collect such type of data ( in most cases a researcher needs special software ). in addition, analysis of such data can also require deep machine learning and data mining knowledge ( hotho et al. ( 2005 ) ). fourth road the fourth road, an important source of information about investor attention, is the internet search behavior of households. this approach is supported by results from simon ( 1955 ), who concludes that people start their decision making process by gathering relevant information. publicly available data on search volumes for most internet search services starts from the year 2004. since that time many authors showed the usefulness of such data in predicting investor attention and market returns ( da et al. ( 2014 ), preis et al. ( 2013 ), and curme et al. ( 2014 ) ). most studies are using google trends ( gt ) service in order to extract search volume data and investigate investor attention. the usefulness of internet search data was also proved based on yahoo! corporation data ( bordino et al. ( 2012 ) ). the application of internet search data gives promising results in solving different financial
machine learning method. statistical based credit scoring model models which are usually less complex and whose output can be easily interpreted fall under this category. simple techniques like logistic regression, linear regression, and decision trees are some examples of simple statistical techniques. many banks prefer this category because if a customer is denied a loan then a reason for denial needs to be given and that can be easily interpreted from these models. artificial intelligence / machine learning based credit scoring model the techniques used here are broadly called black boxes in the analytics world because interpreting them is difficult. banks generally use this type of scoring model for upselling or cross - selling different products of a bank to its customers. these techniques usually outperform the statistical - based credit scoring models but fall behind because of their interpretability issues. types of scorecards application scorecard - this is used when a customer applies for a new loan. this type of scorecard predicts if a customer will default on the loan. here the type of data that is used mainly comes from historical loan applications and if the customer has any existing loan then that data is extracted from one of the credit bureaus. if for example, the product that is getting launched is new then in that case data is taken is credit bureaus. this type of scorecard helps the business to make automated, accurate, and consistent decisions on whether to approve, review or decline an applicant. some of the advantages of this type of scorecard are that the organization can automate the whole decision - making process which in turn reduces the turnaround time of the underwriting process. it also provides the business to make data - backed and accurate decisions. behavioral scorecard - this is used in predicting if an existing customer who has a loan is going to default. here the data includes the customer's transactional details as well as bureau - related information. this type of scorecard is also used as an alternate credit score for internal purposes of the institute along with the credit score obtained from the credit bureau. this type of scorecard is also used for identifying the bank's most valuable customers. collection scorecard - this is used to predict customers'responses to different strategies for collecting owed money. the data involved here is similar to the behavioural scorecard
the rating values and to the probability of the game outcome, we cannot say what the probability of the win, the loss, or the draw is. to address these difficulties, and to derive the elo rating in the ternary games, we will define the explicit probabilistic model of the outcomes. next, we will minimize the log loss via stochastic gradient. since the loss, the draw, and the win are ordinal variables, we should adopt the model which takes their ordinal nature into account, and we use the so - called adjacent categories model which may be traced to the davidson's work where and is a parameter. introduction of a free parameter should not be surprising as we have three possible outcomes and thus, an additional degree of freedom should appear in the model. in particular, with we recover the model underlying the logistic regression where. using the ordinal model defined above, the log loss is now calculated as which may be compactly written as where iff wins, iff wins, and iff draws. as before, we need the derivative of which is given by, where thus, the derivative of the log loss with respect to the rating is given by where we used the relationships and. then, the stochastic gradient descent applied to minimize the log loss yields the following update for the rating where and. of course, if wins, if draws, and if loses. to recognize the origin in the model proposed by davidson, this update is called an elo - davidson rating. the update for is derived in the same manner as, where. we note that and thus, we obtain the rating update may be written as, where and we obtained practically the same equation as in the elo rating except that the expected score is given by instead of. of course, as noted above, for, we have and thus, the elo - davidson rating is exactly the same as the elo rating. however, this is of no help to understand the case when the draws are observed ( we cannot use which would mean that the probability of draw is null ). on the other hand, if we use, we have which means that, using, the elo - davidson rating is
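for the special case the passage mentions (the parameter choice under which the Elo-Davidson rating collapses to the plain Elo rating), the classical update looks like this; the k-factor and the 400-point scale are the conventional chess values, not taken from the passage:

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 20.0):
    # score_a: 1 for a win, 0.5 for a draw, 0 for a loss
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    return r_a + k * (score_a - expected_a), r_b + k * ((1 - score_a) - (1 - expected_a))

print(elo_update(1500.0, 1500.0, 1.0))   # (1510.0, 1490.0): the winner gains what the loser drops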
</ctx>
The following are multiple choice questions (with answers) about STEM.
For a user who has not provided any ratings, which method can make a prediction?
A. User-based collaborative RS
B. Item-based collaborative RS
C. Content-based RS
D. None of the above
Answer: | [
"User-based collaborative RS",
"Item-based collaborative RS",
"Content-based RS",
"None of the above"
] | D | When considering a user who has not provided any ratings, it's important to evaluate how each recommendation system operates.
User-based collaborative recommendation systems rely on similarities between users. They analyze the ratings of similar users to suggest items. If a user hasn't rated anything, there won't be any data to find similarities, rendering this method ineffective.
Item-based collaborative recommendation systems, on the other hand, analyze the similarities between items based on ratings from other users. However, if the user has not rated any items, there's no initial data to base recommendations on, making this method also ineffective for the user in question.
Content-based recommendation systems focus on the characteristics of items and a user's past preferences. Without any ratings or prior interactions, there's no basis to understand the user's preferences, thus this method will also fail to provide predictions.
Given this analysis, none of the provided methods can make a prediction for a user without any ratings. Therefore, the correct choice is 'None of the above'. |
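a toy illustration of the cold-start argument above (all names and data invented): a user-based collaborative filter has nothing to compute a similarity from when the target user's rating dict is empty:

def predict_user_based(user_ratings: dict, peers: dict, item: str):
    # toy user-based CF: average the item's rating over peers who overlap with the user
    if not user_ratings:
        return None   # cold start: no ratings, no similarity, no prediction
    votes = [r[item] for r in peers.values()
             if item in r and set(r) & set(user_ratings)]
    return sum(votes) / len(votes) if votes else None

peers = {"u1": {"a": 4.0, "b": 2.0}, "u2": {"a": 5.0}}
print(predict_user_based({}, peers, "a"))           # None -- the cold-start case
print(predict_user_based({"b": 3.0}, peers, "a"))   # 4.0, once some overlap exists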
m1_6243 | m1 | <ctx>
, then so is f ′ : { 0, 1 }^( 2n ) → { 0, 1 }^( 2n ), where f ′ ( x, y ) = ( f ( x ), y ) by definition. then b ( x, y ) = x ⋅ y is a hard - core predicate for f ′, where ⋅ is a vector dot product. to prove that it is indeed hard - core let's assume otherwise, and show a contradiction with the hypothesis of f being one - way. if b is not a hard - core predicate, then there exists a circuit c that predicts it, so prob over x, y [ c ( f ( x ), y ) = x ⋅ y ] ≥ 1 / 2 + ε. that fact can be used to recover x by cleverly constructing permutations y that isolate bits in x. in fact, for a constant fraction of x, there exists a polynomial time algorithm that lists o ( 1 / ε² ) candidates that include all valid x. thus, an algorithm can invert f ( x ) in polynomial time for a non - negligible fraction of x, which contradicts the hypothesis. references w. diffie, m. e. hellman. " new directions in cryptography. " ieee transactions on information theory, it - 22, pp. 644 – 654, 1976. a. c. yao. " theory and application of trapdoor functions. " 23rd ieee symposium on foundations of computer science, pp. 80 – 91, 1982. m. blum and s. micali. " how to generate cryptographically strong sequences of pseudo - random bits. " siam journal on computing, v13, pp. 850 – 864, 1984. j. hastad, r. impagliazzo, l. a. levin and m. luby. " a pseudorandom generator from any one - way function. " siam journal on computing, v28 n4, pp. 1364 – 1396, 1999. pseudorandomness theorems in computational complexity theory
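the predicate itself is easy to state in code; a sketch (helper name mine) of b(x, y) = x ⋅ y mod 2 on n-bit strings packed into ints:

import secrets

def inner_product_bit(x: int, y: int) -> int:
    # b(x, y) = <x, y> mod 2: parity of the bitwise AND
    return bin(x & y).count("1") % 2

n = 16
x, y = secrets.randbits(n), secrets.randbits(n)
print(inner_product_bit(x, y))   # 0 or 1; guessing it blind succeeds w.p. exactly 1/2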
> > 0 ) & 0x1f a = ( aek > > 64 ) & m b = ( aek > > 32 ) & m c = ( aek > > 0 ) & m kr = ( k > > ( 32 + 64 * ( 3 - ri % 4 ) ) ) & m kl = ( k > > ( 0 + 64 * ( 3 - ri % 4 ) ) ) & m x = op [ 0 ] ( l, kl ) y = op [ 2 ] ( op [ 1 ] ( rol ( x, s1 ), x ), a ) z = op [ 5 ] ( op [ 4 ] ( op [ 3 ] ( rol ( y, s2 ), y ), b ), kr ) return op [ 8 ] ( op [ 7 ] ( op [ 6 ] ( rol ( z, s3 ), z ), c ), r ), l def m8 _ keyexpand ( dk, kek, adks, aeks ) : " " " key expansion. dk : 64 - bit data key kek : 256 - bit key expansion key adks : algorithm decision keys aeks : algorithm expansion keys " " " l = ( dk > > 32 ) & m r = ( dk > > 0 ) & m k = 0 for i in range ( 8 ) : l, r = m8 _ round ( l, r, i, kek, adks [ i ], aeks [ i ] ) k | = ( l < < ( 32 * ( 7 - i ) ) ) return k def m8 _ encrypt ( data, n, dk, kek, adks, aeks ) : " " " encrypt one block with m8. data : 64 - bit input block n : number of rounds ( must be > = 8 ) dk : 64 - bit data key kek : 256 - bit key expansion key adks : a list of n 24 - bit algorithm decision keys aeks : a list of n 96 - bit algorithm expansion keys " " " ek = m8 _ keyexpan
in cryptography, the fast syndrome - based hash functions ( fsb ) are a family of cryptographic hash functions introduced in 2003 by daniel augot, matthieu finiasz, and nicolas sendrier. unlike most other cryptographic hash functions in use today, fsb can to a certain extent be proven to be secure. more exactly, it can be proven that breaking fsb is at least as difficult as solving a certain np - complete problem known as regular syndrome decoding so fsb is provably secure. though it is not known whether np - complete problems are solvable in polynomial time, it is often assumed that they are not. several versions of fsb have been proposed, the latest of which was submitted to the sha - 3 cryptography competition but was rejected in the first round. though all versions of fsb claim provable security, some preliminary versions were eventually broken. the design of the latest version of fsb has however taken this attack into account and remains secure to all currently known attacks. as usual, provable security comes at a cost. fsb is slower than traditional hash functions and uses quite a lot of memory, which makes it impractical on memory constrained environments. furthermore, the compression function used in fsb needs a large output size to guarantee security. this last problem has been solved in recent versions by simply compressing the output by another compression function called whirlpool. however, though the authors argue that adding this last compression does not reduce security, it makes a formal security proof impossible. description of the hash function we start with a compression function with parameters such that and. this function will only work on messages with length ; will be the size of the output. furthermore, we want and to be natural numbers, where denotes the binary logarithm. the reason for is that we want to be a compression function, so the input must be larger than the output. we will later use the merkle β damgard construction to extend the domain to inputs of arbitrary lengths. the basis of this function consists of a ( randomly chosen ) binary matrix which acts on a message of bits by matrix multiplication. here we encode the - bit message as a vector in, the - dimensional
such a set necessarily has an xor sum of 0, and the xor sums of the corresponding sets of ciphertexts provide information about the cipher's operation. this contrast between the differences between pairs of texts and the sums of larger sets of texts inspired the name " integral cryptanalysis ", borrowing the terminology of calculus. other techniques in addition to linear and differential cryptanalysis, there is a growing catalog of attacks : truncated differential cryptanalysis, partial differential cryptanalysis, integral cryptanalysis, which encompasses square and integral attacks, slide attacks, boomerang attacks, the xsl attack, impossible differential cryptanalysis, and algebraic attacks. for a new block cipher design to have any credibility, it must demonstrate evidence of security against known attacks. provable security when a block cipher is used in a given mode of operation, the resulting algorithm should ideally be about as secure as the block cipher itself. ecb ( discussed above ) emphatically lacks this property : regardless of how secure the underlying block cipher is, ecb mode can easily be attacked. on the other hand, cbc mode can be proven to be secure under the assumption that the underlying block cipher is likewise secure. note, however, that making statements like this requires formal mathematical definitions for what it means for an encryption algorithm or a block cipher to " be secure ". this section describes two common notions for what properties a block cipher should have. each corresponds to a mathematical model that can be used to prove properties of higher - level algorithms, such as cbc. this general approach to cryptography – proving higher - level algorithms ( such as cbc ) are secure under explicitly stated assumptions regarding their components ( such as a block cipher ) – is known as provable security. standard model informally, a block cipher is secure in the standard model if an attacker cannot tell the difference between the block cipher ( equipped with a random key ) and a random permutation. to be a bit more precise, let e be an n - bit block cipher. we imagine the following game : the person running the game flips a coin. if the coin lands on heads, he chooses a random key k and defines the function f = ek. if the coin lands on tails, he chooses a random per
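As an illustration of the game just described, here is a minimal Python sketch (not from the quoted article; encrypt and adversary are hypothetical caller-supplied callables, and the random permutation is sampled lazily):

import secrets
import random

def prp_game(encrypt, adversary, n=64):
    # the challenger flips a coin
    coin = secrets.randbits(1)
    if coin == 1:
        # heads: the oracle is f = E_k for a randomly chosen key k
        key = secrets.token_bytes(16)
        oracle = lambda x: encrypt(key, x)
    else:
        # tails: the oracle is a random permutation, kept injective
        table, used = {}, set()
        def oracle(x):
            if x not in table:
                y = random.getrandbits(n)
                while y in used:
                    y = random.getrandbits(n)
                used.add(y)
                table[x] = y
            return table[x]
    # the adversary queries the oracle, then guesses which world it is in
    return adversary(oracle) == coin  # True iff the guess is correct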
</ctx>
The following are multiple choice questions (with answers) about STEM.
Tick the \textbf{false} assertion. A cipher with a good decorrelation of order 2 protects against \ldots
A. \ldots non-adaptive distinguishers limited to two queries.
B. \ldots unbounded attacks.
C. \ldots differential cryptanalysis.
D. \ldots linear cryptanalysis.
Answer: | [
"\\ldots non-adaptive distinguishers limited to two queries.",
"\\ldots unbounded attacks.",
"\\ldots differential cryptanalysis.",
"\\ldots linear cryptanalysis."
] | B | The false assertion is **"\ldots unbounded attacks."** While a cipher with good decorrelation of order 2 can indeed protect against non-adaptive distinguishers limited to two queries, differential cryptanalysis, and linear cryptanalysis, it does not inherently protect against all forms of attacks, particularly unbounded attacks. An unbounded attack implies that an adversary has no limits on the number of queries they can make, which means that the security guarantees provided by decorrelation of order 2 could be insufficient. Therefore, this statement is misleading as it suggests a level of security that is not guaranteed against more sophisticated or unlimited attack strategies. |
aquarat_49973 | aquarat | <ctx>
</ctx>
The following are multiple choice questions (with answers) about STEM.
Solution for 4.75+.303+.432
A. A)2.91
B. B)5.911
C. C)5.485
D. D)5.986
Answer: | [
"A)2.91",
"B)5.911",
"C)5.485",
"D)5.986"
] | C | 4.75 + 0.303 + 0.432 = 5.485
answer: C |
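A quick check of the sum, using Python's decimal module so the base-10 arithmetic is exact:

from decimal import Decimal
print(Decimal("4.75") + Decimal("0.303") + Decimal("0.432"))  # 5.485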
mmlu_aux_2462 | mmlu_auxiliary | <ctx>
the uk's private sector, including satellite design and manufacture, developing designs for space planes and catering to the new market in space tourism. project juno project juno was a privately funded campaign, which selected helen sharman to be the first briton in space. a private consortium was formed to raise money to pay the ussr for a seat on a soyuz mission to the mir space station. the ussr had recently flown toyohiro akiyama, a japanese journalist, by a similar arrangement. a call for applicants was publicised in the uk resulting in the selection of four astronauts : helen sharman, major timothy mace, clive smith and surgeon lieutenant commander gordon brooks. sharman was eventually chosen for the first of what was hoped to be a number of flights with major timothy mace as her backup. the cost of the flight was to be funded by various innovative schemes, including sponsoring by private british companies and a lottery system. corporate sponsors included british aerospace, memorex, and interflora, and television rights were sold to itv. ultimately the juno consortium failed to raise the entire sum and the ussr considered canceling the mission. it is believed that mikhail gorbachev directed the mission to proceed at soviet cost. sharman was launched aboard soyuz tm - 12 on 18 may 1991, and returned aboard soyuz tm - 11 on 26 may 1991. surrey satellite technology surrey satellite technology ltd ( sstl ) is a large spin - off company of the university of surrey, now fully owned by airbus defence & space, that builds and operates small satellites. sstl works with the uk space agency and takes on a number of tasks for the uksa that would be done in - house by a traditional large government space agency. virgin galactic virgin galactic, a us company within the british - based virgin group owned by sir richard branson, is taking reservations for suborbital space flights from the general public. its operations will use spaceshiptwo space planes designed by scaled composites, which has previously developed the ansari x - prize winning spaceshipone. british contribution to other space programmes communication and tracking of rockets and satellites in orbit is achieved using stations such as jodrell bank. during the space race, jodrell bank and other stations were
. 2 ( 920 mph ), this was less than half the 2, 000 mph speed predicted by richard branson. spaceshiptwo's second supersonic flight achieved a speed of 1, 100 mph for 20 seconds ; while this was an improvement, it fell far short of the 2, 500 mph for 70 seconds required to carry six passengers into space. however, branson still announced his spaceship would be capable of " launching 100 satellites every day. " in addition to the suborbital passenger business, virgin galactic intended to market spaceshiptwo for suborbital space science missions and market white knight two for " small satellite " launch services. it had planned to initiate rfps for the satellite business in early 2010, but flights had not materialized as of 2014. on 14 may 2013, richard branson stated on virgin radio dubai's kris fade morning show that he would be aboard the first public flight of spaceshiptwo, which had again been rescheduled, this time to december 25, 2013. " maybe i'll dress up as father christmas ", branson said. the third rocket - powered test flight of spaceshiptwo took place on 10 january 2014 and successfully tested the spaceship's reaction control system ( rcs ) and the newly installed thermal protection coating on the vehicle's tail booms. virgin galactic ceo george whitesides said " we are progressively closer to our target of starting commercial service in 2014 ". interviewed by the observer at the time of her 90th birthday in july 2014, branson's mother, eve, told reporter elizabeth day of her intention of going to space herself. asked when that might be, she replied : " i think it's the end of the year ", adding after a pause, " it's always'the end of the year'". in february 2014, cracks in whiteknighttwo, where the spars connect with the fuselage, were discovered during an inspection conducted after virgin galactic took possession of the aircraft from builder scaled composites. in september 2014, richard branson described the intended date for the first commercial flight as february or march 2015 ; by the time of this announcement, a new plastic - based fuel had yet to be
planes designed by scaled composites, which has previously developed the ansari x - prize winning spaceshipone. british contribution to other space programmes communication and tracking of rockets and satellites in orbit is achieved using stations such as jodrell bank. during the space race, jodrell bank and other stations were used to track several satellites and probes including sputnik and pioneer 5. as well as providing tracking facilities for other nations, scientists from the united kingdom have participated in other nation's space programmes, notably contributing to the development of nasa's early space programmes, and co - operation with australian launches. the royal aircraft establishment at farnborough, invented carbon fibre composite material. the saunders - roe sr. 53 rocket / jet plane in 1957 used the newly invented silver peroxide catalyst rocket engine. the concept of the communications satellite was by arthur c. clarke. british astronauts because the british government has never developed a crewed spaceflight programme and initially did not contribute funding to the crewed space flight part of esa's activities, the first six british astronauts launched with either the american or soviet / russian space programmes. despite this, on 9 october 2008, british science and innovation minister lord drayson spoke favourably of the idea of a british astronaut. army air corps test pilot tim peake became a member of the european astronaut corps in 2009, and then in 2015 the first astronaut funded by the british government when he reached the international space station aboard a soyuz rocket launched from baikonur in kazakhstan. to date, six uk - born british citizens and one non - uk - born british citizen have flown in space : potential astronauts us air force colonel gregory h. johnson served as pilot on two endeavour missions ( sts - 123 and sts - 134 ). although born in the uk while his father was stationed at a us air force base, he has never been a british citizen and is not otherwise associated with the uk. he is sometimes incorrectly listed as a british astronaut. anthony llewellyn ( born in cardiff, wales ) was selected as a scientist - astronaut by nasa during august 1967 but resigned during september 1968, having never flown in space. army lieutenants - colonel anthony boyle ( born in kidderminster ) and richard farrim
for the growing small satellite market. the projected cost is less than us $ 5 million per launch. the rutherford engine uses pumps that are uniquely driven by battery - powered electric motors rather than a gas generator, expander, or preburner. the engine is fabricated largely by 3d printing, via electron beam melting, whereby layers of metal powder are melted in a high vacuum by an electron beam rather than a laser. by march 2016, the second - stage rutherford engine had completed firing tests. the first test flight took place on 25 may 2017 at 04 : 20 utc from mahia peninsula in new zealand's north island. after reaching an altitude of about, the rocket was performing nominally but telemetry was lost so the decision to destroy it was made by range safety. on 21 january 2018 at 01 : 43 utc, their second rocket, on a flight named " still testing ", launched, reached orbit and deployed three cubesats for customers planet labs and spire global. the rocket also carried an additional satellite payload called humanity star, a of wide carbon fibre geodesic sphere made up of 65 panels that reflect the sun's light. humanity star re - entered earth's atmosphere and burned up in march 2018. on 11 november 2018, the first commercial launch ( third launch in total ) of electron occurred from mahia peninsula carrying satellites for spire global, geooptics, a cubesat built by high school students, and a prototype of a dragsail. on 4 july 2020, an issue during the second - stage burn of flight 13, named " pics or it didn't happen ", caused electron to fail to get into orbit and its payloads were lost. on 19 november 2020, a successful launch mission named " return to sender " deployed its payload of 29 small sats into orbit. in addition to satellite deployment, a new method of the first stage recovery was successfully implemented. " after stage separation, the first stage will reorient itself for reentry, then deploy a drogue parachute and a larger main parachute before splashing down in the pacific ocean about 400 kilometers from the launch site ". on 15 may 2021, the company launched the mission " running out of toes " which successfully
</ctx>
The following are multiple choice questions (with answers) about STEM.
A teddy bear from Cumbria is launching into space to raise cash for charity. Terence, an experienced traveller who has been to Iraq, will be the guest of honour on aviation legend Burt Rutan's Spaceship One when it flies above Earth. The mission takes off from California on September 29, and on his return the cuddly toy will be auctioned off in aid of the North Air Ambulance Appeal. Spaceship One is the world's first private spacecraft, and is competing for a prestigious space travel prize. Chief executive of the Great North Air Ambulance Service (GNAAS), Graham Pickering, said "flying officer" Terence had been handed over to the RAF six months ago and staff had been receiving postcards from him ever since. He said, "Terence was a fundraising idea that really took off. We have received pictures of him in a U2 craft, trying parachuting and even looking drunk and disorderly. When the RAF finally discharge him he will be a very rare bear indeed--we just hope he does not burn up on re-entry to Earth." GNAAS, which needs charitable donations of more than £2m a year, has three air ambulances. Peter Bond, spokesman for the Royal Astronomical Society, said Terence's safety was not guaranteed. He said, "This is a new and experimental craft and this will only be the second time it has flown. During its first voyage it developed technical problems but hopefully they have now been resolved." Since May, Terence has spent time with members of 100 Squadron based at RAF Leeming in Basra, Iraq, and at air shows with performing fighter planes. Spaceship One will fly 100km (62 miles) above the Earth's surface, just breaking through the planet's atmosphere. If it repeats the feat inside two weeks, it will claim the $10m Ansari X-Prize set up to encourage the private space flight business. Which of the following is true of Spaceship One?
A. Spaceship One is the world's first spacecraft.
B. The purpose of its flight is to collect money for the charity.
C. The purpose of its flight is to compete for a prestigious space travel prize.
D. Spaceship One has never flown.
Answer: | [
"Spaceship One is the world's first spacecraft.",
"The purpose of its flight is to collect money for the charity.",
"The purpose of its flight is to compete for a prestigious space travel prize.",
"Spaceship One has never flown."
] | C | null |
m1_6433 | m1 | <ctx>
in graph theory, graph coloring is a special case of graph labeling ; it is an assignment of labels traditionally called " colors " to elements of a graph subject to certain constraints. in its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color ; this is called a vertex coloring. similarly, an edge coloring assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face or region so that no two faces that share a boundary have the same color. vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. for example, an edge coloring of a graph is just a vertex coloring of its line graph, and a face coloring of a plane graph is just a vertex coloring of its dual. however, non - vertex coloring problems are often stated and studied as - is. this is partly pedagogical, and partly because some problems are best studied in their non - vertex form, as in the case of edge coloring. the convention of using colors originates from coloring the countries of a map, where each face is literally colored. this was generalized to coloring the faces of a graph embedded in the plane. by planar duality it became coloring the vertices, and in this form it generalizes to all graphs. in mathematical and computer representations, it is typical to use the first few positive or non - negative integers as the " colors ". in general, one can use any finite set as the " color set ". the nature of the coloring problem depends on the number of colors but not on what they are. graph coloring enjoys many practical applications as well as theoretical challenges. beside the classical types of problems, different limitations can also be set on the graph, or on the way a color is assigned, or even on the color itself. it has even reached popularity with the general public in the form of the popular number puzzle sudoku. graph coloring is still a very active field of research. note : many terms used in this article are defined in glossary of graph theory. history the first results about graph coloring
not use all five colors on the five neighboring vertices of, it can be colored in with a color not used by the neighbors. so now look at those five vertices,,,, that were adjacent to in cyclic order ( which depends on how we write g ). so we can assume that,,,, are colored with colors 1, 2, 3, 4, 5 respectively. now consider the subgraph of consisting of the vertices that are colored with colors 1 and 3 only and the edges connecting them. to be clear, each edge connects a color 1 vertex to a color 3 vertex ( this is called a kempe chain ). if and lie in different connected components of, we can swap the 1 and 3 colors on the component containing without affecting the coloring of the rest of. this frees color 1 for completing the task. if on the contrary and lie in the same connected component of, we can find a path in joining them that consists of only color 1 and 3 vertices. now turn to the subgraph of consisting of the vertices that are colored with colors 2 and 4 only and the edges connecting them, and apply the same arguments as before. then either we are able to reverse the 2 - 4 coloration on the subgraph of containing and paint color 2, or we can connect and with a path that consists of only color 2 and 4 vertices. such a path would intersect the 1 - 3 colored path we constructed before since through were in cyclic order. this is clearly absurd as it contradicts the planarity of the graph. so can in fact be five - colored, contrary to the initial presumption. linear time five - coloring algorithm in 1996, robertson, sanders, seymour, and thomas described a quadratic four - coloring algorithm in their " efficiently four - coloring planar graphs ". in the same paper they briefly describe a linear - time five - coloring algorithm, which is asymptotically optimal. the algorithm as described here operates on multigraphs and relies on the ability to have multiple copies of edges between a single pair of vertices. it is based on wernicke's theorem, which states the following : wernicke's theorem : assume g is planar,
that is chosen independently of the graph edges leads to a coloring whose number of colors is close to twice the optimal value, with high probability. it remains unknown whether there is any polynomial time method for finding significantly better colorings of these graphs. degeneracy because optimal vertex orderings are hard to find, heuristics have been used that attempt to reduce the number of colors while not guaranteeing an optimal number of colors. a commonly used ordering for greedy coloring is to choose a vertex of minimum degree, order the subgraph with removed recursively, and then place last in the ordering. the largest degree of a removed vertex that this algorithm encounters is called the degeneracy of the graph, denoted. in the context of greedy coloring, the same ordering strategy is also called the smallest last ordering. this vertex ordering, and the degeneracy, may be computed in linear time. it can be viewed as an improved version of an earlier vertex ordering method, the largest - first ordering, which sorts the vertices in descending order by their degrees. with the degeneracy ordering, the greedy coloring will use at most colors. this is because, when colored, each vertex will have at most already - colored neighbors, so one of the first colors will be free for it to use. greedy coloring with the degeneracy ordering can find optimal colorings for certain classes of graphs, including trees, pseudoforests, and crown graphs. define a graph to be - perfect if, for and every induced subgraph of, the chromatic number equals the degeneracy plus one. for these graphs, the greedy algorithm with the degeneracy ordering is always optimal. every - perfect graph must be an even - hole - free graph, because even cycles have chromatic number two and degeneracy two, not matching the equality in the definition of - perfect graphs. if a graph and its complement graph are both even - hole - free, they are both - perfect. the graphs that are both perfect graphs and - perfect graphs are exactly the chordal graphs. on even - hole - free graphs more generally, the degeneracy ordering approximates the optimal coloring to within at most twice the
method works by repeating ( 1 ) random coloring a graph and ( 2 ) finding colorful copy of the target subgraph, and eventually the target subgraph can be found if the process is repeated a sufficient number of times. suppose a copy of in becomes colorful with some non - zero probability. it immediately follows that if the random coloring is repeated times, then this copy is expected to become colorful once. note that though is small, it is shown that if, is only polynomially small. suppose again there exists an algorithm such that, given a graph and a coloring which maps each vertex of to one of the colors, it finds a copy of colorful, if one exists, within some runtime. then the expected time to find a copy of in, if one exists, is. sometimes it is also desirable to use a more restricted version of colorfulness. for example, in the context of finding cycles in planar graphs, it is possible to develop an algorithm that finds well - colored cycles. here, a cycle is well - colored if its vertices are colored by consecutive colors. example an example would be finding a simple cycle of length in graph. by applying random coloring method, each simple cycle has a probability of to become colorful, since there are ways of coloring the vertices on the cycle, among which there are colorful occurrences. then an algorithm ( described next ) can be used to find colorful cycles in the randomly colored graph in time, where is the matrix multiplication constant. therefore, it takes overall time to find a simple cycle of length in. the colorful cycle - finding algorithm works by first finding all pairs of vertices in that are connected by a simple path of length, and then checking whether the two vertices in each pair are connected. given a coloring function to color graph, enumerate all partitions of the color set into two subsets of size each. note that can be divided into and accordingly, and let and denote the subgraphs induced by and respectively. then, recursively find colorful paths of length in each of and. suppose the boolean matrix and represent the connectivity of each pair of vertices in and by a colorful path, respectively, and let be the matrix describing the adjacency relations between vertices of and
</ctx>
The following are multiple choice questions (with answers) about STEM.
Graph coloring consists of coloring all vertices \ldots
A. \ldots with a unique color.
B. \ldots with a different color when they are linked with an edge.
C. \ldots with a random color.
D. \ldots with a maximum number of colors.
Answer: | [
"\\ldots with a unique color.",
"\\ldots with a different color when they are linked with an edge.",
"\\ldots with a random color.",
"\\ldots with a maximum number of colors."
] | B | The correct answer is: B. \ldots with a different color when they are linked with an edge. This is because the definition of graph coloring involves assigning colors to the vertices of a graph such that no two adjacent vertices (those connected by an edge) share the same color. |
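To make the definition concrete, here is a minimal greedy-coloring sketch in Python (our illustration, not an algorithm quoted above): each vertex receives the smallest color not already used by an adjacent, already-colored vertex, so vertices linked by an edge never share a color.

def greedy_coloring(adj):
    # adj maps each vertex to an iterable of its neighbours
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:   # smallest color absent from the neighbourhood
            c += 1
        color[v] = c
    return color

# a triangle plus a pendant vertex: the triangle forces three colors
print(greedy_coloring({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}))
# {0: 0, 1: 1, 2: 2, 3: 0}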
aquarat_44788 | aquarat | <ctx>
sum of the digits of 6721 is 16. hint : there are 10 base cases, which can be checked easily with the right condition.
⁻¹ + 2⁻³ + 2⁻⁴ = 1. 6875. • this makes the whole number = −1. 6875 × 2⁻⁹. before we finish our discussion about floating points, there is the question of 0. how to present the 0? even if we make all bits 0, the hidden 1 makes the final value a non - zero one. how can we deal with this problem? the representation approach that we have learned so far is called the normalized encoding of the ieee 754 format. this is used if the exponent is non - zero and is not 11111111. if the exponent is 0 ( i. e., 00000000 ) we are in denormalized encoding ( also called subnormal ). when we are in this special case, there are some differences in the translation to decimal : • the exponent is 1 − bias instead of 0 − bias. the bias is 127 in single precision and 1023 in double precision. • there is no hidden 1, so the fraction part is 0. xxxx ( the 23 bits in the fraction in single precision ) instead of 1. xxxx …. with this encoding we can present the 0 ( by setting all 32 bits to 0 ) and can present very small numbers. the case where the exponent is all 1s is called " special values encoding ". if the exponent is all 1s and the fraction is all 0s, it represents infinity. if the exponent is all 1s and the fraction is non - zero, this is called nan ( not a number ) and raises an exception. this happens when there is a bug in your program that does a division by 0 or the square root of −1, for example.
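A small Python sketch of the single-precision decoding rules just described (the helper and its name are our illustration, not part of the quoted text): normalized values carry the hidden 1 and exponent exp − 127, subnormals drop the hidden 1 and use exponent 1 − 127, and an all-ones exponent encodes the special values.

def decode_float32(bits):
    sign = (bits >> 31) & 1
    exp = (bits >> 23) & 0xFF   # 8-bit exponent field
    frac = bits & 0x7FFFFF      # 23-bit fraction field
    if exp == 0xFF:             # special values encoding
        return float("nan") if frac else float("-inf" if sign else "inf")
    if exp == 0:                # denormalized (subnormal): no hidden 1
        value = (frac / 2**23) * 2.0 ** (1 - 127)
    else:                       # normalized: hidden 1, biased exponent
        value = (1 + frac / 2**23) * 2.0 ** (exp - 127)
    return -value if sign else value

print(decode_float32(0x3FC00000))  # 1.5
print(decode_float32(0x00000001))  # ~1.4e-45, smallest positive subnormal
print(decode_float32(0x00000000))  # 0.0 -- all-zero bits now mean zero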
, phone numbers can have hyphens, spaces or parentheses as a mask rather than as data. ) in web content, such digit grouping can be done with css style. it is useful because the number can be copied and pasted into calculators ( including a web browser's omnibox ) and parsed by the computer as - is ( i. e., without the user manually purging the extraneous characters ). for example, wikipedia content can display numbers this way, as in the following examples : metres is 1 astronomical unit is rounded to 20 decimal places is rounded to 20 decimal places. in some programming languages, it is possible to group the digits in the program's source code to make it easier to read ; see integer literal : digit separators. ada c # d go haskell java kotlin ocaml perl python php ruby rust julia, swift, and free - form fortran 90 use the underscore ( _ ) character for this purpose ; as such, these languages allow seven hundred million to be entered as 700 _ 000 _ 000. fixed - form fortran ignores whitespace ( in all contexts ), so 700 000 000 has always been accepted. fortran 90 and its successors allow ( ignored ) underscores in numbers in free - form. c + + 14, rebol, and red all allow the use of an apostrophe for digit grouping, so 700'000'000 is permissible. below is shown an example of kotlin code using separators to increase readability : val examplenumber = 12 _ 004 _ 953 / / twelve million four thousand nine hundred fifty - three exceptions to digit grouping the international bureau of weights and measures states that " when there are only four digits before or after the decimal marker, it is customary not to use a space to isolate a single digit ". likewise, some manuals of style state that thousands separators should not be used in normal text for numbers from to inclusive where no decimal fractional part is shown ( in other words, for four - digit whole numbers ), whereas others use thousands separators and others use both. for example, apa
the digit with value b − 1
begin
  s = "" ;                     // the string of digits
  pos = 0 ;                    // all places are right to the radix point
  while not defined ( occurs [ p ] ) do
    occurs [ p ] = pos ;       // the position of the place with remainder p
    bp = b * p ;
    z = floor ( bp / q ) ;     // index z of digit within : 0 ≤ z ≤ b − 1
    p = b * p − z * q ;        // 0 ≤ p < q
    if p = 0 then
      l = 0 ;
      if not z = 0 then s = s . substring ( digits, z, 1 ) end if
      return ( s ) ;
    end if
    s = s . substring ( digits, z, 1 ) ;   // append the character of the digit
    pos += 1 ;
  end while
  l = pos − occurs [ p ] ;     // the length of the repetend ( being < q )
  // mark the digits of the repetend by a vinculum :
  for i from occurs [ p ] to pos − 1 do
    substring ( s, i, 1 ) = overline ( substring ( s, i, 1 ) ) ;
  end for
  return ( s ) ;
end function
the first highlighted line calculates the digit. the subsequent line calculates the new remainder of the division modulo the denominator. as a consequence of the floor function, the new remainder again satisfies 0 ≤ p < q ; because all these remainders are non - negative integers less than q, there can be only a finite number of them, with the consequence that they must recur in the while loop. such a recurrence is detected by the associative array occurs. the new digit is formed in the yellow line, where is the only non - constant. the length of the repetend equals the number of the remainders ( see also section every rational number is either a terminating or repeating decimal ). applications to cryptography repeating decimals ( also called decimal sequences ) have found cryptographic and error - correction coding applications. in these applications repeating decimals to base 2 are generally used which gives rise to binary sequences.
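A compact Python rendering of the pseudocode above (our paraphrase; brackets mark the repetend instead of a vinculum):

def expand(p, q, b=10, digits="0123456789"):
    # digit string of p/q (0 < p < q) in base b, repetend in [brackets]
    s = []
    occurs = {}                # remainder -> position of its digit
    while p not in occurs:
        occurs[p] = len(s)
        z = (b * p) // q       # index z of the digit, 0 <= z <= b - 1
        p = b * p - z * q      # new remainder, 0 <= p < q
        s.append(digits[z])
        if p == 0:             # terminating expansion, no repetend
            return "".join(s)
    i = occurs[p]              # the repetend runs from position i onward
    return "".join(s[:i]) + "[" + "".join(s[i:]) + "]"

print(expand(1, 6))  # 1[6]      i.e. 0.1666...
print(expand(1, 4))  # 25        i.e. 0.25
print(expand(3, 7))  # [428571]  i.e. 0.428571428571...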
</ctx>
The following are multiple choice questions (with answers) about STEM.
A four-digit number is such that the product of all of its digits is 126. The sum of all the digits is equal to the two-digit number formed by using the thousands digit and the tens digit (thousands digit in the tens place & tens digit in the units place), which in turn is equal to 19. Find the number, given that the difference of its units and thousands digits is positive.
A. A)1692
B. B)1792
C. C)1795
D. D)1892
Answer: | [
"A)1692",
"B)1792",
"C)1795",
"D)1892"
] | B | the number is 1792.
Let the number be represented as 1000x + 100y + 10z + t.
According to the question:
x*y*z*t = 126
x + y + z + t = 10x + z = 19
Since 10x + z = 19 with single digits, the thousands digit is x = 1 and the tens digit is z = 9.
The remaining product is y*t = 126/(1*9) = 14 = 7*2, and the sum condition gives y + t = 19 - 1 - 9 = 9 = 7 + 2.
With y = 7 and t = 2 the number is 1792, the only candidate among the options.
ANSWER: B |
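The reasoning can be verified by brute force (a hypothetical check, not part of the original solution); note that 1297 satisfies the same constraints, but only 1792 appears among the options:

hits = []
for n in range(1000, 10000):
    x, y, z, t = (int(d) for d in str(n))
    # product of digits, sum-equals-19 condition, and positive difference
    if x * y * z * t == 126 and x + y + z + t == 10 * x + z == 19 and t > x:
        hits.append(n)
print(hits)  # [1297, 1792]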
medmcqa_5512 | medmcqa | <ctx>
gallbladder ( known as courvoisier's sign ) may also raise suspicion, and can help differentiate pancreatic cancer from gallstones. medical imaging techniques, such as computed tomography ( ct scan ) and endoscopic ultrasound ( eus ) are used both to confirm the diagnosis and to help decide whether the tumor can be surgically removed ( its " resectability " ). on contrast ct scan, pancreatic cancer typically shows a gradually increasing radiocontrast uptake, rather than a fast washout as seen in a normal pancreas or a delayed washout as seen in chronic pancreatitis. magnetic resonance imaging and positron emission tomography may also be used, and magnetic resonance cholangiopancreatography may be useful in some cases. abdominal ultrasound is less sensitive and will miss small tumors, but can identify cancers that have spread to the liver and build - up of fluid in the peritoneal cavity ( ascites ). it may be used for a quick and cheap first examination before other techniques. a biopsy by fine needle aspiration, often guided by endoscopic ultrasound, may be used where there is uncertainty over the diagnosis, but a histologic diagnosis is not usually required for removal of the tumor by surgery to go ahead. liver function tests can show a combination of results indicative of bile duct obstruction ( raised conjugated bilirubin, Ξ³ - glutamyl transpeptidase and alkaline phosphatase levels ). ca19 - 9 ( carbohydrate antigen 19. 9 ) is a tumor marker that is frequently elevated in pancreatic cancer. however, it lacks sensitivity and specificity, not least because 5 % of people lack the lewis ( a ) antigen and cannot produce ca19 - 9. it has a sensitivity of 80 % and specificity of 73 % in detecting pancreatic adenocarcinoma, and is used for following known cases rather than diagnosis. histopathology the most common form of pancreatic cancer ( adenocarcinoma ) is typically characterized by moderately to poorly differentiated glandular structures on microscopic examination. there is typically considerable desmopl
' s prcrp, a $ 10 million increase over the previous fiscal year. the year also included the 10th anniversary of pancan's annual advocacy event called, " national pancreatic cancer advocacy day " in washington, d. c., with hundreds of attendees, including survivors and their families, meeting with legislators to ask them to increase federal funding for cancer research. programs and services patient services formerly known as patient and liaison services, and patient central, the call center offers free disease information and services that are personalized for caregivers, patients and healthcare professionals. know your tumor the precision medicine service, initiated in 2014, uses molecular profiling to give doctors and their patients, who qualify for the program, details about the biology of their tumor to help select better treatment options. clinical trial finder launched in 2015, the free online resource offers patients, caregivers and healthcare professionals access to an extensive directory of u. s. pancreatic cancer clinical trials. the tool generates a number of trials that fit a patient's requirements. patient registry developed in 2016, the online directory collects patient experiences to show similarities in the pancreatic cancer journey to advance researcher knowledge about the disease and improve patient therapies. precision promise the initiative began in 2016 as the first precision medicine clinical trial to get new therapies to pancreatic cancer patients faster. survivor & caregiver network the program consists of survivors and caregivers who volunteer across the u. s. to speak with patients, caregivers, family members or friends about the disease. community events and volunteer network pancan has raised over $ 87 million for the pancreatic cancer cause through community events from 2008 to 2016. purplestride the pancreatic cancer action network's main fundraising event is a 5k walk / run called purplestride. as of 2017, the events have involved around 80, 000 people in nearly 60 communities across the united states. purplestride contributors raised $ 11. 8 million in 2017. volunteer affiliates pancan's network of volunteer affiliates organize community events, build relationships with local media and meet with elected officials and businesses to spread awareness and raise funds. 58 affiliates across the u. s. were in operation in
8 hours in the 1970s to 5. 5 hours in the 2000s, and mortality within 30 days or in hospital was only 1 %. another series of 2, 050 operations at the massachusetts general hospital between 1941 and 2011 showed a similar picture of improvement. research directions early - stage research on pancreatic cancer includes studies of genetics and early detection, treatment at different cancer stages, surgical strategies, and targeted therapies, such as inhibition of growth factors, immune therapies, and vaccines. bile acids may have a role in the carcinogenesis of pancreatic cancer. a key question is the timing of events as the disease develops and progresses β particularly the role of diabetes, and how and when the disease spreads. the knowledge that new onset of diabetes can be an early sign of the disease could facilitate timely diagnosis and prevention if a workable screening strategy can be developed. the european registry of hereditary pancreatitis and familial pancreatic cancer ( europac ) trial is aiming to determine whether regular screening is appropriate for people with a family history of the disease. keyhole surgery ( laparoscopy ) rather than whipple's procedure, particularly in terms of recovery time, is being evaluated. irreversible electroporation is a relatively novel ablation technique with potential for downstaging and prolonging survival in persons with locally advanced disease, especially for tumors in proximity to peri - pancreatic vessels without risk of vascular trauma. efforts are underway to develop new drugs, including those targeting molecular mechanisms for cancer onset, stem cells, and cell proliferation. a further approach involves the use of immunotherapy, such as oncolytic viruses. galectin - specific mechanisms of the tumor microenvironment are under study. see also gastrointestinal cancer pancreatic cancer action network ( organization in the us ) lustgarten foundation for pancreatic cancer research ( organization in the us ) list of people diagnosed with pancreatic cancer references external links gastrointestinal cancer cancer digestive system neoplasia wikipedia medicine articles ready to translate
but today the world health organization takes a more broad approach, that the principles of palliative care should be applied as early as possible to any chronic and ultimately fatal illness. palpation β is the process of using one's hands to check the body, especially while perceiving / diagnosing a disease or illness. pancreas β is an organ of the digestive system and endocrine system of vertebrates. in humans, it is located in the abdomen behind the stomach and functions as a gland. the pancreas has both an endocrine and a digestive exocrine function. as an endocrine gland, it functions mostly to regulate blood sugar levels, secreting the hormones insulin, glucagon, somatostatin, and pancreatic polypeptide. as a part of the digestive system, it functions as an exocrine gland secreting pancreatic juice into the duodenum through the pancreatic duct. this juice contains bicarbonate, which neutralizes acid entering the duodenum from the stomach ; and digestive enzymes, which break down carbohydrates, proteins, and fats in food entering the duodenum from the stomach. papillary - in oncology, papillary refers to neoplasms with projections ( " papillae ", from latin,'nipple') that have fibrovascular cores. parasitology β is the study of parasites, their hosts, and the relationship between them. as a biological discipline, the scope of parasitology is not determined by the organism or environment in question but by their way of life. this means it forms a synthesis of other disciplines, and draws on techniques from fields such as cell biology, bioinformatics, biochemistry, molecular biology, immunology, genetics, evolution and ecology. parathyroid glands β are small endocrine glands in the neck of humans and other tetrapods. humans usually have four parathyroid glands, located on the back of the thyroid gland in variable locations. the parathyroid gland produces and secretes parathyroid hormone in response to a low blood calcium, which plays a key
</ctx>
The following are multiple choice questions (with answers) about STEM.
The commonest pancreatic tumor is:
A. Ductal adenocarcinoma
B. Cystadenoma
C. Insulinoma
D. Non islet cell tumor
Answer: | [
"Ductal adenocarcinoma",
"Cystadenoma",
"Insulinoma",
"Non islet cell tumor"
] | A | Most common (M/C) type - Adenocarcinoma
Non-specific tumour marker - CEA
Specific tumour marker - CA 19.9
Chemotherapy regimen - Gemcitabine
M/C site of metastasis - Liver
M/C site for CA pancreas - Head > Body > Tail
Neuroendocrine tumors (NET) of pancreas:
M/C NET of pancreas - Non-functional (PPoma)
M/C functional NET of pancreas - Insulinoma
M/C functional & malignant NET of pancreas - Gastrinoma |
medmcqa_7731 | medmcqa | <ctx>
lacking : the carcinogens are not all mutagens ( hormones... ) ; the target of carcinogens may be the ecm and not the cell ; an ecm exposed to a carcinogen brought into contact with a non - cancerous tissue will cause cancer of this tissue, but not vice versa ; a cancerous tissue in close contact with a normal ecm may become normal tissue again. other authors have shown that it is possible to return to a normal architecture a cancerous tissue when it was taken in charge by an embryonic environment then by somatic tissue. these last examples plead for the reality of the possible reversion of the cancerous to the non - cancerous. finally, more cancers are due to infectious " causes " than to genetic " causes ". these last examples plead for the reality of the possible reversion of the cancerous to the non - cancerous. towards a global approach to cancer any theory of carcinogenesis must explain cancerization since its onset, dysplasia, in situ, then crossing of the basement membrane, the growth of the primary tumor and the appearance of metastases. let us quote dw smithers ( 1962 ) : " cancer is no more a disease of the cells than a traffic jam is a disease of cars ". we therefore see a global approach taking into account both the mechanical and biological signals in this long process that goes from dysplasia to metastases. this new branch of biology has consequences beyond oncology, in embryology, tissue engineering, etc. it is only time for physical oncology to become visible. visible because now integrable into an imagery that can measure mechanical signals, and visible in the scientific field as a full component of carcinogenesis. references annexes related articles mechanobiology external links united states : national cancer institute ( nci ) [ archive ] singapore : mechanobiology institute ( mbi ) [ archive ] β national university of singapore europe : imaging the force of cancer [ archive ], h2020 journal : convergent science physical oncology [ archive ] oncology biophysics
to perform their functions. immunohistochemistry β immunology β is a branch of biology that covers the study of immune systems in all organisms. immunology charts, measures, and contextualizes the physiological functioning of the immune system in states of both health and diseases ; malfunctions of the immune system in immunological disorders ( such as autoimmune diseases, hypersensitivities, immune deficiency, and transplant rejection ) ; and the physical, chemical, and physiological characteristics of the components of the immune system in vitro, in situ, and in vivo. immunology has applications in numerous disciplines of medicine, particularly in the fields of organ transplantation, oncology, rheumatology, virology, bacteriology, parasitology, psychiatry, and dermatology. iliac vein, common β in human anatomy, the common iliac veins are formed by the external iliac veins and internal iliac veins. the left and right common iliac veins come together in the abdomen at the level of the fifth lumbar vertebra, forming the inferior vena cava. they drain blood from the pelvis and lower limbs. both common iliac veins are accompanied along their course by common iliac arteries. iliac vein, deep circumflex β iliac vein, external β iliac vein, internal β index finger β infectious diseases ( medical specialty ) β inferior oblique muscle β inferior thyroid artery β inferior vena cava β influenza β inspection ( medicine ) β integumentary system β intensive care medicine β internal carotid artery β internal iliac vein β internal jugular vein β internal medicine β internship ( medicine ) β interventional cardiology β interventional radiology β ischium β j jaundice - also known as icterus, is a yellowish or greenish pigmentation of the skin and whites of the eyes due to high bilirubin levels. it is commonly associated with itchiness. the feces may be pale and the urine dark. jaundice in babies occurs in over half in the first week following birth and does not pose a serious threat in most. if bilirubin levels in babies are very high
his colleague lynn trainor, goodwin developed a set of mathematical equations describing the changes of both physical boundaries in the organism and chemical gradients. by considering the mechanochemical behaviour of the cortical cytoplasm ( or cytogel ) of plant cells, a viscoelastic material mainly composed of actin microfilaments and reinforced by a microtubules network, goodwin & trainor ( 1985 ) showed how to couple calcium and the mechanical properties of the cytoplasm. the cytogel is treated as a continuous viscoelastic medium in which calcium ions can diffuse and interact with the cytoskeleton. the model consists in two non - linear partial differential equations which describe the evolution of the mechanical strain field and of the calcium distribution in the cytogel. it has been shown ( trainor & goodwin, 1986 ) that, in a range of parameter values, instabilities may occur and develop in this system, leading to intracellular patterns of strain and calcium concentration. the equations read, in their general form : these equations describe the spatiotemporal dynamics of the displacement from the reference state and the calcium concentration, respectively. here x and t are the space and time coordinates, respectively. these equations can be applied to many different scenarios and the different functions p ( x ) introduce the specific mechanical properties of the medium. these equations can generate a rich variety of static and dynamic patterns, from complex geometrical motifs to oscillations and chaos ( briere 1994 ). structuralism he was also a strong advocate of the view that genes cannot fully explain the complexity of biological systems. in that sense, he became one of the strongest defenders of the systems view against reductionism. he suggested that nonlinear phenomena and the fundamental laws defining their behavior were essential to understand biology and its evolutionary paths. his position within evolutionary biology can be defined as a structuralist one. to goodwin, many patterns in nature are a byproduct of constraints imposed by complexity. the limited repertoire of motifs observed in the spatial organization of plants and animals ( at some scales ) would be, in goodwin's opinion, a fingerprint of the role played by such constraints. the role of natural selection would be
, astrop, t. i. a., mcnamara, m. e., webb, s., ito, s., wakamatsu, k., 2019. synchrotron - x - ray absorption spectroscopy of melanosomes in vertebrates and cephalopods : implications for the affinity of tullimonstrum. proceedings b 286, 20191649. doi : https : / / doi. org / 10. 1098 / rspb. 2019. 1649. slater, t. s., mcnamara, m. e., orr, p. j., foley, t. b., ito, s., wakamatsu, k., 2019. taphonomic experiments reveal controls on preservation of melanosomes and keratinous tissues in feathers. palaeontology 63, 103 - 115. doi : https : / / doi. org / 10. 1111 / pala. 12445. rossi, v., mcnamara, m. e., webb, s., ito, s., wakamatsu, k., 2019. tissue - specific geometry and chemistry of modern and fossilized melanosomes reveal internal anatomy of extinct vertebrates. pnas 116, 17880 - 17889. doi : https : / / doi. org / 10. 1073 / pnas. 1820285116. benton, m. j., dhouailly, d., jiang, b., mcnamara, m., 2019. the early origin of feathers. trends in ecology and evolution 34, 856 - 869. doi : https : / / doi. org / 10. 1016 / j. tree. 2019. 04. 018. yang, z. x., jiang, b. y., mcnamara, m. e., kearns, s. l., pittman, m., kaye, t. g., orr, p. j., xu, x., * benton, m. j., 2019. pterosaur integumentary structures with complex feather - like branching. nature ecology and evolution 3, 24 - 30. doi
</ctx>
The following are multiple choice questions (with answers) about STEM.
Reilly bodies are seen in?
A. Gangliosidosis
B. Bechet's disease
C. Gaucher's disease
D. Hurler disease
Answer: | [
"Gangliosidosis",
"Bechet's disease",
"Gaucher's disease",
"Hurler disease"
] | D | Mucopolysaccharidoses (MPSs) are characterized by defective degradation and excessive storage of mucopolysaccharides in various tissues. Hepatosplenomegaly, skeletal deformities, lesions of heart valves, subendothelial arterial deposits, particularly in the coronary arteries, and lesions in the brain, are features that are seen in all of the MPSs. Of the seven recognized variants, only two well-characterized syndromes are discussed briefly here. MPS type I, also known as Hurler syndrome, is caused by a deficiency of α-L-iduronidase. Accumulation of dermatan sulfate and heparan sulfate is seen in cells of the mononuclear phagocyte system, in fibroblasts, and within endothelium and smooth muscle cells of the vascular wall. The affected cells are swollen and have clear cytoplasm, resulting from the accumulation of material positive for periodic acid-Schiff staining within engorged, vacuolated lysosomes. Lysosomal inclusions also are found in neurons, accounting for the mental retardation. MPS type II or Hunter syndrome differs from Hurler syndrome in its mode of inheritance (X-linked), the absence of corneal clouding, and often its milder clinical course. Despite the difference in enzyme deficiency, an accumulation of identical substrates occurs because breakdown of heparan sulfate and dermatan sulfate requires both α-L-iduronidase and the sulfatase; if either one is missing, further degradation is blocked. Diagnosis is made by measuring the level of enzyme in leukocytes. Ref: ROBBINS BASIC PATHOLOGY 10th Ed. pg no: 260 |
medmcqa_3993 | medmcqa | <ctx>
negative out of all the positive predictions. it represents the probability of an erroneous positive prediction, and it is given by the following formula : negative predicted value ( npv ) : the fraction of negative cases which were correctly predicted out of all the negative predictions. it represents the probability of a correct negative prediction, and it is given by the following formula : false omission rate ( for ) : the fraction of negative predictions which were actually positive out of all the negative predictions. it represents the probability of an erroneous negative prediction, and it is given by the following formula : true positive rate ( tpr ) : the fraction of positive cases which were correctly predicted out of all the positive cases. it is usually referred to as sensitivity or recall, and it represents the probability of the positive subjects to be classified correctly as such. it is given by the formula : false negative rate ( fnr ) : the fraction of positive cases which were incorrectly predicted to be negative out of all the positive cases. it represents the probability of the positive subjects to be classified incorrectly as negative ones, and it is given by the formula : true negative rate ( tnr ) : the fraction of negative cases which were correctly predicted out of all the negative cases. it represents the probability of the negative subjects to be classified correctly as such, and it is given by the formula : false positive rate ( fpr ) : the fraction of negative cases which were incorrectly predicted to be positive out of all the negative cases. it represents the probability of the negative subjects to be classified incorrectly as positive ones, and it is given by the formula : the following criteria can be understood as measures of the three general definitions given at the beginning of this section, namely independence, separation and sufficiency. in the table to the right, we can see the relationships between them. to define these measures specifically, we will divide them into three big groups as done in verma et al. : definitions based on a predicted outcome, on predicted and actual outcomes, and definitions based on predicted probabilities and the actual outcome. we will be working with a binary classifier and the following notation : refers to the score given by the classifier, which is the probability of a certain subject
the positive and negative predictive values ( ppv and npv respectively ) are the proportions of positive and negative results in statistics and diagnostic tests that are true positive and true negative results, respectively. the ppv and npv describe the performance of a diagnostic test or other statistical measure. a high result can be interpreted as indicating the accuracy of such a statistic. the ppv and npv are not intrinsic to the test ( as true positive rate and true negative rate are ) ; they depend also on the prevalence. both ppv and npv can be derived using bayes'theorem. although sometimes used synonymously, a positive predictive value generally refers to what is established by control groups, while a post - test probability refers to a probability for an individual. still, if the individual's pre - test probability of the target condition is the same as the prevalence in the control group used to establish the positive predictive value, the two are numerically equal. in information retrieval, the ppv statistic is often called the precision. definition positive predictive value ( ppv ) the positive predictive value ( ppv ), or precision, is defined as where a " true positive " is the event that the test makes a positive prediction, and the subject has a positive result under the gold standard, and a " false positive " is the event that the test makes a positive prediction, and the subject has a negative result under the gold standard. the ideal value of the ppv, with a perfect test, is 1 ( 100 % ), and the worst possible value would be zero. the ppv can also be computed from sensitivity, specificity, and the prevalence of the condition : cf. bayes'theorem the complement of the ppv is the false discovery rate ( fdr ) : negative predictive value ( npv ) the negative predictive value is defined as : where a " true negative " is the event that the test makes a negative prediction, and the subject has a negative result under the gold standard, and a " false negative " is the event that the test makes a negative prediction, and the subject has a positive result under the gold standard. with a perfect test, one which returns no false negative
in statistics, the negative log predictive density ( nlpd ) is a measure of error between a model's predictions and associated true values. a smaller value is better. importantly the nlpd assesses the quality of the model's uncertainty quantification. it is used for both regression and classification. to compute : ( 1 ) find the probabilities given by the model to the true labels. ( 2 ) find the negative log of their product. ( we actually find the negative of the sum of the logs, for numerical reasons ). definition where is the model, are the inputs ( independent variables ) and are the observations outputs ( dependent variable ). example calculating the nlpd for a simple classification example we have a method that classifies images as dogs or cats. importantly it assigns probabilities to the two classes. we show it a picture of three dogs and three cats. it predicts the probability of the first three being dogs as 0. 9, 0. 4 and 0. 7 and of the last three being cats as 0. 8, 0. 4 and 0. 3. the nlpd is 3. 72. comparing to a classifier with better accuracy but overconfident we compare this to another classifier which predicts the first three as being dogs as 0. 95, 0. 98, 0. 02, and the last three being cats as 0. 99, 0. 96, 0. 96. the nlpd for this classifier is 4. 08. the first classifier only guessed half correctly, so did worse on a traditional measure of accuracy ( compared to 5 / 6 for the second classifier ). however it performs better on the metric of nlpd : the second classifier is effectively'confidently wrong'which is penalised heavily by this metric. compared to a very under - confident classifier a third classifier that just predicts 0. 5 for all classes will have an nlpd in this case of 4. 15 : worse than either of the others. usage it is used extensively in probabilistic modelling research. examples include : - candela, joaquin quinonero, et al. " propagation of uncertainty in bayesian kernel models - application to multiple -
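A minimal Python check of the worked values above (the quoted text rounds the last value to 4.15):

import math

def nlpd(probs):
    # negative sum of the logs of the probabilities of the true labels
    return -sum(math.log(p) for p in probs)

print(round(nlpd([0.9, 0.4, 0.7, 0.8, 0.4, 0.3]), 2))        # 3.72
print(round(nlpd([0.95, 0.98, 0.02, 0.99, 0.96, 0.96]), 2))  # 4.08
print(round(nlpd([0.5] * 6), 2))                             # 4.16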
or'negative ', i. e., increasing values of the index correspond to negative variations of the phenomenon ( e. g., poverty ), then is used. in any case, an unbalance among indicators will have a negative effect on the value of the index. ampi given the matrix, we calculate the matrix as follows : where and are the'goalposts'for the indicator, i. e., a minimum and a maximum value that represent the possible range of the indicator for all time periods considered. if the indicator has negative'polarity ', the complement of ( 1 ) with respect to 200 is calculated. to facilitate the interpretation of results, the'goalposts'can be chosen so that 100 represents a reference value ( e. g., the average in a given year ). let and be the minimum and maximum of indicator across all time periods considered, and be the reference value for indicator. then the'goalposts'are defined as :, where denoting with,,, respectively, the mean, standard deviation, and coefficient of variation of the normalized values for unit, the composite index is given by where the sign depends on the kind of phenomenon to be measured. applications the methodology is usually applied to the calculation of both composite indices of'positive'multidimensional phenomena ( the higher the value the better the performance ), such as well - being ( istat, 2015 ), quality of life ( mazziotta and pareto, 2012 ), development ( de muro et al., 2011 ) and infrastructural endowment ( mazziotta and pareto, 2009 ), and for'negative'multidimensional phenomena ( the higher the value the worse the performance ), such as poverty ( de muro et al., 2011 ). references summary statistics
</ctx>
The following are multiple choice questions (with answers) about STEM.
Numerator in negative predictive value -
A. True positive
B. False positive
C. True negative
D. False negative
Answer: | [
"True positive",
"False positive",
"True negative",
"False negative"
] | C | Ans. is 'c' i.e., True negative. The results of a screening test can be arranged in the following table:
Screening test result: Diseased / Not diseased / Total
Positive: True positive (a) / False positive (b) / a + b
Negative: False negative (c) / True negative (d) / c + d
Total: a + c / b + d
From this table the following measures can be obtained:
i) Sensitivity. It is the percentage of diseased people who are diagnosed as having the disease. Total number of diseased people = true positives + false negatives. Number of people (among the diseased) diagnosed as having the disease = true positives. So: Sensitivity = number of true positives / (number of true positives + number of false negatives).
ii) Specificity. It is the percentage of healthy people who are identified as healthy. Total number of healthy people = true negatives + false positives. Number of healthy people identified as healthy = true negatives. So: Specificity = number of true negatives / (number of true negatives + number of false positives).
iii) Predictive value of a positive test (positive predictive value of a screening test). It is the percentage of truly diseased people among those who show positive test results. Total number of people showing positive test results = true positives + false positives. Number of truly diseased people showing a positive test = true positives. So: Positive predictive value = true positives / (true positives + false positives).
iv) Predictive value of a negative test (negative predictive value). It is the percentage of healthy people among those who show negative test results. Total number of people showing negative test results = true negatives + false negatives. Number of healthy people showing a negative test = true negatives. So: Negative predictive value = true negatives / (true negatives + false negatives). |
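The four measures above reduce to simple ratios over the 2x2 cell counts; here is a minimal sketch with made-up counts (the function name and the example numbers are illustrative).

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test measures from the 2x2 table cells
    a = tp, b = fp, c = fn, d = tn."""
    return {
        "sensitivity": tp / (tp + fn),  # diseased correctly detected
        "specificity": tn / (tn + fp),  # healthy correctly cleared
        "ppv": tp / (tp + fp),          # numerator: true positives
        "npv": tn / (tn + fn),          # numerator: true negatives
    }

# illustrative counts: 90 TP, 30 FP, 10 FN, 870 TN
print(screening_metrics(tp=90, fp=30, fn=10, tn=870))
# {'sensitivity': 0.9, 'specificity': ~0.967, 'ppv': 0.75, 'npv': ~0.989}
```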
arc_easy_957 | arc_easy | <ctx>
most protists are single-celled and microscopic. but there are exceptions. some single-celled marine protists are macroscopic. some marine slime molds have unique life cycles that involve switching between unicellular, colonial, and multicellular forms. other marine protists are neither single-celled nor microscopic, such as seaweed. protists have been described as a taxonomic grab bag of misfits where anything that doesn't fit into one of the main biological kingdoms can be placed. some modern authors prefer to exclude multicellular organisms from the traditional definition of a protist, restricting protists to unicellular organisms. this more constrained definition excludes all brown algae, the multicellular red and green algae, and, sometimes, the slime molds (slime molds are excluded when multicellularity is defined as "complex"). background: the ocean represents the largest continuous planetary ecosystem, hosting an enormous variety of organisms, which include microscopic biota such as unicellular eukaryotes (protists). despite their small size, protists play key roles in marine biogeochemical cycles and harbour tremendous evolutionary diversity. notwithstanding their significance for understanding the evolution of life on earth and their role in marine food webs, as well as driving biogeochemical cycles to maintain habitability, little is known about their cell biology, including reproduction, metabolism and signaling. most of the biological knowledge available is based on comparison of proteins from cultured species to homologs in genetically tractable model taxa. a main impediment to understanding the cell biology of these diverse eukaryotes is that protocols for genetic modification are available for only a small number of species that represent neither the most ecologically relevant protists nor the breadth of eukaryotic diversity. even so, in the decade to 2020, genome and transcriptome sequencing initiatives have resulted in nearly 120 million unigenes being identified in protists, which is facilitating the development of genetic tools for model species. trophic modes: protists can be divided broadly into four groups depending on whether their nutrition is plant-like, animal-like, fungal-like, or a
are unicellular protists that exist in both freshwaters and oceans. choanoflagellates have a spherical (or ovoid) cell body and a flagellum that is surrounded by a collar composed of actin microvilli. the flagellum is used to facilitate movement and food intake. as the flagellum beats, it draws water through the microvilli attached to the collar, which helps filter out bacteria and other tiny food particles. choanoflagellates comprise approximately 150 species and reproduce by simple division. choanoflagellate salpingoeca rosetta (also known as choanoflagellate proterospongia): the choanoflagellate salpingoeca rosetta is a rare freshwater eukaryote consisting of a number of cells embedded in a jelly-like matrix. this organism demonstrates a very primitive level of cell differentiation and specialization. this is seen with flagellated cells and their collar structures that move the cell colony through the water, while the amoeboid cells on the inside serve to divide into new cells to assist in colony growth. similar low-level cellular differentiation and specialization can also be seen in sponges. they also have collar cells (also called choanocytes due to their similarity to choanoflagellates) and amoeboid cells arranged in a gelatinous matrix. unlike choanoflagellate salpingoeca rosetta, sponges also have other cell types that can perform different functions (see sponges). also, the collar cells of sponges beat within canals in the sponge body, whereas salpingoeca rosetta's collar cells reside on the outside and it lacks internal canals. despite these minor differences, there is strong evidence that proterospongia and metazoa are highly related. choanoflagellate perplexa: these choanoflagellates are able to attach to one another via the pairing of collar microvilli. choanoflagellates codosiga botrytis and desmerella: these choanoflagellates are capable of forming colonies via fine intercellular bridges that allow the individual
the cell is the basic structural and functional unit of all forms of life. every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, dna and rna, as well as many small molecules of nutrients and metabolites. the term comes from the latin word cellula, meaning 'small room'. cells can acquire specified functions and carry out various tasks such as replication, dna repair, protein synthesis, and motility. cells are capable of specialization and mobility. most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. electron microscopy gives a much higher resolution, showing greatly detailed cell structure. organisms can be classified as unicellular (consisting of a single cell, such as bacteria) or multicellular (including plants and animals). most unicellular organisms are classed as microorganisms. the study of cells and how they work has led to many other studies in related areas of biology, including: the discovery of dna, cancer systems biology, aging and developmental biology. cell biology is the study of cells, which were discovered by robert hooke in 1665, who named them for their resemblance to cells inhabited by christian monks in a monastery. cell theory, first developed in 1839 by matthias jakob schleiden and theodor schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. cells emerged on earth about 4 billion years ago. discovery: with continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. this discovery is largely attributed to robert hooke, and began the scientific study of cells, known as cell biology. when observing a piece of cork under the microscope, he was able to see pores. this was shocking at the time, as it was believed no one else had seen these. to further support his theory, matthias schleiden and theodor schwann also studied cells of both animals and plants. what they discovered were significant differences between the two types of cells. this put forth the idea that cells
microbiology is the scientific study of microorganisms, those being unicellular (single-celled), multicellular (consisting of complex cells), or acellular (lacking cells). microbiology encompasses numerous sub-disciplines including virology, bacteriology, protistology, mycology, immunology, and parasitology. eukaryotic microorganisms possess membrane-bound organelles and include fungi and protists, whereas prokaryotic organisms, all of which are microorganisms, are conventionally classified as lacking membrane-bound organelles and include bacteria and archaea. microbiologists traditionally relied on culture, staining, and microscopy for the isolation and identification of microorganisms. however, less than 1% of the microorganisms present in common environments can be cultured in isolation using current means. with the emergence of biotechnology, microbiologists currently rely on molecular biology tools such as dna sequence-based identification, for example the 16s rrna gene sequence used for bacterial identification. viruses have been variably classified as organisms, as they have been considered either as very simple microorganisms or very complex molecules. prions have never been considered microorganisms, but have been investigated by virologists: as the clinical effects traced to them were originally presumed due to chronic viral infections, virologists took up the search, discovering "infectious proteins". the existence of microorganisms was predicted many centuries before they were first observed, for example by the jains in india and by marcus terentius varro in ancient rome. the first recorded microscope observation was of the fruiting bodies of moulds, by robert hooke in 1666, but the jesuit priest athanasius kircher was likely the first to see microbes, which he mentioned observing in milk and putrid material in 1658. antonie van leeuwenhoek is considered a father of microbiology as he observed and experimented with microscopic organisms in the 1670s, using simple microscopes of his own design. scientific microbiology developed in the 19th century through the work of louis pasteur and in medical microbiology
</ctx>
The following are multiple choice questions (with answers) about STEM.
Which of these is an organism that is unicellular?
A. virus
B. amoeba
C. mushroom
D. ant
Answer: | [
"virus",
"amoeba",
"mushroom",
"ant"
] | B | |
medmcqa_666 | medmcqa | <ctx>
broad ligament of the uterus. for a short part of their course, the uterine arteries travel on top of the ureters. the ureters then pass by the cervix, traveling inward towards the bladder. blood and lymphatic supply: the arteries which supply the ureter vary along its course. the upper third of the ureter, closest to the kidney, is supplied by the renal arteries. the middle part of the ureter is supplied by the common iliac arteries, direct branches from the abdominal aorta, and the gonadal arteries; the gonadal arteries being the testicular artery in men and the ovarian artery in women. the lower third of the ureter, closest to the bladder, is supplied by branches from the internal iliac arteries, mainly the superior and inferior vesical arteries. the arterial supply can be variable; arteries that contribute include the middle rectal artery, branches directly from the aorta, and, in women, the uterine and vaginal arteries. the arteries that supply the ureters end in a network of vessels within the adventitia of the ureters. there are many connections (anastomoses) between the arteries of the ureter, particularly in the adventitia, which means damage to a single vessel does not compromise the blood supply of the ureter. venous drainage mostly parallels that of the arterial supply; that is, it begins as a network of smaller veins in the adventitia, with the renal veins draining the upper ureters, and the vesicular and gonadal veins draining the lower ureters. lymphatic drainage depends on the position of lymphatic vessels in the ureter. lymph collects in submucosal, intramuscular and adventitial lymphatic vessels. those vessels closer to the kidney drain into renal collecting vessels, and from here into the lateral aortic nodes near the gonadal vessels. the middle part of the ureter drains into the right paracaval and interaortocaval nodes on the right, and the left paraaortic nodes on the left. in the lower ureter, lymph may drain into the common iliac lymph nodes
excess fluid back to blood circulation as lymph. the lymphatic system is a subsystem that is essential for the functioning of the blood circulatory system; without it the blood would become depleted of fluid. the lymphatic system also works with the immune system. the circulation of lymph takes much longer than that of blood and, unlike the closed (blood) circulatory system, the lymphatic system is an open system. some sources describe it as a secondary circulatory system. the circulatory system can be affected by many cardiovascular diseases. cardiologists are medical professionals who specialise in the heart, and cardiothoracic surgeons specialise in operating on the heart and its surrounding areas. vascular surgeons focus on disorders of the blood vessels and lymphatic vessels. structure: the circulatory system includes the heart, blood vessels, and blood. the cardiovascular system, in all vertebrates, consists of the heart and blood vessels. the circulatory system is further divided into two major circuits: a pulmonary circulation and a systemic circulation. the pulmonary circulation is a circuit loop from the right heart taking deoxygenated blood to the lungs, where it is oxygenated and returned to the left heart. the systemic circulation is a circuit loop that delivers oxygenated blood from the left heart to the rest of the body, and returns deoxygenated blood back to the right heart via large veins known as the venae cavae. the systemic circulation can also be defined as two parts: a macrocirculation and a microcirculation. an average adult contains five to six quarts (roughly 4.7 to 5.7 liters) of blood, accounting for approximately 7% of their total body weight. blood consists of plasma, red blood cells, white blood cells, and platelets. the digestive system also works with the circulatory system to provide the nutrients the system needs to keep the heart pumping. further circulatory routes are associated, such as the coronary circulation to the heart itself, the cerebral circulation to the brain, renal circulation to the kidneys, and bro
cns injury versus cns homeostasis. by depleting naturally occurring regulatory t cells after cns injury, he was able to improve neuronal survival in mice. however, by up-regulating effector autoimmune t cells through immunization with cns antigen, he was able to improve recovery after cns injury. these results showed that the immune system's intrinsic mechanisms to protect against autoimmunity might not be beneficial when insults demand autoimmune effector function for tissue maintenance. kipnis remained at the weizmann for his postdoctoral training in schwartz's lab. in this period he and other members of the lab discovered that brain antigen-specific t cells play a role in neurogenesis and cognitive functions, such as memory and spatial learning. this was one of the seminal findings showing that the immune system, through t cells, plays a role in cognition and brain homeostasis. career and research: kipnis joined the university of virginia school of medicine (uva) in 2007, where he later became a harrison distinguished professor and chair of the department of neuroscience. he also directed the center for brain immunology and glia (big center) at uva. in 2019, he accepted an offer to join the washington university school of medicine faculty via the bjc investigators program. he is primarily appointed in the department of pathology and immunology, and secondarily in neurology, neuroscience, and neurosurgery. kipnis is also a gutenberg forschungskolleg fellow and supervises a working group at the university of mainz. meningeal lymphatic vessels: kipnis is credited with the 2014 discovery of meningeal lymphatic vessels, a recently discovered network of conventional lymphatic vessels located parallel to the dural sinuses and meningeal arteries of the mammalian central nervous system (cns). as a part of the lymphatic system, the meningeal lymphatics are responsible for draining immune cells, small molecules, and excess fluid from the cns into the deep cervical lymph nodes. while it was initially believed that both the brain and men
to perform their functions.
immunohistochemistry -
immunology - is a branch of biology that covers the study of immune systems in all organisms. immunology charts, measures, and contextualizes the physiological functioning of the immune system in states of both health and disease; malfunctions of the immune system in immunological disorders (such as autoimmune diseases, hypersensitivities, immune deficiency, and transplant rejection); and the physical, chemical, and physiological characteristics of the components of the immune system in vitro, in situ, and in vivo. immunology has applications in numerous disciplines of medicine, particularly in the fields of organ transplantation, oncology, rheumatology, virology, bacteriology, parasitology, psychiatry, and dermatology.
iliac vein, common - in human anatomy, the common iliac veins are formed by the external iliac veins and internal iliac veins. the left and right common iliac veins come together in the abdomen at the level of the fifth lumbar vertebra, forming the inferior vena cava. they drain blood from the pelvis and lower limbs. both common iliac veins are accompanied along their course by common iliac arteries.
iliac vein, deep circumflex - iliac vein, external - iliac vein, internal - index finger - infectious diseases (medical specialty) - inferior oblique muscle - inferior thyroid artery - inferior vena cava - influenza - inspection (medicine) - integumentary system - intensive care medicine - internal carotid artery - internal iliac vein - internal jugular vein - internal medicine - internship (medicine) - interventional cardiology - interventional radiology - ischium
j
jaundice - also known as icterus, is a yellowish or greenish pigmentation of the skin and whites of the eyes due to high bilirubin levels. it is commonly associated with itchiness. the feces may be pale and the urine dark. jaundice in babies occurs in over half in the first week following birth and does not pose a serious threat in most. if bilirubin levels in babies are very high
</ctx>
The following are multiple choice questions (with answers) about STEM.
Terminal axillary draining lymph nodes -
A. Anterior
B. Posterior
C. Scapular
D. Apical
Answer: | [
"Anterior",
"Posterior",
"Scapular",
"Apical"
] | D | Ans. is 'd' i.e., Apical. The apical group of axillary lymph nodes receives lymph from the anterior (pectoral), posterior (subscapular), central and lateral (humeral) groups of axillary lymph nodes. Hence the apical group is the terminal group of axillary lymph nodes.
Axillary lymph node groups and their related vessels:
Anterior (pectoral): along the lateral thoracic vessels
Posterior (scapular): along the subscapular vessels
Lateral: medial to the axillary vein
Apical (terminal or infraclavicular): along the axillary vessels |