**Lattice of subgroups** In mathematics, the lattice of subgroups of a group G is the lattice whose elements are the subgroups of G, with the partial order relation being set inclusion. In this lattice, the join of two subgroups is the subgroup generated by their union, and the meet of two subgroups is their intersection.

Example: The dihedral group Dih4 has ten subgroups, counting itself and the trivial subgroup. Five of the eight group elements generate subgroups of order two, and the other two non-identity elements both generate the same cyclic subgroup of order four. In addition, there are two subgroups of the form Z2 × Z2, generated by pairs of order-two elements. The lattice formed by these ten subgroups is shown in the illustration. This example also shows that the lattice of all subgroups of a group is not a modular lattice in general: this particular lattice contains the forbidden "pentagon" N5 as a sublattice.

Properties: For any subgroups A, B, and C of a group with A ≤ C (A a subgroup of C), AB ∩ C = A(B ∩ C); the multiplication here is the product of subgroups. This property has been called the modular property of groups (Aschbacher 2000) or (Dedekind's) modular law (Robinson 1996, Cohn 2000). Since for two normal subgroups the product is actually the smallest subgroup containing the two, the normal subgroups form a modular lattice. The lattice theorem establishes a Galois connection between the lattice of subgroups of a group and that of its quotients. The Zassenhaus lemma gives an isomorphism between certain combinations of quotients and products in the lattice of subgroups. In general, there is no restriction on the shape of the lattice of subgroups, in the sense that every lattice is isomorphic to a sublattice of the subgroup lattice of some group. Furthermore, every finite lattice is isomorphic to a sublattice of the subgroup lattice of some finite group (Schmidt 1994, p. 9).

Characteristic lattices: Subgroups with certain properties form lattices, but subgroups with other properties do not. Normal subgroups always form a modular lattice. In fact, the essential property that guarantees that the lattice is modular is that subgroups commute with each other, i.e. that they are quasinormal subgroups. Nilpotent normal subgroups form a lattice, which is (part of) the content of Fitting's theorem. In general, for any Fitting class F, both the subnormal F-subgroups and the normal F-subgroups form lattices. This includes the above with F the class of nilpotent groups, as well as other examples such as F the class of solvable groups. A class of groups is called a Fitting class if it is closed under isomorphism, subnormal subgroups, and products of subnormal subgroups. Central subgroups form a lattice. However, neither finite subgroups nor torsion subgroups form a lattice: for instance, the free product Z/2Z ∗ Z/2Z is generated by two torsion elements, but is infinite and contains elements of infinite order. The fact that normal subgroups form a modular lattice is a particular case of a more general result, namely that in any Maltsev variety (of which groups are an example), the lattice of congruences is modular (Kearnes & Kiss 2013).

Characterizing groups by their subgroup lattices: Lattice-theoretic information about the lattice of subgroups can sometimes be used to infer information about the original group, an idea that goes back to the work of Øystein Ore (1937, 1938).
For instance, as Ore proved, a group is locally cyclic if and only if its lattice of subgroups is distributive. If additionally the lattice satisfies the ascending chain condition, then the group is cyclic. The groups whose lattice of subgroups is a complemented lattice are called complemented groups (Zacher 1953), and the groups whose lattice of subgroups is a modular lattice are called Iwasawa groups or modular groups (Iwasawa 1941). Lattice-theoretic characterizations of this type also exist for solvable groups and perfect groups (Suzuki 1951).
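Both the subgroup count for Dih4 and Dedekind's modular law above can be checked by brute force. Here is a short Python sketch (not from the source article; the (rotation, reflection) encoding and all names are our own) that enumerates the subgroups of Dih4 and verifies the modular law for every triple with A ≤ C:

```python
from itertools import combinations, product

# Dih4 (dihedral group of order 8) as pairs (r, s):
# r = rotation mod 4, s = reflection flag.
def mul(a, b):
    r1, s1 = a
    r2, s2 = b
    return ((r1 + (r2 if s1 == 0 else -r2)) % 4, (s1 + s2) % 2)

G = [(r, s) for r in range(4) for s in range(2)]
e = (0, 0)

# In a finite group, a subset containing the identity and closed under
# multiplication is a subgroup.
def is_subgroup(H):
    return e in H and all(mul(x, y) in H for x in H for y in H)

subgroups = [frozenset(c)
             for k in range(1, len(G) + 1)
             for c in combinations(G, k)
             if is_subgroup(frozenset(c))]
print(len(subgroups))  # 10, matching the count given above for Dih4

# Dedekind's modular law: if A <= C, then AB ∩ C = A(B ∩ C),
# where XY denotes the product set {xy : x in X, y in Y}.
def setprod(A, B):
    return frozenset(mul(a, b) for a in A for b in B)

for A, B, C in product(subgroups, repeat=3):
    if A <= C:
        assert setprod(A, B) & C == setprod(A, B & C)
print("Dedekind's modular law holds for all subgroup triples of Dih4")
```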
**Parable** A parable is a succinct, didactic story, in prose or verse, that illustrates one or more instructive lessons or principles. It differs from a fable in that fables employ animals, plants, inanimate objects, or forces of nature as characters, whereas parables have human characters. A parable is a type of metaphorical analogy. Some scholars of the canonical gospels and the New Testament apply the term "parable" only to the parables of Jesus, although that is not a common restriction of the term. Parables such as the parable of the Prodigal Son are important to Jesus's teaching method.

Etymology: The word parable comes from the Greek παραβολή (parabolē), literally "throwing" (bolē) "alongside" (para-), by extension meaning "comparison, illustration, analogy." It was the name given by Greek rhetoricians to an illustration in the form of a brief fictional narrative.

History: The Bible contains numerous parables in the Gospels of the New Testament (Jesus's parables). These are believed by some scholars (such as John P. Meier) to have been inspired by mashalim, a form of Hebrew comparison prominent in the Talmudic period (c. 2nd-6th centuries CE). Examples of Jesus's parables include the Good Samaritan and the Prodigal Son. Mashalim from the Old Testament include the parable of the ewe-lamb (told by Nathan in 2 Samuel 12:1-9) and the parable of the woman of Tekoah (in 2 Samuel 14:1-13). Parables also appear in Islam. In the Sufi tradition, parables are used for imparting lessons and values. Recent authors such as Idries Shah and Anthony de Mello have helped popularize these stories beyond Sufi circles. Modern parables also exist. A mid-19th-century example, the parable of the broken window, criticises a part of economic thinking.

Characteristics: A parable is a short tale that illustrates a universal truth; it is a simple narrative. It sketches a setting, describes an action, and shows the results. It may sometimes be distinguished from similar narrative types, such as the allegory and the apologue. A parable often involves a character who faces a moral dilemma, or one who makes a bad decision and then suffers the unintended consequences. Although the meaning of a parable is often not explicitly stated, it is not intended to be hidden or secret but to be quite straightforward and obvious. The defining characteristic of the parable is the presence of a subtext suggesting how a person should behave or what he should believe. Aside from providing guidance and suggestions for proper conduct in one's life, parables frequently use metaphorical language which allows people to more easily discuss difficult or complex ideas. Parables express an abstract argument by means of a concrete narrative which is easily understood. The allegory is a more general narrative type; it also employs metaphor. Like the parable, the allegory makes a single, unambiguous point, but an allegory may have multiple noncontradictory interpretations and may also have implications that are ambiguous or hard to interpret. As H.W. Fowler put it, the object of both parable and allegory "is to enlighten the hearer by submitting to him a case in which he has apparently no direct concern, and upon which therefore a disinterested judgment may be elicited from him, ..." The parable is more condensed than the allegory: it rests upon a single principle and a single moral, and it is intended that the reader or listener shall conclude that the moral applies equally well to his own concerns.
Parables of Jesus: Medieval interpreters of the Bible often treated Jesus's parables as allegories, with symbolic correspondences found for every element in his parables. But modern scholars, beginning with Adolf Jülicher, regard those interpretations as incorrect. Jülicher viewed some of Jesus's parables as similitudes (extended similes or metaphors) with three parts: a picture part (Bildhälfte), a reality part (Sachhälfte), and a point of comparison (tertium comparationis). Jülicher held that Jesus's parables are intended to make a single important point, and most recent scholarship agrees. Gnostics suggested that Jesus kept some of his teachings secret within the circle of his disciples and that he deliberately obscured their meaning by using parables. For example, in Mark 4:11–12: And he said to them, "To you has been given the secret of the kingdom of God, but for those outside, everything comes in parables; in order that 'they may indeed look, but not perceive, and may indeed listen, but not understand; so that they may not turn again and be forgiven.'" (NRSV) The idea that coded meanings in parables would only become apparent when a listener had been given additional information or initiated into a higher set of teachings is supported by the Epistle of Barnabas, reliably dated between AD 70 and 132: For if I should write to you concerning things immediate or future, ye would not understand them, because they are put in parables. So much then for this. Another important component of the parables of Jesus is their participatory and spontaneous quality. Often, but not always, Jesus creates a parable in response to a question from his listeners or an argument between two opposing views. This style, often associated with the Socratics, endeared Jesus to the populations where he taught.

Quranic parables: The Quran's Q39:28-30 boasts of "every kind of parable in the Quran". The Quranic verses include parables of the good and evil tree (Q14:32-45), of the two men, and of the spider's house. Q16:77 contains the parable of the slave and his master, followed by the parable of the blind man and the sighted.

Other figures of speech: The parable is related to figures of speech such as metaphor and simile. A parable is like a metaphor in that it uses concrete, perceptible phenomena to illustrate abstract ideas. It may be said that a parable is a metaphor that has been extended to form a brief, coherent narrative. A parable also resembles a simile, i.e., a metaphorical construction in which something is said to be "like" something else (e.g., "The just man is like a tree planted by streams of water"). However, unlike the meaning of a simile, a parable's meaning is implicit (although not secret).

Examples:
- Akhfash's goat – a Persian parable
- Hercules at the crossroads – an ancient Greek parable
- Parables by Ignacy Krasicki, from his 1779 book Fables and Parables: Abuzei and Tair; The Blind Man and the Lame; The Drunkard; The Farmer; Son and Father
- The Rooster Prince – a Hasidic parable
**Hoeffding's inequality** In probability theory, Hoeffding's inequality provides an upper bound on the probability that the sum of bounded independent random variables deviates from its expected value by more than a certain amount. Hoeffding's inequality was proven by Wassily Hoeffding in 1963. Hoeffding's inequality is a special case of the Azuma–Hoeffding inequality and McDiarmid's inequality. It is similar to the Chernoff bound, but tends to be less sharp, in particular when the variance of the random variables is small. It is similar to, but incomparable with, one of Bernstein's inequalities.

Statement: Let X_1, ..., X_n be independent random variables such that a_i \le X_i \le b_i almost surely. Consider the sum of these random variables, S_n = X_1 + \cdots + X_n. Then Hoeffding's theorem states that, for all t > 0,

P(S_n - E[S_n] \ge t) \le \exp\left(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right),
P(|S_n - E[S_n]| \ge t) \le 2\exp\left(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right).

Here E[S_n] is the expected value of S_n. Note that the inequalities also hold when the X_i have been obtained using sampling without replacement; in this case the random variables are not independent anymore. A proof of this statement can be found in Hoeffding's paper. For slightly better bounds in the case of sampling without replacement, see for instance the paper by Serfling (1974).

Example: Suppose a_i = 0 and b_i = 1 for all i. This can occur when the X_i are independent Bernoulli random variables, though they need not be identically distributed. Then we get the inequality

P(S_n - E[S_n] \ge t) \le \exp(-2t^2/n)

for all t \ge 0. This is a version of the additive Chernoff bound, which is more general, since it allows for random variables that take values between zero and one, but also weaker, since the Chernoff bound gives a better tail bound when the random variables have small variance.

General case of sub-Gaussian random variables: The proof of Hoeffding's inequality can be generalized to any sub-Gaussian distribution. Recall that a random variable X is called sub-Gaussian if P(|X| \ge t) \le 2e^{-ct^2} for all t \ge 0, for some c > 0. Every bounded variable is sub-Gaussian: if |X| \le T almost surely, then P(|X| \ge t) = 0 for t > T, and for t \le T, taking c = \log(2)/T^2 gives 2e^{-ct^2} \ge 2e^{-cT^2} = 1 \ge P(|X| \ge t). For a random variable X, the following norm is finite if and only if X is sub-Gaussian:

\|X\|_{\psi_2} := \inf\{c \ge 0 : E(e^{X^2/c^2}) \le 2\}.

Then let X_1, ..., X_n be zero-mean independent sub-Gaussian random variables; the general version of Hoeffding's inequality states that

P\left(\left|\sum_{i=1}^n X_i\right| \ge t\right) \le 2\exp\left(-\frac{ct^2}{\sum_{i=1}^n \|X_i\|_{\psi_2}^2}\right),

where c > 0 is an absolute constant.

Proof: The proof of Hoeffding's inequality follows similarly to concentration inequalities like Chernoff bounds. The main difference is the use of Hoeffding's lemma: Suppose X is a real random variable such that X \in [a, b] almost surely. Then

E\left[e^{s(X - E[X])}\right] \le \exp\left(\tfrac{1}{8} s^2 (b - a)^2\right).

Using this lemma, we can prove Hoeffding's inequality. As in the theorem statement, suppose X_1, ..., X_n are n independent random variables such that X_i \in [a_i, b_i] almost surely for all i, and let S_n = X_1 + \cdots + X_n. Then for s, t > 0, Markov's inequality and the independence of the X_i imply:

P(S_n - E[S_n] \ge t) = P\left(e^{s(S_n - E[S_n])} \ge e^{st}\right)
\le e^{-st} E\left[e^{s(S_n - E[S_n])}\right]
= e^{-st} \prod_{i=1}^n E\left[e^{s(X_i - E[X_i])}\right]
\le e^{-st} \prod_{i=1}^n \exp\left(\tfrac{1}{8} s^2 (b_i - a_i)^2\right)
= \exp\left(-st + \tfrac{1}{8} s^2 \sum_{i=1}^n (b_i - a_i)^2\right).

This upper bound is the best for the value of s minimizing the value inside the exponential. This can be done easily by optimizing a quadratic, giving

s = \frac{4t}{\sum_{i=1}^n (b_i - a_i)^2}.

Writing the above bound for this value of s, we get the desired bound:

P(S_n - E[S_n] \ge t) \le \exp\left(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right).
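The two-sided bound can be checked numerically. Here is a small Python simulation (illustrative only; the parameter choices n, p, t, and the trial count are ours) comparing the empirical tail probability of a Bernoulli sum against the bound 2 exp(-2t^2/n) from the statement above:

```python
import math
import random

# Empirical check of Hoeffding's two-sided bound for Bernoulli(p) variables
# (a_i = 0, b_i = 1): P(|S_n - E[S_n]| >= t) <= 2*exp(-2*t^2/n).
def empirical_tail(n: int, p: float, t: float, trials: int = 20000) -> float:
    mean = n * p
    hits = 0
    for _ in range(trials):
        s = sum(random.random() < p for _ in range(n))  # one Bernoulli sum
        if abs(s - mean) >= t:
            hits += 1
    return hits / trials

n, p, t = 100, 0.5, 10.0
bound = 2 * math.exp(-2 * t**2 / n)
print(f"empirical P(|S_n - E[S_n]| >= {t}) ~ {empirical_tail(n, p, t):.4f}")
print(f"Hoeffding bound                   = {bound:.4f}")
```

With these parameters the empirical tail is well below the bound (roughly 0.05 versus about 0.27), consistent with the remark above that the bound is not sharp for small-variance cases.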
Usage: Confidence intervals. Hoeffding's inequality can be used to derive confidence intervals. We consider a coin that shows heads with probability p and tails with probability 1 − p. We toss the coin n times, generating n samples X_1, \ldots, X_n (which are i.i.d. Bernoulli random variables). The expected number of times the coin comes up heads is pn. Furthermore, the probability that the coin comes up heads at least k times can be exactly quantified by the following expression:

P(H(n) \ge k) = \sum_{i=k}^n \binom{n}{i} p^i (1-p)^{n-i},

where H(n) is the number of heads in n coin tosses. When k = (p + \varepsilon)n for some \varepsilon > 0, Hoeffding's inequality bounds this probability by a term that is exponentially small in \varepsilon^2 n:

P(H(n) \ge (p + \varepsilon)n) \le \exp(-2\varepsilon^2 n).

Since this bound holds on both sides of the mean, Hoeffding's inequality implies that the number of heads that we see is concentrated around its mean, with exponentially small tails:

P(H(n) \le (p - \varepsilon)n) \le \exp(-2\varepsilon^2 n).

Thinking of \bar{X} = \frac{1}{n} H(n) as the "observed" mean, this probability can be interpreted as the level of significance \alpha (probability of making an error) for a confidence interval around p of size 2\varepsilon:

\alpha = P(\bar{X} \notin [p - \varepsilon, p + \varepsilon]) \le 2e^{-2\varepsilon^2 n}.

Solving this for the smallest n that guarantees significance level \alpha gives:

n \ge \frac{\log(2/\alpha)}{2\varepsilon^2}.

Therefore, we require at least \log(2/\alpha)/(2\varepsilon^2) samples to acquire a (1 − \alpha)-confidence interval p \pm \varepsilon. Hence, the cost of acquiring the confidence interval is logarithmic in the inverse confidence parameter 1/\alpha and quadratic in the inverse precision 1/\varepsilon. Note that there are more efficient methods of estimating a confidence interval.
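The sample-size bound just derived translates directly into code; a minimal Python sketch (the function name and example values are ours):

```python
import math

# Samples needed for a (1 - alpha) confidence interval p ± eps,
# from n >= log(2/alpha) / (2*eps^2) as derived above.
def hoeffding_sample_size(eps: float, alpha: float) -> int:
    return math.ceil(math.log(2 / alpha) / (2 * eps**2))

# e.g. a 95% confidence interval of half-width 0.05 needs 738 tosses
print(hoeffding_sample_size(eps=0.05, alpha=0.05))  # -> 738
```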
**Amnesty bin** An amnesty bin or amnesty box is a receptacle into which items can be placed without incurring consequences related to those items. Amnesty bins have been used for various items, including drugs, weapons, fruit, invasive species, and animals. A version of an amnesty bin is also used at Amazon warehouses for damaged items.

At music venues: In Europe, drug-related deaths at music festivals present a public health concern. Amnesty bins for drugs at festivals have been proposed as a method of harm reduction; a study in Ireland found that 75% of participants said they would use amnesty bins for drugs if they were part of a drug checking system that would provide alerts about dangerous drugs in circulation. One London dance venue required patrons to place any illegal drugs they possessed into an amnesty bin as of 1999. Items placed into the bin in 1998 and 1999 were analyzed in a 2001 study of illicit drug consumption, in order to determine which street drugs were currently available.

At airports: In Australia, to prevent certain pests and diseases from entering areas within the country, amnesty bins are used as part of the Fruit Fly Exclusion Zone (FFEZ). Travelers to Melbourne from outside the FFEZ are asked to place any fruit they are carrying into an amnesty bin in the airport. In New Zealand airports, amnesty bins coupled with signage about the fines for bringing in invasive species are used to help preserve the biosecurity of the isolated country. Chinese and English signage is used on the bins. The bins and signage are placed by the Ministry for Primary Industries. In the United States, in Chicago in 2020, bright blue amnesty boxes for cannabis disposal were placed outside the security checkpoints at O'Hare International Airport and Midway International Airport. Intended to allow departing travelers to dispose of cannabis, which is legal in Illinois but illegal under federal law, the boxes are owned by the Chicago Department of Aviation and serviced by the Chicago Police Department. In Colorado, at Colorado Springs Airport, amnesty boxes just before the entrance to security allow departing travelers to dispose of cannabis, which is legal in Colorado but illegal on commercial flights in the United States. The boxes have been used to dispose of cannabis edibles, electronic cigarettes, pipes, and concentrate. An additional amnesty box for cannabis is located at Aspen/Pitkin County Airport. Most flights from the airport land at Denver International Airport, where cannabis is banned. Departing travelers at Aspen/Pitkin with cannabis are instructed to either return it to their vehicles or place it in the bin. In Hawaii, amnesty bins at airports are provided for agricultural reasons, intended to prevent the introduction of invasive plants and animals. Arriving passengers, who have already filled out agricultural declaration forms, can place prohibited items in the bins without risking consequences. According to the acting manager of the Plant Quarantine Branch at the Hawaiʻi Department of Agriculture, 60 to 70 pounds (27 to 32 kg) of material are placed in the bins at Daniel K. Inouye International Airport in Honolulu every few days; pest-free plant material can be used as animal feed for confiscated animals at the Department of Agriculture facilities, while contaminated material is destroyed. In 2002, a foot-long ball python was found in one of the airport's amnesty bins. The snake was believed to have been placed into the bin inside an airsickness bag and to have subsequently escaped from the bag, as a torn bag was also found in the bin. It was the first animal ever found in an amnesty bin at the Oʻahu airport. In Las Vegas in 2018, thirteen green amnesty boxes were placed in high-traffic areas of McCarran International Airport (now Harry Reid International Airport) for disposal of cannabis and prescription drugs. Seven more were planned for Henderson Executive Airport, North Las Vegas Airport, and areas of Reid International Airport operated by private companies.

Knife bins: A knife bin is a bin in which people can anonymously dispose of knives, avoiding possible criminal offenses related to knives. One such amnesty bin for knives, located in Hackney, had more than 1,500 weapons placed into it over two years in the early 2010s.

In the Amazon fulfillment process: Fulfillment centers belonging to Amazon use amnesty bins as part of their process. Robotic stowers of incoming items place damaged or unscannable items into amnesty bins rather than bins for sorted items, thereby identifying them as problems to be solved later by a human. For outgoing items, human workers place damaged or unscannable items into amnesty bins for the same reason; robotic pickers for outbound items do the same.
**Entheogen** Entheogens are psychoactive substances that induce alterations in perception, mood, consciousness, cognition, or behavior for the purpose of engendering spiritual development, or otherwise in sacred contexts. Anthropological study has established that entheogens are used for religious, magical, shamanic, or spiritual purposes in many parts of the world. Entheogens have traditionally been used to supplement many diverse practices geared towards achieving transcendence, including divination, meditation, yoga, sensory deprivation, healings, asceticism, prayer, trance, rituals, chanting, imitation of sounds, hymns like peyote songs, drumming, and ecstatic dance. The psychedelic experience is often compared to non-ordinary forms of consciousness such as those experienced in meditation, near-death experiences, and mystical experiences. Ego dissolution is often described as a key feature of the psychedelic experience.

Nomenclature: The neologism entheogen was coined in 1979 by a group of ethnobotanists and scholars of mythology (Carl A. P. Ruck, Jeremy Bigwood, Danny Staples, Richard Evans Schultes, Jonathan Ott and R. Gordon Wasson). The term is derived from two words of Ancient Greek, ἔνθεος (éntheos) and γενέσθαι (genésthai). The adjective entheos translates to English as "full of the god, inspired, possessed," and is the root of the English word "enthusiasm." The Greeks used it as praise for poets and other artists. Genesthai means "to come into being". Thus, an entheogen is a drug that causes one to become inspired or to experience feelings of inspiration, often in a religious or "spiritual" manner. Ruck et al. argued that the term hallucinogen was inappropriate owing to its etymological relationship to words relating to delirium and insanity. The term psychedelic was also seen as problematic, owing to the similarity in sound to words about psychosis and also because it had become irreversibly associated with various connotations of 1960s pop culture. In modern usage, entheogen may be used synonymously with these terms, or it may be chosen to contrast with recreational use of the same drugs. The meaning of the term entheogen was formally defined by Ruck et al.: In a strict sense, only those vision-producing drugs that can be shown to have figured in shamanic or religious rites would be designated entheogens, but in a looser sense, the term could also be applied to other drugs, both natural and artificial, that induce alterations of consciousness similar to those documented for ritual ingestion of traditional entheogens. In 2004, David E. Nichols wrote the following about the nomenclature used for serotonergic hallucinogens: Many different names have been proposed over the years for this drug class. The famous German toxicologist Louis Lewin used the name phantastica earlier in this century, and as we shall see later, such a descriptor is not so farfetched. The most popular names—hallucinogen, psychotomimetic, and psychedelic ("mind manifesting")—have often been used interchangeably. Hallucinogen is now, however, the most common designation in the scientific literature, although it is an inaccurate descriptor of the actual effects of these drugs. In the lay press, the term psychedelic is still the most popular and has held sway for nearly four decades. Most recently, there has been a movement in nonscientific circles to recognize the ability of these substances to provoke mystical experiences and evoke feelings of spiritual significance.
Thus, the term entheogen, derived from the Greek word entheos, which means "god within," was introduced by Ruck et al. and has seen increasing use. This term suggests that these substances reveal or allow a connection to the "divine within." Although it seems unlikely that this name will ever be accepted in formal scientific circles, its use has dramatically increased in popular media and internet sites. Indeed, in much of the counterculture that uses these substances, entheogen has replaced psychedelic as the name of choice, and we may expect to see this trend continue.

History: Entheogens have been used by indigenous peoples for thousands of years. Hemp seeds discovered by archaeologists at Pazyryk suggest early ceremonial practices by the Scythians occurred during the 5th to 2nd century BCE, confirming previous historical reports by Herodotus. Giorgio Samorini has proposed several examples of the cultural use of entheogens that are found in the archaeological record, some conclusions of which have been called into question by R. Gordon Wasson and Erwin Panofsky and other art historians (see the Christianity section, below). According to Ruck, Eyan, and Staples, the familiar shamanic entheogen of which the Eurasians brought knowledge was Amanita muscaria. This fungus could not be cultivated and thus had to be gathered from the wild, making its use compatible with a nomadic lifestyle rather than a settled agricultural one. When they reached the world of the Caucasus and the Aegean, they encountered wine, the entheogen of Dionysus, who brought it with him from his birthplace in the mythical Nysa when he returned to claim his Olympian birthright. The Eastern Mediterraneans "recognized it as the entheogen of Zeus, and their own traditions of shamanism, the Amanita and the 'pressed juice' of Soma – but better, since no longer unpredictable and wild, the way it was found among the Hyperboreans: as befit their own assimilation of agrarian modes of life, the entheogen was now cultivable." Robert Graves, in his foreword to The Greek Myths, hypothesizes that the ambrosia of various pre-Hellenic tribes was Amanita muscaria and perhaps psilocybin mushrooms of the genus Panaeolus. Amanita muscaria was regarded as divine food, according to Ruck and Staples, not something to be indulged in, sampled lightly, or profaned. It was seen as the food of the gods, their ambrosia, and as mediating between the two realms, and it was said that Tantalus's crime was inviting commoners to share his ambrosia.

Uses: Entheogens have been used in various ways, e.g., as part of established religious rituals or as aids for personal spiritual development ("plant teachers").

In religion: Shamans all over the world and in different cultures have traditionally used entheogens, especially psychedelics, for their religious experiences. In these communities, the absorption of drugs leads to dreams (visions) through sensory distortion. The psychedelic experience is often compared to non-ordinary forms of consciousness such as those experienced in meditation and mystical experiences. Ego dissolution is often described as a key feature of the psychedelic experience. Entheogens used in the contemporary world include biota like peyote (Native American Church), extracts like ayahuasca (Santo Daime, União do Vegetal), and synthetic drugs like 2C-B (Sangoma, Nyanga, and Amagqirha). Entheogens also play an important role in contemporary religious movements such as the Rastafari movement.
Hinduism: Bhang is an edible preparation of cannabis native to the Indian subcontinent. It has been used in food and drink as early as 1000 BCE by Hindus in ancient India. The earliest known reports regarding the sacred status of cannabis in the Indian subcontinent come from the Atharva Veda, estimated to have been written sometime around 2000–1400 BCE, which mentions cannabis as one of the "five sacred plants... which release us from anxiety" and says that a guardian angel resides in its leaves. The Vedas also refer to it as a "source of happiness," "joy-giver," and "liberator," and in the Raja Valabba, the gods send hemp to the human race.

Buddhism: It has been suggested that the Amanita muscaria mushroom was used by the Tantric Buddhist mahasiddha tradition of the 8th to 12th century. In the West, some modern Buddhist teachers have written about the usefulness of psychedelics. The Buddhist magazine Tricycle devoted its entire fall 1996 edition to this issue. Some teachers, such as Jack Kornfield, have suggested the possibility that psychedelics could complement Buddhist practice, bring healing, and help people understand their connection with everything, which could lead to compassion. Kornfield warns, however, that addiction can still be a hindrance. Other teachers, such as Michelle McDonald-Smith, expressed views that saw entheogens as not conducive to Buddhist practice ("I don't see them developing anything"). The fifth of the Five Precepts, the ethical code in the Theravada and Mahayana Buddhist traditions, states that adherents must "abstain from fermented and distilled beverages that cause heedlessness." The Pali Canon, the scripture of Theravada Buddhism, depicts refraining from alcohol as essential to moral conduct because intoxication causes a loss of mindfulness. Although the Fifth Precept only names a specific wine and cider, this has traditionally been interpreted to mean all alcoholic beverages.

Judaism: The primary advocate of the religious use of cannabis in early Judaism was Polish anthropologist Sula Benet, who claimed that the plant kaneh bosem קְנֵה-בֹשֶׂם, mentioned five times in the Hebrew Bible and used in the holy anointing oil of the Book of Exodus, was cannabis. According to theories that hold that cannabis was present in Ancient Israelite society, a variant of hashish is held to have been present. In 2020, it was announced that cannabis residue had been found on the Israelite sanctuary altar at Tel Arad, dating to the 8th century BCE of the Kingdom of Judah, suggesting that cannabis was a part of some Israelite rituals at the time. While Benet's conclusion regarding the psychoactive use of cannabis is not universally accepted among Jewish scholars, there is general agreement that cannabis is used in Talmudic sources to refer to hemp fibers, not hashish, as hemp was a vital commodity before linen replaced it. Lexicons of Hebrew and dictionaries of plants of the Bible, such as those by Michael Zohary (1985), Hans Arne Jensen (2004), and James A. Duke (2010), identify the plant in question as either Acorus calamus or Cymbopogon citratus, not cannabis. It has also been suggested by one author that, in modern times, cannabis can be used within Judaism to induce religious experiences.
Christianity: Alcohol was clearly used, historically, in Christian religious ceremonies, namely the Eucharist or Lord's Supper (Communion practices), where Christians consume bread and wine as elements in remembrance of the broken body and shed blood of Jesus Christ. In the Christian biblical writings, which date to antiquity, both the common use of wine and the abuse of alcohol appear as subjects (in the latter case, indicating it was a popular issue requiring address). In many modern contexts of Eucharistic and related practices, in particular those in Protestant congregations, grape juice often substitutes for wine; in Catholic and Orthodox practice, the amount of wine consumed in modern ceremonies is, in any case, far below the levels required for the participant to have an entheogenic experience. Apart from alcohol, Christian denominations otherwise generally disapprove of the use of most drugs used illicitly in pursuit of entheogenic experiences. David Hillman, in a major work on medicative and recreational drug use in Greek and Roman antiquity, suggests that drug use was indeed found in the early history of the Church, although this was a component argument rejected by his doctoral committee and so did not appear in his dissertation. Generally speaking, Michael Winkelman, writing in 2019, argues in support of the role of "psilocybin mushrooms in the ancient evolution of human religions," stating that: This prehistoric mycolatry persisted into the historic era in the major religious traditions of the world, which often left evidence of these practices in sculpture, art, and scriptures. ... But even though new entheogenic combinations were introduced, complex societies generally removed entheogens from widespread consumption, restricted them to private and exclusive spiritual practices of the leaders, and often carried out repressive punishment of those who engaged in entheogenic practices. To date, the question of the extent of entheogen use in practices through Christian history remains poorly considered by academic or independent scholars. The question of whether visionary plants were used in pre-Theodosian Christianity, including by heretical or quasi-Christian groups, was reported by Celdrán and Ruck to be distinct from the extent to which visionary plants were utilized or forgotten in later Christianity. The same is true of the question of possible distinctives of use by groups within orthodox Catholic practice, e.g., elites versus laity (e.g., where Ruck and colleagues suggest use of Datura by early Spanish church elites, based on appearances of the plant of origin in images). In 2001, classical philologist Mark Hoffman and colleagues suggested widespread use of "visionary" (entheogenic) plants in cultures surrounding the emerging Christian movement, and so too among early Christians, with a gradual reduction of the use of entheogens in Christianity over history.
Jan Irvin and Michael Hoffman—the former including both authors in their publication, the latter omitting the former in his—argued strongly in 2007 in The Journal of Higher Criticism for revision of the classical rejection of a strong entheogenic presence in Christian art and history, stating: It is most remarkable that none of these scholars–Ramsbottom, Panofsky, Wasson, or Allegro–explicitly consider and address the question, "What was the extent of entheogen use throughout Christian history and in the surrounding cultural context?" Wasson and Allegro share the unexamined and untested assumption that while entheogen use was the original inspiration for religions, it was vanishingly rare in Christianity and the surrounding culture. These authors go on to state their perspective, at odds with Panofsky, Wasson, and others, and argue that the opposing scholars' "unjustified combination of premises has resulted in a standoff of positions", and that their scholarly opponents' historic objection to entheogenic arguments "all share the same shaky foundation, producing inconsistencies and self-contradictions." The response follows from the appearance, in R. Gordon Wasson's book Soma, of a letter from art historian Erwin Panofsky (and arguments surrounding the same) that communicates that art scholars were aware of "mushroom trees" in "Romanesque and early Gothic art", but that "the plant [in the fresco in question had] nothing whatever to do with mushrooms", going on to state that "the similarity with Amanita muscaria is purely fortuitous". Panofsky continues: The Plaincourault fresco is only one example - and, since the style is provincial, a particularly deceptive one—of a conventionalized tree type, prevalent in Romanesque and early Gothic art, which art historians actually refer to as a 'mushroom tree' or in German, Pilzbaum. It comes about by the gradual schematization of the impressionistically rendered Italian pine tree in Roman and early Christian painting, and there are hundreds of instances exemplifying this development—unknown of course to mycologists. ... What the mycologists have overlooked is that the medieval artists hardly ever worked from nature but from classical prototypes which in the course of repeated copying became quite unrecognizable.

Peyotism: The Native American Church (NAC) is also known as Peyotism and Peyote Religion. Peyotism is a Native American religion characterized by mixed traditional as well as Protestant beliefs and by sacramental use of the entheogen peyote. The Peyote Way Church of God believes that "Peyote is a holy sacrament when taken according to our sacramental procedure and combined with a holistic lifestyle."

Santo Daime: Santo Daime is a syncretic religion founded in the 1930s in the Brazilian Amazonian state of Acre by Raimundo Irineu Serra, known as Mestre Irineu. Santo Daime incorporates elements of several religious or spiritual traditions, including Folk Catholicism, Kardecist Spiritism, African animism and indigenous South American shamanism, including vegetalismo. Ceremonies – trabalhos (Brazilian Portuguese for "works") – are typically several hours long and are undertaken sitting in silent "concentration," or sung collectively, dancing according to simple steps in geometrical formation. Ayahuasca, referred to as Daime within the practice, which contains several psychoactive compounds, is drunk as part of the ceremony.
The drinking of Daime can induce a strong emetic effect which is embraced as both emotional and physical purging.

União do Vegetal: União do Vegetal (UDV) is a religious society founded on July 22, 1961, by José Gabriel da Costa, known as Mestre Gabriel. The translation of União do Vegetal is Union of the Plants, referring to the sacrament of the UDV, Hoasca tea (also known as ayahuasca). This beverage is made by boiling two plants, Mariri (Banisteriopsis caapi) and Chacrona (Psychotria viridis), both of which are native to the Amazon rainforest. In its sessions, UDV members drink Hoasca tea for the effect of mental concentration. In Brazil, the use of Hoasca in religious rituals was regulated by the Brazilian Federal Government's National Drug Policy Council on January 25, 2010. The policy established legal norms for the religious institutions that responsibly use this tea. In 2006, in the case of Gonzales v. O Centro Espírita Beneficente União do Vegetal, the Supreme Court of the United States unanimously affirmed the UDV's right to use Hoasca tea in its religious sessions in the United States.

By region: Africa. The best-known entheogen-using culture of Africa is the Bwitists, who used a preparation of the root bark of Tabernanthe iboga. Although the ancient Egyptians may have been using the sacred blue lily plant in some of their religious rituals or just symbolically, it has been suggested that Egyptian religion once revolved around the ritualistic ingestion of the far more psychoactive Psilocybe cubensis mushroom, and that the Egyptian White Crown, Triple Crown, and Atef Crown were designed to represent pin-stages of this mushroom. There is also evidence for the use of psilocybin mushrooms in Ivory Coast. Numerous other plants used in shamanic ritual in Africa, such as Silene capensis, sacred to the Xhosa, are yet to be investigated by western science. A recent revitalization has occurred in the study of southern African psychoactives and entheogens (Mitchell and Hudson 2004; Sobiecki 2002, 2008, 2012). Among the amaXhosa, the artificial drug 2C-B is used as an entheogen by traditional healers or amagqirha over their traditional plants; they refer to the chemical as Ubulawu Nomathotholo, which roughly translates to "Medicine of the Singing Ancestors".

Americas: Entheogens have played a pivotal role in the spiritual practices of most American cultures for millennia. The first American entheogen to be subject to scientific analysis was the peyote cactus (Lophophora williamsii). One of the founders of modern ethnobotany, Richard Evans Schultes of Harvard University, documented the ritual use of peyote cactus among the Kiowa, who live in what became Oklahoma. While it was used traditionally by many cultures of what is now Mexico, in the 19th century its use spread throughout North America, replacing the toxic mescal bean (Calia secundiflora). Other well-known entheogens used by Mexican cultures include the alcoholic Aztec sacrament pulque, ritual tobacco (known as "picietl" to the Aztecs and "sikar" to the Maya, from which the word "cigar" derives), psilocybin mushrooms, morning glories (Ipomoea tricolor and Turbina corymbosa), and Salvia divinorum. Datura wrightii is sacred to some Native Americans and has been used in ceremonies and rites of passage by Chumash, Tongva, and others. Among the Chumash, when a boy was 8 years old, his mother would give him a preparation of momoy to drink.
This spiritual challenge was supposed to help the boy develop the spiritual wellbeing required to become a man. Not all of the boys undergoing this ritual survived. Momoy was also used to enhance spiritual wellbeing among adults. For instance, during a frightening situation, such as when seeing a coyote walk like a man, a leaf of momoy was sucked to help keep the soul in the body. The mescal bean Sophora secundiflora was used by the shamanic hunter-gatherer cultures of the Great Plains region. Other plants with ritual significance in North American shamanism are the hallucinogenic seeds of the Texas buckeye and jimsonweed (Datura stramonium). Paleoethnobotanical evidence for these plants from archaeological sites shows they were used thousands of years ago. In South America there is a long tradition of using the mescaline-containing cactus Echinopsis pachanoi. Archaeological studies have found evidence of use going back to the pre-Columbian era, to the Moche, Nazca, and Chavín cultures.

Eurasia: In the mountains of western China, significant traces of THC, the compound responsible for cannabis's psychoactive effects, have been found in wooden bowls, or braziers, excavated from a 2,500-year-old cemetery. John Marco Allegro argued that early Jewish and Christian cultic practice was based on the use of Amanita muscaria, which was later forgotten by its adherents, but this view has been widely disputed. In 440 BCE, Herodotus, in Book IV of the Histories, documents that the Scythians inhaled cannabis in funeral ceremonies, stating they "take some of this hemp-seed, and … throw it upon the red hot stones" and when it released a vapor, the "Scyths, delighted, shout[ed] for joy." A theory that naturally occurring gases like ethylene, used by inhalation, may have played a role in divinatory ceremonies at Delphi in Classical Greece received popular press attention in the early 2000s, yet has not been conclusively proven. Mushroom consumption is part of the culture of Eurasia in general, with particular importance to Slavic and Baltic peoples. Some academics argue that the use of psilocybin- and/or muscimol-containing mushrooms was an integral part of the ancient culture of the Rus' people.

Oceania: There are no known uses of entheogens by the Māori of New Zealand aside from a variant species of kava, although some modern scholars have claimed that there may be evidence of psilocybin mushroom use. Natives of Papua New Guinea are known to use several species of entheogenic mushrooms (Psilocybe spp., Boletus manicus). Kava or kava kava (Piper methysticum) has been cultivated for at least 3,000 years by a number of Pacific island-dwelling peoples. Historically, most Polynesian, many Melanesian, and some Micronesian cultures have ingested the psychoactive pulverized root, typically taking it mixed with water. In these traditions, taking kava is believed to facilitate contact with the spirits of the dead, especially relatives and ancestors.

Research: Notable early testing of the entheogenic experience includes the Marsh Chapel Experiment, conducted by physician and theology doctoral candidate Walter Pahnke under the supervision of psychologist Timothy Leary and the Harvard Psilocybin Project.
In this double-blind experiment, volunteer graduate school divinity students from the Boston area almost all claimed to have had profound religious experiences subsequent to the ingestion of pure psilocybin. Beginning in 2006, experiments have been conducted at Johns Hopkins University showing that under controlled conditions psilocybin causes mystical experiences in most participants, and that participants rank the personal and spiritual meaningfulness of the experiences very highly. Except in Mexico, research with psychedelics is limited due to ongoing widespread drug prohibition. The amount of peer-reviewed research on psychedelics has accordingly been limited due to the difficulty of getting approval from institutional review boards. Furthermore, scientific studies on entheogens present some significant challenges to investigators, including philosophical questions relating to ontology, epistemology, and objectivity.

Legal status: Some countries have legislation that allows for traditional entheogen use.

Australia: Between 2011 and 2012, the Australian Federal Government was considering changes to the Australian Criminal Code that would classify any plants containing any amount of DMT as "controlled plants". DMT itself was already controlled under existing laws. The proposed changes included other similar blanket bans for other substances, such as a ban on any and all plants containing mescaline or ephedrine. The proposal was not pursued after political embarrassment on realisation that this would make the official floral emblem of Australia, Acacia pycnantha (golden wattle), illegal. The Therapeutic Goods Administration and federal authority had considered a motion to ban the same, but this was withdrawn in May 2012 (as DMT may still hold potential entheogenic value to native or religious peoples).

United States: In 1963, in Sherbert v. Verner, the Supreme Court established the Sherbert Test, which consists of four criteria that are used to determine if an individual's right to religious free exercise has been violated by the government. The test is as follows: For the individual, the court must determine whether the person has a claim involving a sincere religious belief, and whether the government action is a substantial burden on the person's ability to act on that belief. If these two elements are established, then the government must prove that it is acting in furtherance of a "compelling state interest", and that it has pursued that interest in the manner least restrictive, or least burdensome, to religion. This test was eventually all but eliminated in Employment Division v. Smith, 494 U.S. 872 (1990), which held that a "neutral law of general applicability" was not subject to the test. Congress resurrected it for the purposes of federal law in the federal Religious Freedom Restoration Act (RFRA) of 1993. In City of Boerne v. Flores, 521 U.S. 507 (1997), RFRA was held to trespass on state sovereignty, and application of the RFRA was essentially limited to federal law enforcement. In Gonzales v. O Centro Espírita Beneficente União do Vegetal, 546 U.S. 418 (2006), a case involving only federal law, RFRA was held to permit a church's use of a DMT-containing tea for religious ceremonies. Some states have enacted State Religious Freedom Restoration Acts intended to mirror the federal RFRA's protections. Peyote is listed by the United States DEA as a Schedule I controlled substance.
However, practitioners of the Peyote Way Church of God, a Native American religion, perceive the regulations regarding the use of peyote as discriminatory, leading to religious discrimination issues regarding U.S. policy towards drugs. Following Peyote Way Church of God, Inc. v. Thornburgh, amendments to the American Indian Religious Freedom Act of 1978 were passed in 1994. This federal statute allows the "Traditional Indian religious use of the peyote sacrament", exempting only use by Native American persons.

In literature: Many works of literature have described entheogen use; some of those are:
- The drug melange (spice) in Frank Herbert's Dune universe acts as both an entheogen (in large enough quantities) and an addictive geriatric medicine. Control of the supply of melange was crucial to the Empire, as it was necessary for, among other things, faster-than-light (folding space) navigation.
- Consumption of the imaginary mushroom anochi [enoki] as the entheogen underlying the creation of Christianity is the premise of Philip K. Dick's last novel, The Transmigration of Timothy Archer, a theme that seems to be inspired by John Allegro's book.
- Aldous Huxley's final novel, Island (1962), depicted a fictional psychoactive mushroom – termed "moksha medicine" – used by the people of Pala in rites of passage, such as the transition to adulthood and at the end of life.
- Bruce Sterling's novel Holy Fire refers to a future religion arising as a result of entheogens, used freely by the population.
- In Stephen King's The Dark Tower: The Gunslinger, Book 1 of The Dark Tower series, the main character receives guidance after taking mescaline.
- The Alastair Reynolds novel Absolution Gap features a moon under the control of a religious government that uses neurological viruses to induce religious faith.

A critical examination of the ethical and societal implications and relevance of "entheogenic" experiences can be found in Daniel Waterman and Casey William Hardison's book Entheogens, Society & Law: Towards a Politics of Consciousness, Autonomy and Responsibility (Melrose, Oxford 2013). This book includes a controversial analysis of the term entheogen, arguing that Wasson et al. were mystifying the effects of the plants and traditions to which it refers.
**BeIA** BeIA, or BeOS for Internet Appliances, was a minimized version of Be Inc.'s BeOS operating system for embedded systems. The BeIA system presents a browser-based interface to the user. The browser was based on the Opera 4.0 code base, though in most builds it featured a built-in dashboard (as on the Sony eVilla), and was named Wagner. Unlike BeOS, which runs the Tracker and Deskbar at boot-up, BeIA boots straight into the browser interface (on the Compaq IA-1, for example), much as the later Google ChromeOS does with the Google Chrome browser. While it is possible to boot BeIA into an interface similar to the standard BeOS, doing so involves special knowledge.

BeIA compression techniques: The BeIA operating system employs a number of techniques to minimise the system footprint. These involve a number of preprocessing steps which yield an installable file system image. The CFS filesystem was used to reduce the file system size. CFS (Compressed File System) was a file system created in-house at Be Inc. that aimed to compress the files within itself to save space. The filesystem had a similar set of properties to the native BeOS file system BFS, but some of the more advanced features (live queries and attributes) were either broken or non-functional in many of the beta releases of the software. BeOS uses ELF format executable files, as do many other operating systems. BeIA uses an extended version of ELF, the official name of which is unknown but which has come to be known as CELF, from the CEL magic word within the executable header and the fact that it is derived from ELF format executables through a compression process. The CELF (Compressed ELF) files use a patented technique to compress the opcodes within the executable and reduce the overall footprint of each executable file. The file was compressed by creating a set of dictionaries that contain the opcodes; these are read by the kernel at start-up and mapped into the executable in memory at run time. This makes the file fast-loading, but has an extreme disadvantage: the dictionary is not extendible by the user, so adding an extra executable was not possible under CELF compression unless the executable's symbols already existed within the dictionary. The creation of CELF executables is generally done in batch: the entire system is compressed and a file system image created from the crushed files. "Crushing" was the term coined for the compression of the system into CELF format. BeIA can run either CELF or ELF binaries, but only one of the two file formats at a time.

Versions: The following BeIA versions were released to developers at stages of the development of the system:
- Pre-1.0 build (reports itself as 4.5.2, likely a hangover from BeOS version numbering, predating BeIA's own numbering)
- 1.0 beta 9 (uncrushed binaries are compatible with the Release Candidate)
- 1.0 Release Candidate (circa the clipper)
- 1.0
- 2.0

History: BeIA is believed by many to be partially responsible for the death of Be, Inc., as sales were never anywhere near as high as anticipated. During 2001, a Zanussi "internet fridge" toured the US with a BeIA-powered DT-300 webpad docked in its door.

List of BeIA devices:
- Sony eVilla - sold as a home web terminal with BeIA preloaded
- Compaq IA-1 - sold with either BeIA or MSN Companion
- HARP - not a computer, but a standard for audio streaming terminals, used by Virgin in some of their stores
- Proview iPAD (PI-520B)
- DT Research DT-300 (NB. the DT-325 was used with later 2.0 betas)
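The actual CELF format is patented and undocumented, so no faithful implementation can be given here. The following Python sketch only illustrates the general dictionary-compression idea the paragraphs above describe: a fixed dictionary built in batch over the whole system, executables stored as dictionary indices, and "crushing" failing for code sequences absent from the dictionary. All names, the chunk width, and the dictionary size are hypothetical:

```python
from collections import Counter

# Toy illustration of dictionary-based "crushing" in the spirit of CELF.
# A fixed dictionary of common byte sequences is built once over the whole
# system image; each executable is then stored as one dictionary index per
# chunk. As described above, a new executable can only be crushed if its
# sequences already exist in the dictionary.

def build_dictionary(executables: list[bytes], width: int = 2) -> list[bytes]:
    counts = Counter()
    for code in executables:
        for i in range(0, len(code) - width + 1, width):
            counts[code[i:i + width]] += 1
    return [seq for seq, _ in counts.most_common(256)]  # one-byte indices

def crush(code: bytes, dictionary: list[bytes], width: int = 2) -> bytes:
    index = {seq: i for i, seq in enumerate(dictionary)}
    out = bytearray()
    for i in range(0, len(code), width):
        chunk = code[i:i + width]
        if chunk not in index:
            raise ValueError("sequence not in dictionary; cannot crush")
        out.append(index[chunk])
    return bytes(out)

def expand(crushed: bytes, dictionary: list[bytes]) -> bytes:
    # Stands in for the kernel mapping the dictionary back in at run time.
    return b"".join(dictionary[b] for b in crushed)

system = [b"\x55\x89\xe5\x5d\xc3\x90", b"\x55\x89\xe5\x90\x5d\xc3"]
d = build_dictionary(system)
crushed = crush(system[0], d)
assert expand(crushed, d) == system[0]
print(f"{len(system[0])} bytes -> {len(crushed)} bytes")
```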
**Pleaching** Pleaching or plashing is a technique of interweaving living and dead branches through a hedge, creating a fence, hedge, or lattice. Trees are planted in lines, and the branches are woven together to strengthen and fill any weak spots until the hedge thickens. Branches in close contact may grow together, due to a natural phenomenon called inosculation, a natural graft. Pleach also means the weaving of thin, whippy stems of trees to form a basketry effect.

History: Pleaching or plashing (an early synonym) was common in gardens from late medieval times to the early eighteenth century, to create shaded paths or a living fence out of trees or shrubs. Commonly, deciduous trees were used, planted in lines; the canopy was pruned into flat planes, with the lower branches removed to leave the stems below clear. This craft had been developed by European farmers, who used it to make their hedgerows more secure. Julius Caesar (circa 60 B.C.) states that the Gallic tribe of the Nervii used plashing to create defensive barriers against cavalry. In hedge laying, this technique can be used to improve or renew a quickset hedge to form a thick, impenetrable barrier suitable for enclosing animals. It keeps the lower parts of a hedge thick and dense, and was traditionally done every few years. The stems of hedging plants are slashed through to the centre or more, then bent over and interwoven. The plants rapidly regrow, forming a dense barrier along the hedge's entire length. In garden design, the same technique has produced elaborate structures, neatly shaded walks, and allées. This was not much seen in the American colonies, where a labor-intensive aesthetic has not been a feature of gardening: "Because of the time needed in caring for pleached allées," Donald Wyman noted, "they are but infrequently seen in American gardens, but are frequently observed in Europe." After the second quarter of the eighteenth century, the technique withdrew to the kitchen garden, and the word dropped out of English usage, until Sir Walter Scott reintroduced it for local colour in The Fortunes of Nigel (1822). After the middle of the nineteenth century, English landowners were once again planting avenues, often shading the sweeping curves of a drive, but sometimes straight allées of pleached limes, such as Rowland Egerton's at Arley Hall, Cheshire, which survive in splendidly controlled form. In Much Ado About Nothing, Antonio reports (I.ii.8ff) that the Prince and Count Claudio were "walking in a thick pleached alley in my orchard." A modern version of such free-standing pleached fruit trees is sometimes called a "Belgian fence": young fruit trees pruned to four or six wide Y-shaped crotches, in the candelabra-form espalier called a palmette verrier, are planted at close intervals, about two metres apart, and their branches are bound together to make a diagonal lattice; a regimen of severe seasonal pruning, lashing of young growth to straight sticks, and binding of the joints repeats the pattern. Smooth-barked trees such as lime (linden) or hornbeam were most often used in pleaching. A sunken parterre surrounded on three sides by pleached allées of laburnum is a feature of the Queen's Garden, Kew, laid out in 1969 to complement the seventeenth-century Anglo-Dutch architecture of Kew Palace.
A pleached hornbeam hedge about three metres high is a feature of the replanted town garden at Rubens House, Antwerp, recreated from Rubens' painting The Walk in the Garden and from seventeenth-century engravings. In the gardens of André Le Nôtre and his followers, pleaching kept the vistas of straight rides through woodland cleanly bordered. At Studley Royal, Yorkshire, the avenues began to be pleached once again, as an experiment in restoration, in 1972. Pleaching in art: The word pleach has been used to describe the art form of tree shaping, or one of its techniques. Pleaching here describes the weaving of branches into houses, furniture, ladders and many other three-dimensional forms. Examples of living pleached structures include Richard Reames's red alder bench and Axel Erlandson's sycamore tower. There are also conceptual ideas such as the Fab Tree Hab.
**Colpocleisis** Colpocleisis: Colpocleisis is a procedure involving closure of the vagina. Colpocleisis: It is used to treat vaginal prolapse. In older women who are no longer sexually active, a simple procedure for reducing prolapse is a partial colpocleisis. The procedure was described by Le Fort and involves the removal of a strip of anterior and posterior vaginal wall, with closure of the margins of the anterior and posterior walls to each other. This procedure may be performed whether or not the uterus and cervix are present. When it is completed, a small vaginal canal exists on either side of the septum produced by the suturing of the lateral margins of the excision.
**Vinyl composition tile** Vinyl composition tile: Vinyl composition tile (VCT) is a finished flooring material used primarily in commercial and institutional applications. Modern vinyl floor tiles and sheet flooring, and versions of those products sold since the early 1980s, are composed of colored polyvinyl chloride (PVC) chips formed into solid sheets of varying thicknesses (1⁄8 in or 3.2 mm is most common) by heat and pressure. Floor tiles are cut into modular shapes such as 12-by-12-inch (300 mm × 300 mm) squares or 12-by-24-inch (300 mm × 610 mm) rectangles. During installation, the floor tiles or sheet flooring are applied to a smooth, leveled sub-floor using a specially formulated vinyl adhesive or tile mastic that remains pliable. In commercial applications, some tiles are typically waxed and buffed using special materials and equipment. Vinyl composition tile: Modern vinyl floor tile is frequently chosen for high-traffic areas because of its low cost, durability, and ease of maintenance. Vinyl tiles have high resilience to abrasion and impact damage and can be repeatedly refinished with chemical strippers and mechanical buffing equipment. If properly installed, tiles can be easily removed and replaced when damaged. Tiles are available in a variety of colors from several major flooring manufacturers. Some manufacturers have created vinyl tiles that very closely resemble wood, stone, terrazzo, and concrete, in hundreds of varying patterns. History: In 1894, Philadelphia architect Frank Furness patented a system for rubber floor tiles. These tiles were durable, sound-deadening, easy to clean and easy to install. However, they stained easily and deteriorated over time from exposure to oxygen, ozone and solvents, and were not suitable for use in basements where alkaline moisture was present. History: In 1926, Waldo Semon, working in the United States, invented plasticized polyvinyl chloride. Polyvinyl chloride (PVC) is a plastic containing carbon, hydrogen and chlorine. It is produced by polymerisation: molecules of vinyl chloride monomer combine into long-chain molecules of polyvinyl chloride. PVC-based floor covering, commonly known as vinyl, made its big splash when a vinyl composition tile was displayed at the Century of Progress Exposition in Chicago. Because of the scarcity of vinyl during the war years, vinyl flooring was not widely marketed until the late 1940s, but it eventually became the most popular choice for flooring in just about any hard-surface application. History: Luxury vinyl tile. Luxury vinyl tile (LVT) is an industry term, not a standard, for vinyl that realistically mimics the appearance of natural materials, with an added layer to improve wear and performance. The extra layer of protection is usually a heavy film covered with a UV-cured urethane that makes it scuff-, stain- and scratch-resistant. Sometimes the term "luxury vinyl tile" is reserved for products that mimic stone and ceramic, whereas "luxury vinyl plank" is used for products that mimic wood. History: PVC tiles. Polyvinyl chloride (PVC) tiles are a commonly used floor finish made from polyvinyl chloride. Due to the small size of the tiles, usually 150 mm (6"), 225 mm (9") and 305 mm (12"), any damage can soon be repaired by replacing individual tiles (as long as some spares are kept). The tiles are made of a composite of PVC and fibre, producing a thin and fairly hard tile. History: PVC tiles are prone to some issues.
The glues used on self-adhesive tiles sometimes give way, causing edges to lift and be broken by foot traffic. The surface wears over time, first making cleaning difficult and then causing loss of the coloured pattern layer. Finally, a very smooth sub-floor is required to lay them on; otherwise the tiles gradually become cut by foot pressure from above pressing them against irregularities in the sub-floor below. History: The main advantages of PVC tiles are low cost, ease of replacing individual tiles, and the fact that the tiles can be laid in short work sessions. In fact, a DIYer with assorted ten-minute slots in otherwise busy days would have enough time to lay a floor gradually, and thus could avoid professional installation costs. History: Asbestos health risk. Vinyl tiles manufactured in the 20th century frequently contain asbestos fibers; such tiles are today referred to as vinyl-asbestos tiles (VAT). Asbestos fibers were added to vinyl tiles for their outstanding insulative properties, a desirable attribute in regions with cold winter weather. They also improved the tensile strength of vinyl tiles, increasing their service life. An ASTM procedural textbook states that 42% of commercial and public buildings contain vinyl-asbestos floor tiles, and 9×9 inch tiles almost always contain asbestos. Although vinyl-asbestos floor tiles are not particularly dangerous when properly installed and undamaged, they should only be removed by trained professionals, as there is a significant health risk posed by toxic dust dispersal during removal. Uncertified individuals should never attempt to remove such floor tiles and should always have their composition determined by a licensed professional. History: A New York State ELAP certification manual states that "polarized light microscopy is not consistently reliable in detecting asbestos in floor coverings and similar non-friable organically bound materials".
**End system** End system: In networking jargon, a computer, phone, or internet-of-things device connected to a computer network is sometimes referred to as an end system or end station, because it sits at the edge of the network. The end user directly interacts with an end system that provides information or services. End systems that are connected to the Internet are also referred to as internet hosts, because they host (run) internet applications such as a web browser or an email retrieval program. The Internet's end systems include some computers with which the end user does not directly interact, such as mail servers, web servers, and database servers. With the emergence of the internet of things, household items (such as toasters and refrigerators), as well as portable, handheld computers and digital cameras, are all being connected to the internet as end systems. End system: End systems are generally connected to each other using switching devices known as routers, rather than by a single communication link. The path that transmitted information takes from the sending end system, through a series of communication links and routers, to the receiving end system is known as a route or path through the network. The routes in the sending and receiving directions can differ, and a route can be reallocated during transmission due to changes in the network topology. Normally the cheapest or fastest route is chosen, as sketched below; for the end user, the actual routing should be completely transparent.
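To make the "cheapest route" idea concrete, here is a minimal sketch of Dijkstra's shortest-path algorithm, one standard way such a route can be computed. The topology, router names and link costs below are invented for the example:

```python
import heapq

def cheapest_route(graph, source, dest):
    """Dijkstra's algorithm: returns (total_cost, path) for the cheapest route."""
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dest:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical topology: end systems A and B joined by routers R1..R3.
graph = {
    "A":  {"R1": 1, "R2": 4},
    "R1": {"R2": 1, "R3": 5},
    "R2": {"R3": 1},
    "R3": {"B": 1},
}
print(cheapest_route(graph, "A", "B"))  # (4, ['A', 'R1', 'R2', 'R3', 'B'])
```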
**ONIOM** ONIOM: The ONIOM method (short for 'Our own N-layered Integrated molecular Orbital and Molecular mechanics') is a computational approach developed by Morokuma and co-workers. ONIOM is a hybrid method that enables different ab initio, semi-empirical, or molecular mechanics methods to be applied to different parts of a molecule or system in combination, producing reliable geometries and energies at reduced computational cost. The ONIOM computational approach has been found particularly useful for modeling biomolecular systems as well as transition metal complexes and catalysts. Codes that support ONIOM include Gaussian, NWChem, and ORCA.
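The standard two-layer ONIOM energy extrapolation combines three calculations: a low-level energy of the full ("real") system, a high-level energy of the chemically important ("model") region, and a low-level energy of that same model region. A minimal sketch follows; the numerical energies are invented placeholders, not outputs of any real quantum chemistry code:

```python
# Two-layer ONIOM energy extrapolation:
#   E(ONIOM2) = E_low(real) + E_high(model) - E_low(model)
# The three energies would normally come from separate ab initio,
# semi-empirical, or molecular-mechanics runs; here they are placeholders.

def oniom2_energy(e_low_real: float, e_high_model: float, e_low_model: float) -> float:
    """Extrapolated total energy: high-level accuracy on the model region
    combined with a low-level description of the rest of the system."""
    return e_low_real + e_high_model - e_low_model

# Hypothetical numbers (hartrees) from three independent calculations:
e_low_real = -1250.342   # e.g. molecular mechanics on the whole enzyme
e_high_model = -456.789  # e.g. DFT on the active site
e_low_model = -455.901   # molecular mechanics on the active site only
print(oniom2_energy(e_low_real, e_high_model, e_low_model))  # -1251.230
```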
**Liljequist parhelion** Liljequist parhelion: A Liljequist parhelion is a rare halo, an optical phenomenon in the form of a brightened spot on the parhelic circle approximately 150–160° from the sun; i.e., between the position of the 120° parhelion and the anthelion. Liljequist parhelion: When the sun touches the horizon, a Liljequist parhelion is located approximately 160° from the sun and is about 10° long. As the sun rises to 30°, the phenomenon gradually moves towards 150°, and when the sun exceeds 30° the optical effect vanishes. The parhelia are caused by light rays passing through oriented plate crystals. Like the 120° parhelia, the Liljequist parhelia display a white-bluish colour. This colour is, however, associated with the parhelic circle itself, not with the ice crystals causing the Liljequist parhelia. Liljequist parhelion: The phenomenon was first observed by Gösta Hjalmar Liljequist in 1951 at Maudheim, Antarctica, during the Norwegian–British–Swedish Antarctic Expedition of 1949–1952. It was then simulated by Dr. Eberhard Tränkle (1937–1997) and Robert Greenler in 1987 and theoretically explained by Walter Tape in 1994. A theoretical and experimental investigation of the Liljequist parhelion caused by perfect hexagonal plate crystals showed that the azimuthal position of maximum intensity occurs at θ = 2 arccos(n′ sin(π/3 − α_TIR)), where the refractive index n′ to use for the angle α_TIR = arcsin(1/n′) of total internal reflection is Bravais' index for inclined rays, i.e. n′(e) = √(n² − sin²(e))/cos(e) for a solar elevation e. For ice at zero solar elevation this angle is 153°. The dispersion of ice causes a variation of this angle, leading to a blueish/cyan coloring close to this azimuthal coordinate. The halo ends towards the anthelion at an angle θ_L2max = π − 2 arcsin(sin(π/3 − α_TIR)) = π/3 + 2α_TIR (about 159.5° for ice at zero solar elevation).
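As a quick numerical check of the maximum-intensity formula above, the following sketch evaluates it for ice (n ≈ 1.31) at zero solar elevation, where Bravais' index reduces to the ordinary refractive index:

```python
import math

n = 1.31                         # refractive index of ice (zero solar elevation)
alpha_tir = math.asin(1.0 / n)   # angle of total internal reflection
theta_max = 2 * math.acos(n * math.sin(math.pi / 3 - alpha_tir))
print(math.degrees(theta_max))   # ~153 degrees, matching the stated value
```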
**Screen scroll centrifuge** Screen scroll centrifuge: The screen/scroll centrifuge is a filtering or screen centrifuge, also known as a worm screen or conveyor discharge centrifuge. It was first introduced in the middle of the 19th century. After decades of further development, it is now one of the most widely used devices in many industries for separating crystalline, granular or fibrous materials from a solid-liquid mixture, and it also serves to dry the solid material. It is most frequently seen in the coal preparation industry, and is also found in the chemical, environmental, food and other mining and mineral fields. Fundamentals: The screen scroll centrifuge is a filtering centrifuge which separates solids and liquid from a solid-liquid mixture. This type of centrifuge is commonly operated as a continuous process, in which slurry containing both solid and liquid is continuously fed into and continuously discharged from the centrifuge. The basic principle is that the entering feed is separated into liquid and solids as two products. The feed is transported from the small to the larger diameter end of a frustoconical basket by the inclination of the screen basket and the slightly different speed of the scraper worm. The solid material retained on the screen is moved along the cone by an internal screw conveyor, while the liquid output is obtained as centrifugal force drives the feed slurry through the screen openings. A screen scroll centrifuge may rotate in either a horizontal or a vertical position. Range of applications: The screen scroll centrifuge is used in numerous process engineering industries. One of the most notable applications is in the coal preparation industry. It is also employed in the dewatering of potash, gilsonite and various sands, and in salt processing. It is likewise designed for use in the food processing industry, for instance in dairy production and in cocoa butter equivalents and other confectionery fats. Designs available: Screen scroll centrifuges, also known as worm screen or conveyor discharge centrifuges, move the solids along the cone through an internal screw conveyor. The conveyor spins at a speed slightly different from that of the conical screen, and centrifugal forces of approximately 1800g to 2600g facilitate reasonable throughputs. Some screen scroll centrifuges are available with up to four separate stages for improved performance: the first stage de-liquors the feed, followed by a washing stage, with the final stage used for drying. In an advanced four-stage screen scroll centrifuge, two separate washes are employed in order to segregate the wash liquors. The two most common types used in industrial applications are the vertical and the horizontal screen/scroll centrifuge. Designs available: Vertical screen scroll centrifuge. The vertical screen scroll centrifuge is built with the main components of screen, scroll, basket, housing and helical screw. Feed containing liquid and solid materials is introduced into the centrifuge from the top and is accelerated centrifugally by the rotating parts it contacts.
Centrifugal force then slings liquid through the openings, while solids are retained on the screen surface, either because the granular particles are larger than the screen pores or because of agglomeration. The movement of solids across the screen surface is controlled by flights. Liquid that has passed through the screen is collected and discharged through an effluent outlet at the side of the machine, while solids collected on the screen fall by gravity through the bottom discharge. Available vertical screen scroll centrifuges include the CMI model EBR and CMI model EBW, manufactured by Centrifugal & Mechanical Industries (CMI); the former can dewater coarser particles ranging from 1.5 in to 28 mesh, whereas the latter can dewater finer particles ranging from 1 mm to 150 mesh. Designs available: Horizontal screen scroll centrifuge. Similar to the vertical machine, a horizontal screen scroll centrifuge is constructed of several main parts: screen, scroll, basket, housing and helical screw. The screen and the frustoconical basket are assembled into the housing on a horizontal axis. Inside the frustoconical structure there is a tubular wall, and inside the tubular wall a cylinder carrying the helical screw flights. The tubular wall rotates at a slightly different angular speed from the helical screw. The solid-liquid mixture is fed into the closed rearward portion of the scroll. The rotation of the scroll, screen and basket allows the liquid to pass through the openings in the screen (via centrifugal force). The remaining solids are separated according to size owing to the difference in angular velocity between the helical screw and the basket. The helical screw pushes the solid material towards the forward end of the scroll for discharge. The processing time depends on the helical screw pitch and the angular velocity difference; it may also be influenced by the design of the scroll feed opening. The exiting solid particles are usually collected via a conveyor in the collection unit. Main process characteristics and its assessment: The performance and output efficiency of the screen scroll centrifuge can be affected by several factors, such as particle size and feed concentration, feed flow rate, and the screen mesh size of the centrifuge. Main process characteristics and its assessment: Particle size and feed solids. Particle size in the feed is one of the most important parameters to take into account, since the choice of slot and screen hole sizes, or indeed of process type, depends on the feed contents. Non-uniform particle sizes in the feed can cause partial blockage of the screen, as small solids block the holes alongside normal and larger particles; liquid then flows over the screen instead of passing through it. Good, reasonable results therefore require a higher solids content in the feed, normally greater than 15% and up to 60% w/w. The feed flow rate can be monitored to mitigate this setback. Another possible method is to pre-treat the feed, for example by filtration; the particle size can then be analysed and an appropriate screen size selected. However, this increases the total operating cost.
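The g-levels quoted in the design descriptions above (roughly 1800g to 2600g) follow from the basic relation a = ω²r between rotational speed and centripetal acceleration. A small sketch follows; the basket radius and rotational speed are invented, illustrative values:

```python
import math

def relative_centrifugal_force(rpm: float, radius_m: float) -> float:
    """Centripetal acceleration at the basket wall, in multiples of g."""
    omega = 2 * math.pi * rpm / 60.0   # angular velocity in rad/s
    return omega ** 2 * radius_m / 9.81

# Hypothetical basket: 0.25 m radius spinning at 3000 rpm.
print(round(relative_centrifugal_force(3000, 0.25)))  # ~2515 g
```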
Main process characteristics and its assessment: The typical operating ranges of particle size and feed concentration for screen scroll centrifuges are 100–20,000 µm and 3–90% solids by mass in the feed. In general, slot and screen hole sizes range from 40 to 200 µm, with open areas of 5–15%. Nevertheless, recent products are claimed to handle particle sizes as low as 50 µm. Screens are generally metallic foil or wedge wire, and more recently metallic and composite screens perforated by micro-waterjet cutting. Main process characteristics and its assessment: Feed flow rate. As mentioned in the previous section, the feed flow rate is one of the crucial parameters to control for highly efficient output, as centrifuge performance is sensitive to it. Even though increasing the feed flow rate can prevent blocking of the screens, it results in wetter solids. This is because a higher feed rate increases the hydraulic load on the centrifuge while the differential rotation speed between the cone and the scroll, and the retention time within the dewatering zone of the basket, remain fixed. In addition, a higher feed rate increases the effective thickness of the bed dragged down by the scroll. Main process characteristics and its assessment: Basket geometry and its material. The choice of construction materials and the design of the main components of the centrifuge, such as the screen plate, helical screw and basket, can extend the service life of the machine. Another important factor is the conical basket size and its angle: different basket sizes and angles between basket and helical screw vary the angular speed and thus affect product quality. The shape of the helical screw is also important, since it optimizes the transport of the cake. A selection of typical screen scroll centrifuges with different basket sizes found on the market is presented in Table 1. The helical scroll and conical basket sections are commonly built at angles of 10°, 15° and 20°. Table 1: A selection of screen scroll centrifuge sizes. Advantages and limitations over competitive processes: The screen scroll centrifuge has the advantage of a driven helical scroll conveyor running at a small differential speed relative to the conical basket. The conveyor controls the transport of the incoming feed, allowing the residence time of the solids in the basket to be increased for enhanced process performance. Moreover, the helical conveyor and conical basket sections are built at a set angle (10°, 15° and 20° being common) such that solid particles are dragged by the conveyor along the cone towards the discharge point. As a result, the solids do not form an even layer but pile up in triangular sections in front of the conveyor blades. The residence time within a screen scroll centrifuge is typically about 4 to 15 seconds, longer than in a simpler conical basket centrifuge; this permits sufficient interaction time between wash liquids and cake. However, the presence of the conveyor causes crystal breakage and abrasion problems, as well as the formation of an uneven solids layer, which can lead to poor washing.
This can be controlled by the conveyor speed. TEMA, a centrifuge specialist, claims that a horizontal screen scroll centrifuge can achieve an overall recovery of fines of up to 99%, combined with very low product moisture, and recommends operating with a feed containing more than 40% solids and a minimum particle size of 100 µm for the best results. The horizontal orientation is also more economical, since its capacity is 40% more tonnage than a vertical machine of the same size for the same energy cost. In addition, maintenance of the horizontal screen scroll centrifuge is straightforward, since total disassembly is not needed. Nowadays, screen scroll centrifuges are equipped with CIP (clean-in-place) systems for self-cleaning within the centrifuge. Advantages and limitations over competitive processes: On the other hand, a downside is possible blockage of the screen when the feed slurry contains small crystals alongside large and normal ones. This renders the screen less permeable, so that liquid flows over the screen rather than passing through the mesh. This problem can, however, be overcome by reducing the feed flow rate. Possible heuristics to be used during design of the process: The basket, helical screw, screen filter and other parts are designed to match the process input and a specified performance. Most of the parts are made from metal so as to withstand the separation process. A bigger bowl can hold more input but may also increase the processing and residence time. The helical screw is designed to hold and move the particles so as to control the cake movement, and the screen filter to separate the particles from the water. The cleaning process for this type of machine can be difficult compared with other separator designs; the design is therefore usually optimized for low maintenance and provided with good sealing to prevent leaks and mechanical failure. Necessary post-treatment systems: After liquid is removed from the slurry to form a cake of solids in the centrifuge, further treatment is required to dry the solids completely. Drying is the most common process used in industry; another post-treatment option is a further stage of deliquoring. New development: The modern screen/scroll centrifuge has been modified in several ways from the original design. A long-life parts package reduces sliding abrasion in the feed zone by using a cone cap to deflect the feed input from the top, and the mechanics of the process have been optimized to achieve better products. New screens have become available that are perforated with a micro-waterjet process; these offer significantly greater product recovery combined with drier output. This manufacturing process also allows screens to be made from extremely abrasion-resistant materials, such as tungsten-carbide composites, for very high-wear applications such as coal. New development: An ultrafine screening modification allows particles down to 50 micrometres to be handled; the modification is made through the screen filter, which can produce higher solids recovery. Other developments on the screen scroll centrifuge are tight sealing, the ability to work in continuous mode, minimal power consumption, low-friction gearing, and low-maintenance design.
All of these modifications are made to ensure the safety of the process, lower power consumption, and ease of maintenance.
**DelNS1-2019-nCoV-RBD-OPT** DelNS1-2019-nCoV-RBD-OPT: DelNS1-2019-nCoV-RBD-OPT is a COVID-19 vaccine candidate developed by Beijing Wantai Biological, Xiamen University and the University of Hong Kong. On 14 December 2022, the vaccine was listed by the National Health Commission of China as a second booster dose option for people who had completed a third dose of an inactivated COVID-19 vaccine 6 months or more earlier.
**Urogenital peritoneum** Urogenital peritoneum: The urogenital peritoneum is a portion of the posterior abdominal peritoneum found below the linea terminalis. It includes the broad ligament of the uterus.
**OMII-UK** OMII-UK: OMII-UK is an open-source software organisation for the UK research community. OMII-UK has a number of roles within that community: helping new users get started with e-research, providing the software that is needed, and developing that software if it does not exist. OMII-UK also helps to guide the development of e-research by liaising with national and international organisations, e-research groups, standards groups, and the researchers themselves. Funding: OMII-UK is funded by the Engineering and Physical Sciences Research Council (EPSRC) and Jisc. Project partners: OMII-UK is a collaboration between three bodies: the School of Electronics and Computer Science at the University of Southampton; the Open Grid Services Architecture (OGSA)-DAI project at the National e-Science Centre and EPCC; and the myGrid project at the School of Computer Science at the University of Manchester. Project history: The OMII (Open Middleware Infrastructure Institute) started at the University of Southampton in January 2004. In January 2006, the Southampton group joined forces with the established myGrid and OGSA-DAI projects to form OMII-UK, an integral part of the UK e-Science programme.
**Obsidian (software)** Obsidian (software): Obsidian is a personal knowledge base and note-taking application that operates on Markdown files. It allows users to make internal links between notes and to visualize the connections as a graph. It is designed to help users organize and structure their thoughts and knowledge in a flexible, non-linear way. The software is free for personal use, with commercial licenses available for a fee. History: Obsidian was created by Shida Li and Erica Xu while quarantining during the COVID-19 pandemic. Li and Xu, who had met while studying at the University of Waterloo, had already collaborated on several development projects. Obsidian was initially released on 30 March 2020. Version 1.0.0 was released in October 2022; version 1.1, which added the Canvas core plugin, followed in December 2022. Features: Obsidian is built on Electron. It is a cross-platform application that runs on Windows, Linux, and macOS, as well as on mobile operating systems such as Android and iOS. There is no web-based version of the software. Obsidian can be customized with plugins, which are also accessible from the mobile app and which let users extend the software's functionality with additional features or integrations with other tools. Obsidian differentiates between core plugins, which are released and maintained by the Obsidian team, and community plugins, which are contributed by users. Examples of community plugins include a Kanban-style task board and a calendar widget. Obsidian operates on a folder of text documents: each new note generates a new text document, and all the documents can be searched from within the app. Obsidian allows internal linking between notes and creates an interactive graph that visualizes the relationships between them. Text formatting in Obsidian is achieved through Markdown, and Obsidian allows instantaneous previewing of the formatted text. Obsidian's customer support is accessible only through email; the developers, however, host an Internet forum and a Discord channel where users can exchange solutions and ideas. Obsidian Sync: Obsidian Sync is a note-synchronization service offered by the app's developers. It includes end-to-end encryption and version history, and works across all supported devices.
**Oil purification** Oil purification: Oil purification (of transformer, turbine, industrial and other oils) removes oil contaminants in order to prolong oil service life. Contaminants of industrial oils: Contaminants and various impurities get into industrial oils during storage and operation. The most common contaminants are: water; solid particles; gases; asphalt-resinous paraffin deposits; acids; oil sludge; organometallic compounds; unsaturated hydrocarbons; polyaromatic hydrocarbons; additive remains; and products of oil decomposition. Methods of oil purification: Industrial oils are purified through sedimentation, filtration, centrifugation, vacuum treatment and adsorption purification. Sedimentation is the precipitation of solid particles and water to the bottom of oil tanks under gravity; its main drawback is how long it takes (a settling-velocity estimate is sketched below). Filtration is the partial removal of solid particles through a filter medium; oil filtration systems generally use multistage filtration with coarse and fine filters. Centrifugation is the separation of oil and water, or oil and solid particles, by centrifugal forces. Methods of oil purification: Vacuum treatment degasses and dehydrates industrial oil. This method is well suited for removing dispersed and dissolved water, as well as dissolved gases. Adsorption purification, in contrast to the methods mentioned above, does not remove solid particles and gases, but it shows good results at removing water, oil sludge and aging products. This process uses adsorbents of natural or artificial origin: bleaching clays, synthetic aluminosilicates, silica gels, zeolites, etc. The difference between purification and regeneration of industrial oil: The terms "oil purification" and "oil regeneration" are often used synonymously, although they are not the same. Oil purification cleans oil of contaminants; it can be used independently or as part of oil regeneration. Oil regeneration also removes aging products (with the help of adsorbents) and stabilizes the oil with additives. Regenerated oil is clean of carcinogenic oil-aging products and stabilized with additives.
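To see why plain sedimentation is so slow, one can estimate a contaminant particle's terminal settling velocity with Stokes' law, v = d²(ρp − ρf)g/(18μ). The sketch below uses invented but plausible values for a small water droplet in oil:

```python
# Stokes' law settling velocity: v = d^2 * (rho_p - rho_f) * g / (18 * mu)
# Illustrative, assumed values only (not measured data).
d = 10e-6        # droplet diameter, m (10 micrometres)
rho_p = 1000.0   # density of the water contaminant, kg/m^3
rho_f = 880.0    # density of the oil, kg/m^3 (assumed)
mu = 0.02        # dynamic viscosity of the oil, Pa*s (assumed)
g = 9.81         # gravitational acceleration, m/s^2

v = d**2 * (rho_p - rho_f) * g / (18 * mu)
print(v)               # ~3.3e-07 m/s
print(0.1 / v / 3600)  # ~85 hours to settle just 0.1 m
```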
**Baseball metaphors for sex** Baseball metaphors for sex: In American slang, baseball metaphors for sex are often used as euphemisms for the degree of physical intimacy achieved in sexual encounters or relationships. In the metaphor, first prevalent in the aftermath of World War II, sexual activities are described as if they are actions in a game of baseball. Baseball has also served as the context for metaphors about sexual roles and identity. Running the bases: Among the most commonly used metaphors is the progress of a batter and base-runner in describing levels of physical intimacy (traditionally from a heterosexual perspective). Definitions vary, but the following are typical usages of the terms: Strikeout – a failure to engage in any form of foreplay or other sexual activity; First base – mouth-to-mouth kissing, especially French kissing; Second base – skin-to-skin touching/kissing of the breasts; in some contexts, it may instead refer to touching any erogenous zones through the clothes (i.e., not actually touching the skin); Third base – touching below the waist (without sexual intercourse) or manual stimulation of the genitals; in some contexts, it may instead refer to oral stimulation of the genitals; Home run (home base or scoring) – "full" (penetrative) sexual intercourseThe metaphors are found variously in popular American culture, with one well-known example in the Meat Loaf song "Paradise by the Dashboard Light", which describes a young couple "making out", with a voice-over commentary of a portion of a baseball game, as a metaphor for the couple's activities. A similar example can be found in Billy Joel's song "Zanzibar" in which he compares himself to Pete Rose and sings the lines, "Me, I'm trying just to get to second base and I'd steal it if she only gave the sign. She's gonna give the go ahead, the inning isn't over yet for me." Trace Adkins's 2006 song "Swing" is based on the same concept. Running the bases: Baseball positions are used as a coded reference to the roles played by men who have sex with men: Pitcher – the penetrative partner in anal sex Catcher – the receptive partner in anal sexSimilar metaphors for sexual identity include: Switch hitter – a bisexual individual, referencing a player who can bat from either side Playing for the other team also Batting for the other team – indicating a person is gay or lesbian Playing for both teams also Batting for both teams – indicating a person is bisexual Views: The sequence of "running the bases" is often regarded as a script, or pattern, for young people who are experimenting with sexual relationships. The script may have slightly changed since the 1960s. Kohl and Francoeur state that with the growing emphasis in the 1990s on safe sex to expand sex beyond heterosexual penetrative intercourse, the "home run" has taken on the additional dimension of oral sex. Richters and Rissel conversely state that "third base" is now sometimes considered to comprise oral sex as part of the accepted pattern of activities, as a precursor to "full" (i.e. penetrative) sex. The use of baseball as a sexual script in general, regardless of what each base signifies, has been critiqued by sexuality educators for misrepresenting sex as a contest with a winner and loser. 
Deborah Roffman writes that the baseball metaphor has been "insidiously powerful, singularly effective, and very efficient…as a vehicle for transmitting and transferring to successive generations of young people all that is wrong and unhealthy about American sexual attitudes."There are conflicting perspectives on the use of the baseball metaphor as a part of sex education. Some educators have found the baseball metaphor an effective instructional tool when providing sex education to middle school students. Supporters of baseball metaphors in sex education include Leman and Bell. In their book A Chicken's Guide to Talking Turkey With Your Kids About Sex, they use a baseball metaphor to aid parents in the discussion of puberty with their children, dividing the topics into "first base" ("Changes from the neck up"), "second base" ("Changes from the neck to the waist"), "third base" ("Changes from the waist down"), and "home plate" ("The Big 'It'"). Others argue that the baseball metaphor reflects U.S. ideas about sex as a contest to be won, rather than a mutual and consensual activity. These critiques suggest that other metaphors might be more useful for explaining sexual consent and pleasure. Alternative metaphors and a critique of the baseball metaphor are offered in the sex education materials provided by Scarleteen.
**Classical XY model** Classical XY model: The classical XY model (sometimes also called the classical rotor (rotator) model or O(2) model) is a lattice model of statistical mechanics. In general, the XY model can be seen as a specialization of Stanley's n-vector model for n = 2. Definition: Given a D-dimensional lattice Λ, for each lattice site j ∈ Λ there is a two-dimensional, unit-length vector sj = (cos θj, sin θj). A spin configuration s = (sj)j∈Λ is an assignment of an angle −π < θj ≤ π to each j ∈ Λ. Given a translation-invariant interaction Jij = J(i − j) and a point-dependent external field hj = (hj, 0), the configuration energy is H(s) = −∑i,j Jij si·sj − ∑j hj·sj = −∑i,j Jij cos(θi − θj) − ∑j hj cos θj. The case in which Jij = 0 except for nearest-neighbor pairs i, j is called the nearest-neighbor case. The configuration probability is given by the Boltzmann distribution with inverse temperature β ≥ 0: P(s) = e^(−βH(s))/Z, where Z = ∫[−π,π]^Λ ∏j∈Λ dθj e^(−βH(s)) is the normalization, or partition function. The notation ⟨A(s)⟩ indicates the expectation of the random variable A(s) in the infinite-volume limit, after periodic boundary conditions have been imposed. Rigorous results: The existence of the thermodynamic limit for the free energy and spin correlations was proved by Ginibre, extending the Griffiths inequality to this case. Rigorous results: Using the Griffiths inequality in the formulation of Ginibre, Aizenman and Simon proved that the two-point spin correlation of the ferromagnetic XY model in dimension D, with coupling J > 0 and inverse temperature β, is dominated by (i.e. has an upper bound given by) the two-point correlation of the ferromagnetic Ising model in dimension D, coupling J > 0 and inverse temperature β/2. Hence the critical β of the XY model cannot be smaller than twice the critical inverse temperature of the Ising model. One dimension: As in any 'nearest-neighbor' n-vector model with free (non-periodic) boundary conditions, if the external field is zero there exists a simple exact solution. In the free-boundary case the Hamiltonian is H(s) = −J[cos(θ1 − θ2) + ⋯ + cos(θL−1 − θL)], therefore the partition function factorizes under the change of coordinates θ′j = θj+1 − θj. This gives Z = ∫ dθ1 ∏j dθ′j e^(βJ cos θ′j) = 2π [2π I0(βJ)]^(L−1), where I0 is the modified Bessel function of the first kind. The partition function can be used to find several important thermodynamic quantities. For example, in the thermodynamic limit (L → ∞), the free energy per spin is f(β, h = 0) = −(1/β) ln[2π I0(βJ)]. Using the properties of the modified Bessel functions, the specific heat (per spin) can be expressed as c/kB = K²(1 − μ/K − μ²), where K = J/kBT and μ is the short-range correlation function, μ = ⟨cos(θj+1 − θj)⟩ = I1(K)/I0(K). Even in the thermodynamic limit, there is no divergence in the specific heat. Indeed, like the one-dimensional Ising model, the one-dimensional XY model has no phase transition at finite temperature. Rigorous results: The same computation for periodic boundary conditions (and still h = 0) requires the transfer-matrix formalism, though the result is the same. Rigorous results: This transfer-matrix approach is also required when using free boundary conditions but with an applied field h ≠ 0. If the applied field h is small enough that it can be treated as a perturbation to the system in zero field, then the magnetic susceptibility χ ≡ ∂M/∂h can be estimated. This is done by using the eigenstates computed by the transfer-matrix approach and computing the energy shift with second-order perturbation theory, then comparing with the free-energy expansion F = F0 − (1/2)χh². One finds χ(T) = (C/T)(1 + μ)/(1 − μ), where C is the Curie constant (a value typically associated with the susceptibility in magnetic materials).
This expression is also true for the one-dimensional Ising model, with the replacement μ → tanh K. Two dimensions: The two-dimensional XY model with nearest-neighbor interactions is an example of a two-dimensional system with continuous symmetry that does not have long-range order, as required by the Mermin–Wagner theorem. Likewise, there is no conventional phase transition that would be associated with symmetry breaking. However, as will be discussed later, the system does show signs of a transition from a disordered high-temperature state to a quasi-ordered state below some critical temperature, called the Kosterlitz–Thouless transition. In the case of a discrete lattice of spins, the two-dimensional XY model can be evaluated using the transfer-matrix approach, reducing the model to an eigenvalue problem and utilizing the largest eigenvalue of the transfer matrix. Though the exact solution is intractable, certain approximations give estimates for the critical temperature Tc, which occurs at low temperature. For example, Mattis (1984) used an approximation to this model to estimate the critical temperature of the system. The 2D XY model has also been studied in great detail using Monte Carlo simulations, for example with the Metropolis algorithm. These can be used to compute thermodynamic quantities such as the system energy, specific heat, magnetization, etc., over a range of temperatures and time scales. In a Monte Carlo simulation, each spin is associated with a continuously varying angle θi (often it is discretized into finitely many angles, as in the related Potts model, for ease of computation; however, this is not a requirement). At each time step the Metropolis algorithm chooses one spin at random and rotates its angle by some random increment Δθi ∈ (−Δ, Δ). This change in angle causes a change ΔEi in the energy of the system, which can be positive or negative. If negative, the algorithm accepts the change in angle; if positive, the configuration is accepted with probability e^(−βΔEi), the Boltzmann factor for the energy change. The Monte Carlo method has been used to verify, with various methods, the critical temperature of the system, estimated as kBTc/J = 0.8935(1). The Monte Carlo method can also compute average values used to obtain thermodynamic quantities such as magnetization, spin-spin correlation, correlation lengths, and specific heat. These are important ways to characterize the behavior of the system near the critical temperature. The magnetization and squared magnetization can, for example, be computed as ⟨M⟩ = (1/N)⟨|∑j sj|⟩ and ⟨M²⟩ = (1/N²)⟨(∑j sj)²⟩, where N = L×L is the number of spins. The mean magnetization characterizes the magnitude of the net magnetic moment of the system; in many magnetic systems this is zero above a critical temperature and becomes non-zero spontaneously at low temperatures. Similarly, the mean-squared magnetization characterizes the average of the square of the net components of the spins across the lattice. Either of these is commonly used to define the order parameter of the system. Rigorous analysis of the XY model shows that the magnetization in the thermodynamic limit is zero, and that the squared magnetization approximately follows ⟨M²⟩ ≈ N^(−T/4π), which vanishes in the thermodynamic limit. Indeed, at high temperatures this quantity approaches zero, since the components of the spins tend to be randomized and thus sum to zero.
However, at low temperatures in a finite system, the mean-squared magnetization increases, suggesting there are aligned regions of the spin space that give a non-zero contribution. The magnetization shown (for a 25×25 lattice) is one example of this, and appears to suggest a phase transition, while no such transition exists in the thermodynamic limit. Rigorous results: Furthermore, using statistical mechanics one can relate thermodynamic averages to quantities like the specific heat by calculating c/kB = (⟨E²⟩ − ⟨E⟩²)/(N (kBT)²). The specific heat is shown at low temperatures near the critical temperature kBTc/J ≈ 0.88. There is no feature in the specific heat consistent with critical behavior (like a divergence) at this predicted temperature; indeed, estimating the critical temperature requires other methods, such as the helicity modulus or the temperature dependence of the divergence of the susceptibility. However, there is a feature in the specific heat, a peak at kBT/J = 1.167(1). This peak position and height have been shown not to depend on system size for lattices of linear size greater than 256; indeed, the specific-heat anomaly remains rounded and finite for increasing lattice size, with no divergent peak. Rigorous results: The nature of the critical transitions and vortex formation can be elucidated by considering a continuous version of the XY model. Here, the discrete spins θn are replaced by a field θ(x) representing the spin's angle at any point in space. In this case the angle of the spins must vary smoothly over changes in position. Expanding the original cosine as a Taylor series, the Hamiltonian can be expressed in the continuum approximation as H = (J/2) ∫ (∇θ)² d²x. The continuous version of the XY model is often used to model systems that possess order parameters with the same kinds of symmetry, e.g. superfluid helium and hexatic liquid crystals. This is what makes these systems peculiar compared with other phase transitions, which are always accompanied by a symmetry breaking. Topological defects in the XY model lead to a vortex-unbinding transition from the low-temperature phase to the high-temperature disordered phase. Indeed, the fact that at high temperature correlations decay exponentially fast, while at low temperature they decay with a power law, even though in both regimes M(β) = 0, is called the Kosterlitz–Thouless transition. Kosterlitz and Thouless provided a simple argument for why this is the case: consider the ground state, consisting of all spins in the same orientation, with the addition of a single vortex. The presence of the vortex contributes an entropy of roughly S ≈ kB ln(L²/a²), where a is an effective length scale (for example, the lattice spacing for a discrete lattice). Meanwhile, the energy of the system increases due to the vortex by an amount E ≈ πJ ln(L/a). Putting these together, the free energy of the system changes due to the spontaneous formation of a vortex by an amount ΔF = E − TS ≈ (πJ − 2kBT) ln(L/a). In the thermodynamic limit, the system does not favor the formation of vortices at low temperatures, but does favor them at high temperatures, above the critical temperature Tc = πJ/2kB. This indicates that at low temperatures, any vortices that arise will want to annihilate with antivortices to lower the system energy. Indeed, this is seen qualitatively if one watches 'snapshots' of the spin system at low temperatures, where vortices and antivortices gradually come together to annihilate. Thus, the low-temperature state consists of bound vortex-antivortex pairs.
Meanwhile, at high temperatures there is a collection of unbound vortices and antivortices that are free to move about the plane. Rigorous results: To visualize the Ising model, one can use an arrow pointing up or down, or a point colored black/white, to indicate a spin's state. To visualize the XY spin system, the spins can be represented as arrows pointing in some direction, or as points with some color. Here it is necessary to represent the spin with a spectrum of colors, because the angle is a continuous variable. This can be done using, for example, a continuous and periodic red-green-blue spectrum. As shown in the figure, cyan corresponds to a zero angle (pointing to the right), whereas red corresponds to a 180-degree angle (pointing to the left). One can then study snapshots of the spin configurations at different temperatures to elucidate what happens above and below the critical temperature of the XY model. At high temperatures the spins have no preferred orientation, and there is unpredictable variation of angles between neighboring spins, since there is no energetically favorable configuration; the color map looks highly pixellated. Meanwhile, at low temperatures one possible ground-state configuration has all spins pointing in the same orientation (the same angle); this corresponds to regions (domains) of the color map where all spins have roughly the same color. Rigorous results: To identify vortices (or antivortices) present as a result of the Kosterlitz–Thouless transition, one can determine the signed change in angle while traversing a circle of lattice points counterclockwise. If the total change in angle is zero, no vortex is present, whereas a total change of ±2π corresponds to a vortex (or antivortex). These vortices are topologically non-trivial objects that come in vortex-antivortex pairs, which can separate or pair-annihilate. In the color map, these defects can be identified in regions where there is a large color gradient and all colors of the spectrum meet around a point. Qualitatively, these defects can look like inward- or outward-pointing sources of flow, whirlpools of spins circulating clockwise or counterclockwise, or hyperbolic-looking features with some spins pointing toward and some pointing away from the defect. When the configuration is studied at long time scales and at low temperatures, it is observed that many of these vortex-antivortex pairs come closer together and eventually pair-annihilate. It is only at high temperatures that the vortices and antivortices are liberated and unbind from one another. Rigorous results: In the continuous XY model, the high-temperature spontaneous magnetization vanishes: M(β) := |⟨si⟩| = 0. Besides, cluster expansion shows that the spin correlations cluster exponentially fast: for instance |⟨si·sj⟩| ≤ C(β) e^(−c(β)|i−j|). At low temperatures, i.e. β ≫ 1, the spontaneous magnetization remains zero (see the Mermin–Wagner theorem), but the decay of the correlations is only power law: Fröhlich and Spencer found the lower bound |⟨si·sj⟩| ≥ C(β)/(1 + |i−j|)^η(β), while McBryan and Spencer found the upper bound, for any ϵ > 0, |⟨si·sj⟩| ≤ C(β, ϵ)/(1 + |i−j|)^η(β, ϵ). Three and higher dimensions: Independently of the range of the interaction, at low enough temperature the magnetization is positive. Rigorous results: At high temperature, the spontaneous magnetization vanishes: M(β) := |⟨si⟩| = 0.
Besides, cluster expansion shows that the spin correlations cluster exponentially fast: for instance |⟨si·sj⟩| ≤ C(β) e^(−c(β)|i−j|). At low temperature, the infrared bound shows that the spontaneous magnetization is strictly positive: M(β) := |⟨si⟩| > 0. Besides, there exists a 1-parameter family of extremal states, ⟨·⟩θ, such that ⟨sj⟩θ = M(β)(cos θ, sin θ); but, conjecturally, in each of these extremal states the truncated correlations decay algebraically. Phase transition: As mentioned above, in one dimension the XY model does not have a phase transition, while in two dimensions it has the Berezinskii–Kosterlitz–Thouless transition between phases with exponentially and power-law decaying correlation functions. In three and higher dimensions the XY model has a ferromagnet-paramagnet phase transition. At low temperatures the spontaneous magnetization is nonzero: this is the ferromagnetic phase. As the temperature is increased, the spontaneous magnetization gradually decreases and vanishes at a critical temperature. It remains zero at all higher temperatures: this is the paramagnetic phase. In four and higher dimensions the phase transition has mean-field critical exponents (with logarithmic corrections in four dimensions). Phase transition: Three-dimensional case: the critical exponents. The three-dimensional case is interesting because the critical exponents at the phase transition are nontrivial. Many three-dimensional physical systems belong to the same universality class as the three-dimensional XY model and share the same critical exponents, most notably easy-plane magnets and liquid helium-4. The values of these critical exponents are measured by experiments and Monte Carlo simulations, and can also be computed by theoretical methods of quantum field theory, such as the renormalization group and the conformal bootstrap. Renormalization-group methods are applicable because the critical point of the XY model is believed to be described by a renormalization-group fixed point. Conformal bootstrap methods are applicable because it is also believed to be a unitary three-dimensional conformal field theory. Phase transition: The most important critical exponents of the three-dimensional XY model are α, β, γ, δ, ν, η. All of them can be expressed via just two numbers: the scaling dimensions Δϕ and Δs of the complex order-parameter field ϕ and of the leading singlet operator s (the same as |ϕ|² in the Ginzburg–Landau description). Another important field is s′ (the same as |ϕ|⁴), whose dimension Δs′ determines the correction-to-scaling exponent ω. According to a conformal bootstrap computation, these three dimensions are approximately Δϕ ≈ 0.5191, Δs ≈ 1.5114 and Δs′ ≈ 3.794. This gives the following values of the critical exponents: η = 2Δϕ − 1 ≈ 0.0382, ν = 1/(3 − Δs) ≈ 0.6718 and ω = Δs′ − 3 ≈ 0.794. Monte Carlo methods give compatible determinations: η = 0.03810, ν = 0.67169 and ω = 0.789(4).
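A minimal sketch of the Metropolis procedure described in the two-dimensions section above follows; the lattice size, temperature and step count are illustrative only, and a quantitative study would need equilibration, averaging over many samples, and error analysis:

```python
import numpy as np

def metropolis_xy(L=16, T=0.9, steps=200_000, d_max=1.0, seed=0):
    """Metropolis sampling of the 2D XY model H = -J * sum cos(theta_i - theta_j),
    with J = kB = 1 and periodic boundary conditions. Returns the angle field."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, size=(L, L))
    for _ in range(steps):
        i, j = rng.integers(0, L, size=2)        # pick one spin at random
        d = rng.uniform(-d_max, d_max)           # proposed angle increment
        nbrs = [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]
        old = sum(-np.cos(theta[i, j] - theta[a, b]) for a, b in nbrs)
        new = sum(-np.cos(theta[i, j] + d - theta[a, b]) for a, b in nbrs)
        dE = new - old
        if dE <= 0 or rng.random() < np.exp(-dE / T):  # Boltzmann acceptance
            theta[i, j] += d
    return theta

theta = metropolis_xy()
sx, sy = np.cos(theta).sum(), np.sin(theta).sum()
m2 = (sx**2 + sy**2) / theta.size**2   # single-sample estimator of <M^2>
print(m2)
```

Sampling `m2` over many runs at several temperatures reproduces the qualitative finite-size behavior of the squared magnetization discussed above.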
**Core ontology** Core ontology: In philosophy, a core ontology is a basic and minimal ontology consisting only of the minimal concepts required to understand the other concepts. It must be based on a core glossary in some human language, so that humans can comprehend the concepts and distinctions made. Each natural language tends to rely on its own conceptual metaphor structure, and so tends to have its own core ontology (according to W. V. Quine). It could also be said to represent the moral core of a human linguistic culture, and to self-correct so as to better represent core cultural ideas. Core ontology: Such a core ontology is a key prerequisite to a more complete foundation ontology, or to a more general philosophical sense of ontology. It is most applicable to teaching: for example, the Longmans defining dictionary of the simplest meanings of 2,000 English words is used to define the 4,000 most basic English idioms; this is a core glossary of the English language, which permits access to the core ontology (the idioms). Core ontology: The concept of a core ontology is also used in information science; for example, CIDOC-CRM and CORA are considered core ontologies.
**PsbNH RNA motif** PsbNH RNA motif: The psbNH RNA motif describes a class of RNA molecules with a conserved secondary structure. psbNH RNAs are always found between the psbN and psbH genes, both of which are involved in cyanobacterial photosystem II and are transcribed in opposite orientations. It is unknown whether the biological psbNH RNA is as depicted in the diagram, or whether its reverse complement is the transcribed molecule. In either case, the RNA would lie in the 5' untranslated region of a gene, either psbN or psbH, and is likely a cis-regulatory element.
**Stolz–Cesàro theorem** Stolz–Cesàro theorem: In mathematics, the Stolz–Cesàro theorem is a criterion for proving the convergence of a sequence. The theorem is named after the mathematicians Otto Stolz and Ernesto Cesàro, who stated and proved it for the first time. The Stolz–Cesàro theorem can be viewed as a generalization of the Cesàro mean, but also as a l'Hôpital's rule for sequences. Statement of the theorem for the ∙/∞ case: Let (a_n)_{n≥1} and (b_n)_{n≥1} be two sequences of real numbers. Assume that (b_n)_{n≥1} is a strictly monotone and divergent sequence (i.e. strictly increasing and approaching +∞, or strictly decreasing and approaching −∞) and that the following limit exists: lim_{n→∞} (a_{n+1} − a_n)/(b_{n+1} − b_n) = l. Then lim_{n→∞} a_n/b_n = l. Statement of the theorem for the 0/0 case: Let (a_n)_{n≥1} and (b_n)_{n≥1} be two sequences of real numbers. Assume now that a_n → 0 and b_n → 0 while (b_n)_{n≥1} is strictly decreasing. If lim_{n→∞} (a_{n+1} − a_n)/(b_{n+1} − b_n) = l, then lim_{n→∞} a_n/b_n = l. Proofs: Proof of the theorem for the ∙/∞ case. Case 1: suppose (b_n) strictly increasing and divergent to +∞, and −∞ < l < ∞. By hypothesis, for all ϵ/2 > 0 there exists ν > 0 such that for all n > ν, |(a_{n+1} − a_n)/(b_{n+1} − b_n) − l| < ϵ/2, which is to say l − ϵ/2 < (a_{n+1} − a_n)/(b_{n+1} − b_n) < l + ϵ/2 for all n > ν. Since (b_n) is strictly increasing, b_{n+1} − b_n > 0, and the following holds: (l − ϵ/2)(b_{n+1} − b_n) < a_{n+1} − a_n < (l + ϵ/2)(b_{n+1} − b_n) for all n > ν. Next we notice that a_n = [(a_n − a_{n−1}) + ⋯ + (a_{ν+2} − a_{ν+1})] + a_{ν+1}; thus, by applying the above inequality to each of the terms in the square brackets, we obtain (l − ϵ/2)(b_n − b_{ν+1}) + a_{ν+1} < a_n < (l + ϵ/2)(b_n − b_{ν+1}) + a_{ν+1}. Now, since b_n → +∞ as n → ∞, there is an n_0 > 0 such that b_n > 0 for all n > n_0, and we can divide the two inequalities by b_n for all n > max{ν, n_0}: (l − ϵ/2) + [a_{ν+1} − b_{ν+1}(l − ϵ/2)]/b_n < a_n/b_n < (l + ϵ/2) + [a_{ν+1} − b_{ν+1}(l + ϵ/2)]/b_n. Proofs: The two sequences (which are only defined for n > n_0, as there could be an N ≤ n_0 such that b_N = 0) c_n^± := [a_{ν+1} − b_{ν+1}(l ± ϵ/2)]/b_n are infinitesimal, since b_n → +∞ while the numerator is a constant; hence for all ϵ/2 > 0 there exist n_± > n_0 > 0 such that |c_n^+| < ϵ/2 for all n > n_+ and |c_n^−| < ϵ/2 for all n > n_−. Therefore l − ϵ < a_n/b_n < l + ϵ for all n > max{ν, n_+, n_−} =: N > 0, which concludes the proof. The case with (b_n) strictly decreasing and divergent to −∞, and l < ∞, is similar. Proofs: Case 2: we assume (b_n) strictly increasing and divergent to +∞, and l = +∞. Proceeding as before, for all 2M > 0 there exists ν > 0 such that for all n > ν, (a_{n+1} − a_n)/(b_{n+1} − b_n) > 2M. Again, by applying the above inequality to each of the terms inside the square brackets, we obtain a_n > 2M(b_n − b_{ν+1}) + a_{ν+1} for all n > ν, and hence a_n/b_n > 2M + [a_{ν+1} − 2M b_{ν+1}]/b_n for all n > max{ν, n_0}. The sequence (c_n)_{n>n_0} defined by c_n := [a_{ν+1} − 2M b_{ν+1}]/b_n is infinitesimal, thus there is an n̄ such that −M < c_n < M for all n > n̄; combining this inequality with the previous one, we conclude a_n/b_n > 2M + c_n > M for all n > max{ν, n_0, n̄} =: N. The proofs of the other cases, with (b_n) strictly increasing or decreasing and approaching +∞ or −∞ respectively and l = ±∞, all proceed in this same way. Proof of the theorem for the 0/0 case. Case 1: we first consider the case with l < ∞ and (b_n) strictly decreasing. This time, for each ν > 0 we can write a_n = (a_n − a_{n+1}) + ⋯ + (a_{n+ν−1} − a_{n+ν}) + a_{n+ν}, and for any ϵ/2 > 0 there exists n_0 such that for all n > n_0 we have (l − ϵ/2)(b_n − b_{n+ν}) + a_{n+ν} < a_n < (l + ϵ/2)(b_n − b_{n+ν}) + a_{n+ν}.
The two sequences := an+ν−bn+ν(l±ϵ/2)bn are infinitesimal since by hypothesis an+ν,bn+ν→0 as ν→∞ , thus for all ϵ/2>0 there are ν±>0 such that |cν+|<ϵ/2,∀ν>ν+,|cν−|<ϵ/2,∀ν>ν−, thus, choosing ν appropriately (which is to say, taking the limit with respect to ν ) we obtain l−ϵ<l−ϵ/2+cν−<anbn<l+ϵ/2+cν+<l+ϵ,∀n>n0 which concludes the proof. Case 2: we assume l=+∞ and (bn) strictly decreasing. For all 2M>0 there exists n0>0 such that for all n>n0, an+1−anbn+1−bn>2M⟹an−an+1>2M(bn−bn+1). Therefore, for each ν>0, anbn>2M+an+ν−2Mbn+νbn,∀n>n0. The sequence := an+ν−2Mbn+νbn converges to 0 (keeping n fixed). Hence ∀M>0∃ν¯>0 such that −M<cν<M,∀ν>ν¯, and, choosing ν conveniently, we conclude the proof anbn>2M+cν>M,∀n>n0. Applications and examples: The theorem concerning the ∞/∞ case has a few notable consequences which are useful in the computation of limits. Arithmetic mean Let (xn) be a sequence of real numbers which converges to l , define := := n then (bn) is strictly increasing and diverges to +∞ . We compute lim lim lim n→∞xn=l therefore lim lim n→∞xn. Given any sequence (xn)n≥1 of real numbers, suppose that lim n→∞xn exists (finite or infinite), then lim lim n→∞xn. Applications and examples: Geometric mean Let (xn) be a sequence of positive real numbers converging to l and define := log := n, again we compute lim lim log lim log lim log log ⁡(l), where we used the fact that the logarithm is continuous. Thus lim log lim log log ⁡(l), since the logarithm is both continuous and injective we can conclude that lim lim n→∞xn .Given any sequence (xn)n≥1 of (strictly) positive real numbers, suppose that lim n→∞xn exists (finite or infinite), then lim lim n→∞xn. Applications and examples: Suppose we are given a sequence (yn)n≥1 and we are asked to compute lim n→∞ynn, defining y0=1 and xn=yn/yn−1 we obtain lim lim lim n→∞ynn, if we apply the property above lim lim lim n→∞ynyn−1. This last form is usually the most useful to compute limits Given any sequence (yn)n≥1 of (strictly) positive real numbers, suppose that lim n→∞yn+1yn exists (finite or infinite), then lim lim n→∞yn+1yn. Examples Example 1 lim lim 1. Example 2 lim lim lim lim n→∞1(1+1n)n=1e where we used the representation of e as the limit of a sequence. History: The ∞/∞ case is stated and proved on pages 173—175 of Stolz's 1885 book and also on page 54 of Cesàro's 1888 article. It appears as Problem 70 in Pólya and Szegő (1925). The general form: Statement The general form of the Stolz–Cesàro theorem is the following: If (an)n≥1 and (bn)n≥1 are two sequences such that (bn)n≥1 is monotone and unbounded, then: lim inf lim inf lim sup lim sup n→∞an+1−anbn+1−bn. The general form: Proof Instead of proving the previous statement, we shall prove a slightly different one; first we introduce a notation: let (an)n≥1 be any sequence, its partial sum will be denoted by := ∑m≥1nam . The equivalent statement we shall prove is: Let (an)n≥1,(bn)≥1 be any two sequences of real numbers such that bn>0,∀n∈Z>0 lim n→∞Bn=+∞ ,then lim inf lim inf lim sup lim sup n→∞anbn. The general form: Proof of the equivalent statement First we notice that: lim inf lim sup n→∞AnBn holds by definition of limit superior and limit inferior; lim inf lim inf n→∞AnBn holds if and only if lim sup lim sup n→∞anbn because lim inf lim sup n→∞(−xn) for any sequence (xn)n≥1 .Therefore we need only to show that lim sup lim sup n→∞anbn . If := lim sup n→∞anbn=+∞ there is nothing to prove, hence we can assume L<+∞ (it can be either finite or −∞ ). 
By definition of lim sup , for all l>L there is a natural number ν>0 such that anbn<l,∀n>ν. The general form: We can use this inequality so as to write An=Aν+aν+1+⋯+an<Aν+l(Bn−Bν),∀n>ν, Because bn>0 , we also have Bn>0 and we can divide by Bn to get AnBn<Aν−lBνBn+l,∀n>ν. Since Bn→+∞ as n→+∞ , the sequence as (keeping fixed) , and we obtain lim sup n→∞AnBn≤l,∀l>L, By definition of least upper bound, this precisely means that lim sup lim sup n→∞anbn, and we are done. The general form: Proof of the original statement Now, take (an),(bn) as in the statement of the general form of the Stolz-Cesàro theorem and define α1=a1,αk=ak−ak−1,∀k>1β1=b1,βk=bk−bk−1∀k>1 since (bn) is strictly monotone (we can assume strictly increasing for example), βn>0 for all n and since bn→+∞ also Bn=b1+(b2−b1)+⋯+(bn−bn−1)=bn→+∞ , thus we can apply the theorem we have just proved to (αn),(βn) (and their partial sums (An),(Bn) lim sup lim sup lim sup lim sup n→∞an−an−1bn−bn−1, which is exactly what we wanted to prove. Notes: This article incorporates material from Stolz-Cesaro theorem on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
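Example 2 above is easy to check numerically. The short Python sketch below is purely illustrative: it compares the difference-quotient side y_{n+1}/y_n = 1/(1 + 1/n)^n with the root side (n!)^(1/n)/n, both of which the theorem says tend to 1/e; the log-space evaluation via lgamma is just a device to avoid overflowing factorials.

```python
import math

# Numerical illustration of the n-th root criterion for Example 2:
# with y_n = n!/n^n, the ratio y_{n+1}/y_n = 1/(1 + 1/n)^n -> 1/e,
# so the theorem predicts (n!)^(1/n)/n -> 1/e as well.

def ratio(n):
    # y_{n+1}/y_n = (n/(n+1))^n, evaluated in log-space
    return math.exp(n * (math.log(n) - math.log(n + 1)))

def nth_root(n):
    # (n!)^(1/n)/n, using lgamma(n+1) = log(n!) to avoid overflow
    return math.exp(math.lgamma(n + 1) / n - math.log(n))

for n in (10, 100, 1000, 10000):
    print(f"n={n:6d}  ratio={ratio(n):.6f}  root={nth_root(n):.6f}")
print(f"1/e     = {1 / math.e:.6f}")
```

As the theorem predicts, both columns approach 1/e ≈ 0.367879, with the ratio side converging noticeably faster than the root side.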
**Laser drilling** Laser drilling: Laser drilling is the process of creating thru-holes, referred to as “popped” holes or “percussion drilled” holes, by repeatedly pulsing focused laser energy on a material. The diameter of these holes can be as small as 0.002” (~50 μm). If larger holes are required, the laser is moved around the circumference of the “popped” hole until the desired diameter is created. Applications: Laser drilling is one of the few techniques for producing high-aspect-ratio holes, that is, holes with a depth-to-diameter ratio much greater than 10:1. Laser-drilled high-aspect-ratio holes are used in many applications, including the oil gallery of some engine blocks, aerospace turbine-engine cooling holes, laser fusion components, and printed circuit board micro-vias. Applications: Manufacturers of turbine engines for aircraft propulsion and for power generation have benefited from the productivity of lasers for drilling small (0.3–1 mm diameter typical) cylindrical holes at 15–90° to the surface in cast, sheet metal and machined components. Their ability to drill holes at shallow angles to the surface, at rates of between 0.3 and 3 holes per second, has enabled new designs incorporating film-cooling holes for improved fuel efficiency, reduced noise, and lower NOx and CO emissions. Applications: Incremental improvements in laser process and control technologies have led to substantial increases in the number of cooling holes used in turbine engines. Fundamental to these improvements, and to the increased use of laser-drilled holes, is an understanding of the relationship between process parameters and hole quality and drilling speed. Theory: The following is a summary of technical insights about the laser drilling process and the relationship between process parameters and hole quality and drilling speed. Physical phenomena: Laser drilling of cylindrical holes generally occurs through melting and vaporization (also referred to as "ablation") of the workpiece material through absorption of energy from a focused laser beam. Theory: The energy required to remove material by melting is about 25% of that needed to vaporize the same volume, so a process that removes material by melting is often favored. Whether melting or vaporization dominates in a laser drilling process depends on many factors, with laser pulse duration and energy playing an important role. Generally speaking, ablation dominates when a Q-switched Nd:YAG laser is used. On the other hand, melt expulsion, the means by which a hole is created through melting the material, dominates when a flashtube-pumped Nd:YAG laser is used. A Q-switched Nd:YAG laser normally has a pulse duration on the order of nanoseconds, a peak power on the order of tens to hundreds of MW/cm², and a material removal rate of a few micrometers per pulse. A flashtube-pumped Nd:YAG laser normally has a pulse duration on the order of hundreds of microseconds to a millisecond, a peak power below a MW/cm², and a material removal rate of tens to hundreds of micrometers per pulse. In machining processes with either laser, ablation and melt expulsion typically coexist. Melt expulsion arises as a result of the rapid build-up of gas pressure (recoil force) within a cavity created by evaporation.
For melt expulsion to occur, a molten layer must form, and the pressure gradients acting on the surface due to vaporization must be sufficiently large to overcome surface tension forces and expel the molten material from the hole. The "best of both worlds" is a single system capable of both "fine" and "coarse" melt expulsion. "Fine" melt expulsion produces features with excellent wall definition and a small heat-affected zone, while "coarse" melt expulsion, such as that used in percussion drilling, removes material quickly. Theory: The recoil force is a strong function of the peak temperature. The value Tcr at which the recoil and surface tension forces are equal is the critical temperature for liquid expulsion. For instance, liquid expulsion from titanium can take place when the temperature at the center of the hole exceeds 3780 K. Theory: In early work (Körner, et al., 1996), the proportion of material removed by melt expulsion was found to increase as intensity increased. More recent work (Voisey, et al., 2000) shows that the fraction of material removed by melt expulsion, referred to as the melt ejection fraction (MEF), drops when the laser energy is increased further. The initial increase in melt expulsion on raising the beam power has been tentatively attributed to an increase in the pressure and pressure gradient generated within the hole by vaporization. Theory: A better finish can be achieved if the melt is ejected in fine droplets. Generally speaking, droplet size decreases with increasing pulse intensity. This is due to the increased vaporization rate and thus a thinner molten layer. For longer pulse durations, the greater total energy input helps form a thicker molten layer and results in the expulsion of correspondingly larger droplets. Theory: Previous models: Chan and Mazumder (1987) developed a 1-D steady-state model that incorporates liquid expulsion, but the 1-D assumption is not suited to high-aspect-ratio hole drilling, and the drilling process is transient. Kar and Mazumder (1990) extended the model to 2-D, but melt expulsion was not explicitly considered. A more rigorous treatment of melt expulsion has been presented by Ganesh, et al. (1997): a 2-D transient generalized model incorporating solid, fluid, temperature, and pressure during laser drilling, but it is computationally demanding. Yao, et al. (2001) developed a 2-D transient model in which a Knudsen layer is considered at the melt-vapor front; this model is suited to shorter-pulse, high-peak-power laser ablation. Theory: Laser energy absorption and melt-vapor front: At the melt-vapor front, the Stefan boundary condition is normally applied to describe the laser energy absorption (Kar and Mazumder, 1990; Yao, et al., 2001). Theory: I_abs + k(∂T/∂z + ∂T/∂r) + ρ_l v_i L_v − ρ_v v_v (c_p T_i + E_v) = 0 (1), where I_abs = I(t)e^(−βz) is the absorbed laser intensity, β is the laser absorption coefficient (depending on laser wavelength and target material), and I(t) describes the temporal input laser intensity, including pulse width, repetition rate, and pulse temporal shape. k is the heat conductivity, T is the temperature, z and r are distances along the axial and radial directions, ρ is the density, v the velocity, and L_v the latent heat of vaporization. The subscripts l, v and i denote the liquid phase, the vapor phase and the vapor–liquid interface, respectively.
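The recoil-versus-surface-tension balance described above can be made concrete in a few lines of code. The sketch below estimates the critical temperature Tcr at which an assumed Clausius–Clapeyron recoil pressure (using the common approximation that roughly half the saturation pressure acts as recoil) equals the surface-tension pressure 2σ/r; the titanium constants and hole radius are rough illustrative values chosen here, not data from the studies cited.

```python
import math

# Minimal sketch of the melt-expulsion criterion: find the temperature T_cr
# at which the vaporization recoil pressure equals the surface-tension
# pressure 2*sigma/r. Constants are rough illustrative values for titanium,
# not data from the works cited in the text.

K_B = 1.380649e-23                 # Boltzmann constant, J/K
T_B = 3560.0                       # boiling point at 1 atm, K (approximate)
P_ATM = 1.013e5                    # reference pressure, Pa
L_V = 8.9e6 * 47.9e-3 / 6.022e23   # latent heat per atom (~8.9 MJ/kg for Ti), J
SIGMA = 1.65                       # surface tension of liquid Ti, N/m (approx.)
R_HOLE = 25e-6                     # hole radius, m

def p_sat(T):
    # Clausius-Clapeyron saturation pressure referenced to the boiling point
    return P_ATM * math.exp(L_V / K_B * (1.0 / T_B - 1.0 / T))

def recoil_pressure(T):
    # Common estimate: roughly half the saturation pressure acts as recoil
    return 0.54 * p_sat(T)

def critical_temperature():
    lo, hi = 2000.0, 6000.0
    target = 2.0 * SIGMA / R_HOLE   # surface-tension pressure, Pa
    for _ in range(100):            # bisection on the pressure balance
        mid = 0.5 * (lo + hi)
        if recoil_pressure(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"surface-tension pressure: {2 * SIGMA / R_HOLE:.3e} Pa")
print(f"estimated T_cr: {critical_temperature():.0f} K")
```

With these rough numbers the balance lands in the vicinity of the ~3780 K figure quoted above, but the result is sensitive to the assumed constants, so the sketch should be read as an order-of-magnitude check only.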
Theory: If the laser intensity is high and the pulse duration is short, a so-called Knudsen layer is assumed to exist at the melt-vapor front, where the state variables undergo discontinuous changes across the layer. By considering the discontinuity across the Knudsen layer, Yao, et al. (2001) simulated the surface recession velocity Vv distribution along the radial direction at different times, which indicates that the material ablation rate changes significantly across the Knudsen layer. Theory: Melt expulsion: After obtaining the vapor pressure pv, the melt layer flow and melt expulsion can be modeled using hydrodynamic equations (Ganesh et al., 1997). Melt expulsion occurs when the vapor pressure is applied on the liquid free surface, which in turn pushes the melt away in the radial direction. In order to achieve fine melt expulsion, the melt flow pattern needs to be predicted very precisely, especially the melt flow velocity at the hole's edge. Thus, a 2-D axisymmetric transient model is used, and accordingly the momentum and continuity equations are applied. Theory: Ganesh's model for melt ejection is comprehensive and can be used for different stages of the hole drilling process. However, the calculation is very time-consuming, and Solana, et al. (2001) presented a simplified time-dependent model that assumes the melt expulsion velocity is directed only along the hole wall and can give results with minimal computational effort. The liquid will move upwards with velocity u as a consequence of the pressure gradient along the vertical walls, which is given in turn by the difference between the ablation pressure and the surface tension, divided by the penetration depth x. Assuming that the drilling front is moving at a constant velocity, the following linear equation of liquid motion on the vertical wall is a good approximation for modeling the melt expulsion after the initial stage of drilling: ρ ∂u(r,t)/∂t = P(t) + μ ∂²u(r,t)/∂r² (2), where ρ is the melt density, μ is the viscosity of the liquid, P(t) = ΔP(t)/x(t) is the pressure gradient along the liquid layer, and ΔP(t) is the difference between the vapor pressure Pv and the surface-tension pressure 2σ/δ̄. Pulse shape effect: Roos (1980) showed that a 200 µs train consisting of 0.5 µs pulses produced superior results for drilling metals compared with a 200 µs flat-shaped pulse. Anisimov, et al. (1984) discovered that process efficiency improved by accelerating the melt during the pulse. Pulse shape effect: Grad and Mozina (1998) further demonstrated the effect of pulse shapes. A 12 ns spike was added at the beginning, the middle, and the end of a 5 ms pulse. When the 12 ns spike was added at the beginning of the long laser pulse, where no melt had yet been produced, no significant effect on removal was observed. On the other hand, when the spike was added at the middle or the end of the long pulse, the improvement in drilling efficiency was 80% and 90%, respectively. The effect of inter-pulse shaping has also been investigated. Low and Li (2001) showed that a pulse train of linearly increasing magnitude had a significant effect on expulsion processes. Pulse shape effect: Forsman, et al. (2007) demonstrated that a double-pulse stream produced increased drilling and cutting rates with significantly cleaner holes. Conclusion: Manufacturers are applying the results of process modeling and experimental methods to better understand and control the laser drilling process.
The result is higher-quality, more productive processes that in turn lead to better end products, such as more fuel-efficient and cleaner aircraft and power-generation turbine engines.
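Equation (2) above is simple enough to integrate numerically. The following sketch uses an explicit finite-difference scheme with a no-slip wall, a shear-free free surface, and a constant pressure-gradient forcing; all parameter values are illustrative placeholders chosen here, not values from the papers cited.

```python
import numpy as np

# Explicit finite-difference sketch of equation (2),
#   rho * du/dt = P(t) + mu * d^2u/dr^2,
# for the melt velocity u(r, t) across a thin molten layer.
# All parameter values below are illustrative placeholders.

rho = 4000.0      # melt density, kg/m^3
mu = 3e-3         # melt viscosity, Pa*s
P = 2e9           # pressure gradient along the layer, Pa/m (held constant)
layer = 2e-6      # molten layer thickness, m
nr, nt = 51, 20000
dr = layer / (nr - 1)
dt = 0.2 * rho * dr**2 / mu   # diffusion number 0.2 < 0.5 for stability

u = np.zeros(nr)              # melt starts at rest
for _ in range(nt):
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dr**2
    u[1:-1] += dt / rho * (P + mu * lap)
    u[0] = 0.0                # no slip at the solid wall
    u[-1] = u[-2]             # zero shear at the liquid free surface

print(f"time simulated: {nt * dt:.2e} s")
print(f"peak melt velocity: {u.max():.2f} m/s")
```

The time step obeys the usual explicit-diffusion stability bound μΔt/(ρΔr²) < 1/2; with these placeholder values the melt film approaches a steady profile with a peak velocity on the order of 1 m/s at the free surface.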
**Cognitive Science and Neuropsychology Program of Szeged** Cognitive Science and Neuropsychology Program of Szeged: The Cognitive Science and Neuropsychology Program was organised by Csaba Pléh in 1999 at the Institute of Psychology, University of Szeged. The aim of the program was to introduce the theories and methods of cognitive science and neuropsychology both to undergraduate students and to researchers from other fields. The program welcomed guest professors, international students and other interested students and researchers for participation and collaboration. The program was intended to become an interface between cognitive labs and disciplines in cognitive science, broadly conceived, in Central Europe. In 2011–2012 the members of this group left the University of Szeged, and the group no longer exists. The members moved to the University of Debrecen, the Hungarian Academy of Sciences, Eötvös Loránd University, and other institutions. Cognitive Science and Neuropsychology Group in Szeged: Members (until 2011): WINKLER, István; SZOKOLSZKY, Ágnes; RACSMÁNY, Mihály; KRAJCSI, Attila; NÉMETH, Dezső; TISLJÁR, Roland; CSIFCSÁK, Gábor; JANACSEK, Karolina
**Microsoft Live Labs Deepfish** Microsoft Live Labs Deepfish: Deepfish was an experimental browsing software system for Windows Mobile devices, developed at Microsoft Live Labs, that used a zooming user interface. It aimed to provide a consistent browsing experience across desktops and mobile devices, to display content on small mobile displays in the same layout as on larger displays, and to avoid the need to recode web pages for small displays. When a page was opened, it appeared zoomed out and shrunk, formatted as it would be in a desktop browser. The user could zoom into certain areas of the page by using a selection rectangle, and pan the zoomed-in page. Microsoft Live Labs Deepfish: Deepfish consisted of a lightweight browser client powered by a server backend that did most of the processing. The server streamed only the data that was visible at any moment, which improved load times and responsiveness. Despite this, Deepfish was quite bandwidth-heavy and could render pages slowly on devices with lower specifications. Whenever a user zoomed in, a higher-quality image of the zoomed region had to be downloaded from the server. The browser was available for preview until a limited number of reviewers had joined. Deepfish was retired on 30 September 2008, with important features, including JavaScript, AJAX, cookies, ActiveX controls and HTTP POST, still unimplemented.
**Artificial turf** Artificial turf: Artificial turf is a surface of synthetic fibers made to look like natural grass, used in sports arenas, residential lawns and commercial applications that formerly used grass. It is durable and easily maintained without irrigation or trimming. Covered stadiums may require it, since they lack the sunlight needed for photosynthesis. Downsides include periodic cleaning requirements and heightened health concerns about the petroleum and toxic chemicals used to make it. Artificial turf: Artificial turf first gained substantial attention in 1966, when ChemGrass was installed in the year-old Astrodome; developed by Monsanto, it was rebranded as AstroTurf, now a generic trademark (registered to a new owner) for any artificial turf. The first-generation system of the 1960s, short-pile fibers without infill, has largely been replaced by two later generations: the second features longer fibers and sand infill, and the third adds recycled crumb rubber to the sand. History: David Chaney, who moved to Raleigh, North Carolina, in 1960 and later served as Dean of the North Carolina State University College of Textiles, headed the team of Research Triangle Park researchers who created the first notable artificial turf. That accomplishment led Sports Illustrated to declare Chaney the man "responsible for indoor major league baseball and millions of welcome mats." Artificial turf was first installed in 1964 on a recreation area at the Moses Brown School in Providence, Rhode Island. The material came to public prominence in 1966, when AstroTurf was installed in the Astrodome in Houston, Texas. The state-of-the-art indoor stadium had attempted to use natural grass during its initial season in 1965, but this failed miserably and the field conditions were grossly inadequate during the second half of the season, with the dead grass painted green. Due to a limited supply of the new artificial grass, only the infield was installed before the Houston Astros' home opener in April 1966; the outfield was installed in early summer during an extended Astros road trip and first used after the All-Star Break in July. History: The use of AstroTurf and similar surfaces became widespread in the U.S. and Canada in the early 1970s, installed in both indoor and outdoor stadiums used for baseball and football. More than 11,000 artificial turf playing fields have been installed nationally. More than 1,200 were installed in the U.S. in 2013 alone, according to the industry group the Synthetic Turf Council. Sports applications: Baseball: Artificial turf was first used in Major League Baseball in the Houston Astrodome in 1966, replacing the grass field used when the stadium opened a year earlier. Even though the grass was specifically bred for indoor use, the dome's semi-transparent Lucite ceiling panels, which had been painted white to cut down on glare that bothered the players, did not pass enough sunlight to support the grass. For most of the 1965 season, the Astros played on green-painted dirt and dead grass. Sports applications: The solution was to install a new type of artificial grass on the field, ChemGrass, which became known as AstroTurf. Given its early use, the term astroturf has since been genericized as a term for any artificial turf. Because the supply of AstroTurf was still low, only a limited amount was available for the first home game. There was not enough for the entire outfield, but there was enough to cover the traditional grass portion of the infield.
The outfield remained painted dirt until after the All-Star Break. The team was sent on an extended road trip before the break, and on 19 July 1966, the installation of the outfield portion of AstroTurf was completed. Sports applications: The Chicago White Sox became the first team to install artificial turf in an outdoor stadium, as they used it only in the infield and adjacent foul territory at Comiskey Park from 1969 through 1975. Artificial turf was later installed in other new multi-purpose stadiums such as Pittsburgh's Three Rivers Stadium, Philadelphia's Veterans Stadium, and Cincinnati's Riverfront Stadium. Early AstroTurf baseball fields used the traditional all-dirt path, but starting in 1970 with Cincinnati's Riverfront Stadium, teams began using the "base cutout" layout on the diamond, with the only dirt being on the pitcher's mound, batter's circle, and in a five-sided diamond-shaped "sliding box" around each base. With this layout, a painted arc would indicate where the edge of the outfield grass would normally be, to assist fielders in positioning themselves properly. The last stadium in MLB to use this configuration was Rogers Centre in Toronto, when they switched to an all-dirt infield (but keeping the artificial turf) for the 2016 season. Sports applications: The biggest difference in play on artificial turf was that the ball bounced higher than on real grass and also traveled faster, causing infielders to play farther back than they would normally so that they would have sufficient time to react. The ball also had a truer bounce than on grass so that on long throws fielders could deliberately bounce the ball in front of the player they were throwing to, with the certainty that it would travel in a straight line and not be deflected to the right or left. The biggest impact on the game of "turf", as it came to be called, was on the bodies of the players. The artificial surface, which was generally placed over a concrete base, had much less give to it than a traditional dirt and grass field did, which caused more wear-and-tear on knees, ankles, feet, and the lower back, possibly even shortening the careers of those players who played a significant portion of their games on artificial surfaces. Players also complained that the turf was much hotter than grass, sometimes causing the metal spikes to burn their feet or plastic ones to melt. These factors eventually provoked a number of stadiums, such as the Kansas City Royals' Kauffman Stadium, to switch from artificial turf back to natural grass. Sports applications: In 2000, St. Petersburg's Tropicana Field became the first MLB field to use a third-generation artificial surface, FieldTurf. All other remaining artificial turf stadiums were either converted to third-generation surfaces or were replaced entirely by new natural grass stadiums. In a span of 13 years, between 1992 and 2005, the National League went from having half of its teams using artificial turf to all of them playing on natural grass. With the replacement of Minneapolis's Hubert H. Humphrey Metrodome by Target Field in 2010, only two MLB stadiums used artificial turf from 2010 through 2018: Tropicana Field and Toronto's Rogers Centre. 
This number grew to three when the Arizona Diamondbacks switched Chase Field to artificial turf for the 2019 season; the stadium had grass from its opening in 1998 until 2018, but the difficulty of maintaining the grass in the stadium, which has a retractable roof and is located in a desert city, was cited as the reason for the switch. In 2020, Miami's Marlins Park (now loanDepot Park) also switched to artificial turf for similar reasons, while the Texas Rangers' new Globe Life Field was opened with an artificial surface, as it is also a retractable-roof ballpark in a hot-weather city; this puts the number of teams using synthetic turf in MLB at five as of 2023. Sports applications: American football: The first professional American football team to play on artificial turf was the Houston Oilers, then part of the American Football League, who moved into the Astrodome in 1968, which had installed AstroTurf two years prior. In 1969, the University of Pennsylvania's Franklin Field in Philadelphia, at the time also the home field of the Philadelphia Eagles, switched from grass to AstroTurf, making it the first National Football League stadium to use artificial turf. Sports applications: In 2002, CenturyLink Field, originally planned to have a natural grass field, was instead surfaced with FieldTurf upon positive reaction from the Seattle Seahawks when they played on the surface at their temporary home of Husky Stadium during the 2000 and 2001 seasons. This would be the first of a leaguewide trend taking place over the next several seasons that would not only result in teams already using artificial surfaces for their fields switching to the new FieldTurf or other similar surfaces, but would also see several teams playing on grass adopt a new surface. (The Indianapolis Colts' RCA Dome and the St. Louis Rams' Edward Jones Dome were the last two stadiums in the NFL to replace their first-generation AstroTurf surfaces with next-generation ones, after the 2004 season.) For example, after a three-year experiment with a natural surface, Giants Stadium went to FieldTurf for 2003, while M&T Bank Stadium added its own artificial surface the same year (it has since been removed and replaced with a natural surface, which the stadium had before installing the turf). Later examples include Paul Brown Stadium (now Paycor Stadium), which went from grass to turf in 2004; Gillette Stadium, which made the switch in 2006; and NRG Stadium, which did so in 2015. As of 2021, 14 NFL fields out of 30 are artificial. Sports applications: NFL players overwhelmingly prefer natural grass over synthetic surfaces, according to a league survey conducted in 2010. When asked, "Which surface do you think is more likely to shorten your career?", 90% responded artificial turf. Following receiver Odell Beckham Jr.'s injury during Super Bowl LVI, other NFL players started calling for turf to be banned, since the site of the game, SoFi Stadium, was a turf field. Arena football is played indoors on the older short-pile artificial turf. Sports applications: Canadian football: The first professional Canadian football stadium to use artificial turf was Empire Stadium in Vancouver, British Columbia, then home of the Canadian Football League's BC Lions, which installed 3M TartanTurf in 1970. Today, eight of the nine stadiums in the CFL use artificial turf, largely because of the harsh weather conditions in the latter half of the season.
The only one that does not is BMO Field in Toronto, which initially had an artificial pitch and which the CFL's Toronto Argonauts have shared since 2016 (parts of the end zones at that stadium are covered with artificial turf). The first stadium to use the next-generation surface was Ottawa's Frank Clair Stadium (now TD Place Stadium), which the Ottawa Renegades used when they began play in 2002. The Saskatchewan Roughriders' Taylor Field was the only major professional sports venue in North America to use a second-generation artificial playing surface, Omniturf, which was used from 1988 to 2000, followed by AstroTurf from 2000 to 2007 and FieldTurf from 2007 until its closure in 2016. Sports applications: Cricket: Some cricket pitches are made of synthetic grass or of a hybrid of mostly natural and some artificial grass, with these "hybrid pitches" having been implemented across several parts of the United Kingdom and Australia. The first synthetic turf cricket field in the USA was opened in Fremont, California in 2016. Sports applications: Field hockey: The introduction of synthetic surfaces has significantly changed the sport of field hockey. Since being introduced in the 1970s, competitions in western countries are now mostly played on artificial surfaces. This has increased the speed of the game considerably and changed the shape of hockey sticks to allow for different techniques, such as reverse stick trapping and hitting. Sports applications: Field hockey artificial turf differs from artificial turf for other sports in that it does not try to reproduce a grass feel, being made of shorter fibers. This allows the improvement in speed brought by earlier artificial turfs to be retained. This development is problematic for areas which cannot afford to build an extra artificial field for hockey alone. The International Hockey Federation and manufacturers are driving research in order to produce new fields that will be suitable for a variety of sports. Sports applications: The use of artificial turf, in conjunction with changes in the game's rules (e.g., the removal of offside, the introduction of rolling substitutes and the self-pass, and changes to the interpretation of obstruction), has contributed significantly to changing the nature of the game, greatly increasing the speed and intensity of play as well as placing far greater demands on the conditioning of the players. Sports applications: Association football: Some association football clubs in Europe installed synthetic surfaces in the 1980s, which were called "plastic pitches" (often derisively) in countries such as England. There, four professional club venues had adopted them: QPR's Loftus Road (1981-1988), Luton Town's Kenilworth Road (1985-1991), Oldham Athletic's Boundary Park (1986-1991) and Preston North End's Deepdale (1986-1994). QPR had been the first team to install an artificial pitch at their stadium in 1981, but were also the first to remove it, doing so in 1988. Artificial pitches were banned from top-flight (then First Division) football in 1991, forcing Oldham Athletic to remove their artificial pitch after their promotion to the First Division in 1991, while then top-flight Luton Town also removed their artificial pitch at the same time. The last Football League team to have an artificial pitch in England was Preston North End, who removed their pitch in 1994 after eight years in use. Sports applications: Artificial turf gained a bad reputation globally, with fans and especially with players.
The first-generation artificial turf surfaces were carpet-like in their look and feel, and thus a far harder surface than grass; they soon became known as unforgiving playing surfaces, prone to causing more injuries, and in particular more serious joint injuries, than would comparatively be suffered on a grass surface. This turf was also regarded as aesthetically unappealing by many fans. Sports applications: In 1981, London football club Queens Park Rangers dug up its grass pitch and installed an artificial one. Others followed, and by the mid-1980s there were four artificial surfaces in operation in the English league. They soon became a national joke: the ball pinged round like it was made of rubber, the players kept losing their footing, and anyone who fell over risked carpet burns. Unsurprisingly, fans complained that the football was awful to watch and, one by one, the clubs returned to natural grass. Sports applications: In the 1990s, many North American soccer clubs also removed their artificial surfaces and re-installed grass, while others moved to new stadiums with state-of-the-art grass surfaces that were designed to withstand cold temperatures where the climate demanded it. The use of artificial turf was later banned by FIFA, UEFA and many domestic football associations, though in recent years both governing bodies have expressed renewed interest in the use of artificial surfaces in competition, provided that they are FIFA Recommended. UEFA has since been heavily involved in programs to test artificial turf, with tests made in several grounds meeting with FIFA approval. A team from UEFA, FIFA and the German company Polytan conducted tests in the Stadion Salzburg Wals-Siezenheim in Salzburg, Austria, which had matches played on it in UEFA Euro 2008. It is the second FIFA 2 Star approved artificial turf in a European domestic top flight, after Dutch club Heracles Almelo received the FIFA certificate in August 2005. The tests were approved. FIFA originally launched its FIFA Quality Concept in February 2001. UEFA announced that, starting from the 2005–06 season, approved artificial surfaces were to be permitted in its competitions. Sports applications: A full international fixture for the 2008 European Championships was played on 17 October 2007 between England and Russia on an artificial surface, which had been installed to counteract adverse weather conditions, at the Luzhniki Stadium in Moscow. It was one of the first full international games to be played on such a surface approved by FIFA and UEFA. The latter ordered that the 2008 European Champions League final, hosted in the same stadium in May 2008, take place on grass, so a temporary natural grass field was installed just for the final. Sports applications: UEFA stressed that artificial turf should only be considered an option where climatic conditions necessitate it. One Desso "hybrid grass" product incorporates both natural grass and artificial elements. In June 2009, following a match played at Estadio Ricardo Saprissa in Costa Rica, American national team manager Bob Bradley called on FIFA to "have some courage" and ban artificial surfaces. Sports applications: FIFA designated a star system for artificial turf fields that have undergone a series of tests examining quality and performance, based on a two-star system. Recommended two-star fields may be used for FIFA Final Round Competitions as well as for UEFA Europa League and Champions League matches.
There are currently 130 FIFA Recommended 2-Star installations in the world. In 2009, FIFA launched the Preferred Producer Initiative to improve the quality of artificial football turf at each stage of the life cycle (manufacturing, installation and maintenance). Currently, there are five manufacturers that were selected by FIFA: Act Global, Limonta, Desso, GreenFields, and Edel Grass. These firms have made quality guarantees directly to FIFA and have agreed to increased research and development. Sports applications: In 2010, Estadio Omnilife, with an artificial turf pitch, opened in Guadalajara as the new home of Chivas, one of the most popular teams in Mexico. The owner of Chivas, Jorge Vergara, defended the reasoning behind using artificial turf: the stadium was designed to be "environment friendly and as such, having grass would result [in] using too much water." Some players criticized the field, saying its harder surface caused many injuries. When Johan Cruyff became an adviser to the team, he recommended the switch to natural grass, which the team made in 2012. In November 2011, it was reported that a number of English football clubs were interested in using artificial pitches again on economic grounds. As of January 2020, artificial pitches are not permitted in the Premier League or the Football League, but are permitted in the National League and lower divisions. Bromley are an example of an English football club who currently use a third-generation artificial pitch. In 2018, Sutton United were close to achieving promotion to the Football League, and the debate in England about artificial pitches resurfaced. It was reported that, if Sutton won promotion, they would subsequently be demoted two leagues if they refused to replace their pitch with natural grass. After Harrogate Town's promotion to the Football League in 2020, the club was obliged to install a natural grass pitch at Wetherby Road; and after winning promotion in 2021, Sutton United were also obliged to tear up their artificial pitch and replace it with grass, at a cost of more than £500,000. Artificial pitches are permitted in all rounds of the FA Cup competition. Sports applications: The first stadium to use artificial turf in Brazil was Atlético Paranaense's Arena da Baixada, in 2016. In 2020, the administration of Allianz Parque, home of Sociedade Esportiva Palmeiras, started the implementation of the second artificial pitch in the country. Sports applications: 2015 Women's World Cup controversy: The 2015 FIFA Women's World Cup took place entirely on artificial surfaces, as the event was played in Canada, where almost all of the country's stadiums use artificial turf due to climate issues. This plan garnered criticism from players and fans, some believing the artificial surfaces make players more susceptible to injuries. Over fifty of the female athletes protested against the use of artificial turf on the basis of gender discrimination. Australia winger Caitlin Foord said that after playing 90 minutes there was no difference to her post-match recovery, a view shared by the rest of the squad. The squad spent much time preparing on the surface and had no problems with its use in Winnipeg. "We've been training on [artificial] turf pretty much all year so I think we're kind of used to it in that way ...
I think grass or turf you can still pull up sore after a game so it's definitely about getting the recovery in and getting it right", Foord said. The 2012 Women's World Player of the Year, Abby Wambach, noted: "The men would strike, playing on artificial turf." The controversial issue of gender equality and an equal playing field for all has sparked debate in many countries around the world. A lawsuit was filed on 1 October 2014 in an Ontario tribunal court by a group of women's international soccer players against FIFA and the Canadian Soccer Association; it specifically points out that in 1994 FIFA spent $2 million to plant natural grass over artificial turf in New Jersey and Detroit. Various celebrities showed their support for the women soccer players in their lawsuit, including actor Tom Hanks, NBA player Kobe Bryant and U.S. men's soccer team goalkeeper Tim Howard. Even with the possibility of boycotts, FIFA's head of women's competitions, Tatjana Haenni, made it clear that "we play on artificial turf and there's no Plan B." Sports applications: Rugby union: Rugby union also uses artificial surfaces at a professional level. Infill fields are used by English Premiership Rugby teams Gloucester, Newcastle Falcons, Saracens F.C. and Worcester Warriors, as well as United Rugby Championship teams Cardiff and Glasgow Warriors. Some fields, including Twickenham Stadium, have incorporated a hybrid field, with grass and synthetic fibers used on the surface. This allows the field to be much more hard-wearing, making it less susceptible to weather conditions and frequent use. Sports applications: Tennis: Carpet has been used as a surface for indoor tennis courts for decades, though the first carpets used were more similar to home carpets than to a synthetic grass. After the introduction of AstroTurf, it came to be used for tennis courts, both indoor and outdoor, though only a small minority of courts use the surface. Both infill and non-infill versions are used, and they are typically considered medium-fast to fast surfaces under the International Tennis Federation's classification scheme. A distinct form found in tennis is the "artificial clay" surface, which seeks to simulate a clay court by using a very short pile carpet with an infill of the same loose aggregate used for clay courts, rising above the carpet fibers. Tennis venues such as Wimbledon are considering using an artificial hybrid grass to replace their natural lawn courts. Such systems incorporate synthetic fibers into natural grass to create a more durable surface on which to play. Such hybrid surfaces are currently used in some association football stadiums, including Wembley Stadium. Sports applications: Golf: Synthetic turf can also be used in the golf industry, such as on driving ranges, putting greens and, in some circumstances, tee boxes. For low-budget courses, particularly those catering to casual golfers, synthetic putting greens offer the advantage of being a relatively cheap alternative to installing and maintaining grass greens, and they are much more similar to real grass in appearance and feel than the sand greens that are the traditional alternative surface. Because of the vast areas of golf courses and the damage from clubs during shots, it is not feasible to surface fairways with artificial turf. Sports applications: Motor racing: Artificial grass is used to line the perimeter of some sections of some motor circuits, and offers less grip than some other surfaces. It can pose an obstacle to drivers if it gets caught on their car.
Other applications: Landscaping: Since the early 1990s, the use of synthetic grass in the more arid western states of the United States has moved beyond athletic fields to residential and commercial landscaping. However, as of 2019, new water-saving programs that grant rebates for turf removal do not accept artificial turf as a replacement and require a minimum amount of planting. The use of artificial grass for convenience sometimes faces opposition: legislation frequently seeks to preserve natural gardens and fully water-permeable surfaces, thereby restricting the use of hardscape and plantless areas, including artificial turf. In several locations in different countries, homeowners have been fined or forced to remove artificial turf, or have had to defend themselves in court. Many of these restrictions can be found in local bylaws and ordinances, and are not always applied in a consistent manner. Sunlight reflections from nearby windows can cause artificial turf to melt. This can be avoided by applying perforated vinyl privacy window film to the outside of the window causing the reflection. Other applications: Airports: Artificial turf has been used at airports. Here it provides several advantages over natural turf: it does not support wildlife, it has high visual contrast with runways in all seasons, it reduces foreign object damage (FOD) since the surface has no rocks or clumps, and it drains well. Some artificial turf systems allow for the integration of optical fibers into the turf. This would allow runway lighting to be embedded in artificial landing surfaces for aircraft (or lighting or advertisements to be embedded directly in a playing surface). Other applications: Tanks for octopuses: Artificial turf is commonly used for tanks containing octopuses, in particular the giant Pacific octopus, since it is a reliable way to prevent the octopuses from escaping their tank: it prevents the suction cups on the tentacles from getting a tight seal. Environmental and safety concerns: Environmental footprint: The first major academic review of the environmental and health risks and benefits of artificial turf was published in 2014; it was followed by extensive research on possible risks to human health, but holistic analyses of the environmental footprint of artificial turf compared with natural turf only began to emerge in the 2020s, and frameworks to support informed policymaking were still lacking. Evaluating the relative environmental footprints of natural and artificial turf is complex, with outcomes depending on a wide range of factors, including (to give the example of a sports field): what ecosystem services are lost by converting a site to a sports pitch; how resource-intensive the landscaping work and transport of materials to create a pitch are; whether input materials are recycled, and whether these are recycled again at the end of the pitch's life; how resource-intensive and damaging maintenance is (whether through water, fertiliser, weed-killer, reapplication of rubber crumb, snow-clearing, etc.);
and how intensively the facility is used, for how long, and whether the surface type can reduce the overall number of pitches required. Artificial turf has been shown to contribute to global warming by absorbing significantly more radiation than living turf and, to a lesser extent, by displacing living plants that could sequester carbon dioxide through photosynthesis; a study at New Mexico State University found that, in that environment, water-cooling of artificial turf can demand as much water as natural turf. However, a 2022 study that used real-world data to model a ten-year life-cycle environmental footprint for a new natural-turf soccer field compared with an artificial-turf field found that the natural-turf field contributed twice as much to global warming as the artificial one (largely due to a more resource-intensive construction phase), while finding that the artificial turf would likely cause more pollution of other kinds. It promoted improvements to usual practice, such as the substitution of cork for rubber in artificial pitches, and more drought-resistant grasses and electric mowing in natural ones. In 2021, a Zurich University of Applied Sciences study for the City of Zürich, using local data on existing pitches, found that, per hour of use, natural turf had the lowest environmental footprint, followed by artificial turf with no infill, and then artificial turf using an infill (e.g. granulated rubber). However, because it could tolerate more hours of use, unfilled artificial turf often had the lowest environmental footprint in practice. The study recommended optimising the use of existing pitches before building new ones, and choosing the best surface for the likely intensity of use. Another suggestion is the introduction of green roofs to offset the conversion of grassland to artificial turf. Environmental and safety concerns: Pollution and associated health risks: Some artificial turf uses infill such as silicon sand, but much uses granulated rubber, referred to as "crumb rubber". Granulated rubber can be made from recycled car tires and may carry heavy metals, PFAS chemicals, and other chemicals of environmental concern. The synthetic fibers of artificial turf are also subject to degradation. Thus chemicals from artificial turf leach into the environment, and artificial turf is a source of microplastics pollution and rubber pollution in air, fresh-water, sea and soil environments. In Norway, Sweden, and at least some other places, the rubber granulate from artificial turf infill constitutes the second largest source of microplastics in the environment, after the tire and road wear particles that make up a large portion of fine road debris. As early as 2007, Environment and Human Health, Inc., a lobby group, proposed a moratorium on the use of ground-up rubber tires in fields and playgrounds based on health concerns; in September 2022, the European Commission made a draft proposal to restrict the use of microplastic granules as infill in sports fields. What is less clear is how likely this pollution is, in practice, to harm humans or other organisms, and whether these environmental costs outweigh the benefits of artificial turf, with many scientific papers and government agencies (such as the United States Environmental Protection Agency) calling for more research.
A 2018 study published in Water, Air, & Soil Pollution analyzed the chemicals found in samples of tire crumbs, some used to install school athletic fields, and identified 92 chemicals, only about half of which had ever been studied for their health effects, and some of which are known to be carcinogenic or irritants. It stated that "caution would argue against use of these materials where human exposure is likely, and this is especially true for playgrounds and athletic playing fields where young people may be affected". Conversely, a 2017 study in Sports Medicine argued that "regular physical activity during adolescence and early adulthood helps prevent cancer later in life. Restricting the use or availability of all-weather year-round synthetic fields and thereby potentially reducing exercise could, in the long run, actually increase cancer incidence, as well as cardiovascular disease and other chronic illnesses." The possibility that carcinogenic substances in artificial turf could increase risks of human cancer (the artificial turf–cancer hypothesis) gained a particularly high profile in the first decades of the twenty-first century and attracted extensive study, with scientific reports around 2020 finding the cancer risks of modern artificial turf negligible. But concerns have extended to other human-health risks, such as endocrine disruption that might affect early puberty, obesity, and children's attention spans. Potential harm to fish and earthworm populations has also been shown. Environmental and safety concerns: A study for the New Jersey Department of Environmental Protection analyzed lead and other metals in dust kicked into the air by physical activity on five artificial turf fields. The results suggest that even low levels of activity on the field can cause particulate matter containing these chemicals to get into the air, where it can be inhaled and be harmful. The authors state that since no level of lead exposure is considered safe for children, "only a comprehensive mandated testing of fields can provide assurance that no health hazard on these fields exists from lead or other metals used in their construction and maintenance." Environmental and safety concerns: Kinesiological health risks: A number of health and safety concerns have been raised about artificial turf. Friction between skin and older generations of artificial turf can cause abrasions and/or burns to a much greater extent than natural grass. Artificial turf tends to retain heat from the sun and can be much hotter than natural grass with prolonged exposure to the sun. There is some evidence that periodic disinfection of artificial turf is required, as pathogens are not broken down by natural processes in the same manner as on natural grass. Despite this, a 2006 study suggests certain microbial life is less active in artificial turf. There is evidence showing higher rates of player injury on artificial turf. By November 1971, the injury toll on first-generation artificial turf had reached a threshold that resulted in congressional hearings by the House subcommittee on commerce and finance. In a study performed by the National Football League Injury and Safety Panel, published in the October 2012 issue of the American Journal of Sports Medicine, Elliott B. Hershman et al. reviewed injury data from NFL games played between 2000 and 2009, finding that "the injury rate of knee sprains as a whole was 22% higher on FieldTurf than on natural grass.
While MCL sprains did not occur at a rate significantly higher than on grass, rates of ACL sprains were 67% higher on FieldTurf." Metatarsophalangeal joint sprain, known as "turf toe" when the big toe is involved, is so named because the injury is associated with playing sports on rigid surfaces such as artificial turf, and it is a fairly common injury among professional American football players. Artificial turf is a harder surface than grass and does not have much "give" when forces are placed on it.
**Climate of New England** Climate of New England: The climate of New England varies greatly across its 500-mile (800 km) span from northern Maine to southern Connecticut. Maine, Vermont, New Hampshire, and most of interior Massachusetts have a humid continental climate (Dfb under the Köppen climate classification). In this region, the winters are long and cold, and heavy snow is common, courtesy of both coastal and continental low-pressure systems. Most (but not all) locations in this region receive between 60 and 120 inches (1.52 to 3.05 metres) of snow annually. The summer months are moderately warm with only rare instances of excessive heat, and summer in this region is rather short. Annual rainfall has historically been spread evenly throughout the year, although climate change may be partly responsible for increasingly frequent droughts in the region during the summer months. Cities like Bangor, Maine; Portland, Maine; Manchester, New Hampshire; Burlington, Vermont; and Pittsfield, Massachusetts average around 45 inches (1,100 mm) of rainfall and 60 to 90 inches (1.52 to 2.29 m) of snow annually. The frost-free growing season ranges from just 90 days in far northern Maine and in the valleys of the White and Green Mountains, to as much as 140 days along the southern Maine coast and in most of western Massachusetts. Climate of New England: In eastern Massachusetts, northern Rhode Island, and northern Connecticut, a hot-summer version of the humid continental climate (Köppen Dfa) prevails. Here summers are hotter and winters shorter, with less snowfall. Cities like Boston, Hartford, and Providence generally receive 35 to 50 inches (0.89 to 1.27 metres) of snow annually, with much of this snowfall coming in just a few sudden bursts each winter due to powerful Nor'easter cyclones. Summers are often hot and humid, with high temperatures in the lower Connecticut River valley of southern Massachusetts and Connecticut regularly between 85 and 90 °F (29 and 32 °C) during June, July, and August. Convective thunderstorms are common in these months as well, and some can become severe. The frost-free growing season ranges from 140 days in parts of central Massachusetts to near 160 days across interior Connecticut and most of Rhode Island. Coastal Rhode Island and southern Connecticut form the broad transition zone from continental climates to the north to temperate climates (called subtropical in some climate classifications) to the south. In this region, summers can be quite long and hot, with humid, tropical air masses common between May and September. Convective thundershowers are common in summer. The coast of Connecticut, from Stamford through the New Haven area to New London, together with the Westerly and Newport areas of Rhode Island, is usually the mildest part of New England in winter. Winter precipitation in this area frequently falls in the form of rain or a wintry mix of sleet, rain, and wet snow. Seasonal snowfall is far lower across far southern Connecticut and coastal Rhode Island than across interior and northern coastal areas (only 24 to 30 inches or 0.61 to 0.76 metres of snow annually), and in some years little snow falls. Cold snaps in this far southern zone also tend to be shorter and less intense than at points north. Winters also tend to be sunnier and warmer in southern Connecticut and southern Rhode Island compared to northern and central New England.
The frost-free growing season approaches 200 days along the Connecticut coast. Tropical cyclones sometimes directly impact New England. The 1938 New England hurricane and Hurricane Carol in 1954 were especially devastating storms which made landfall in southern New England. Other tropical systems that have directly impacted the region include Hurricane Donna, Hurricane Gloria, Hurricane Bob, Hurricane Irene, Hurricane Sandy, and Tropical Storm Isaias. While infrequent, tornadoes occasionally occur in the region, with notable events including the 1953 Worcester tornado, the Windsor Locks, Connecticut, tornado in 1979, and the 2011 New England tornado outbreak, which produced several destructive twisters throughout much of the region.
**Constant chord theorem** Constant chord theorem: The constant chord theorem is a statement in elementary geometry about a property of certain chords in two intersecting circles. Constant chord theorem: The circles k1 and k2 intersect in the points P and Q. Z1 is an arbitrary point on k1 different from P and Q. The lines Z1P and Z1Q intersect the circle k2 in P1 and Q1. The constant chord theorem then states that the length of the chord P1Q1 in k2 does not depend on the location of Z1 on k1; in other words, the length is constant. Constant chord theorem: The theorem remains valid when Z1 coincides with P or Q, provided one replaces the then-undefined line Z1P or Z1Q by the tangent to k1 at Z1. A similar theorem exists in three dimensions for the intersection of two spheres. The spheres k1 and k2 intersect in the circle ks. Z1 is an arbitrary point on the surface of the first sphere k1 that is not on the intersection circle ks. The extended cone created by ks and Z1 intersects the second sphere k2 in a circle. The length of the diameter of this circle is constant; that is, it does not depend on the location of Z1 on k1. Nathan Altshiller Court described the constant chord theorem in 1925 in the article "Sur deux cercles sécants" for the Belgian math journal Mathesis. Eight years later he published "On Two Intersecting Spheres" in the American Mathematical Monthly, which contained the 3-dimensional version. Later it was included in several textbooks, such as Ross Honsberger's Mathematical Morsels and Roger B. Nelsen's Proof Without Words II, where it was given as a problem, and in the German geometry textbook Mit harmonischen Verhältnissen zu Kegelschnitten by Halbeisen, Hungerbühler and Läuchli, where it was given as a theorem.
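The statement is easy to probe numerically. The sketch below is an illustrative check, not a proof, using an arbitrarily chosen pair of intersecting circles: it samples random positions of Z1 on k1 and prints the length of the resulting chord P1Q1 on k2, which comes out the same each time up to floating-point error.

```python
import math, random

# Numerical check of the constant chord theorem with an arbitrary
# pair of intersecting circles (chosen here for illustration only):
# k1: center (0,0), radius 2;  k2: center (3,0), radius 2.5.

C1, R1 = (0.0, 0.0), 2.0
C2, R2 = (3.0, 0.0), 2.5

# Intersection points P, Q of k1 and k2 (standard two-circle formula)
d = math.dist(C1, C2)
a = (R1**2 - R2**2 + d**2) / (2 * d)
h = math.sqrt(R1**2 - a**2)
P, Q = (a, h), (a, -h)

def second_intersection(z, s):
    """Second intersection of the line through z and s with k2,
    where s already lies on k2. Parametrize x = z + t*(s - z);
    t = 1 is a known root of the quadratic |x - C2|^2 = R2^2, so by
    Vieta's formulas the other root equals (constant term)/(leading term)."""
    dx, dy = s[0] - z[0], s[1] - z[1]
    fx, fy = z[0] - C2[0], z[1] - C2[1]
    qa = dx * dx + dy * dy              # leading coefficient
    qc = fx * fx + fy * fy - R2**2      # constant term
    t = qc / qa                         # second root (product of roots, one root is 1)
    return (z[0] + t * dx, z[1] + t * dy)

for _ in range(5):
    theta = random.uniform(0, 2 * math.pi)
    Z1 = (R1 * math.cos(theta), R1 * math.sin(theta))
    P1 = second_intersection(Z1, P)
    Q1 = second_intersection(Z1, Q)
    print(f"|P1Q1| = {math.dist(P1, Q1):.10f}")
```

Every sampled position of Z1 yields the same chord length, as the theorem asserts (positions of Z1 coinciding exactly with P or Q, where the secant degenerates to a tangent, have probability zero in this sampling).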
**Plasma diagnostics** Plasma diagnostics: Plasma diagnostics are a set of methods, instruments, and experimental techniques used to measure properties of a plasma, such as the density of the plasma components, their distribution functions over energy (temperature), and their spatial profiles and dynamics, from which plasma parameters can be derived.
Invasive probe methods: Ball-pen probe A ball-pen probe is a novel technique used to measure the plasma potential directly in magnetized plasmas. The probe was invented by Jiří Adámek at the Institute of Plasma Physics AS CR in 2004. The ball-pen probe balances the electron saturation current to the same magnitude as the ion saturation current. In this case, its floating potential becomes identical to the plasma potential. This goal is attained by a ceramic shield, which screens off an adjustable part of the electron current from the probe collector due to the much smaller gyroradius of the electrons. The electron temperature is proportional to the difference between the ball-pen probe potential (the plasma potential) and the Langmuir probe potential (the floating potential). Thus, the electron temperature can be obtained directly with high temporal resolution without an additional power supply.
Invasive probe methods: Faraday cup The conventional Faraday cup is applied for measurements of ion (or electron) flows from plasma boundaries and for mass spectrometry.
Invasive probe methods: Langmuir probe Measurements with electric probes, called Langmuir probes, are the oldest and most often used procedures for low-temperature plasmas. The method was developed by Irving Langmuir and his co-workers in the 1920s, and has since been further developed in order to extend its applicability to more general conditions than those presumed by Langmuir. Langmuir probe measurements are based on the estimation of the current-versus-voltage characteristic of a circuit consisting of two metallic electrodes that are both immersed in the plasma under study. Two cases are of interest: (a) The surface areas of the two electrodes differ by several orders of magnitude. This is known as the single-probe method.
Invasive probe methods: (b) The surface areas are very small in comparison with the dimensions of the vessel containing the plasma, and approximately equal to each other. This is the double-probe method.
Invasive probe methods: Conventional Langmuir probe theory assumes collisionless movement of charge carriers in the space-charge sheath around the probe. Furthermore, it is assumed that the sheath boundary is well-defined and that beyond this boundary the plasma is completely undisturbed by the presence of the probe. This means that the electric field caused by the difference between the potential of the probe and the plasma potential at the place where the probe is located is limited to the volume inside the probe sheath boundary.
Invasive probe methods: The general theoretical description of a Langmuir probe measurement requires the simultaneous solution of the Poisson equation, the collision-free Boltzmann equation or Vlasov equation, and the continuity equation, with regard to the boundary condition at the probe surface and requiring that, at large distances from the probe, the solution approaches that expected in an undisturbed plasma.
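In the single-probe method, the electron-retardation region of the current-voltage characteristic is exponential, I_e(V) = I_es · exp((V − V_p)/T_e) with T_e expressed in electronvolts, so the electron temperature is the inverse slope of ln I_e versus V. The sketch below illustrates the idea with synthetic data; the temperature, saturation current, and plasma potential are invented values, not measurements:

```python
# Minimal sketch: extract the electron temperature from the exponential
# (electron-retardation) region of a synthetic Langmuir I-V curve.
import numpy as np

T_e_true = 2.0      # electron temperature in eV (assumed)
I_e_sat = 1e-3      # electron saturation current in A (assumed)
V_plasma = 5.0      # plasma potential in V (assumed)

# Probe voltages below the plasma potential, where
# I_e(V) = I_e_sat * exp((V - V_plasma) / T_e), with T_e in eV.
V = np.linspace(-5.0, 4.0, 50)
I = I_e_sat * np.exp((V - V_plasma) / T_e_true)

# ln(I) is linear in V with slope 1/T_e, so a least-squares line fit
# recovers the electron temperature.
slope, _ = np.polyfit(V, np.log(I), 1)
print(f"fitted T_e = {1.0 / slope:.2f} eV")   # ~2.00 eV
```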
Invasive probe methods: Magnetic (B-dot) probe If the magnetic field in the plasma is not stationary, either because the plasma as a whole is transient or because the fields are periodic (radio-frequency heating), the rate of change of the magnetic field with time (Ḃ, read "B-dot") can be measured locally with a loop or coil of wire. Such coils exploit Faraday's law, whereby a changing magnetic field induces an electric field. The induced voltage can be measured and recorded with common instruments.
Invasive probe methods: Also, by Ampère's law, the magnetic field is proportional to the currents that produce it, so the measured magnetic field gives information about the currents flowing in the plasma. Both currents and magnetic fields are important in understanding fundamental plasma physics.
Invasive probe methods: Energy analyzer An energy analyzer is a probe used to measure the energy distribution of the particles in a plasma. The charged particles are typically separated by their velocities from the electric and/or magnetic fields in the energy analyzer, and then discriminated by only allowing particles within the selected energy range to reach the detector. Energy analyzers that use an electric field as the discriminator are also known as retarding field analyzers. Such an analyzer usually consists of a set of grids biased at different potentials to set up an electric field that repels particles with less than the desired energy away from the detector. In contrast, energy analyzers that employ a magnetic field as the discriminator are very similar to mass spectrometers. Particles travel through a magnetic field in the probe and require a specific velocity in order to reach the detector. These were first developed in the 1960s, and are typically built to measure ions. (The size of the device is on the order of the particle's gyroradius, because the discriminator intercepts the path of the gyrating particle.) The energy of neutral particles can also be measured by an energy analyzer, but they first have to be ionized by an electron-impact ionizer.
Invasive probe methods: Proton radiography Proton radiography uses a proton beam from a single source to interact with the magnetic field and/or the electric field in the plasma, and the intensity profile of the beam is measured on a screen after the interaction. The magnetic and electric fields in the plasma deflect the beam's trajectory, and the deflection causes modulation in the intensity profile. From the intensity profile, one can measure the integrated magnetic field and/or electric field.
Invasive probe methods: Self-excited electron plasma resonance spectroscopy (SEERS) Nonlinear effects like the I-V characteristic of the boundary sheath are utilized for Langmuir probe measurements, but they are usually neglected in the modelling of RF discharges because their mathematical treatment is very inconvenient. Self-excited electron plasma resonance spectroscopy (SEERS) utilizes exactly these nonlinear effects and known resonance effects in RF discharges. The nonlinear elements, in particular the sheaths, provide harmonics in the discharge current and excite the plasma and the sheath at their series resonance, characterized by the so-called geometric resonance frequency.
Invasive probe methods: SEERS provides the spatially and reciprocally averaged electron plasma density and the effective electron collision rate. The electron collision rate reflects stochastic (pressure) heating and ohmic heating of the electrons.
The model for the plasma bulk is based on a 2D fluid model (zeroth- and first-order moments of the Boltzmann equation) and the full set of Maxwell's equations, leading to the Helmholtz equation for the magnetic field. The sheath model is based additionally on the Poisson equation.
Passive spectroscopy: Passive spectroscopic methods simply observe the radiation emitted by the plasma. Doppler shift If the plasma (or one ionic component of the plasma) is flowing in the direction of the line of sight to the observer, emission lines will be seen at a different frequency due to the Doppler effect.
Passive spectroscopy: Doppler broadening The thermal motion of ions will result in a shift of emission lines up or down, depending on whether the ion is moving toward or away from the observer. The magnitude of the shift is proportional to the velocity along the line of sight. The net effect is a characteristic broadening of spectral lines, known as Doppler broadening, from which the ion temperature can be determined.
Passive spectroscopy: Stark effect The splitting of some emission lines due to the Stark effect can be used to determine the local electric field. Stark broadening Irrespective of the presence of macroscopic electric fields, any single atom is affected by microscopic electric fields due to the neighboring charged plasma particles. This results in the Stark broadening of spectral lines, which can be used to determine the plasma density. Spectral line ratios The brightness of spectral lines emitted by atoms in a plasma depends on the plasma temperature and density. If a sufficiently complete collisional-radiative model is used, the temperature (and, to a lesser degree, the density) of plasmas can often be inferred by taking ratios of the emission intensities of various atomic spectral lines. Zeeman effect The presence of a magnetic field splits the atomic energy levels due to the Zeeman effect. This leads to broadening or splitting of spectral lines. Analyzing these lines can, therefore, yield the magnetic field strength in the plasma.
Active spectroscopy: Active spectroscopic methods stimulate the plasma atoms in some way and observe the result (emission of radiation, absorption of the stimulating light, or others).
Active spectroscopy: Absorption spectroscopy By shining a laser through the plasma with a wavelength tuned to a certain transition of one of the species present in the plasma, the absorption profile of that transition can be obtained. This profile provides information not only on the plasma parameters that could be obtained from the emission profile, but also on the line-integrated number density of the absorbing species.
Active spectroscopy: Beam emission spectroscopy A beam of neutral atoms is fired into a plasma. Some atoms are excited by collisions within the plasma and emit radiation. This can be used to probe density fluctuations in a turbulent plasma.
Active spectroscopy: Charge exchange recombination spectroscopy In very hot plasmas (as in magnetic fusion experiments), light elements are fully ionized and do not emit line radiation. When a beam of neutral atoms is fired into the plasma, electrons from beam atoms are transferred to hot plasma ions, which form hydrogenic ions that promptly emit line radiation. This radiation is analyzed for ion density, temperature, and velocity.
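The Doppler-broadening relation described above can be made quantitative: for a Maxwellian species of mass m radiating at rest wavelength λ0, the Gaussian full width at half maximum is Δλ = λ0 · sqrt(8 ln 2 · kT / (mc²)), which inverts directly to a temperature. A minimal sketch follows; the choice of the H-alpha line and the measured width are invented example inputs:

```python
# Sketch: invert the Doppler-broadening formula for the ion temperature.
#   FWHM = lambda0 * sqrt(8*ln(2) * k*T / (m*c^2))
#   =>  T = (m*c^2 / (8*ln(2)*k)) * (FWHM / lambda0)^2
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
c = 2.99792458e8        # speed of light, m/s
m_p = 1.67262192e-27    # proton mass, kg

lambda0 = 656.28e-9     # H-alpha rest wavelength, m (example line)
fwhm = 0.05e-9          # measured Gaussian FWHM, m (invented value)
m_ion = m_p             # hydrogen plasma assumed

T = (m_ion * c**2 / (8 * math.log(2) * k_B)) * (fwhm / lambda0) ** 2
print(f"ion temperature ~ {T:.0f} K ({T * k_B / 1.602e-19:.2f} eV)")
```

With these invented numbers the result is on the order of 10^4 K (about 1 eV), a typical low-temperature-plasma value.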
Active spectroscopy: Laser-induced fluorescence If the plasma is not fully ionized but contains ions that fluoresce, laser-induced fluorescence can provide very detailed information on temperature, density, and flows.
Active spectroscopy: Photodetachment Photodetachment combines Langmuir probe measurements with an incident laser beam. The incident laser beam is optimised spatially, spectrally, and in pulse energy to detach an electron bound to a negative ion. Langmuir probe measurements are conducted to measure the electron density in two situations, one without the incident laser and one with the incident laser. The increase in the electron density with the incident laser gives the negative ion density.
Active spectroscopy: Motional Stark effect If an atom is moving in a magnetic field, the Lorentz force will act in opposite directions on the nucleus and the electrons, just as an electric field does. In the frame of reference of the atom, there is an electric field, even if there is none in the laboratory frame. Consequently, certain lines will be split by the Stark effect. With an appropriate choice of beam species, velocity, and geometry, this effect can be used to determine the magnetic field in the plasma.
Active spectroscopy: Two-photon absorption laser-induced fluorescence Two-photon absorption laser-induced fluorescence (TALIF) is a modification of the laser-induced fluorescence technique. In this approach the upper level is excited by absorbing two photons, and the subsequent fluorescence caused by the radiative decay of the excited level is observed. TALIF is able to give a measure of absolute ground-state atomic densities, such as those of hydrogen, oxygen, and nitrogen. However, this is only possible with a suitable calibration, which can be done either by a titration method or, more recently, by comparison with a noble gas. TALIF is able to give information not only on atomic densities, but also on the temperatures of species. However, this requires lasers with a high spectral resolution to separate the Gaussian contribution of the temperature broadening from the natural broadening of the two-photon excitation profile and the spectral broadening of the laser itself.
Optical effects from free electrons: The optical diagnostics above measure line radiation from atoms. Alternatively, the effects of free charges on electromagnetic radiation can be used as a diagnostic. Electron cyclotron emission In magnetized plasmas, electrons will gyrate around magnetic field lines and emit cyclotron radiation. The frequency of the emission is given by the cyclotron resonance condition. In a sufficiently thick and dense plasma, the intensity of the emission will follow Planck's law and depend only on the electron temperature. Faraday rotation The Faraday effect will rotate the plane of polarization of a beam passing through a plasma with a magnetic field in the direction of the beam. This effect can be used as a diagnostic of the magnetic field, although the information is mixed with the density profile and is usually an integral value only. Interferometry If a plasma is placed in one arm of an interferometer, the phase shift will be proportional to the plasma density integrated along the path.
Optical effects from free electrons: Thomson scattering Scattering of laser light from the electrons in a plasma is known as Thomson scattering. The electron temperature can be determined very reliably from the Doppler broadening of the laser line.
The electron density can be determined from the intensity of the scattered light, but a careful absolute calibration is required. Although Thomson scattering is dominated by scattering from electrons, the electrons interact with the ions, so in some circumstances information on the ion temperature can also be extracted.
Neutron diagnostics: Fusion plasmas using D-T fuel produce 3.5 MeV alpha particles and 14.1 MeV neutrons. By measuring the neutron flux, plasma properties such as the ion temperature and fusion power can be determined.
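The interferometry paragraph above has a standard quantitative form: for a probe frequency well above the plasma frequency, the phase shift is Δφ = r_e · λ · ∫ n_e dl, where r_e is the classical electron radius and λ the probe wavelength, so a measured phase shift converts directly to a line-integrated electron density. A sketch with invented inputs (the CO2-laser wavelength is a common choice; the phase shift and chord length are made up for the example):

```python
# Sketch: line-integrated electron density from an interferometer
# phase shift, valid when the probe frequency is far above the
# plasma frequency:  dphi = r_e * lambda * integral(n_e dl).
import math

r_e = 2.8179403262e-15     # classical electron radius, m
wavelength = 10.6e-6       # CO2 laser wavelength, m (common choice)
dphi = 2.0 * math.pi       # measured phase shift, rad (invented value)

n_e_dl = dphi / (r_e * wavelength)   # line-integrated density, m^-2
print(f"integral of n_e dl ~ {n_e_dl:.2e} m^-2")

# For an assumed 1 m chord through the plasma, the average density:
print(f"average n_e over 1 m ~ {n_e_dl / 1.0:.2e} m^-3")
```

With these numbers the average density comes out near 2 × 10^20 m^-3, a plausible magnetic-fusion value.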
**Tritium (programming language)** Tritium (programming language): Tritium is a simple scripting language for efficiently transforming structured data like HTML, XML, and JSON. It is similar in purpose to XSLT but has a syntax influenced by jQuery, Sass, and CSS, in contrast to XSLT's XML-based syntax.
History: Tritium was designed by Hampton Catlin, the creator of the Sass and Haml languages, and is currently bundled with the Moovweb mobile platform. As with Sass (created to address deficiencies in CSS) and Haml (created to address deficiencies in coding HTML templates), Catlin designed Tritium to address issues he saw with XSLT while preserving the core benefits of a transformation language. Much of this was based on his prior experience porting Wikipedia's desktop website to the mobile web. Open Tritium is the open source implementation of the Tritium language. It was presented at the O'Reilly Open Source Convention 2014, and the compiler is implemented in Go.
Concept: Tritium takes as input HTML, XML, or JSON documents and outputs HTML, XML, or JSON data that has been transformed according to the rules defined in the Tritium script. Like jQuery, idiomatic Tritium code is structured around selecting a collection of elements via a CSS or XPath selector and then chaining a series of operations on them. For example, a script might select all the HTML table elements with an id of foo and change their width attributes to 100%, as in the first sketch below.
Concept: While Tritium supports both XPath and CSS selectors via the $() and $$() functions (respectively), the preferred usage is XPath; the second sketch below shows the same transformation written with the equivalent XPath selector.
Comparison to XSLT: Both Tritium and XSLT are designed for transforming data. However, Tritium differs in key ways that make it more familiar and easier to use for web developers: Familiar syntax: Tritium's syntax is similar to CSS and jQuery, so it is more familiar and readable to web developers than the XML-based syntax of XSLT. Imperative style: Tritium uses an imperative programming style instead of the functional and recursive processing model of XSLT. While functional programming has key advantages, it is less familiar to web designers than imperative programming. Input transparency: In XSLT, any input elements that are not specified by a transform rule are removed from the output. Tritium reverses this behavior: any input elements that are not specified by a transform rule are passed to the output unchanged. HTML-compatible: Tritium was designed to process HTML, XML, and JSON, whereas XSLT only works on XML.
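For illustration, the two transformations referenced above might look like the sketches below. The $$() and $() selector functions are named in the article; the attributes() call and the block syntax are reconstructed from Moovweb-era Tritium documentation and should be treated as assumptions rather than verified syntax:

```tritium
# CSS-selector form: select every table with id "foo" and set its
# width attribute to 100% (attributes() is an assumed function name).
$$("table#foo") {
  attributes(width: "100%")
}

# The same transformation using the preferred XPath selector.
$("//table[@id='foo']") {
  attributes(width: "100%")
}
```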
**Iron cross (gymnastics)** Iron cross (gymnastics): An iron cross is a gymnastics skill on the rings in which the body is suspended upright while the arms are extended laterally, forming the shape of the Christian cross. It is a move that requires significant shoulder and bicep tendon strength. Other common variations of the move include the vertically inverted cross and the Maltese cross, in which the gymnast holds his body parallel to the ground at ring height with arms extended laterally. The International Gymnastics Federation Code of Points lists the iron cross (or L-cross) as a "B" value skill.
**Bridging (programming)** Bridging (programming): In computer science, bridging describes systems that map the runtime behaviour of different programming languages so they can share common resources. They are often used to allow "foreign" languages to operate a host platform's native object libraries, translating data and state across the two sides of the bridge. Bridging contrasts with "embedding" systems, which allow limited interaction through a black-box mechanism where state sharing is limited or non-existent.
Bridging (programming): Apple Inc. has made heavy use of bridging on several occasions, notably in early versions of Mac OS X, which bridged to older "classic" systems using the Carbon system, as well as to Java. Microsoft's Common Language Runtime, introduced with the .NET Framework, was designed to be multi-language from the start and avoided the need for extensive bridging solutions. Both platforms have more recently added new bridging systems for JavaScript: Apple's ObjC-to-JS and Microsoft's HTML Bridge.
Concepts: Functions, libraries and runtimes Most programming languages include the concept of a subroutine or function, a mechanism that allows commonly used code to be encapsulated and re-used throughout a program. For instance, a program that makes heavy use of mathematics might need to perform the square root calculation on various numbers throughout the program, so this code might be isolated in a sqrt(aNumber) function that is "passed in" the number to perform the square root calculation on, and "returns" the result. In many cases the code in question already exists, either implemented in hardware or as part of the underlying operating system the program runs within. In these cases the sqrt function can be further simplified by calling the built-in code.
Concepts: Functions often fall into easily identifiable groups of similar capabilities, mathematics functions for instance, or functions for handling text files. Functions are often gathered together in collections known as libraries that are supplied with the system or, more commonly in the past, with the programming language. Each language has its own method of calling functions, so the libraries written for one language may not work with another; the semantics for calling functions in C differ from those in Pascal, so generally C programs cannot call Pascal libraries and vice versa. The commonly used solution to this problem is to pick one set of call semantics as the default system for the platform, and then have all programming languages conform to that standard.
Concepts: Most computer languages and platforms have added functionality that cannot be expressed in the call/return model of the function. Garbage collection, for instance, runs throughout the lifetime of the application's run. This sort of functionality is effectively "outside" the program: it is present but not expressed directly in the program itself. Functions like these are generally implemented in ever-growing runtime systems, libraries that are compiled into programs but not necessarily visible within the code.
Concepts: Shared libraries and common runtimes The introduction of shared library systems changed the model of conventional program construction considerably. In the past, library code was copied directly into programs by the "linker" and effectively became part of the program. With dynamic linking the library code (normally) exists in only one place, a vendor-provided file in the system that all applications share.
Early systems presented many problems, often in performance terms, and shared libraries were largely isolated to particular languages or platforms, as opposed to the operating system as a whole. Many of these problems were addressed through the 1990s, and by the early 2000s most major platforms had switched to shared libraries as the primary interface to the entire system.
Concepts: Although such systems addressed the problem of providing common code libraries for new applications, these systems generally added their own runtimes as well. This meant that the language, library, and now the entire system were often tightly linked together. For instance, under OpenStep the entire operating system was, in effect, an Objective-C program. Any programs running on it that wished to use the extensive object suite provided in OpenStep would not only have to be able to call those libraries using Obj-C semantics, but also interact with the Obj-C runtime to provide basic control over the application.
Concepts: In contrast, Microsoft's .NET Framework was designed from the start to be able to support multiple languages, initially C#, C++ and a new version of Visual Basic. To do this, MS isolated the object libraries and the runtime into the Common Language Infrastructure (CLI). Instead of programs compiling directly from the source code to the underlying runtime format, as is the case in most languages, under the CLI model all languages are first compiled to the Common Intermediate Language (CIL), which then runs on the Common Language Runtime (CLR). In theory, any programming language can use the CLI system and use .NET objects.
Concepts: Bridging Although platforms like OS X and .NET offer the ability for most programming languages to be adapted to the platform's runtime system, it is also the case that these programming languages often have a target runtime in mind: Objective-C essentially requires the Obj-C runtime, while C# does the same for the CLR. If one wants to use C# code within Obj-C, or vice versa, one has to find a version written to use the other runtime, which often does not exist.
Concepts: A more common version of this problem concerns the use of languages that are platform independent, like Java, which have their own runtimes and libraries. Although it is possible to build a Java compiler that calls the underlying system, like J#, such a system would not be able to interact with other Java code unless it, too, was re-compiled. Access to code in Java libraries may be difficult or impossible.
Concepts: The rise of the web browser as a sort of virtual operating system has made this problem more acute. The modern "programming" paradigm under HTML5 includes the JavaScript (JS) language, the Document Object Model as a major library, and the browser itself as a runtime environment. It would be possible to build a version of JS that runs on the CLR, but this would largely defeat the purpose of a language designed largely for operating browsers: unless that compiler can interact with the browser directly, there is little purpose in using it.
Concepts: In these cases, and many like them, the need arises for a system that allows the two runtimes to interoperate. This is known as "bridging" the runtimes.
Examples: Apple Apple has made considerable use of bridging technologies since the earliest efforts that led to Mac OS X.
Examples: When NeXT was first purchased by Apple, the plan was to build a new version of OpenStep, then known as Rhapsody, with an emulator known as the Blue Box that would run "classic" Mac OS programs. This led to considerable push-back from the developer community, and Rhapsody was cancelled. In its place, OS X would implement many of the older Mac OS calls on top of core functionality in OpenStep, providing a path for existing applications to be gracefully migrated forward.
Examples: To do this, Apple took useful code from the OpenStep platform and re-implemented the core functionality in a pure-C library known as Core Foundation, or CF for short. OpenStep's libraries, calling the underlying CF code, became the Cocoa API, while the new Mac-like C libraries became the Carbon API. As the C and Obj-C sides of the system needed to share data, and the data on the Obj-C side was normally stored in objects (as opposed to base types), conversions to and from CF could be expensive. Apple was not willing to pay this performance penalty, so they implemented a scheme known as "toll-free bridging" to help reduce or eliminate this problem. At the time, Java was becoming a major player in the programming world, and Apple also provided a Java bridging solution that was developed for the WebObjects platform. This was a more classical bridging solution, with direct conversions between Java and OpenStep/CF types being completed in code where required. Under Carbon, a program using CFStrings was using the same code as a Cocoa application using NSString, and the two could be bridged toll-free. With the Java bridge, CFStrings were instead cast into Java's own String objects, which required more work but made porting essentially invisible. Other developers made widespread use of similar technologies to provide support for other languages, including the "peering" system used to allow Obj-C code to call .NET code under Mono. As the need for these porting solutions waned, both Carbon and the Java Bridge were deprecated and eventually removed from later releases of the system. Java support was migrated to the Java Native Interface (JNI), a standard from the Java world that allowed Java to interact with C-based code. On OS X, the JNI allowed Obj-C code to be used, with some difficulty. Around 2012, Apple's extensive work on WebKit led to the introduction of a new bridging technology that allows JavaScript program code to call into the Obj-C/Cocoa runtime, and vice versa. This allows browser automation using Obj-C, or alternately, the automation of Cocoa applications using JavaScript. Originally part of the Safari web browser, in 2013 the code was promoted to be part of the new OS X 10.9.
Examples: Microsoft Although there are some examples of bridging being used in the past, Microsoft's CLI system was intended to support languages on top of the .NET system rather than running under native runtimes and bridging to them. This led to a number of new languages being implemented in the CLI system, often including either a hash mark (#) or "Iron" in their name. See the List of CLI languages for a more comprehensive set of examples. This concept was seen as an example of MS's embrace, extend and extinguish behaviour, as it produced Java-like languages (C# and J# for instance) that did not work with other Java code or its libraries.
Examples: Nevertheless, the "classic" Windows ecosystem included considerable code that would be needed to be used within the .NET world, and for this role MS introduced a well supported bridging system. The system included numerous utilities and language features to ease the use of Windows or Visual Basic code within the .NET system, or vice versa.Microsoft has also introduced a JavaScript bridging technology for Silverlight, the HTML Bridge. The Bridge exposes JS types to .NET code, .NET types to JS code, and manages memory and access safety between them. Examples: Other examples Similar bridging technologies, often with JavaScript on one side, are common on various platforms. One example is JS bridge for the Android OS written as an example.The term is also sometimes used to describe object-relational mapping systems, which bridge the divide between the SQL database world and modern object programming languages.
**Amazon Kindle** Amazon Kindle: Amazon Kindle is a series of e-readers designed and marketed by Amazon. Amazon Kindle devices enable users to browse, buy, download, and read e-books, newspapers, magazines and other digital media via wireless networking to the Kindle Store. The hardware platform, which Amazon subsidiary Lab126 developed, began as a single device in 2007. Currently, it comprises a range of devices, including e-readers with E Ink electronic paper displays and Kindle applications on all major computing platforms. All Kindle devices integrate with Windows and macOS file systems and Kindle Store content and, as of March 2018, the store had over six million e-books available in the United States.
Naming and evolution: In 2004, Amazon founder and CEO Jeff Bezos instructed the company's employees to build the world's best e-reader before Amazon's competitors could. Amazon originally used the codename Fiona for the device. Branding consultants Michael Cronan and Karin Hibma devised the Kindle name. Lab126 asked them to name the product, and they suggested "kindle", meaning to light a fire. They felt this was an apt metaphor for reading and intellectual excitement. Kindle hardware evolved from the original Kindle introduced in 2007 and the Kindle DX (with its larger 9.7" screen) introduced in 2009. The DX remained the only non-6" E Ink Kindle device until the 2017 introduction of the Oasis 2. The range included early-generation devices with a keyboard (Kindle Keyboard), devices with touch-sensitive, lighted, high-resolution screens (Kindle Paperwhite), early generations of a tablet computer with the Kindle app (Kindle Fire), and low-priced devices with a touch-sensitive screen (Kindle 7). However, the Kindle e-reader has generally remained a narrow-purpose device dedicated to reading, rather than multipurpose hardware that might create distractions while reading. Active Content support was introduced in 2010, only to be dropped from new Kindle devices in late 2014. After an initial three generations, the Kindle Fire tablet branding was changed in 2014 to Amazon Fire, reflecting the devices' wider capabilities as Android-derived tablets. Other later developments include devices with larger E Ink displays, such as the Kindle Oasis 2 (2017) at 7" and the Paperwhite 5 (2021) at 6.8", as well as a device with a 10.2" screen and Wacom stylus support called the Kindle Scribe (2022). In 2022 Amazon also introduced the 11th-generation Kindle with a 300 PPI display, ending the use of the 6" 167 PPI display that had been on every basic Kindle since 2007.
Naming and evolution: Amazon has also introduced Kindle apps for use on various devices and platforms, including Windows, macOS, Android, iOS, BlackBerry 10 and Windows Phone. Amazon also has a cloud reader to allow users to read e-books using modern web browsers.
Devices: Kindles with physical keyboards First generation Kindle Amazon released the Kindle, its first e-reader, on November 19, 2007, for $399. It sold out in 5.5 hours. The device remained out of stock for five months until late April 2008. The device featured a six-inch (diagonal) four-level grayscale E Ink display with 250 MB of internal storage, which can hold approximately 200 non-illustrated titles. It also has a speaker and a headphone jack for listening to audio files, and expandable storage via an SD card slot. Content was available from Amazon over the Sprint Corporation's US-wide EVDO 3G data network, using a dedicated connection protocol that Amazon called Whispernet.
Amazon did not sell the first-generation Kindle outside of the US.
Devices: Second generation Kindle 2 On February 10, 2009, Amazon announced the Kindle 2, the second-generation Kindle. It became available for purchase on February 23, 2009. The Kindle 2 features a text-to-speech option to read the text aloud. It also has a 6-inch screen and 2 GB of internal memory, of which 1.4 GB is user-accessible. By Amazon's estimates, the Kindle 2 can hold about 1,500 non-illustrated books. Unlike the first-generation Kindle, the Kindle 2 does not have a slot for SD memory cards. It is slimmer than the original Kindle.
Devices: The Kindle 2 features a Freescale 532 MHz, ARM-11, 90 nm processor, 32 MB of main memory, 2 GB of flash memory and a 3.7 V, 1,530 mAh lithium polymer battery. To promote the Kindle 2, in February 2009 author Stephen King released Ur, his then-new novella, exclusively through the Kindle Store. Kindle 2 international On October 7, 2009, Amazon announced an international version of the Kindle 2 with the ability to download e-books wirelessly. This version was released in over 100 countries and became available on October 19, 2009. The international Kindle 2 is physically the same as the U.S.-only Kindle 2, although it uses a different mobile network standard. The original Kindle 2 used CDMA2000 for use on the Sprint network, while the international version used standard GSM and 3G GSM, enabling it to be used on AT&T's U.S. mobile network and internationally in 100 other countries, with Amazon offering free unlimited roaming.
Devices: Kindle DX Amazon launched the Kindle DX on May 6, 2009. At 9.7 inches, this device had the largest Kindle screen until the release of the Scribe. Its pixel density of 150 ppi was the lowest of any E Ink Kindle device. It supports displaying PDF files. It was marketed as more suitable for displaying newspaper and textbook content, includes built-in speakers, and has an accelerometer that enables users to rotate pages between landscape and portrait orientations when the Kindle DX is turned on its side. The device can only connect to Whispernet while in the U.S.
Devices: Kindle DX international On January 19, 2010, the Kindle DX international version was released in over 100 countries. The Kindle DX international version is the same as the Kindle DX, except for having support for international 3G data.
Devices: Kindle DX Graphite On July 1, 2010, Amazon released the Kindle DX Graphite (DXG) globally. The DXG has an E Ink display with a 50% better contrast ratio, due to using E Ink Pearl technology, and comes only in a graphite case color. It is speculated that the case color change is intended to further improve the perceived contrast ratio, as some users found the prior white casing highlighted that the E Ink background is light gray and not white. Like the Kindle DX, it does not have a Wi-Fi connection. The DXG is a mix of third-generation hardware and second-generation software. Its CPU has the same speed as the Kindle Keyboard's, but the DXG has only half the system memory, 128 MB. Due to these differences, the DXG runs the same firmware as the Kindle 2; therefore, the DXG cannot display international fonts, such as Cyrillic, Chinese, or any other non-Latin font, and its PDF support and web browser are limited to matching the Kindle 2's features.
Devices: Amazon withdrew the Kindle DX from sale in October 2012, but in September 2013 made it available again for a few months. Using 3G data is free when accessing the Kindle Store and Wikipedia.
Downloading personal documents via 3G data costs about $1 per megabyte. Its battery life is about one week with 3G on and two weeks with 3G off. Text-to-speech and MP3 playback are supported.
Devices: Third generation Kindle Keyboard Amazon announced the third-generation Kindle, later renamed the "Kindle Keyboard", on July 28, 2010. Amazon began accepting pre-orders for the Kindle Keyboard as soon as it was announced and began shipping the devices on August 27, 2010. On August 25, Amazon announced that the Kindle Keyboard was the fastest-selling Kindle ever. While Amazon does not officially add numbers to the end of each Kindle denoting its generation, reviewers, customers and press companies often referred to this Kindle as the "K3" or the "Kindle 3". The Kindle Keyboard has a 6-inch screen with a resolution of 600×800 (167 PPI).
The Kindle Keyboard was available in two versions. One of these, the Kindle Wi-Fi, was initially priced at $139 and connects to the Internet via Wi-Fi networks. The other version, called the Kindle 3G, was priced at $189 and includes both 3G and Wi-Fi connectivity. The built-in free 3G connectivity uses the same wireless signals that cell phones use, allowing it to download and purchase content from any location with cell service. The Kindle Keyboard is available in two colors: classic white and graphite. Both versions use an E Ink "Pearl" display, which has a higher contrast and a faster refresh rate than prior e-ink displays. However, it remains significantly slower than traditional LCDs. An ad-supported version, the "Kindle with Special Offers", was introduced on May 3, 2011, priced $25 lower than the no-ad version, at $114. On July 13, 2011, Amazon announced that due to a sponsorship with AT&T, the price of the Kindle 3G with ads would be $139, $50 less than the Kindle 3G without ads.
The Kindle Keyboard is 0.5 inches shorter and 0.5 inches narrower than the Kindle 2. It supports additional fonts and international Unicode characters and has a Voice Guide feature with spoken menu navigation from the built-in speakers or audio jack. Internal memory is expanded to 4 GB, with approximately 3 GB available for user content. Battery life is advertised at up to two months of reading half an hour a day with the wireless turned off, which amounts to roughly 30 hours.
The Kindle Keyboard generally received good reviews after launch. Review Horizon describes the device as offering "the best reading experience in its class" while Engadget states, "In the standalone category, the Kindle is probably the one to beat".
Devices: Fourth generation The fourth-generation Kindle and the Kindle Touch were announced on September 28, 2011. They retain the 6-inch, 167-PPI e-ink display of the 2010 Kindle model, with the addition of an infrared touch-screen control on the Touch. They also include Amazon's experimental web-browsing capability with Wi-Fi. On the same date, Amazon announced the Kindle Fire, a tablet computer including a Kindle app; in September 2014, "Kindle" was dropped from the Amazon Fire's name.
Devices: Kindle 4 The fourth-generation Kindle was significantly less expensive (initially $79 ad-supported, $109 no ads) and features a slight reduction in weight and size, with a reduced battery life and storage capacity, compared to the Kindle 3.
It has a silver-grey bezel, a 6-inch display, nine hard keys, a cursor pad, an on-screen rather than physical keyboard, a flash storage capacity of 2 GB, and an estimated one-month battery life under ideal reading conditions.
Devices: Kindle Touch Amazon introduced two versions of touchscreen Kindles: the Kindle Touch, available with Wi-Fi (initially $99 ad-supported, $139 no ads), and the Kindle Touch 3G, with Wi-Fi/3G connectivity (initially $149 ad-supported, $189 no ads). The latter version is capable of connecting via 3G to the Kindle Store, downloading books and periodicals, and accessing Wikipedia. Experimental web browsing (outside Wikipedia) on the Kindle Touch 3G is only available over a Wi-Fi connection (the Kindle Keyboard does not have this restriction), and usage of the 3G data is limited to 50 MB per month. Like the Kindle 3, the Kindle Touch has a capacity of 4 GB and a battery life of two months under ideal reading conditions, and is larger than the Kindle 4. The Kindle Touch was released on November 15, 2011. Amazon announced in March 2012 that the device would be available in the UK, Germany, France, Spain and Italy on April 27, 2012. The Touch was the first Kindle to support X-Ray, which lists the commonly used character names, locations, themes, or ideas in a book. In January 2013, Amazon released the 5.2.0 firmware that updated the operating system to match the Paperwhite's interface, with the Touch's MP3/audiobook capabilities remaining.
Devices: Fifth generation Kindle 5 Amazon released the Kindle 5 on September 6, 2012 ($70 ad-supported, $90 no ads). The Kindle 5 has a black bezel, differing from the Kindle 4, which was available in silver-grey, and has better display contrast. Amazon also claims that it has 15% faster page loads. It has a 167 PPI display and was the lightest Kindle, at 5.98 ounces, until 2016's Kindle Oasis.
Devices: Kindle Paperwhite (first iteration) The first-iteration Kindle Paperwhite was announced on September 6, 2012, and released on October 1. It has a 6-inch, 212 PPI E Ink Pearl display (758×1024 resolution) with four built-in LEDs to illuminate the screen. It was available in Wi-Fi ($120 ad-supported, $140 no ads) and Wi-Fi + 3G ($180 ad-supported, $200 no ads) models, with the ad-supported options only intended to be available in the United States. The light is one of the main features of the Paperwhite, and it has a manually adjustable light level. The 3G access restrictions are the same as for the Kindle Touch: usage of the 3G data is limited to 50 MB per month and only on Amazon and Wikipedia's websites; additional data may be bought. Battery life is advertised as up to eight weeks of reading half an hour per day with wireless off and constant light use; this usage equals 28 hours. The official leather cover for the Paperwhite uses a Hall effect sensor to detect when the cover is closed or opened and turns the screen off or on respectively. This was the first Kindle model to track reading speed to estimate when the reader will finish a chapter or book; this feature was later included with updates to the other models of Kindle and Kindle Fire. The Kindle Paperwhite lacks physical buttons for page turning and does not perform auto-hyphenation. Except for the lock screen/power button on its bottom, it relies solely on the touchscreen interface.
In November 2012, Amazon released the 5.3.0 update that allowed users to turn off recommended content on the home screen in Grid View (allowing two rows of user content) and included general bug fixes.
In March 2014, the Paperwhite 5.4.4 update was released, adding Goodreads integration, Kindle FreeTime to restrict usage for children, Cloud Collections for organization, and Page Flip for scanning content without losing one's place, which closely matched the Paperwhite 2's software features. The Kindle Paperwhite was released in most major international markets in early 2013, with Japan's version including 4 GB of storage, and in China on June 7, 2013; all non-Japan versions have 2 GB of storage (1.25 GB usable).
Engadget praised the Paperwhite, giving it 92 of 100. The reviewer liked the frontlit display, high contrast, and useful software features, but did not like that it was less comfortable to hold than the Nook, that the starting price included ads, and that it had no expandable storage.
Shortly after release, some users complained about the lighting implementation on the Kindle Paperwhite. Though the problem was not widespread, some users found the lighting inconsistent, causing the bottom edge to cast irregular shadows. Also, some users complained that the light could not be turned off completely.
Devices: Sixth generation Kindle Paperwhite (second iteration) Amazon announced the second-iteration Kindle Paperwhite, marketed as the "All-New Kindle Paperwhite" and colloquially referred to as the Paperwhite 2, on September 3, 2013; the Wi-Fi version was released on September 30 ($120 ad-supported, $140 no ads), and the 3G/Wi-Fi version was released in the US on November 5, 2013 ($190 ad-supported, $210 no ads). The Paperwhite 2 features higher-contrast E Ink Carta display technology, improved LED illumination, and a 25% faster processor (1 GHz) that allows for faster page turns and better response to touch input compared to the original Paperwhite. It has the same 6" screen with 212 PPI, bezel and estimated 28-hour battery life as the original Paperwhite. The software features dictionary/Wikipedia/X-Ray look-up, Page Flip that allows the user to skip ahead or back in the text in a pop-up window and go back to the previous page, and Goodreads social integration.
The Paperwhite 2 uses a similar experimental web browser with the same 3G data use restrictions as previous Kindles; there are no use restrictions when using Wi-Fi. The official Amazon leather cover for the Paperwhite 2 is the same item as was used for the original Paperwhite. The cover's magnets turn the screen on and off when it is opened and closed.
Devices: Although the Paperwhite 2 was released in 2013 with 2 GB of storage, all versions were sold with 4 GB of storage by September 2014.
Devices: Engadget rated the Paperwhite 2 as 93 of 100, saying that while it offers few new features, "an improved frontlight and some software tweaks have made an already great reading experience even better." Seventh generation Kindle 7 Amazon announced an upgraded basic Kindle and the Kindle Voyage on September 18, 2014. The Kindle 7 was released on October 2, 2014 ($80 ad-supported, $100 no ads). It is the first basic Kindle to use a touchscreen for navigating within books and to have a 1 GHz CPU. It is also the first basic Kindle available in international markets such as India, Japan and China. Amazon claims that a single charge lasts up to 30 days if used for 30 minutes a day without using Wi-Fi.
Devices: Kindle Voyage The Kindle Voyage was released on November 4, 2014, in the U.S.
It has a 6-inch, 300 ppi E Ink Carta HD display, which was the highest resolution and contrast available in e-readers as of 2014, with six LEDs and an adaptive light sensor that can automatically illuminate the screen depending on the environment. It is available in Wi-Fi ($200 ad-supported, $220 no ads) and Wi-Fi + 3G ($270 ad-supported, $290 no ads) models. It has 4 GB of storage. Its design features a flush glass screen on the front, and its rear has angular, raised plastic edges that house the power button, similar to the Fire HDX. At 0.3 inches, it is the thinnest Kindle to date. The Voyage uses "PagePress", a navigation system with sensors on either side of the screen that turn the page when pressed. PagePress may be disabled, but the touchscreen is always active.
Devices: The Verge rated the Voyage as 9.1 of 10, stating that "this is the best E Ink e-reader I've used, and it's unquestionably the best that Amazon has ever made. The thing is, it's only marginally better than the fantastic Paperwhite in several ways, and significantly better in none" and, with those differences in mind, disliked how it costs $80 more than the Paperwhite. Engadget rated the Voyage as 94 of 100, stating that while it was "easily the best e-reader that Amazon has ever crafted," it was also the priciest at $199.
Devices: Kindle Paperwhite (third iteration) The third-iteration Kindle Paperwhite, marketed as the "All-New Kindle Paperwhite" and colloquially referred to as the Paperwhite 3 and Paperwhite 2015, was released on June 30, 2015, in the US. It is available in Wi-Fi ($120 ad-supported, $140 no ads) and Wi-Fi + 3G ($190 ad-supported, $210 no ads) models. It has a 6-inch, 1448×1072, 300 ppi E Ink Carta HD display, which has twice the pixels of the original Paperwhite, and it has the same touchscreen, four LEDs and size as the previous Paperwhite. It has over 3 GB of user-accessible storage. This device improved the display of PDF files, making it possible to select text and use some functions, such as translation, on a PDF's text. Amazon claims it has 6 weeks of battery life if used for 30 minutes per day with wireless off and brightness set to 10, which is about 21 hours.
Devices: The Paperwhite 3 is the first e-reader to include the Bookerly font, a new font designed by Amazon, and includes updated formatting functions such as hyphenation and improved spacing. The Bookerly font was added to most older models via a firmware update. The official Amazon leather cover for the Paperwhite 3 is the same item as was used with the previous two Paperwhite devices.
Devices: In February 2016, the Paperwhite 2, Paperwhite 3, Kindle 7, and Voyage received the 5.7.2 update that included a new home screen layout, an OpenDyslexic font choice, improved book recommendations, and a new quick actions menu. On June 30, 2016, Amazon released a white version of the Paperwhite 3 worldwide; the only difference in this version is the color of the shell. In October 2016, Amazon released the Paperwhite 3 "Manga Model" in Japan, which has a 33% increase in page-turning speed and includes 32 GB of storage, space for up to 700 manga books.
The Manga model launched at 16,280 yen (~$156) for the ad-supported Wi-Fi version or 12,280 yen (~$118) for Prime members.
The Verge rated the Paperwhite 3 as 9.0 of 10, saying that "The Kindle Paperwhite is the best e-reader for most people by a wide margin" and liked the high-resolution screen, but disliked that there was no adaptive backlight, which is featured on the Kindle Voyage. Popzara called the 2015 Paperwhite "the best dedicated E Ink e-reader for the money." Eighth generation Kindle Oasis (first iteration) Amazon announced the first-iteration Kindle Oasis on April 13, 2016, and it was released on April 27 worldwide. The Kindle Oasis is available in Wi-Fi and Wi-Fi + 3G models. The Oasis has a 6-inch, 300 ppi E Ink Carta HD display with ten LEDs. Its asymmetrical design features physical page-turn buttons on one side, and it has an accelerometer so the display can be rotated for one-hand operation with either hand. It has one thicker side that tapers to an edge that is 20% thinner than the Paperwhite. It includes a removable leather battery cover, for device protection and increased battery life, that is available in black, walnut (brown) or merlot (red); the cover fits in the tapered edge. The Oasis has 28 hours of battery life if used with the battery cover with Wi-Fi off; without the cover, the Oasis battery lasts about seven hours. It has nearly 3 GB of user storage. The Oasis includes the Bookerly (serif) font and is the first Kindle to include the Amazon Ember (sans-serif) font.
The Guardian's reviewer praised the Oasis's ease of holding, its lightweight design, long battery life, excellent display, even front lighting, usable page-turn buttons, and the luxurious cover. However, the reviewer believed the product was overpriced, noted that the battery cover only partially protects the back, and that the reader is not waterproof. The reviewer concluded, "…the Paperwhite will likely be all the e-reader most will need, but Oasis is the one you'll want. The Oasis is the Bentley to the Paperwhite's Golf – both will get the job done, just one is a cut above the other." The Verge rated the Oasis as 9 of 10, praising its thinness, its weight without the cover and the ability to read with one hand, but did not like that it is so expensive, has no adaptive backlight like the Voyage, and is not waterproof.
Devices: Kindle 8 Amazon's upgrade of the standard Kindle was released on June 22, 2016, in both black and white colors ($80 ad-supported, $100 no ads). The Kindle 8 features a new rounded design that is 0.35 inches (9 mm) shorter, 0.16 inches (4 mm) narrower, 0.043 inches (1.1 mm) thinner, and 1.1 ounces (30 g) lighter than the previous Kindle 7, and features double the RAM (512 MB) of its predecessor. The Kindle 8 is the first Kindle to use Bluetooth, which can support the VoiceView screen reader software for the visually impaired. It has the same screen display as its predecessor, a 167 ppi E Ink Pearl touch-screen display, and Amazon claims it has a four-week battery life and can be fully charged within four hours.
Devices: Ninth generation Kindle Oasis (second iteration) Amazon released the second-iteration Kindle Oasis, marketed as the "All-New Kindle Oasis" and colloquially referred to as the Oasis 2, on October 31, 2017. It is available in 8 GB Wi-Fi, 32 GB Wi-Fi and 32 GB Wi-Fi + 3G ($350 no ads) models with a 7-inch, 300 ppi E Ink display.
It has an asymmetric design like the first-iteration Oasis, so it works for one-handed use, and the device finish is made from aluminum. The device has a black front, with either a silver- or gold-colored back. The Oasis 2 is the first Kindle to be IPX8-rated, making it water-resistant at depths up to two meters for up to 60 minutes, and the first able to change the background to black and the text to white. It is frontlit with 12 LEDs and has ambient light sensors to adjust the screen brightness automatically. It supports playback of Audible audiobooks by pairing with A2DP-supporting external Bluetooth 4.2 speakers or headphones; the device can store up to 35 audiobooks with 8 GB, or 160 audiobooks with the 32 GB model. The Oasis 2's internal battery lasts about six weeks of reading at 30 minutes a day.
Devices: The Verge gave the Oasis 2 a score of 8 of 10, praising its design, display, and water resistance, but criticizing its high cost and inability to read an e-book while its related audiobook is playing. TechRadar rated it as 4.5 of 5, saying the Oasis 2 is expensive but praising it as the best e-reader of its time, with its lovely metal design, waterproofing and great reading experience.
Devices: Tenth generation Kindle Paperwhite (fourth iteration) Amazon announced the fourth-iteration Kindle Paperwhite on October 16, 2018, and released it on November 7, 2018; it is colloquially referred to as the Paperwhite 4 and Paperwhite 2018. It is available in 8 GB Wi-Fi, 32 GB Wi-Fi and 32 GB Wi-Fi + 4G LTE models. It features a 6-inch, plastic-backed display of Amazon's own design with 300 ppi and a flush screen featuring five LED lights. It is waterproof with an IPX8 rating, allowing submersion in 2 meters of fresh water for up to one hour. It supports playback of Audible audiobooks only by pairing with external Bluetooth speakers or headphones.
Devices: The Verge rated the Paperwhite 4 as 8.5 of 10, praising its great display, water resistance and battery life, but criticizing its lack of physical buttons and of USB-C support.
Devices: Kindle (10th generation) Amazon announced the Kindle (10th generation) on March 20, 2019; it features the first front light available on a basic Kindle. The front light uses four LEDs, compared to the Paperwhite's five. The Kindle 10 uses a 6-inch display with higher contrast than previous basic Kindles and has the same 167 ppi resolution. It is available in black and white colors.
Devices: Kindle Oasis (third iteration) Amazon released the third-iteration Kindle Oasis, colloquially referred to as the Oasis 3, on July 24, 2019. Externally it is nearly identical in appearance to the second-iteration Oasis, with a similar 7-inch, 300 ppi E Ink display, one-handed design, waterproofing, aluminum exterior, Bluetooth support and Micro USB for charging. It adds a 25-LED front light that can adjust its color temperature to warmer tones, the first Kindle able to do so. This device is available in two colors: graphite and champagne gold.
Devices: The Verge gave the Oasis 3 an 8 of 10 rating, praising its design, display, and warmer E Ink lighting, but criticizing its high cost, lack of USB-C support and the lackluster update over the 2017 model.
Devices: Eleventh generation Kindle Paperwhite (fifth iteration) Amazon announced the Kindle Paperwhite (fifth iteration) on September 21, 2021, and it was released on October 27, 2021.
It features 8 GB of storage and has similar dimensions to its predecessor, but has a larger 6.8-inch display set in thinner bezels, 17 LEDs in the front light that can adjust color temperature to warmer tones (first featured in the Kindle Oasis 3), an updated processor, and longer battery life that Amazon claims lasts up to ten weeks on a single charge. It is the first Kindle with a USB-C port. The Paperwhite 5 is also available in a higher-cost Signature Edition that additionally supports Qi wireless charging, has 32 GB of storage, and includes an ambient light sensor that automatically adjusts the front light brightness. Amazon has stated that some Qi chargers are incompatible and recommends using an Amazon charging dock.
Devices: The Verge gave the Kindle Paperwhite (fifth iteration) 8.5 out of 10, praising the display and battery but disliking the lack of physical buttons and the lack of improvement in Kindle software support for e-books found outside of the Kindle Store. In September 2022, a model with 16 GB of storage was added. Kindle (11th generation) Amazon announced the Kindle (11th generation) on September 17, 2022. It is upgraded with a 300 ppi display and 16 GB of storage, and includes a USB-C port.
Devices: Kindle Scribe Amazon announced the Kindle Scribe on September 22, 2022, with a release date of November 30th. Notably, it is the first Kindle to include a 10.2-inch, 300 ppi display and stylus functionality for writing. It is also the first Kindle since the Kindle Keyboard was introduced in 2010 to be sold without an option for a discounted ad-supported model. Owners can choose to enable Special Offers but do not receive a discount or reimbursement for doing so. Amazon offers two Wacom EMR-compatible stylus pens for use with the Scribe: a basic pen and a premium pen. Both pens feature standard Wacom interchangeable nibs, attach magnetically to the edge of the Scribe, and support pressure and tilt. The premium pen adds an eraser on the end and a configurable shortcut button on the pen body. Either stylus is available for purchase separately.
The Scribe is offered in three storage tiers: 16 GB, 32 GB, or 64 GB. The base 16 GB model is available with either the basic stylus pen or the premium stylus pen. The 32 GB and 64 GB models come with the premium stylus pen.
Devices: Upon release the Scribe received mixed reviews: reviewers criticized the lack of software features compared to the competition, but praised the hardware and build quality. The Verge gave the Scribe a 6 out of 10, praising its long battery life, large display size, and pen feel, but noting that its "lackluster software" and "outdated document syncing" held the device back.
Amazon released several firmware updates that added features that were missing at the original release. New features in Scribe firmware include conversion of handwriting to text during document export; a lasso select tool with cut, copy, and paste between notebook pages, different notebooks, and Sticky Notes; enhanced handling of PDFs uploaded and converted via Send to Kindle; and added Store content and categories for "Write-on Books" or "On-page writing".
Official accessories: Cases Several cases and covers have been produced for all Kindle models, with official branded covers from Amazon along with a large third-party market of varying designs.
Official accessories: The original Kindle design was bulky and asymmetric, designed to be held like a paperback book, with a rubberized rear cover panel for grip.
The Kindle 2 was redesigned to be used with an official Amazon leather cover. It had a much thinner chassis with a smooth metal rear cover, and two small slots in the left edge clipped into the official case. The Kindle 3 (Kindle Keyboard) added power pass-through via the cover clips to power a pull-out light. The Kindle 4/5/Touch cover design is form-fitted to the Kindle, and power for the flip-up light is passed through pogo pins at the bottom of the rear chassis. With the release of the Kindle Paperwhite in 2012, a light in the cover was no longer necessary. Amazon released a natural leather cover with a form-fitted plastic back, weighing 5.6 ounces and removing some of the bulk of the previous lighted covers. The cover closes book-like from the left edge. Magnets activate the sleep/wake function in the Kindle when the front is closed or opened, and subsequent Amazon covers include this function. Official accessories: As a cost-reduced model, the Kindle 7 (2014) did not have a frontlight and did not have provisions for powering a cover light. Official Amazon covers were simple, offering only sleep/wake functionality and multiple color options. With the release of the Voyage in 2014, Amazon released two covers, made of polyurethane or leather. The Voyage attaches magnetically to the rear of the Protective Cover, whose front folds over the top; the case weighs 4.6 ounces and can fold into a stand, propping the Kindle up for hands-free reading. The Oasis was released in 2016 with a case, called the Leather Charging Cover, that added extra battery capacity via a pogo-pin connection similar to the earlier lighted covers. The subsequent Oasis models removed this feature and used their larger size to include a larger built-in battery. Official accessories: Covers for the Oasis 2 in 2017 added multiple materials and colors: fabric in Charcoal, Marine Blue and Punch Red; leather in Black and Merlot; and premium leather in a distressed brown. With the release of the Paperwhite 4 in 2018, Amazon released three versions of its cover: a water-safe fabric cover that can withstand brief exposure to water, a standard leather cover and a premium leather cover; these covers all weigh 4 ounces. Kids Edition bundles often feature covers with whimsical and bright designs, some with branding or themes tying in to popular book series such as Warrior Cats. Non-bundled exclusives have also been produced, such as branded covers for The Hunger Games. Cork was introduced as a new cover material for the Paperwhite 5 in 2021. The Scribe was released with covers that flip and fold and have a loop to securely hold the stylus. The Scribe fits into the covers with magnets, and the front flap is held closed or open, either flat or as a kickstand, with magnets. Cut-outs on both sides allow the stylus to be magnetically attached to the side of the Scribe as usual, with the cover open or closed. Official accessories: Audio adapter In May 2016, Amazon released the official Kindle Audio Adapter for reading e-books aloud via a text-to-speech (TTS) system for the blind and visually impaired. This accessibility accessory, initially supported only on the Paperwhite 3 and Oasis, plugs into the USB port and connects to headphones or speakers. Once connected, the reader uses the VoiceView for Kindle feature to navigate the interface and listen to e-books via TTS. This feature supports only e-books, not audiobooks or music.
Official accessories: Using the accessory reduces the Paperwhite 3's battery life to six hours. As an alternative to the official adapter, a generic USB-to-audio converter will also work with VoiceView. Wireless charger With the release of the 2021 Paperwhite Signature Edition, Amazon announced the Wireless Charging Dock, which supports Qi charging at up to 7.5 W. Features: Kindle devices support dictionary and Wikipedia look-up functions when a word in an e-book is highlighted. The font type, size and margins can be customized. Kindles are charged by connecting to a computer's USB port or to an AC adapter. Users with impaired vision can use an audio adapter to have any e-book read aloud on supported Kindles, while those who have difficulty reading text can use the Amazon Ember Bold font for darker text; other fonts may also offer bold versions. Features: The Kindle also contains experimental features, such as a web browser that uses NetFront, which is based on WebKit. On 3G models the browser can freely access the Kindle Store and Wikipedia, but may be limited to 50 MB of data per month for websites other than Amazon and Wikipedia. Other experimental features, depending on the model, are a text-to-speech engine that can read the text of e-books aloud and an MP3 player that can play music while reading. Features: The Kindle's operating system updates are designed to be received wirelessly and installed automatically during a period in sleep mode in which Wi-Fi is turned on. A user may install firmware updates manually by downloading the firmware for their device and copying the file to the device's root directory. The Kindle operating system uses the Linux kernel with a Java app for reading e-books. Features: Send to Kindle service Amazon initially offered a Personal Documents Service for adding content to a user's Kindle, which worked only via email; documents were sent directly to the Kindle via Whispersync. Later expansions added cloud library features and content management. The modern service is called Send to Kindle and is available through various means such as email, the website, apps and a browser extension. It allows the user to send files such as EPUB, PDF, HTML pages, Microsoft Word documents, and GIF, PNG and BMP graphics directly to the user's Kindle library. When Amazon receives a file, it converts it to Kindle File Format and stores it in the user's online library (called "Your Content" by Amazon). All content added via Send to Kindle is added as Personal Documents, which can be accessed by all Kindle hardware devices as well as by iOS and Android devices using the Kindle app. Until August 2022, this service could also be used to send unprotected, original-version .mobi/.azw files to a user's Kindle library. Delivery is free over Wi-Fi but, prior to 2021, cost $0.15 per MB over Kindle's former 3G service. Features: Format support by device The first Kindle could read unprotected Mobipocket files (MOBI, PRC), plain text files (TXT), Topaz format books (TPZ) and Amazon's AZW format. Features: The Kindle 2 added native PDF capability with the version 2.3 firmware upgrade. The Kindle 1 could not read PDF files, but Amazon provided experimental conversion to the native AZW format, with the caveat that not all PDFs format correctly. The Kindle 2 also added the ability to play the Audible Enhanced (AAX) format.
The Kindle 2 can also display HTML files. Features: The fourth- and later-generation Kindles, the Touch, the Paperwhite (all generations), the Voyage and the Oasis (all generations) can display AZW, AZW3, TXT, PDF, unprotected MOBI and PRC files natively. HTML, DOC, DOCX, JPEG, GIF, PNG and BMP are usable through Amazon's conversion service. The Keyboard, Touch, Oasis 2 & 3, Kindle 8 & 9, and Paperwhite 4 can also play Audible Enhanced (AA, AAX). The Kindle (7, 8 & 9), Kindle Paperwhite (2, 3, 4 & 5), Voyage and Oasis (1, 2 & 3) can display KFX files natively; KFX is Amazon's successor to the AZW3 format. Features: Kindles cannot natively display EPUB files. However, at least two methods allow EPUB content to be viewed on Kindles: specialized software like Calibre can convert EPUB and some other unsupported files to one of the supported formats, and Kindles can be jailbroken to allow third-party software that supports EPUB, such as KOReader, to be installed. In late April 2022, Amazon announced that Send to Kindle would support EPUB beginning in late 2022. Features: Multiple devices and organization An e-book may be downloaded from Amazon to several devices at the same time, as long as the devices are registered to the same Amazon account. The sharing limit typically ranges from one to six devices, depending on an undisclosed number of licenses set by the publisher. When the limit is reached, the user must remove the e-book from a device, or unregister a device containing the e-book, in order to add the e-book to another device. Features: The original Kindle and Kindle 2 did not allow the user to organize books into folders; the user could only select what type of content to display on the home screen and whether to organize by author, title or download date. Kindle software version 2.5 allowed books to be organized into "Collections", which behave like non-structured tags or labels: a collection cannot include other collections, and one book may be added to multiple collections. Collections are normally set and organized on the Kindle itself, one book at a time. The full set of collections on one Kindle can be imported to a second Kindle that is connected to the cloud and registered to the same user; the documents on the second device then become organized according to the first device's collections. There is no option to organize by series or series order, as the AZW format does not possess the necessary metadata fields. Features: X-Ray X-Ray is a reference tool incorporated in the Kindle Touch and later devices, the Fire tablets, the Kindle app for mobile platforms and Fire TV. X-Ray lets users explore the contents of a book in more depth by accessing preloaded files with relevant information, such as the most common characters, locations, themes or ideas. Features: Annotations Users can bookmark, highlight and search through content. Pages can be bookmarked for reference, and notes can be added to relevant content. While a book is open on the display, menu options allow users to search for synonyms and definitions from the built-in dictionary. The device also remembers the last page read for each book. Pages can be saved as a "clipping", a text file containing the text of the currently displayed page. All clippings are appended to a single file, which can be downloaded over a USB cable.
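The clippings file is plain text; on most Kindles it is named "My Clippings.txt", and entries are conventionally separated by a line of ten equals signs, though Amazon does not formally document the layout. A minimal Python parsing sketch under those assumptions:

    # Split a Kindle clippings file into entries. The "==========" separator
    # and the file name are common conventions, not a documented format.
    def read_clippings(path="My Clippings.txt"):
        with open(path, encoding="utf-8-sig") as f:  # tolerates a leading BOM
            text = f.read()
        return [e.strip() for e in text.split("==========") if e.strip()]

    for entry in read_clippings():
        print(entry.splitlines()[0])  # the first line of each entry names the book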
Due to the TXT format of the clippings file, all formatting (bold, italics, larger fonts for headlines, etc.) is stripped from the original text. Features: Textbook rentals On July 18, 2011, Amazon began a program that allows college students to rent Kindle textbooks from three different publishers for a fixed period of time. Features: Collection of user reading data Kindle devices may report information about their users' reading to Amazon, including the last page read, how long each e-book was opened, and annotations, bookmarks, notes, highlights or similar markings. The Kindle stores this information for all Amazon e-books, but it is unclear whether it is stored for non-Amazon e-books. E-reader data privacy is thus limited: Amazon knows the user's identity, what the user is reading, whether the user has finished the book, what page the user is on, how long the user has spent on each page, and which passages the user may have highlighted. Kindle ecosystem: Kindle Store Content from Amazon's Kindle Store is encoded in Amazon's proprietary Kindle formats (.azw, .kf8 and .kfx). In addition to published content, Kindle users can access the Internet using the experimental web browser, which uses NetFront. Users can reach Kindle Store content from the Kindle itself or through a web browser. The store features Kindle Unlimited, which offers unlimited access to over one million e-books for a monthly fee. Content for the Kindle can be purchased online and downloaded wirelessly in some countries, using either standard Wi-Fi or Amazon's 3G "Whispernet" network. Whispernet is accessible without any monthly fees or subscription, although fees can be incurred for the delivery of periodicals and other content when roaming internationally beyond the customer's home country. Through a service called "Whispersync", customers can synchronize reading progress, bookmarks and other information across Kindle hardware and other mobile devices. Kindles that could access Whispernet only via the 3G network lost that connectivity in December 2021, when the carriers retired 3G. For U.S. customers traveling abroad, Amazon originally charged a $1.99 fee to download e-books over 3G while overseas, but later removed the fee. Fees remain for wireless 3G delivery of periodical subscriptions and personal documents, while Wi-Fi delivery carries no extra charge. In addition to the Kindle Store, content for the Kindle can be purchased from various independent sources such as Fictionwise and Baen Ebooks. Public domain titles are also obtainable for the Kindle via content providers such as Project Gutenberg, The Internet Archive and the World Public Library. In 2011, the Kindle Store had more than twice as much paid content as its nearest competitor, Barnes & Noble. Public libraries that offer books via OverDrive, Inc. can also choose to lend titles for the Kindle and Kindle reading apps in the US via Libby. Books can be checked out from the library's own site, which forwards to Amazon to complete the checkout process. The Libby app stores user account and library details during setup and can send content to the user's Amazon account at the time of checkout. Amazon then delivers the title to the Kindle for the duration of the loan, though some titles may require transfer via a USB connection to a computer. If the book is later checked out again or purchased, annotations and bookmarks are preserved.
Kindle ecosystem: Kindle applications for reading on other devices Amazon released the Kindle for PC application, for Microsoft Windows systems, in late 2009. The application allows e-books from Amazon's store, as well as personal e-books, to be read on a personal computer with no Kindle device required. Amazon released a Kindle for Mac app for Apple Macintosh and OS X systems in early 2010. In June 2010, Amazon released Kindle for Android; soon after, versions for Apple iOS (iPhone and iPad) and BlackBerry OS phones became available. In January 2011, Amazon released Kindle for Windows Phone. In July 2011, Kindle for HP TouchPad (running webOS) was released in the U.S. as a beta version. In August 2011, Amazon released an HTML5-based web app for supported web browsers called Kindle Cloud Reader. In 2013, Amazon said it had no interest in releasing a separate Kindle application for Linux systems; the Cloud Reader can be used in supported browsers on Linux. On April 17, 2014, Samsung announced it would discontinue its own e-book store effective July 1, 2014, and partnered with Amazon to create the Kindle for Samsung app, optimized for display on Samsung Galaxy devices. The app uses Amazon's e-book store and includes a limited monthly selection of free e-books. In June 2016, Amazon brought the Page Flip feature, which had debuted on its e-readers a few years earlier, to its Kindle applications. Page Flip allows the user to flip through nine thumbnails of page images at a time. Kindle ecosystem: Kindle Direct Publishing Concurrently with the release of the first Kindle device, Amazon launched Kindle Direct Publishing, used by authors and publishers to independently publish their books directly to Kindle and the Kindle apps worldwide. Authors can upload documents in several formats for delivery via Whispernet and charge between $0.99 and $200.00 per download. In a December 5, 2009 interview with The New York Times, Amazon CEO Jeff Bezos revealed that Amazon keeps 65% of the revenue from all e-book sales for the Kindle, with the remaining 35% split between the book author and publisher. After numerous commentators observed that Apple's popular App Store offers publishers 70% royalties, Amazon began a program offering 70% royalties to Kindle publishers who agree to certain conditions. Some of these conditions, such as the inability to opt out of the lendability feature, have caused some controversy. Kindle ecosystem: Kindle Development Kit On January 21, 2010, Amazon announced the release of its Kindle Development Kit (KDK), which aims to allow developers to build "active content" for the Kindle; a beta version was announced with a February 2010 release date. A number of companies had already experimented with delivering active content through the Kindle's bundled browser, and the KDK provides sample code, documentation and a Kindle Simulator, together with a new revenue-sharing model for developers. The KDK is based on Java APIs packaged under the Java Personal Basis Profile. Kindle ecosystem: As of May 2014, the Kindle store offered over 400 items labeled as active content, including simple applications and games, among them a free set provided by Amazon Digital Services. As of 2014, active content is only available to users with a U.S. billing address.
In October 2014, Amazon announced that the Voyage and future e-readers would not support active content, because most users prefer to use apps on their smartphones and tablets; the first-iteration Paperwhite and earlier Kindles would continue to support it. Reception: Sales Specific Kindle device sales numbers are not released by Amazon; however, according to anonymous inside sources, over three million Kindles had been sold as of December 2009, while external estimates placed the number at about 1.5 million as of Q4 2009. According to James McQuivey of Forrester Research, estimates ranged around four million as of mid-2010. In 2010, Amazon remained the undisputed leader in the e-reader category, accounting for 59% of e-readers shipped, and it gained 14 percentage points in share. According to an International Data Corporation (IDC) study from March 2011, sales for all e-book readers worldwide reached 12.8 million in 2010, 48% of which were Kindles. In the last three months of 2010, Amazon announced that in the United States its e-book sales had surpassed sales of paperback books for the first time. In January 2011, Amazon announced that digital books were outselling their traditional print counterparts for the first time ever on its site, with an average of 115 Kindle editions sold for every 100 paperback editions. In December 2011, Amazon announced that customers had purchased "well over" one million Kindles per week since the end of November 2011; this includes all available Kindle models and also the Kindle Fire tablet. IDC estimated that the Kindle Fire sold about 4.7 million units during the fourth quarter of 2011. Pacific Crest estimated that the Kindle Fire models sold six million units during Q4 2012. Morgan Stanley estimated that Amazon sold $3.57 billion worth of Kindle e-readers and tablets in 2012, $4.5 billion in 2013 and $5 billion in 2014. Reception: Aftermarket Working Kindles in good condition can be sold, traded, donated or recycled in the aftermarket. Because some Kindle devices are limited to use as reading devices, and reselling Kindles can be a hassle, some people choose to donate their Kindles to schools, developing countries, literacy organizations or charities. "The Kindle Classroom Project" promotes reading by distributing donated Kindles to schools in need. Worldreader and "Develop Africa" ship donated e-readers to schools in developing countries in Africa for educational use. "Project Hart" may take donations of e-readers to be given to people in need. Whether in good condition or not, Kindles should not be disposed of in normal waste due to the device's electronic-ink components and batteries; instead, Kindles at the end of their useful life should be recycled. In the United States, Amazon runs its own program, 'Take Back', which lets owners print a prepaid shipping label and return the device for disposal. Criticism: Removal of Nineteen Eighty-Four On July 17, 2009, Amazon withdrew from sale two e-books by George Orwell, Animal Farm and Nineteen Eighty-Four, refunded the purchase price to those who had bought them, and remotely deleted the titles from purchasers' devices without warning, using a backdoor, after discovering that the publisher lacked the rights to publish these books. The two books were protected by copyright in the United States, but were in the public domain in Canada, Australia and other countries.
Notes and annotations for the books made by users on their devices were left in a separate file but "rendered useless" without the content to which they were directly linked. The move prompted outcry and comparisons to Nineteen Eighty-Four itself: in the novel, books, magazines and newspapers in public archives that contradict the ruling party are either edited long after being published or destroyed outright; the removed materials go "down the memory hole", the nickname for an incinerator chute in the novel. Customers and commentators noted the resemblance to the censorship in the novel and described Amazon's action in Orwellian terms. Ars Technica argued that the deletion violated the Kindle's terms of service, which stated in part: Upon your payment of the applicable fees set by Amazon, Amazon grants you the non-exclusive right to keep a permanent copy of the applicable Digital Content and to view, use and display such Digital Content an unlimited number of times, solely on the Device or as authorized by Amazon as part of the Service and solely for your personal, non-commercial use. Criticism: Company response Amazon spokesman Drew Herdener said that the company is "changing our systems so that in the future we will not remove books from customers' devices in these circumstances." On July 23, 2009, Amazon CEO Jeff Bezos posted on Amazon's official Kindle forum an apology about the company's handling of the matter. Bezos said the action was "stupid", and that the executives at Amazon "deserve the criticism received". Criticism: Aftermath On July 30, 2009, Justin Gawronski, a Michigan high school senior, and Antoine Bruguier, a California engineer, filed suit against Amazon in the United States District Court for the Western District of Washington. Bruguier argued that Amazon had violated its terms of service by remotely deleting the copy of Nineteen Eighty-Four he purchased, in the process preventing him from accessing annotations he had written. Gawronski's copy of the e-book was also deleted without his consent, and he alleged that Amazon had used deceit in an email exchange. The complaint, which sought class-action status, asked for both monetary and injunctive relief. The case was settled on September 25, 2009, with Amazon agreeing to pay $150,000 divided between the two plaintiffs, on the understanding that the law firm representing them, Kamber Edelson, "will donate its portion of that fee to a charitable organization". In the settlement, Amazon also provided wider rights to Kindle owners over its e-books: For copies of Works purchased pursuant to TOS granting "the non-exclusive right to keep a permanent copy" of each purchased Work and to "view, use and display [such Works] an unlimited number of times, solely on the [Devices]... and solely for [the purchasers'] personal, non-commercial use", Amazon will not remotely delete or modify such Works from Devices purchased and being used in the U.S. unless (a) the user consents to such deletion or modification; (b) the user requests a refund for the work or otherwise fails to pay for the work (e.g., if a credit card issuer declines payment); (c) a judicial or regulatory order requires such deletion or modification; or (d) deletion or modification is reasonably necessary to protect the consumer, the operation of a device or network used for communication (e.g., to remove harmful code embedded within an e-book on a device).
Criticism: On September 4, 2009, Amazon offered all affected users a choice between restoration of the deleted e-books and an Amazon gift certificate or check for US$30. Criticism: Other cases In December 2010, Amazon removed three e-books written by Selena Kitt, along with works by several other self-published erotic fiction authors, for "offensive" content regarding consensual incest that violated Amazon's publishing guidelines. Kitt stated her opinion that this Amazon policy was selectively applied to some books but not to others featuring similar themes. For what Amazon describes as "a brief period of time", the books were unavailable for redownload by users who had already purchased them. This ability was restored after it was brought to Amazon's attention; however, no remote deletion took place. In October 2012, Amazon suspended the account of a Norwegian woman who had purchased her Kindle in the United Kingdom, and the company deleted every e-book on her Kindle. Amazon claimed that she had violated its terms of service but did not specify what she had done wrong. After the woman contacted the media, Amazon restored her account and her purchased e-books. Computer programmer Richard Stallman criticized the Kindle, citing Kindle terms of service that can censor users, that require the user's identification, and that can have a negative effect on independent book distributors; he also cited reported restrictions on Kindle users, as well as the ability for Amazon to delete e-books and update software without the users' permission. Amazon has sold e-books in China since 2012 and began selling Kindle e-book readers there in 2013. Amazon announced that it had sold several million Kindles in the country and that China became the world's biggest regional market for the Kindle in 2016. However, it was reported that Chinese consumers prefer using their smartphones over e-readers, and the Kindle faced competition from Tencent, Alibaba, JD.com and Douban, each with its own e-book readers or marketplaces. Domestically developed e-book readers from brands like Xiaomi, iReader and Onyx Boox offered added competition. In 2022, Amazon announced it had stopped selling its Kindles to distributors in China and stated the online bookstore service would shut down in China on June 30, 2023. On January 4, 2022, a Kindle shortage was reported on Amazon's JD.com flagship store: only the Kindle 10 remained available for sale, while other models like the Paperwhite, Oasis and Kids Edition had become out of stock. On the same day, it was announced that Amazon had also shut its Tmall flagship store, after having already closed its Kindle flagship store on Taobao in October 2021. These moves led to speculation that Amazon was planning to exit the Chinese market altogether, although an official Amazon representative responded that the company remained committed to serving Chinese consumers, who could continue to purchase Kindles through offline and third-party online retailers. In June 2022, Amazon announced that it would shut down its Kindle bookstore in China and that, starting in July 2023, Kindle users would no longer be able to purchase e-books in the country.
However, existing customers could still download previously bought titles until June 2024. Also in June 2022, self-published authors protested against Amazon's e-book return policy: whenever an e-book is returned, royalties originally paid to the author at the time of purchase are deducted from the author's earnings balance, sometimes leaving authors with negative balances.
**Bruce Rosen** Bruce Rosen: Bruce Rosen is an American physicist and radiologist and a leading expert in the area of functional neuroimaging. His research for the past 30 years has focused on the development and application of physiological and functional nuclear magnetic resonance techniques, as well as new approaches to combining functional magnetic resonance imaging (fMRI) data with information from other modalities such as positron emission tomography (PET), magnetoencephalography (MEG) and noninvasive optical imaging. The techniques his group has developed to measure physiological and metabolic changes associated with brain activation and cerebrovascular insult are used by research centers and hospitals throughout the world. Bruce Rosen: As Director of the Athinoula A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, he has overseen significant advances, including the introduction and development of functional MRI in the early 1990s. Development of functional MRI: Rosen is the senior author on two seminal papers in the development of functional MRI. He was one of John (Jack) Belliveau's thesis advisers when Belliveau – a Harvard graduate student working in the NMR Center (now the Martinos Center) at Massachusetts General Hospital – performed his early experiments using MRI to reveal regional activity in the brain. Using an MR technique he had developed to track blood flow ("dynamic susceptibility contrast"), Belliveau imaged the visual cortex of volunteers during periods of both rest and activation; by subtracting one image from the other, he demonstrated differences in MR signal between the two. Development of functional MRI: Science recognized the significance of this work as the first demonstration of functional MRI in the human brain. It published Belliveau's report in November 1991 – with Rosen as the senior author – and featured one of his images on the cover. The study opened the door to functional imaging of the brain with MRI, but because the approach required an intravenous contrast agent it was not suitable for wide application in humans. To address this limitation, Kenneth Kwong, a postdoctoral fellow in the NMR Center, developed a means of measuring endogenous signals based on blood oxygenation using gradient-echo imaging. He successfully demonstrated the technique in May 1991, and a report of the findings was published in 1992 in the Proceedings of the National Academy of Sciences of the United States of America. Research Activities: Rosen leads the activities of several large interdisciplinary and inter-institutional research programs, including the NIH Blueprint-funded Human Connectome Project; the NIH/National Institute of Biomedical Imaging and Bioengineering Biomedical Technology Resource Center, the Center for Functional Neuroimaging Technologies (CFNT); the Biomedical Informatics Research Network (BIRN) Collaborative Tools Support Network; and an NIH/National Center for Complementary and Integrative Health (NCCIH)-funded Center of Excellence for Research on Complementary and Alternative Medicine (he is, for instance, involved in animal and fMRI research investigating the neurophysiological responses to acupuncture stimulation). He is Principal Investigator/Program Director for two neuroimaging training programs. He has authored more than 300 peer-reviewed articles as well as over 50 book chapters, editorials and reviews. Education and career: Rosen graduated from Harvard University in 1977 with an A.B.
in Astronomy and Astrophysics. In 1980, he received an M.S. in physics from Massachusetts Institute of Technology. He went on to earn an M.D. from Hahnemann Medical College in 1982 and a Ph.D. in medical physics from MIT in 1984. Education and career: He joined the faculty of Harvard Medical School in 1987, when he was also made director of clinical NMR in the Department of Radiology at the Massachusetts General Hospital. Today he is professor of radiology at Harvard Medical School and professor of health science and technology at the Harvard-MIT Division of Health Sciences and Technology, as well as director of the Athinoula A. Martinos Center for Biomedical Imaging at MGH. In 2013, he was named Laurence Lamson Robbins Professor of Radiology at Harvard Medical School. Awards and recognition: Rosen is the recipient of numerous awards in recognition of his contributions to the field of functional MRI, including the 2011 Outstanding Researcher award from the Radiological Society of North America (RSNA), and the Rigshospitalet's International KFJ Award 2011 from the University of Copenhagen/Rigshospitalet. He is a Fellow of the International Society for Magnetic Resonance in Medicine and an ISMRM Gold Medal winner for his contributions to the field of Functional MRI. In addition, he is a Fellow of the American Academy of Arts and Sciences, a Fellow of the American Institute for Medical and Biological Engineering, a member of the Institute of Medicine of the National Academies and a member of the National Academy of Medicine. In 2017 Rosen was elected a Fellow of the National Academy of Inventors.
**Virtual Storage Platform** Virtual Storage Platform: Virtual Storage Platform is the brand name for a Hitachi Data Systems line of computer data storage systems for data centers. Model numbers include G200, G400, G600, G800, G1000, G1500 and G5500. History: Hitachi Virtual Storage Platform, also known as VSP, was first introduced in September 2010. This storage platform builds on the design of the Universal Storage Platform V, originally released in 2007. Architecture: At the heart of the system is the HiStar-E Network, a network crossbar-switch matrix. The platform is built from different technologies than the USP and USP V: connectivity to back-end disks is via 6 Gbit/s SAS links instead of 4 Gbit/s Fibre Channel loops, the internal processors are now Intel multi-core processors, and, in addition to 3.5-inch drives, support has been added for 2.5-inch small-form-factor HDDs. The VSP supports SSD, SAS and SATA drives. Architecture: Features included:
- The ability to grow in three ways: scale up to meet increasing demands by dynamically adding processors, connectivity and capacity in a single unit, enabling the configuration to be tuned for optimal performance in both open-systems and mainframe environments; scale out by dynamically combining multiple units into a single logical system with shared resources, supporting increased needs in virtualized server environments and ensuring safe multitenancy and quality of service through partitioning of cache and ports; and scale deep by extending the functions of the Virtual Storage Platform to multivendor storage through virtualization, offloading less-critical data to external tiers in order to optimize the availability of tier-one resources.
- Automated storage tiering, known as Dynamic Tiering, which automates the movement of data between tiers to optimize performance.
- Front-to-back cooling airflow for more efficient cooling.
- Improved capacity per square foot and lower power consumption compared to the USP V.
- Virtualization of external SAN storage from Hitachi and other vendors into one pool.
- Online local and distance replication and nondisruptive data migration, internally and between heterogeneous storage, without interrupting application I/O, through products such as Tiered Storage Manager, ShadowImage, TrueCopy and Universal Replicator.
- A single-image global cache accessible across all virtual storage directors for maximum performance.
- Automated wide-striping of data, which allows pool balancing and lets volumes grow or shrink dynamically.
- Thin provisioning and storage reclamation on internal and external virtualized storage.
- Encryption, WORM and data-shredding services; data resilience and business continuity services; and content management services.
The system can scale between one and six 19-inch rack cabinets. It can hold a maximum of 2,048 SAS high-density 2.5-inch drives for 1.2 petabytes of capacity, or 1,280 3.5-inch SATA drives for a maximum capacity of 2.5 petabytes.
Specifications: Virtual Storage Platform specifications in 2010 were:
Frames (19-inch racks): integrated Control Chassis/Disk Chassis frame (2), plus up to 4 optional Disk Chassis frames
HiStar-E Network: 4 pairs of grid switches (8); aggregate bandwidth 192 GB/s; aggregate IOPS 5,600,000
Cache memory: 2-16 data cache adapters (DCA); module capacity 2-8 GB; maximum cache memory 1,024 GB
Control memory: 2-8 control memory modules; module capacity 2-4 GB; maximum control memory 32 GB
Front-end directors (connectivity): 2-24 directors; 8 or 16 Fibre Channel host ports per director; Fibre Channel port performance 2, 4 or 8 Gbit/s; maximum 192 Fibre Channel host ports; 1,024 virtual host ports per physical port; maximum 192 IBM FICON host ports; maximum 96 IBM FCoE host ports
Logical devices (LUNs), maximum supported: 65,536 for open systems; 65,536 for IBM z/OS
Disks: Flash 200 GB (2.5"), 400 GB (3.5"); SAS 146, 300 and 600 GB (2.5"); SATA II 2 TB; maximum disks per system 2,048 (2.5") or 1,280 (3.5"); spare disks per system 1-256; maximum internal raw capacity 2.52 PB (with 2 TB disks); RAID 1, 5 and 6 support; maximum internal and external capacity 255 PB
Maximum usable internal capacity: RAID-5 (7D+1P) 2,080.8 TB OPEN-V, 2,192.2 TB z/OS 3390M; RAID-6 (6D+2P) 1,879 TB OPEN-V, 1,779.7 TB z/OS 3390M; RAID-1+0 (2D+2D) 1,256.6 TB OPEN-V, 1,190.2 TB z/OS 3390M
Virtual storage machines: 32 maximum
Back-end directors: 2-8
Operating system support: mainframe IBM z/OS, z/OS.e, OS/390, z/VM, VM/ESA, zVSE, VSE/ESA, MVS/XA, MVS/ESA, TPF, and Linux for IBM S/390 and zSeries; open systems HP HP-UX, Tru64 UNIX and OpenVMS, IBM AIX, Microsoft Windows Server 2000, 2003 and 2008, Novell NetWare and SUSE Linux, Red Hat Enterprise Linux, Oracle Solaris, and VMware ESX Server
Storage Management: Hitachi Command Suite (formerly Hitachi Storage Command Suite) provides integrated storage resource management, tiered storage and business continuity software. Hitachi Command Suite employs a use-case-driven, step-by-step, wizard-based approach that allows administrators to perform tasks such as new volume provisioning, configuration of external storage, and creation or expansion of storage pools easily on the fly. Hitachi Command Suite is composed of the following software products: Hitachi Basic Operating System, Hitachi Dynamic Provisioning, Hitachi Device Manager, Hitachi Dynamic Link Manager Advanced, Hitachi Basic Operating System V, Hitachi Universal Volume Manager, Hitachi Dynamic Tiering, Hitachi Command Director, Hitachi Storage Capacity Reporter (powered by APTARE), Hitachi Tiered Storage Manager, Hitachi Tuning Manager and Hitachi Virtual Server Reporter (powered by APTARE). Hitachi Command Suite also supports management interfaces such as SNMP and SMI-S.
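The usable-capacity figures above follow largely from the RAID geometry: in an nD+mP group, n of every n+m disks hold data. A rough Python sketch of that arithmetic using the 2010 figures (real usable capacity is further reduced by formatting overhead, so these results are only approximations):

    # Approximate usable capacity from RAID group geometry (illustration only).
    def usable_fraction(data_disks, parity_disks):
        return data_disks / (data_disks + parity_disks)

    raw_tb = 1280 * 2  # 1,280 x 2 TB SATA drives = 2,560 TB raw
    for name, d, p in [("RAID-5 (7D+1P)", 7, 1),
                       ("RAID-6 (6D+2P)", 6, 2),
                       ("RAID-1+0 (2D+2D)", 2, 2)]:
        print(f"{name}: ~{raw_tb * usable_fraction(d, p):,.0f} TB before formatting overhead")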
**Communications ToolBox** Communications ToolBox: The Macintosh Communications Toolbox, generally shortened to CommToolbox or CTB, was a suite of application programming interfaces, libraries and dynamically loaded code modules for the classic Mac OS that implemented a wide variety of serial and network communication protocols, as well as file transfer protocols and terminal emulations. Communications ToolBox: Using CommToolbox, one could write an application that would work seamlessly over AppleTalk, a modem or any variety of other connections, transfer files using XMODEM, Kermit or other file transfer protocols, and provide DEC VT102, VT220, IBM 3270 and other terminal emulation services. Developers could also write plug-in communications modules known as "Tools", allowing any CommToolbox-aware application to use that connection method. Communications ToolBox: CommToolbox was claimed by some to be slow and buggy, and received mixed support from developers. Applications using it for simple tasks were common, but single-purpose applications could achieve higher performance by bypassing the system API's implementations and rolling their own. CommToolbox was initially released independently of the main Mac system releases but was finally integrated and delivered with System 7. The development team was part of the Apple Networking and Communications Division, not part of the main System Software team. Description: CommToolbox was perhaps one of the first implementations of shared libraries on the early Mac OS. Applications would find installed tools at launch time; in fact, applications could automatically discover and use newly installed tools without having to quit and relaunch. Description: The CommToolbox API consisted of four managers: the Communications Resource Manager (CRM), the Connection Manager (CM), the File Transfer Manager (FTM) and the Terminal Manager (TM). The CRM provided the Mac with its first centralized repository for registering and enumerating serial devices. Early Macs had only two serial ports, and as later Macs allowed expandability, including serial-port cards, the CRM filled a critical hole in the Mac OS software architecture. Device manufacturers would create a pair of drivers that provided the same interface as Apple's built-in serial port drivers (but named differently from .AIn/.AOut and .BIn/.BOut) and register these drivers with the CRM. Description: The Connection, File Transfer and Terminal Managers all worked with their respective tools, which were dynamically loaded code modules providing the interface between the manager-specific API and the code implementing the particular functionality. In this manner, an application could be written "agnostically", without implementation-specific knowledge of any particular data connection, file transfer or terminal emulation protocol. In addition, these tools provided a set of system-wide standard UI implementations that could be automatically invoked and used for configuration. Description: Connection Tools provided a byte-oriented communication channel interface, implementing basic functionality such as opening and closing a connection and reading and writing data, as well as callbacks to implement a user interface.
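The manager-and-tool split resembles a modern plug-in registry: the application codes against a fixed interface, and whatever tools happen to be installed supply the implementation. The Python sketch below is only a conceptual model of that architecture; the real CommToolbox exposed a Pascal/C API and loaded tools as code resources, and every name here is invented for illustration:

    # Conceptual model of the Connection Manager's tool mechanism (not the real API).
    class ConnectionTool:
        """Interface that every connection tool implements."""
        def open(self): raise NotImplementedError
        def read(self, count): raise NotImplementedError
        def write(self, data): raise NotImplementedError
        def close(self): raise NotImplementedError

    TOOL_REGISTRY = {}  # tools discovered when the application launches

    def register_tool(name, factory):
        TOOL_REGISTRY[name] = factory

    def new_connection(tool_name):
        # The application stays agnostic: serial, AppleTalk or modem,
        # any registered tool satisfies the same interface.
        return TOOL_REGISTRY[tool_name]()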
Terminal Tools implemented the character conversion and command-string interpretation needed to support any sort of terminal emulator (mainly text terminals; graphics terminal tools were never released), and were responsible for rendering into a QuickDraw GrafPort, handling user interactions such as copying text out of the terminal buffer, and managing keystrokes to send terminal-specific control strings. Description: File Transfer Tools implemented all of the underlying details involved in file transfers, as well as providing callbacks to implement a user interface. Applications could use either a subset or all of the CTB managers. A typical terminal emulator application would use all of them, connecting a Connection Tool selected in the Connection Manager to a Terminal Tool in the Terminal Manager, and then periodically using a File Transfer Tool in the File Transfer Manager on user request. Such was the case for common terminal emulators like VersaTerm and MacTerminal. However, another application might use only one of these, say the Connection Manager to set up communications; QuickMail and Eudora are well-known examples. Applications typically used the GUI elements supplied by the managers to handle user interaction, but could also enumerate the tools on their own to provide a custom GUI. Perhaps the best-known tool was the Apple Modem Tool, which provided the serial communications drivers as well as a system for storing setup commands. This was during an era when there was a proliferation of different modem vendors, each with subtly and not-so-subtly different AT command strings needed for configuration. When a connection was initiated using the Modem Tool, the link to the modem was opened, commands were sent to it, and the link was established by dialling. The Apple Modem Tool struggled to keep up with the rapidly changing modem landscape, needing to track the higher speeds and new features regularly introduced by modem vendors. Providing CTB updates in general was also a challenge, as CTB development was, at first, not part of the main System Software effort but rather part of the Networking and Communications division. When a 1.5 version addressing some of the problems was released in 1993, even finding it proved difficult. A further update was needed to support higher speeds when 28.8 kbit/s modems became common. Some of the other Apple-provided tools included the simple Serial Tool, the AppleTalk Tools using AppleTalk's Apple Data Stream Protocol as additional connection methods, the TTY and VT102 Tools for terminal emulation, and the Text and XModem Tools for file transfers. Third-party tools were common for supporting connections, including for the Global Village TelePort modem, which plugged into the Apple Desktop Bus and thus required custom drivers, Apple's own X.25 and ISDN tools, and a variety of other examples. There were also third-party Telnet Connection Tools released when TCP/IP started becoming prevalent. Description: CommToolbox was an important part of the DTO-1208 experiment onboard the Space Shuttle Atlantis, which saw the first email sent from space in 1991. The equipment setup was a backlit Mac Portable using a PSI Integration internal fax modem (used in half-duplex mode due to the nature of the shuttle air-to-ground voice links).
The communications software used was a specially modified version of AppleLink that used the CommToolbox Connection Manager (instead of directly accessing serial ports) and a custom Connection Tool written pro bono by three Apple engineers in their spare time to hide the half-duplex nature of the air-to-ground link from the application (which was expecting a full-duplex connection).
**Metrobús (ticket)** Metrobús (ticket): In Madrid, Metrobús is the name given to a ticket valid for ten trips on the city's bus and Metro network.
**Metallizing** Metallizing: Metallizing is the general name for the technique of coating metal on the surface of objects. Metallic coatings may be decorative, protective or functional. Metallizing: Techniques for metallization date back at least as far as mirror making. In 1835, Justus von Liebig discovered the process of coating a glass surface with metallic silver, making the glass mirror one of the earliest metallized items. Plating of other non-metallic objects grew rapidly with the introduction of ABS plastic. Because a non-metallic object tends to be a poor electrical conductor, its surface must be made conductive before plating can be performed. The plastic part is first etched chemically by a suitable process, such as dipping in a hot chromic acid-sulfuric acid mixture. The etched surface is sensitised and activated by dipping first in tin(II) chloride solution, then in palladium chloride solution. The processed surface is then coated with electroless copper or nickel before further plating. This process gives useful adhesion force (about 1 to 6 kgf/cm, or 10 to 60 N/cm, or 5 to 35 lbf/in), but one much weaker than actual metal-to-metal adhesion strength. Metallizing: Vacuum metallizing involves heating the coating metal to its boiling point in a vacuum chamber, then letting condensation deposit the metal on the substrate's surface. Resistance heating, electron-beam or plasma heating is used to vaporize the coating metal. Vacuum metallizing was used to deposit aluminum on the large glass mirrors of reflecting telescopes, such as the Hale telescope. Metallizing: Thermal spray processes are often referred to as metallizing. Metals applied in this manner provide corrosion protection to steel for decades longer than paint alone; zinc and aluminum are the most commonly used materials for metallizing steel structures. Cold sprayable metal technology is a metallizing process that seamlessly applies cold sprayable or puttyable metal to almost any surface. The composite metal consists of two ingredients (with a water-based binder) or three: metal powder, binder and hardener. Metallizing: The mixture of the ingredients is cast or sprayed onto the substrate at room temperature. The desired effect and the necessary final treatment define the thickness of the layer, which normally varies between 80 and 150 µm.
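The three adhesion ranges quoted above are the same figures expressed in different units. A quick Python check of the conversions, using the standard constants 1 kgf = 9.80665 N, 1 lbf = 4.44822 N and 1 in = 2.54 cm:

    # Verify that 1-6 kgf/cm corresponds roughly to 10-60 N/cm and 5-35 lbf/in.
    KGF_TO_N, LBF_TO_N, IN_TO_CM = 9.80665, 4.44822, 2.54
    for kgf_per_cm in (1, 6):
        n_per_cm = kgf_per_cm * KGF_TO_N
        lbf_per_in = n_per_cm * IN_TO_CM / LBF_TO_N
        print(f"{kgf_per_cm} kgf/cm = {n_per_cm:.1f} N/cm = {lbf_per_in:.1f} lbf/in")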
**Five-cent coin** Five-cent coin: A five-cent coin or five-cent piece is a small-value coin minted for various decimal currencies using the cent as their hundredth subdivision. Five-cent coin: Examples include:
the United States five-cent coin, better known as the US nickel
the Canadian five-cent coin, better known as the Canadian nickel
the Australian five-cent coin
the New Zealand five-cent coin (withdrawn in 2006 due to low monetary value)
the Hong Kong five-cent coin (withdrawn in 1989 due to low monetary value)
the Singapore five-cent coin
the Brunei five-cent coin
the five-cent coin of the decimal Dutch guilder (Netherlands), also called stuiver (withdrawn in 2001 due to introduction of the euro)
the 5 cent euro coin used in several European countries known as the eurozone
the five-cent coin of the South African rand
**Domestic sheep reproduction** Domestic sheep reproduction: Domestic sheep reproduce sexually much like other mammals, and their reproductive strategy is very similar to that of other domestic herd animals. A flock of sheep is generally mated by a single ram, which has either been chosen by a farmer or has established dominance through physical contest with other rams (in feral populations). Most sheep have a breeding season (tupping) in the autumn, though some are able to breed year-round. Largely as a result of the influence of humans in sheep breeding, ewes often produce multiple lambs. This increase in lamb births, both in number and in birth weight, may cause problems in delivery and lamb survival, requiring the intervention of shepherds. Sexual behavior: Ewes generally reach sexual maturity at six to eight months of age, and rams generally at four to six (ram lambs have occasionally been known to impregnate their mothers at two months). Sheep are seasonally polyoestrous animals. Ewes enter oestrus cycles about every 17 days, each lasting approximately 30 hours. In addition to emitting a scent, they indicate readiness through physical displays towards rams. The phenomenon of the freemartin, a female bovine that is behaviorally masculine and lacks functioning ovaries, is commonly associated with cattle, but does occur to some extent in sheep. The instance of freemartins in sheep may be increasing in concert with the rise in twinning (freemartins are the result of male-female twin combinations). The Flehmen response is exhibited by rams when they smell the urine of a ewe in oestrus: the vomeronasal organ has receptors which detect the oestrogens in the ewe's urine, and the ram displays the response by extending his neck and curling his lip. Sexual behavior: Rutting Without human intervention, rams may fight during the rut to determine which individuals may mate with ewes. Rams, especially unfamiliar ones, will also fight outside the breeding period to establish dominance; rams can kill one another if allowed to mix freely. During the rut, even normally friendly rams may become aggressive towards humans due to increases in their hormone levels. Sexual behavior: Historically, especially aggressive rams were sometimes blindfolded or hobbled. Today, those who keep rams typically prefer softer preventative measures, such as keeping a clear line of movement to an exit, never turning their back on a ram, and possibly dousing a charging ram with water or a diluted solution of bleach or vinegar. Pregnancy: Without ultrasound or other special tools, determining whether a sheep is pregnant is difficult. Ewes only begin to visibly show a pregnancy about six weeks before giving birth, so shepherds often rely on the assumption that a ram will impregnate all the ewes in a flock. However, by fitting a ram with a chest harness called a marking harness, which holds a special crayon (or raddle, sometimes spelled reddle), ewes that have been mounted are marked with a color. Dye may also be applied directly to the ram's brisket. This measure is not used in flocks where wool is important, since the color of a raddle contaminates it. The crayon in the marking harness can be changed during the breeding cycle to allow lambing dates to be predicted for each ewe. After mating, sheep have a gestation period of around five months. Within a few days of the impending birth, ewes begin to behave differently. They may lie down and stand erratically, paw the ground, or otherwise act out of sync with normal flock patterns.
A ewe's udder will quickly fill out, and her vulva will swell. Vaginal, uterine or anal prolapse may also occur, in which case either stitching or a physical retainer can be used to hold the orifice in if the problem persists. Ewes that experience serious issues while lambing, such as prolapse, are usually removed from the flock to avoid further complications in upcoming years. Pregnancy: Artificial insemination and embryo transfer In addition to natural insemination by rams, artificial insemination and embryo transfer have been used in sheep breeding programs for many years in Australia and New Zealand. These programs became more commonplace in the United States during the 2000s as the number of veterinarians qualified to perform these procedures with proficiency grew. However, ovine AI is a relatively complicated procedure compared with that for other livestock. Unlike cattle or goats, which have straight cervices that can be vaginally inseminated, ewes have a curved cervix which is more difficult to access. Additionally, breeders were until recently unable to control their ewes' estrus cycles. Controlling the estrus cycle is much easier today because of products that safely assist in aligning heat cycles; examples are PG600, CIDRs, Estrumate and Folltropin V. Some of these products contain progesterone, which induces estrus in ewes during seasonal anestrus, the period outside the natural breeding season when ewes do not have regular estrous cycles. Pregnancy: Historically, vaginal insemination of sheep produced only 40-60% success rates, and was thus called a "shot in the dark" (SID). In the 1980s, Australian researchers developed a laparoscopic insemination procedure which, combined with the use of progestogen and pregnant mare's serum gonadotropin (PMSG), yielded much higher success rates (50-80% or more), and has become the standard for artificial insemination of sheep in the 21st century. Semen collection is naturally an integral component of this entire process. Once semen has been collected it can be used immediately for insemination or slowly frozen for use at a later date. Fresh semen is recognized as the method of choice, as it lives longer and yields higher conception rates. Frozen semen will work, but it must be of the highest quality and the ewes must be inseminated twice in the same day. The marketing of ram semen is a major part of this industry: producers owning prize-winning rams have found it a good avenue for leveraging the accolades of their most famous animals. Pregnancy: Embryo transfer (ET), a minor surgical procedure with almost no risk of injury or infection when performed properly via laparoscopy, allows the importation of improved genetics, even of breeds which may otherwise be non-existent in certain countries due to the regulation of live animal imports. Embryo transfer procedures allow producers to make the most of the females that produce the best lambs or kids, whether for retention in the flock or for sale to other producers. ET also allows producers to continue to utilize a ewe or doe that may not physically be able to carry or feed a set of lambs, and can allow a producer to grow a flock quickly with above-average individuals of similar bloodlines. The primary industry to utilize this technology in the United States is the club lamb breeders and exhibitors.
It is a common practice in the commercial sheep industries of Australia, New Zealand and South America. Average success rates in embryo transfer, in terms of embryos recovered, vary widely: each breed responds differently to the ET process, and white-faced ewes and does are typically more fertile than black-faced ewes. A range of zero to the mid-20s of viable embryos recovered from a single flush procedure can be expected. Over the course of a year the average is 6.8 transferable eggs per donor, with a 75% conception rate for those eggs. Pregnancy: Infertility Infertility can be attributed to many aspects of managerial practice as well as to health factors. One of the main reasons for low lambing percentages in a flock is mineral and vitamin deficiency; the main vitamins and minerals that play a major role in fertility are selenium, copper, and vitamins A and D. Other factors that affect fertility and potentially cause abortion are infectious diseases, inappropriate body condition and toxins in feed. Lambing: As the time for lambing approaches, the lamb will drop, causing the ewe to have a swayback, exhibit restless behaviour and show a sunken appearance in front of the hipbone area. When birth is imminent, contractions begin to take place, and the fitful behavior of the ewe may increase. A normal labor may take one to several hours, depending on how many lambs are present, the age of the ewe, and her physical and nutritional condition prior to the birth. Though some breeds may regularly produce larger litters of lambs (records stand at around nine lambs at once), most produce either single or twin lambs. The number of lambs a ewe produces per year is known as the lambing percentage. The condition of the ewe during breeding season will impact the lambing percentage as well as the size of the lambs. At some point, usually at the beginning of labor or soon after the births have occurred, ewes and lambs may be confined to small lambing jugs. These pens, generally two to eight feet (0.6 to 2.4 m) in length and width, are designed both to aid careful observation of ewes and to cement the bond between them and their lambs. Ovine obstetrics can be problematic. By selectively breeding, over generations, ewes that produce multiple offspring with higher birth weights, sheep producers have inadvertently caused some domestic sheep to have difficulty lambing. However, it is a myth that sheep cannot lamb without human assistance; many ewes give birth directly in pasture without aid. Balancing ease of lambing with high productivity is one of the dilemmas of sheep breeding. While the majority of births are relatively normal and do not require intervention, many complications may arise. A lamb may present in the normal fashion (with both legs and head forward) but may simply be too large to slide out of the birth canal. This often happens when large rams are crossed with diminutive ewes (this is related to breed; rams are naturally larger than ewes). Lambs may also present with one shoulder to the side, completely backward, or with only some of their limbs protruding. Lambs may also be spontaneously aborted or stillborn. Reproductive failure is a common consequence of infections such as toxoplasmosis and foot-and-mouth disease.
Some types of abortion in sheep are preventable by vaccination against these infections. In the case of any such problems, those present at lambing (who may or may not include a veterinarian; most shepherds become accomplished at lambing to some degree) may assist the ewe in extracting or repositioning lambs. In severe cases, a cesarean section will be required to remove the lamb. After the birth, ewes ideally break the amniotic sac (if it is not broken during labor) and begin licking the lamb clean. The licking clears the nose and mouth, dries the lamb, and stimulates it. Lambs that are breathing and healthy at this point begin trying to stand, and ideally do so within a half hour to a full hour, with help from the mother. Generally after lambs stand, the umbilical cord is trimmed to about an inch (2.5 centimeters). Once trimmed, a small container (such as a film canister) of iodine is held against the lamb's belly over the remainder of the cord to prevent infection. Postnatal care: In normal situations, lambs nurse after standing, receiving vital colostrum milk. Lambs that either fail to nurse or are prevented from doing so by the ewe require aid in order to live. If coaxing the pair to accept nursing does not work, one of several steps may be taken. Ewes may be held or tied to force them to accept a nursing lamb. If a lamb is not eating, a stomach tube may also be used to force-feed the lamb in order to save its life. In the case of a permanently rejected lamb, a shepherd may then attempt to foster the orphaned lamb onto another ewe. Lambs are also sometimes fostered after the death of their mother, whether from the birth or from some other event. Postnatal care: Scent plays a large factor in ewes recognizing their lambs, so disrupting the scent of a newborn lamb with washing or over-handling may cause a ewe to reject it. Conversely, various methods of imparting the scent of a ewe's own lamb to an orphaned one may be useful in fostering. If an orphaned lamb cannot be fostered, then it usually becomes what is known as a bottle lamb—a lamb raised by people and fed via bottle. After lambs are stabilized, lamb marking is carried out – this includes ear tagging, docking, castration and usually vaccination. Ear tags with numbers are the primary mode of identification when sheep are not named; it is also the legal manner of animal identification in the European Union: the number may identify the individual sheep or only its flock. When performed at an early age, ear tagging seems to cause little or no discomfort to lambs. However, using tags improperly, or using tags not designed for sheep, may cause discomfort, largely due to the excess weight of tags intended for other animals. Ram lambs not intended for breeding are castrated, though some shepherds choose to avoid the procedure for ethical, economic or practical reasons. Ram lambs that will be slaughtered or separated from ewes before sexual maturity are not usually castrated. In most breeds, lambs' tails are docked for health reasons. The tail may be removed just below the lamb's caudal tail flaps (docking shorter than this may cause health problems such as rectal prolapse), but in some breeds the tail is left longer, or is not docked at all. Docking is not necessary in short-tailed breeds, and it is not usually done in breeds in which a long tail is valued, such as the Zwartbles.
Postnatal care: Though docking is often considered cruel and unnatural by animal rights activists, it is considered by sheep producers large and small alike to be a critical step in maintaining the health of sheep. Long, wooly tails make shearing more difficult, interfere with mating, and make sheep extremely susceptible to parasites, especially those that cause flystrike. Both castration and docking can be performed with several instruments. An elastrator places a tight band of rubber around an area, causing it to atrophy and fall off in a number of weeks. This process is bloodless and does not seem to cause extended suffering to lambs, who tend to ignore it after several hours. In addition to the elastrator, a Burdizzo, emasculator, heated chisel or knife are sometimes used. After one to three days in the lambing jugs, ewes and lambs are usually sufficiently stabilized to allow reintroduction to the rest of the flock. Commercial sheep breeding: In the large sheep-producing nations of South America, Australia and New Zealand, sheep are usually bred on large tracts of land with much less intervention from the graziers or breeders. Many of these flocks are Merinos, and much of the land in these countries does not lend itself to the close, mob-level intervention found in countries that breed smaller flocks. Commercial sheep breeding: In these countries there is little need for, and no real alternative to, outdoor lambing, as there are insufficient structures to house the large flocks of ewes. New Zealand ewes produce 36 million lambs each spring, an average of 2,250 lambs per farm. Australian graziers, too, do not receive the financial support that governments in other countries provide to sheep breeders. Low-cost sheep breeding is based on large numbers of sheep per labour unit and on ewes that are capable of unsupervised lambing, producing hardy, active lambs. Managerial aspects: For breeders intent on strict improvements to their flocks, ewes are classed and inferior sheep are removed prior to mating in order to maintain or improve the quality of the flock. Muffled (wooly) faces have long been associated with lower fertility rates. Stud or specially selected rams are chosen with the aid of objective measurements, genetic information and the evaluation services that are now available in Australia and New Zealand. The choice of mating time is governed by many factors including climate, market requirements and feed availability. Rams are typically joined at a rate of about 2.5% (roughly 2.5 rams per 100 ewes), depending on the age of the sheep and on the size and type of the mating paddocks. The mating period ranges from about 6 to 8 weeks in commercial flocks; longer mating periods result in management problems with lamb marking, shearing and other tasks. Managerial aspects: Good nutrition is vital to ewes during the last 6 weeks of pregnancy in order to prevent pregnancy toxaemia, especially in twin-bearing ewes. Overfeeding, however, may result in overly large single lambs and dystocia. Shearing ewes before lambing reduces the number of ewes that are cast (i.e., unable to rise unassisted) and the number of lambs and ewes that are lost. Lambs are also aided in finding the udder and suckling on a shorn ewe. In addition, shearing before lambing can improve fleece quality: giving birth is a major stress on the ewe's body and can cause a break in the wool, so shearing beforehand keeps that break out of the clip.
It is important to keep weather conditions in mind prior to shearing ewes, especially in colder climates. After shearing, ewes are typically placed in well-sheltered paddocks that have good feed and water. The attention given to lambing ewes varies according to the breed, size and location of the property; unless they are stud ewes, it is unlikely that they will receive intensive care. On stations with large paddocks there is a policy of non-interference. On other properties the mobs are inspected by stockmen at varying intervals to stand cast ewes and deal with dystocia. Producers also sometimes quietly drift pregnant ewes away from ewes that have already lambed, in order to prevent mis-mothering. Lambs are usually marked at three to six weeks of age, but a protracted lambing season may necessitate two markings. Managerial aspects: Inbreeding depression Inbreeding tends to occur in flocks of limited size in which only one or a few rams are used. Associated with inbreeding is a decline in progeny performance, usually referred to as inbreeding depression. Inbreeding depression has been found for lamb birthweight, average daily weight gain from birth until two months, and litter size, and it can cause diseases and deformities to arise in a flock. Other countries: In the major sheep countries of Argentina, Uruguay, Brazil, Peru and Chile, breeders are also utilizing fleece testing and performance recording schemes as a means of improving their flocks. New research: In 2008, for the first time, researchers at the Chiswick CSIRO research station, between Uralla and Armidale, New South Wales, used stem cells to develop surrogate rams and bulls. These males then produce viable semen carrying the genetics of another male. New research: The approach in these sheep experiments involves irradiating the testes of one ram (ram A) while placing stem cells from a second ram (ram B) into them. In the following weeks ram A produces semen the usual way, but from the stem cells of ram B, so the semen carries the genetics of ram B rather than his own. Ram A has thus effectively become a surrogate ram. New research: The viable semen is then implanted in the ewe, and the many lambs born through this process are proving to be normal and healthy. DNA tests have shown that up to 10% of the lambs are sired by the surrogate ram and carry the genetics of the donor ram. Another area of research that is growing in importance is the reduction of greenhouse gas emissions, mainly methane, from livestock; ruminants contribute the highest emissions of all types of animals. Many researchers are conducting studies to determine how manipulating sheep diets may help reduce these emissions.
**HyperCard** HyperCard: HyperCard is a software application and development kit for Apple Macintosh and Apple IIGS computers. It is among the first successful hypermedia systems, predating the World Wide Web. HyperCard combines a flat-file database with a graphical, flexible, user-modifiable interface. HyperCard includes a built-in programming language called HyperTalk for manipulating data and the user interface. This combination of features – a database with simple form layout, flexible support for graphics, and ease of programming – suits HyperCard for many different projects, such as rapid application development of applications and databases, interactive applications with no database requirements, command and control systems, and many examples in the demoscene. HyperCard: HyperCard was originally released in 1987 for $49.95 and was included free with all new Macs sold afterwards. It was withdrawn from sale in March 2004, having received its final update in 1998 upon the return of Steve Jobs to Apple. HyperCard was not ported to Mac OS X, but can run in the Classic Environment on versions of Mac OS X that support it. Overview: Design HyperCard is based on the concept of a "stack" of virtual "cards". Cards hold data, just as they would in a Rolodex card-filing device. Each card contains a set of interactive objects, including text fields, check boxes, buttons, and similar common graphical user interface (GUI) elements. Users browse the stack by navigating from card to card, using built-in navigation features, a powerful search mechanism, or user-created scripts. Users build or modify stacks by adding new cards. They place GUI objects on the cards using an interactive layout engine based on a simple drag-and-drop interface. HyperCard also includes prototype or template cards called backgrounds; when new cards are created they can refer to one of these background cards, which causes all of the objects on the background to be copied onto the new card. This way, a stack of cards with a common layout and functionality can be created (a toy sketch of this relationship appears below). The layout engine is similar in concept to a form as used in most rapid application development (RAD) environments such as Borland Delphi, Microsoft Visual Basic, and Visual Studio. Overview: The database features of the HyperCard system are based on storing the state of all of the objects on the cards in the physical file representing the stack. The database does not exist as a separate system within the HyperCard stack; no database engine or similar construct exists. Instead, the state of any object in the system is considered to be live and editable at any time. From the HyperCard runtime's perspective, there is no difference between moving a text field on the card and typing into it; both operations simply change the state of the target object within the stack. Such changes are saved immediately when complete, so typing into a field causes that text to be stored to the stack's physical file. The system operates in a largely stateless fashion, with no need to save during operation; this it has in common with many database-oriented systems, although it is somewhat different from document-based applications. Overview: The final key element in HyperCard is the script, a single code-carrying element of every object within the stack. The script is a text field whose contents are interpreted in the HyperTalk language. Like any other property, the script of any object can be edited at any time, and changes are saved as soon as they are complete.
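As a toy illustration of the background/card relationship just described, the following Python sketch (an assumed model for illustration, not HyperCard's actual code) shows new cards receiving their own copies of a background's objects:

```python
import copy

# A toy model of backgrounds as prototype cards: each new card gets its
# own copy of the background's objects, so all cards share a common layout.
class Background:
    def __init__(self, objects):
        self.objects = objects            # shared layout: fields, buttons...

class Card:
    def __init__(self, background):
        # Copy the background's objects onto the new card, as described above.
        self.objects = copy.deepcopy(background.objects)

rolodex = Background({"name": "", "phone": ""})
card1, card2 = Card(rolodex), Card(rolodex)
card1.objects["name"] = "Ada"             # per-card state acts like a record
print(card1.objects, card2.objects)
```

Each card then carries its own editable state, which is what lets a stack double as a simple record-per-card database.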
When the user invokes actions in the GUI, such as clicking on a button or typing into a field, these actions are translated into events by the HyperCard runtime. The runtime then examines the script of the object that is the target of the event, such as a button, to see if its script contains code for that event, called a handler. If it does, the HyperTalk engine runs the handler; if it does not, the runtime examines other objects in the visual hierarchy (a sketch of this dispatch model appears below). Overview: These concepts make up the majority of the HyperCard system; stacks, backgrounds and cards provide a form-like GUI system, the stack file provides object persistence and database-like functionality, and HyperTalk allows handlers to be written for GUI events. Unlike the majority of RAD or database systems of the era, however, HyperCard combines all of these features, both user-facing and developer-facing, in a single application. This allows rapid turnaround and immediate prototyping, possibly without any coding, allowing users to author custom solutions to problems with their own personalized interface. "Empowerment" became a catchword as this possibility was embraced by the Macintosh community, as was the phrase "programming for the rest of us", that is, anyone, not just professional programmers. Overview: It is this combination of features that also makes HyperCard a powerful hypermedia system. Users can build backgrounds to suit the needs of some system, say a Rolodex, and use simple HyperTalk commands to provide buttons to move from place to place within the stack, or provide the same navigation within the data elements of the UI, such as text fields. Using these features, it is easy to build linked systems similar to hypertext links on the Web. Unlike the Web, however, programming, placement, and browsing are all done with the same tool. Similar systems have been created for HTML, but traditional Web services are considerably more heavyweight. Overview: HyperTalk HyperCard contains an object-oriented scripting language called HyperTalk, which was noted for having a syntax resembling casual English. HyperTalk's language features were determined by the HyperCard environment, although they could be extended by the use of external functions (XFCNs) and commands (XCMDs) written in a compiled language. The weakly typed HyperTalk supports most standard programming structures such as "if-then" and "repeat". HyperTalk is verbose, hence its ease of use and readability. HyperTalk code segments are referred to as "scripts", a term considered less daunting to beginning programmers. Overview: Externals HyperCard can be extended significantly through the use of external command (XCMD) and external function (XFCN) modules. These are code libraries packaged in a resource fork that integrate into either the system generally or the HyperTalk language specifically; this is an early example of the plug-in concept. Unlike conventional plug-ins, these do not require separate installation before they are available for use; they can be included in a stack, where they are directly available to scripts in that stack. Overview: During HyperCard's peak popularity in the late 1980s, a whole ecosystem of vendors offered thousands of these externals, such as HyperTalk compilers, graphing systems, database access, Internet connectivity, and animation. Oracle offered an XCMD that allowed HyperCard to directly query Oracle databases on any platform; it was later superseded by Oracle Card.
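The dispatch walk described at the start of this section (target object first, then up the visual hierarchy) can be sketched in a few lines of Python. This is an illustrative model only, not HyperCard's actual implementation; the object names and the simplified single-parent chain are assumptions made for the example.

```python
# A minimal sketch of HyperCard-style event dispatch: the event is offered
# to the target's script first, then passed up the visual hierarchy
# (button -> card -> background -> stack) until a handler is found.
class HCObject:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent              # next object up the hierarchy
        self.handlers = {}                # event name -> handler function

    def on(self, event, handler):
        self.handlers[event] = handler

    def send(self, event):
        obj = self
        while obj is not None:
            if event in obj.handlers:     # this script has a matching handler
                return obj.handlers[event](self)
            obj = obj.parent              # not handled here; try the next level

stack = HCObject("stack")
card = HCObject("card 1", parent=stack)
button = HCObject("OK button", parent=card)

# The handler lives on the card, yet a click on the button still finds it:
card.on("mouseUp", lambda target: print("mouseUp handled for", target.name))
button.send("mouseUp")                    # -> mouseUp handled for OK button
```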
Among the hardware externals, BeeHive Technologies offered an interface that allows the computer to control external devices. Connected via the Apple Desktop Bus (ADB), this instrument can read the state of connected external switches or write digital outputs to a multitude of devices. Overview: Externals also allow access to the Macintosh Toolbox, which contains many lower-level commands and functions not native to HyperTalk, such as control of the serial and ADB ports. History: Development HyperCard was created by Bill Atkinson following an LSD trip. Work on it began in March 1985 under the name of WildCard (hence its creator code of WILD). In 1986, Dan Winkler began work on HyperTalk and the name was changed to HyperCard for trademark reasons. It was released on August 11, 1987, timed to coincide with the first day of the MacWorld Conference & Expo in Boston, Massachusetts, to guarantee maximum publicity, and with the understanding that Atkinson would give HyperCard to Apple only if the company promised to release it for free on all Macs. History: Launch HyperCard was successful almost instantly. The Apple Programmer's and Developer's Association (APDA) said, "HyperCard has been an informational feeding frenzy. From August [1987, when it was announced] to October our phones never stopped ringing. It was a zoo." Within a few months of release, there were multiple HyperCard books and a 50-disk set of public domain stacks. Apple's project managers found HyperCard was being used by a huge number of people, internally and externally. Bug reports and upgrade suggestions continued to flow in, demonstrating its wide variety of users. Since it was free, however, it was difficult to justify dedicating engineering resources to improving the software. Apple and its mainstream developers understood that HyperCard's user empowerment could reduce the sales of ordinary shrink-wrapped products. Stewart Alsop II speculated that HyperCard might replace the Finder as the shell of the Macintosh graphical user interface. History: HyperCard 2.0 In late 1989, Kevin Calhoun, then a HyperCard engineer at Apple, led an effort to upgrade the program. This resulted in HyperCard 2.0, released in 1990. The new version included an on-the-fly compiler that greatly increased the performance of computationally intensive code, a new debugger, and many improvements to the underlying HyperTalk language. History: At the same time HyperCard 2.0 was being developed, a separate group within Apple developed and in 1991 released HyperCard IIGS, a version of HyperCard for the Apple IIGS system. Aimed mainly at the education market, HyperCard IIGS has roughly the same feature set as the 1.x versions of Macintosh HyperCard, while adding support for the color graphics abilities of the IIGS. Although stacks (HyperCard program documents) are not binary-compatible, a translator program (itself a HyperCard stack) allows them to be moved from one platform to the other. History: Apple then decided that most of its application software packages, including HyperCard, would become the property of a wholly owned subsidiary called Claris. Many of the HyperCard developers chose to stay at Apple rather than move to Claris, causing the development team to be split. Claris attempted to create a business model in which HyperCard could also generate revenue. At first the freely distributed versions of HyperCard shipped with authoring disabled.
Early versions of Claris HyperCard contain an Easter egg: typing "magic" into the message box converts the player into a full HyperCard authoring environment. When knowledge of this trick became nearly universal, Claris wrote a new version, HyperCard Player, which Apple distributed with the Macintosh operating system, while Claris sold the full version commercially. Many users were upset that they had to pay to use software that had traditionally been supplied free and which many considered a basic part of the Mac. History: Even after HyperCard was generating revenue, Claris did little to market it. Development continued with minor upgrades, and a first, failed attempt to create a third generation of HyperCard. During this period, HyperCard began losing market share. Lacking several important basic features, HyperCard was abandoned by authors in favor of systems such as SuperCard and Macromedia Authorware. Nonetheless, HyperCard continued to be popular and used for a widening range of applications, from the game The Manhole, an earlier effort by the creators of Myst, to corporate information services. History: Apple eventually folded Claris back into the parent company, returning HyperCard to Apple's core engineering group. In 1992, Apple released the eagerly anticipated HyperCard 2.2 upgrade, which included licensed versions of Color Tools and Addmotion II, adding support for color pictures and animations. However, these tools are limited and often cumbersome to use, because HyperCard 2.0 lacks true internal color support. History: HyperCard 3.0 Several attempts were made to restart HyperCard development once it returned to Apple. Because of the product's widespread use as a multimedia-authoring tool it was rolled into the QuickTime group. A new effort to allow HyperCard to create QuickTime interactive (QTi) movies started, once again under the direction of Kevin Calhoun. QTi extended QuickTime's core multimedia playback features to provide true interactive facilities and a low-level programming language based on 68000 assembly language. The resulting HyperCard 3.0 was first presented in 1996, when an alpha-quality version was shown to developers at Apple's annual Worldwide Developers Conference (WWDC). Under the leadership of Dan Crow, development continued through the late 1990s, with public demos showing many popular features such as color support, Internet connectivity, and the ability to play HyperCard stacks (which were now special QuickTime movies) in a web browser. Development of HyperCard 3.0 stalled when the QuickTime team's focus shifted from QuickTime Interactive to the streaming features of QuickTime 4.0 in 1998. Steve Jobs disliked the software because Atkinson had chosen to stay at Apple to finish it instead of joining Jobs at NeXT, and (according to Atkinson) "it had Sculley's stink all over it". In 2000, the HyperCard engineering team was reassigned to other tasks after Jobs decided to abandon the product. Calhoun and Crow both left Apple shortly after, in 2001. History: Its final release was in 1998, and it was totally discontinued in March 2004. HyperCard runs natively only in the classic Mac OS, but it can still be used in Mac OS X's Classic mode on PowerPC-based machines (G5 and earlier). The last functional native HyperCard authoring environment is Classic mode in Mac OS X 10.4 (Tiger) on PowerPC-based machines. Applications: HyperCard has been used for a range of hypertext and artistic purposes.
Before the advent of PowerPoint, HyperCard was often used as a general-purpose presentation program. Examples of HyperCard applications include simple databases, "choose your own adventure"-type games, and educational teaching aids. Due to its rapid application design facilities, HyperCard was also often used for prototyping applications and sometimes even for version 1.0 implementations. Inside Apple, the QuickTime team was one of HyperCard's biggest customers. Applications: HyperCard has lower hardware requirements than Macromedia Director. Several commercial software products were created in HyperCard, most notably the original version of the graphic adventure game Myst, the Voyager Company's Expanded Books, and multimedia CD-ROMs of Beethoven's Ninth Symphony, the Beatles' A Hard Day's Night, and the Voyager Macbeth. An early electronic edition of the Whole Earth Catalog was implemented in HyperCard and stored on CD-ROM. The prototype and demo of the popular game You Don't Know Jack was written in HyperCard. The French auto manufacturer Renault used it to control its inventory system. In Quebec, Canada, HyperCard was used to control a robot arm that inserted and retrieved video discs at the National Film Board's CinéRobothèque. Applications: In 1989, HyperCard was used to control the BBC Radiophonic Workshop's studio network, using a single Macintosh. HyperCard was also used to build a fully functional prototype of SIDOCI (one of the first experiments in the world in developing an integrated electronic patient record system) and was heavily used by the Montréal consulting firm DMR to demonstrate how "a typical day in the life of a patient about to get surgery" would look in a paperless age. Applications: Activision, which was until then mainly a game company, saw HyperCard as an entry point into the business market. Changing its name to Mediagenic, it published several major HyperCard-based applications, most notably Danny Goodman's Focal Point, a personal information manager, and Reports For HyperCard, a program by Nine To Five Software that allows users to treat HyperCard as a full database system with robust information viewing and printing features. Applications: The HyperCard-inspired SuperCard for a while included the Roadster plug-in, which allowed stacks to be placed inside web pages and viewed by web browsers with the appropriate browser plug-in. There was even a Windows version of the plug-in, allowing computers other than Macintoshes to use it. Applications: Exploits The first HyperCard virus was discovered in Belgium and the Netherlands in April 1991. Because HyperCard executed scripts in stacks immediately on opening, it was also one of the first applications susceptible to macro viruses. The Merryxmas virus was discovered in early 1993 by Ken Dunham, two years before the Concept virus. Very few viruses were based on HyperCard, and their overall impact was minimal. Reception: Compute!'s Apple Applications in 1987 stated that HyperCard "may make Macintosh the personal computer of choice". While noting that its large memory requirement made it best suited to computers with 2 MB of memory and hard drives, the magazine predicted that "the smallest programming shop should be able to turn out stackware", especially for use with CD-ROMs. Compute! predicted in 1988 that most future Mac software would be developed using HyperCard, if only because using it was so addictive that developers "won't be able to tear themselves away from it long enough to create anything else".
Byte in 1989 listed it among the "Excellence" winners of the Byte Awards. While stating that "like any first entry, it has some flaws", the magazine wrote that "HyperCard opened up a new category of software", and praised Apple for bundling it with every Mac. In 2001 Steve Wozniak called HyperCard "the best program ever written". Legacy: HyperCard is one of the first products that made use of and popularized the hypertext concept among a large popular base of users. Legacy: Jakob Nielsen has pointed out that HyperCard was really only a hypermedia program, since its links started from regions on a card, not text objects; actual HTML-style text hyperlinks were possible in later versions, but were awkward to implement and seldom used. Deena Larsen programmed links into HyperCard for Marble Springs. Bill Atkinson later lamented that if he had only realized the power of network-oriented stacks, instead of focusing on local stacks on a single machine, HyperCard could have become the first Web browser. HyperCard saw a loss in popularity with the growth of the World Wide Web, since the Web could handle and deliver data in much the same way as HyperCard without being limited to files on a local hard disk. HyperCard had a significant impact on the web, as it inspired the creation of both HTTP (through its influence on Tim Berners-Lee's colleague Robert Cailliau) and JavaScript (whose creator, Brendan Eich, was inspired by HyperTalk). It was also a key inspiration for ViolaWWW, an early web browser. The pointing-finger cursor used for navigating stacks was later used in the first web browsers as the hyperlink cursor. The Myst computer game franchise, initially released as a HyperCard stack and bundled with some Macs (for example the Performa 5300), still lives on, making HyperCard a facilitating technology for starting one of the best-selling computer games of all time. According to Ward Cunningham, the inventor of the wiki, the wiki concept can be traced back to a HyperCard stack he wrote in the late 1980s. In 2017 the Internet Archive established a project to preserve and emulate HyperCard stacks, allowing users to upload their own. The GUI of the prototype Apple Wizzy Active Lifestyle Telephone was also based on HyperCard. Legacy: World Wide Web HyperCard influenced the development of the Web in late 1990 through its influence on Robert Cailliau, who assisted in developing Tim Berners-Lee's first Web browser, and JavaScript was inspired by HyperTalk. Although HyperCard stacks do not operate over the Internet, by 1988 at least 300 stacks were publicly available for download from the commercial CompuServe network (which was not yet connected to the Internet proper). The system could link phone numbers on a user's computer together and enable them to be dialed without a modem, using a less expensive piece of hardware, the Hyperdialer. In this sense, like the Web, it forms an association-based experience of information browsing via links, though without operating remotely over the TCP/IP protocol; like the Web, it also allows for the connection of many different kinds of media. Legacy: Similar systems Other companies have offered their own versions. As of 2010, four products are available that offer HyperCard-like abilities: HyperNext is a software development system that uses many ideas from HyperCard and can create both standalone applications and stacks that run on the freeware HyperNext Player. HyperNext is available for Mac OS 9 & X, and Windows XP & Vista.
Legacy: HyperStudio, one of the first HyperCard clones, is, as of 2009, developed and published by Software MacKiev. LiveCode, published by LiveCode, Ltd., expands greatly on HyperCard's feature set and offers color and a GUI toolkit which can be deployed on many popular platforms (Android, iOS, Classic Macintosh system software, Mac OS X, Windows 98 through 10, and Linux/Unix). LiveCode directly imports extant HyperCard stacks and provides a migration path for stacks still in use. Legacy: SuperCard, the first HyperCard clone, is similar to HyperCard but has many added features, such as full color support, pixel and vector graphics, a full GUI toolkit, and support for many modern Mac OS X features. It can create both standalone applications and projects that run on the freeware SuperCard Player, and it can also convert extant HyperCard stacks into SuperCard projects. It runs only on Macs. Past products include: SK8, a "HyperCard killer" developed within Apple but never released. It extended HyperTalk to allow arbitrary objects, which allowed it to build complete Mac-like applications instead of stacks; although the project never shipped, the source code was placed in the public domain. Hyper DA by Symmetry was a desk accessory for the classic single-tasking Mac OS that allows viewing HyperCard 1.x stacks in added windows within any running application; it was also embedded into many Claris products (like MacDraw II) to display their user documentation. HyperPad from Brightbill-Roberts is a clone of HyperCard written for DOS; it makes use of ASCII line drawing to create the graphics of cards and buttons. Plus, later renamed WinPlus, is similar to HyperCard, for Windows and Macintosh. Oracle purchased Plus and created a cross-platform version as Oracle Card, later renamed Oracle Media Objects, used as a 4GL for database access. IBM LinkWay is a mouse-controlled HyperCard-like environment for DOS PCs; it has minimal system requirements, runs in CGA and VGA graphics modes, and even supported videodisc control. Asymetrix's Windows application ToolBook resembles HyperCard and later included an external converter to read HyperCard stacks (the first was a third-party product from Heizer Software). Legacy: TileStack was an attempt to create a web-based version of HyperCard compatible with original HyperCard files; the site closed down on January 24, 2011. In addition, many of the basic concepts of the original system were later re-used in other forms. Apple built its system-wide scripting engine AppleScript on a language similar to HyperTalk; it is often used for desktop publishing (DTP) workflow automation needs. In the 1990s FaceSpan provided a third-party graphical interface. AppleScript also has a native graphical programming front-end called Automator, released with Mac OS X Tiger in April 2005. One of HyperCard's strengths was its handling of multimedia, and many multimedia systems like Macromedia Authorware and Macromedia Director are based on concepts originating in HyperCard. AppWare, originally named Serius Developer, is sometimes seen as similar to HyperCard, as both are rapid application development (RAD) systems. AppWare was sold in the early 1990s and worked on both Mac and Windows systems. Legacy: Zoomracks, a DOS application with a similar "stack" database metaphor, predates HyperCard by four years, which led to a contentious lawsuit against Apple.
**Hand injury** Hand injury: The hand is a very complex organ with multiple joints and different types of ligaments, tendons and nerves. Hand diseases and injuries are common in society and can result from excessive use, degenerative disorders or trauma. Hand injury: Trauma to the finger or the hand is quite common in society. In some particular cases, the entire finger may be subject to amputation. The majority of traumatic injuries are work-related. Today, skilled hand surgeons can sometimes reattach the finger or thumb using microsurgery. Sometimes, traumatic injuries may result in loss of skin, and plastic surgeons may place skin and muscle grafts. Types: Arthritis of the hand is common in females. Osteoarthritis of the hand joints is much less common than rheumatoid arthritis. As the arthritis progresses, the fingers become deformed and lose their function. Moreover, many patients with rheumatoid arthritis have this dysfunction present in both hands and become disabled due to chronic pain. Osteoarthritis is most common at the base of the thumb and is usually treated with pain pills, splinting or steroid injections. Carpal tunnel syndrome is a common disorder of the hand. This disorder results from compression of the median nerve in the wrist. Disorders like diabetes mellitus, thyroid disease or rheumatoid arthritis can narrow the tunnel and cause impingement of the nerve. Carpal tunnel syndrome also occurs in people who overuse their hands or perform repetitive actions like using a computer keyboard, a cash register or a musical instrument. When the nerve is compressed, it can result in disabling symptoms like numbness, tingling, or pain in the middle three fingers. As the condition progresses, it can lead to muscle weakness and an inability to hold objects. The pain frequently occurs at night and can even radiate to the shoulder. Though the diagnosis is straightforward, the definitive treatment is surgical decompression of the median nerve after deroofing of the carpal tunnel. Dupuytren's contracture is another disorder of the fingers, due to thickening of the underlying skin tissues of the palm. The disorder results in a deformed finger which appears thin and has small bumps on the surface. Dupuytren's contracture does run in families, but is also associated with diabetes, smoking, seizure disorders and other vascular disorders. Dupuytren's may not need any treatment, as the condition can resolve on its own; however, if finger function is compromised, then surgery may be required. Types: Ganglion cysts are soft globular structures that occur on the back of the hand, usually near the junction of the wrist joint. These swellings are usually painless when small but can affect hand motion when they become large. The cysts contain a jelly-like substance and often disappear on their own. If the ganglion cyst is not bothersome, it should be left alone. Simply removing the fluid from the cyst is not curative, because the fluid will come back in less than a week. Surgery is often done for large cysts, but the results are poor: recurrences are common, and there is always the possibility of nerve or joint damage. Types: Tendinitis is a disorder in which tendons of the hand become inflamed. Tendons are thick fibrous cords that attach the small muscles of the hand to bones; a tendon transmits the power to bend or extend the finger. When repetitive actions are performed, tendons often become inflamed, presenting with pain and difficulty moving the finger.
In most cases, tendinitis can be treated with rest, ice and splints. In some cases, an injection of corticosteroid may help. Tendinitis is primarily a disorder of overuse, but if not treated properly it can become chronic; severe cases need surgical decompression. Trigger finger is a common disorder which occurs when the sheath through which the tendons pass becomes swollen or irritated. Initially, the finger may catch during movement, but symptoms like pain, swelling and a snap may occur with time. The finger often gets locked in one position, and it may be difficult to straighten or bend it. Trigger finger has been found to be associated with diabetes, gout and rheumatoid arthritis. Causes: Fractures of the fingers occur when the finger or hand hits a solid object. Fractures are most common at the base of the little finger (boxer's fracture). Causes: Nerve injuries occur as a result of trauma, compression or over-stretching. Nerves send impulses to the brain about sensation and also play an important role in finger movement. When nerves are injured, one can lose the ability to move the fingers, lose sensation and develop a contracture. Any nerve injury of the hand can be disabling and result in loss of hand function; thus it is vital to seek medical help as soon as possible after any hand injury. Sprains result from forcing a joint to perform beyond its normal range of motion. Finger sprains occur when the ligaments attached to the bone are overstretched, resulting in pain, swelling, and difficulty moving the finger. Common examples of a sprain are jammed or twisted fingers. These injuries are common among ball players but can also occur in laborers and handymen. When finger sprains are not treated in time, prolonged disability can result. Diagnosis: Finger injuries are usually diagnosed with X-rays and can be considerably painful. The majority of finger injuries can be managed with conservative care and splints. However, if the bone is abnormally angulated or displaced, surgery and pins may be needed to hold the bones in place. Treatment: Most hand injuries are minor and can heal without difficulty. However, any time the hand or finger is cut or crushed, or the pain is ongoing, it is best to see a physician; hand injuries that are not treated in time can result in long-term morbidity. Simple hand injuries do not typically require antibiotics, as antibiotics do not change the chance of infection. Many hand injuries need surgery, but the time from injury to surgery (with delays of up to 4 days) does not increase the chance of infection. Epidemiology: About 1.8 million people go to the emergency department each year due to hand injuries.
**Death-Shield** Death-Shield: Death-Shield is a fictional character appearing in American comic books published by Marvel Comics. Fictional character biography: Death-Shield is a mercenary trained by the Taskmaster. The Taskmaster is under contract to the Red Skull to create a team of mercenaries capable of defeating Spider-Man. The trio are patterned after the superheroes Captain America, Hawkeye, and Spider-Man, and are called Death-Shield, Jagged Bow, and Blood Spider. Solo joins the fray on the side of the wall-crawler and helps to defeat the three villains and thwart the machinations of the Red Skull, who was using the mercenaries to guard private files about Spider-Man's parents that the hero was seeking. Years later, Blood Spider, Death-Shield, and Jagged Bow appear among the criminals vying for the multimillion-dollar bounty placed on Agent Venom's head by Lord Ogre. The trio's attempt on Agent Venom's life is interrupted by the competing mercenaries Constrictor and Lord Deathstrike. Crime Master, with the help of Blood Spider, Death-Shield, and Jagged Bow, later tries to steal a damaged Rigellian Recorder from Deadpool and the Mercs for Money. Powers and abilities: Death-Shield carries a shield and is very proficient in using it as a weapon.
**Homonuclear molecule** Homonuclear molecule: Homonuclear molecules, or homonuclear species, are molecules composed of only one element. Homonuclear molecules may consist of various numbers of atoms. The size of the molecule an element can form depends on the element's properties, and some elements form molecules of more than one size. The most familiar homonuclear molecules are diatomic molecules, which consist of two atoms, although not all diatomic molecules are homonuclear. Homonuclear diatomic molecules include hydrogen (H2), oxygen (O2), nitrogen (N2) and all of the halogens. Ozone (O3) is a common triatomic homonuclear molecule. Homonuclear tetratomic molecules include arsenic (As4) and phosphorus (P4). Homonuclear molecule: Allotropes are different chemical forms of the same element (not containing any other element). In that sense, allotropes are all homonuclear. Many elements have multiple allotropic forms. In addition to the most common form of gaseous oxygen, O2, and ozone, there are other allotropes of oxygen. Sulfur forms several allotropes containing different numbers of sulfur atoms, including diatomic, triatomic, hexatomic and octatomic (S2, S3, S6, S8) forms, though the first three are rare. The element carbon is known to have a number of allotropes, including diamond, graphite and molecular fullerenes such as buckminsterfullerene (C60). Homonuclear molecule: Sometimes a cluster of atoms of a single kind of metallic element is considered a single molecule.
**Tornado intensity** Tornado intensity: Tornado intensity is the measure of wind speeds and potential risk produced by a tornado. Intensity can be measured by in situ or remote sensing measurements, but since these are impractical for wide-scale use, intensity is usually inferred via proxies, such as damage. The Fujita scale, the Enhanced Fujita scale, and the International Fujita scale rate tornadoes by the damage caused. In contrast to other major storms such as hurricanes and typhoons, such classifications are only assigned retroactively. Wind speed alone is not enough to determine the intensity of a tornado. An EF0 tornado may damage trees and peel some shingles off roofs, while an EF5 tornado can rip well-anchored homes off their foundations, leaving them bare, and even deform large skyscrapers. The similar TORRO scale ranges from T0 for extremely weak tornadoes to T11 for the most powerful known tornadoes. Doppler radar data, photogrammetry, and ground swirl patterns (cycloidal marks) may also be analyzed to determine intensity and assign a rating. Tornado intensity: Tornadoes vary in intensity regardless of shape, size, and location, though strong tornadoes are typically larger than weak tornadoes. The association with track length and duration also varies, although longer-track (and longer-lived) tornadoes tend to be stronger. In the case of violent tornadoes, only a small portion of the path area is of violent intensity; most of the higher intensity comes from subvortices. In the United States, 80% of tornadoes are rated EF0 or EF1 (equivalent to T0 through T3). The rate of occurrence drops off quickly with increasing strength; less than 1% are rated violent (EF4 or EF5, equivalent to T8 through T11). History of tornado intensity measurements: For many years, before the advent of Doppler radar, scientists relied on educated guesses for tornado wind speeds. The only evidence of wind speeds in a tornado was the damage left behind when it struck populated areas. Some believed tornado winds reach 400 miles per hour (640 kilometers per hour); others thought they might exceed 500 miles per hour (800 km/h), and perhaps even be supersonic. These incorrect guesses can still be found in older literature (up to about the 1960s), and they left their mark on the original Fujita intensity scale, developed by Dr. Tetsuya Theodore "Ted" Fujita in the early 1970s, whose theoretical upper end reaches the speed of sound. Remarkable early work in this field was also done by a U.S. Army soldier, Sergeant John Park Finley. History of tornado intensity measurements: In 1971, Dr. Fujita introduced the idea of a scale for tornado winds; with the help of colleague Allen Pearson, he created and introduced what came to be called the Fujita scale in 1973. The F in F1, F2, etc. stands for Fujita. The scale was based on a relationship between the Beaufort scale and the Mach number scale; the low end of F1 on his scale corresponds to the low end of B12 on the Beaufort scale, and the low end of F12 corresponds to the speed of sound at sea level, or Mach 1 (a common formulation of this interpolation is given below). In practice, tornadoes are only assigned categories F0 through F5. History of tornado intensity measurements: The TORRO scale, created by the Tornado and Storm Research Organisation (TORRO), was developed in 1974 and published a year later. The TORRO scale has 12 levels, which cover a broader range with tighter graduations; it runs from T0 for extremely weak tornadoes to T11 for the most powerful known tornadoes.
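A commonly quoted reconstruction of the Beaufort–Mach interpolation mentioned above (taken from standard references; the formula is not stated in this article) is $v = 6.30\,(F+2)^{3/2}\ \text{m/s}$. As a sanity check, the low end of F1 works out to $6.30 \times 3^{3/2} \approx 32.7$ m/s (about 73 mph, the bottom of Beaufort force 12), while F12 gives $6.30 \times 14^{3/2} \approx 330$ m/s, roughly the speed of sound at sea level.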
On the TORRO scale, T0–T1 roughly corresponds to F0, T2–T3 to F1, and so on. While T10–T11 would be roughly equivalent to F5, the highest tornado rated to date on the TORRO scale was a T8. Some debate exists as to the usefulness of the TORRO scale over the Fujita scale: while it may be helpful for statistical purposes to have more levels of tornado strength, the observed damage could often have been caused by a large range of winds, making it hard to narrow a tornado down to a single TORRO category. History of tornado intensity measurements: Research conducted in the late 1980s and 1990s suggested that, even with the implementation of the Fujita scale, tornado winds were notoriously overestimated, especially in significant and violent tornadoes. Because of this, in 2006 the American Meteorological Society introduced the Enhanced Fujita scale, to help assign realistic wind speeds to tornado damage. The scientists specifically designed the scale so that a tornado assessed on the Fujita scale and the Enhanced Fujita scale would receive the same ranking. The EF scale is more specific in detailing the degrees of damage on different types of structures for a given wind speed. While the F scale goes from F0 to F12 in theory, the EF scale is capped at EF5, which is defined as "winds ≥200 miles per hour (320 km/h)". In the United States, the Enhanced Fujita scale went into effect on February 2, 2007, for tornado damage assessments, and the Fujita scale is no longer used. History of tornado intensity measurements: The first observation confirming that F5 winds could occur came on April 26, 1991. A tornado near Red Rock, Oklahoma, was monitored by scientists using a portable Doppler radar, an experimental radar device that measures wind speed. Near the tornado's peak intensity, they recorded a wind speed of 115–120 meters per second (260–270 miles per hour; 410–430 kilometers per hour). Though the portable radar had an uncertainty of ±5–10 meters per second (11–22 mph; 18–36 km/h), this reading was probably within the F5 range, confirming that tornadoes are capable of violent winds found nowhere else on Earth. History of tornado intensity measurements: Eight years later, during the 1999 Oklahoma tornado outbreak of May 3, another scientific team was monitoring an exceptionally violent tornado (one which eventually killed 36 people in the Oklahoma City metropolitan area). Around 7 p.m., they recorded one measurement of 301 ± 20 miles per hour (484 ± 32 km/h), 50 miles per hour (80 km/h) faster than the previous record. Though this reading is just short of the theoretical F6 rating, the measurement was taken more than 100 feet (30 meters) in the air, where winds are typically stronger than at the surface. In rating tornadoes, only surface wind speeds, or the wind speeds indicated by the resulting damage, are taken into account; in practice, the F6 rating is not used. History of tornado intensity measurements: While scientists have long theorized that extremely low pressures might occur in the center of tornadoes, no measurements had confirmed it. A few home barometers had survived close passes by tornadoes, recording values as low as 24 inches of mercury (810 hectopascals), but these measurements were highly uncertain. In 2003, a U.S. research team succeeded in dropping devices called "turtles" into an F4 tornado, and one measured a pressure drop of more than 100 hectopascals (3.0 inHg) as the tornado passed directly overhead.
Still, tornadoes vary widely, so meteorologists continue to research whether such values are typical. History of tornado intensity measurements: In 2018, the International Fujita scale was created by the European Severe Storms Laboratory together with various other European meteorological agencies. Unlike the other three scales (Fujita, Enhanced Fujita, and TORRO), the International Fujita scale has overlapping wind speeds within its ratings. The highest-rated tornado on the IF scale was the 2021 South Moravia tornado, rated IF4. Typical intensity: In the U.S., F0 and F1 (T0 through T3) tornadoes account for 80 percent of all tornadoes. The rate of occurrence drops off quickly with increasing strength—violent tornadoes (F4/T8 or stronger) account for less than one percent of all tornado reports. Worldwide, strong tornadoes account for an even smaller percentage of total tornadoes. Violent tornadoes are extremely rare outside of the United States and Canada. Typical intensity: F5 and EF5 tornadoes are rare. In the United States, they typically occur only once every few years, and account for approximately 0.1 percent of confirmed tornadoes. An F5 tornado was reported in Elie, Manitoba, Canada, on June 22, 2007. Before that, the last confirmed F5 was the 1999 Bridge Creek–Moore tornado, which killed 36 people on May 3, 1999. Nine EF5 tornadoes have occurred in the United States: in Greensburg, Kansas, on May 4, 2007; Parkersburg, Iowa, on May 25, 2008; Smithville, Mississippi, Philadelphia, Mississippi, Hackleburg, Alabama, and Rainsville, Alabama (four separate tornadoes), on April 27, 2011; Joplin, Missouri, on May 22, 2011; and El Reno, Oklahoma, on May 24, 2011. On May 20, 2013, a confirmed EF5 tornado again struck Moore, Oklahoma. Typical damage: A typical tornado has winds of 110 miles per hour (180 km/h) or less, is about 250 feet (76 m) across, and travels about one mile (1.6 km) before dissipating. However, tornado behavior is variable; these figures represent statistical probabilities only. Typical damage: Two tornadoes that look almost the same can produce drastically different effects, and two tornadoes that look very different can produce similar damage, because tornadoes form by several different mechanisms and also follow a lifecycle that causes the same tornado to change in appearance over time. People in the path of a tornado should never attempt to determine its strength as it approaches. Between 1950 and 2014 in the United States, 222 people were killed by EF1 tornadoes and 21 by EF0 tornadoes. Typical damage: Weak tornadoes Around 60–70 percent of tornadoes are designated EF1 or EF0, also known as "weak" tornadoes. But "weak" is a relative term for tornadoes, as even these can cause significant damage. F0 and F1 tornadoes are typically short-lived; since 1980, almost 75 percent of tornadoes rated weak stayed on the ground for 1 mile (1.6 km) or less. Even in that time, though, they can cause both damage and fatalities. Typical damage: EF0 (T0–T1) damage is characterized by superficial damage to structures and vegetation. Well-built structures are typically unscathed, though they sometimes sustain broken windows, with minor damage to roofs and chimneys. Billboards and large signs can be knocked down. Trees may have large branches broken off and can be uprooted if they have shallow roots.
Any tornado that is confirmed but causes no damage (i.e., remains in open fields) is normally rated EF0 as well, even if its winds would have justified a higher rating. Some NWS offices, however, have rated such tornadoes EFU (EF-Unknown) due to the lack of damage. EF1 (T2–T3) damage has caused significantly more fatalities than EF0 damage. At this level, damage to mobile homes and other temporary structures becomes significant, and cars and other vehicles can be pushed off the road or flipped. Permanent structures can suffer major damage to their roofs. Typical damage: Significant tornadoes EF2 (T4–T5) tornadoes are the lower end of "significant", yet are stronger than most tropical cyclones (though tropical cyclones affect a much larger area and their winds persist far longer). Well-built structures can suffer serious damage, including roof loss, and the collapse of some exterior walls may occur in poorly built structures. Mobile homes, however, are destroyed. Vehicles can be lifted off the ground, and lighter objects can become small missiles, causing damage outside the tornado's main path. Wooded areas have a large percentage of their trees snapped or uprooted. EF3 (T6–T7) damage is a serious risk to life and limb and the point at which a tornado statistically becomes significantly more destructive and deadly. Few parts of affected buildings are left standing; well-built structures lose all outer and some inner walls. Unanchored homes are swept away, and homes with poor anchoring may collapse entirely. Small vehicles and similarly sized objects are lifted off the ground and tossed as projectiles. Wooded areas suffer an almost total loss of vegetation, and some tree debarking may occur. Statistically speaking, EF3 is the maximum level that allows for reasonably effective residential sheltering in place in a first-floor interior room closest to the center of the house (the most widespread tornado sheltering procedure in America for those with no basement or underground storm shelter). Typical damage: Violent tornadoes EF4 (T8–T9) damage typically results in a total loss of the affected structure. Well-built homes are reduced to a short pile of medium-sized debris on the foundation. Homes with poor or no anchoring are swept completely away. Large, heavy vehicles, including airplanes, trains, and large trucks, can be pushed over, flipped repeatedly, or picked up and thrown. Large, healthy trees are entirely debarked and snapped off close to the ground or uprooted altogether and turned into flying projectiles. Passenger cars and similarly sized objects can be picked up and flung for considerable distances. EF4 damage can be expected to level even the most robustly built homes, making the common practice of sheltering in an interior room on the ground floor of a residence insufficient to ensure survival. A storm shelter, reinforced basement, or other subterranean shelter can provide substantial safety against EF4 tornadoes. EF5 (T10–T11) damage represents the upper limit of tornado power, and destruction is almost always total. An EF5 tornado pulls well-built, well-anchored homes off their foundations and into the air before obliterating them, flinging the wreckage for miles and sweeping the foundation clean. Large, steel-reinforced structures such as schools are completely leveled. Tornadoes of this intensity tend to shred and scour low-lying grass and vegetation from the ground.
Very little recognizable structural debris is generated by EF5 damage, with most materials reduced to a coarse mix of small, granular particles dispersed evenly across the tornado's damage path. Large, multiple-ton steel-frame vehicles and farm equipment are often mangled beyond recognition and deposited miles away, or reduced entirely to unrecognizable parts. The official description of this damage highlights the extreme nature of the destruction, noting that "incredible phenomena will occur"; historically, this has included such displays of power as twisting skyscrapers, leveling entire communities, and stripping asphalt from roadbeds. Despite their relative rarity, the damage caused by EF5 tornadoes represents a disproportionate hazard to life and limb; since 1950 in the United States, only 59 tornadoes (0.1% of all reports) have been designated F5 or EF5, and yet these have been responsible for more than 1,300 deaths and 14,000 injuries (21.5% and 13.6% of the respective totals).
**Indexing (motion)** Indexing (motion): Indexing in reference to motion is moving (or being moved) into a new position or location quickly and easily but also precisely. When indexing a machine part, its new location is known to within a few hundredths of a millimeter (thousandths of an inch), or often even to within a few thousandths of a millimeter (ten-thousandths of an inch), despite the fact that no elaborate measuring or layout was needed to establish that location. In reference to multi-edge cutting inserts, indexing is the process of exposing a new cutting edge for use. Indexing is a necessary kind of motion in many areas of mechanical engineering and machining. An object that indexes, or can be indexed, is said to be indexable. Indexing (motion): Usually when the word indexing is used, it refers specifically to rotation. That is, indexing is most often the quick and easy but precise rotation of a machine part through a certain known number of degrees. For example, Machinery's Handbook, 25th edition, in its section on milling machine indexing, says, "Positioning a workpiece at a precise angle or interval of rotation for a machining operation is called indexing." In addition to that most classic sense of the word, the swapping of one part for another, or other controlled movements, are also sometimes referred to as indexing, even if rotation is not the focus. Examples from everyday life: There are various examples of indexing that laypersons (non-engineers and non-machinists) can find in everyday life. These motions are not always called by the name indexing, but the idea is essentially similar: the motion of a retractable utility knife blade, which often will have well-defined discrete positions (fully retracted, ¼-exposed, ½-exposed, ¾-exposed, fully exposed); and the indexing of a revolver's cylinder with each shot. Manufacturing applications: Indexing is vital in manufacturing, especially mass production, where a well-defined cycle of motions must be repeated quickly and easily—but precisely—for each interchangeable part that is made. Without indexing capability, all manufacturing would have to be done on a craft basis, and interchangeable parts would have very high unit cost because of the time and skill needed to produce each unit. In fact, the evolution of modern technologies depended on the shift in methods from crafts (in which toolpath is controlled via operator skill) to indexing-capable toolpath control. A prime example of this theme was the development of the turret lathe, whose turret indexes tool positions, one after another, to allow successive tools to move into place, take precisely placed cuts, then make way for the next tool. Manufacturing applications: How indexing is achieved in manufacturing. Indexing capability is provided in two fundamental ways: with or without information technology (IT). Manufacturing applications: Non-IT-assisted physical guidance. Non-IT-assisted physical guidance was the first means of providing indexing capability, via purely mechanical means. It allowed the Industrial Revolution to progress into the Machine Age. It is achieved by jigs, fixtures, and machine tool parts and accessories, which control toolpath by the very nature of their shape, physically limiting the path for motion.
Some archetypal examples, developed to perfection before the advent of the IT era, are drill jigs, the turrets on manual turret lathes, indexing heads for manual milling machines, rotary tables, and various indexing fixtures and blocks that are simpler and less expensive than indexing heads, and serve quite well for most indexing needs in small shops. Although indexing heads of the pre-CNC era are now mostly obsolete in commercial manufacturing, the principle of purely mechanical indexing is still a vital part of current technology, in concert with IT, even as it has been extended to newer uses, such as the indexing of CNC milling machine toolholders or of indexable cutter inserts, whose precisely controlled size and shape allows them to be rotated or replaced quickly and easily without changing overall tool geometry. Manufacturing applications: IT-assisted physical guidance IT-assisted physical guidance (for example, via NC, CNC, or robotics) has been developed since the World War II era and uses electromechanical and electrohydraulic servomechanisms to translate digital information into position control. These systems also ultimately physically limit the path for motion, as jigs and other purely mechanical means do; but they do it not simply through their own shape, but rather using changeable information.
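As a concrete illustration of purely mechanical indexing, consider the plain (indirect) dividing head mentioned above: such heads conventionally use a 40:1 worm ratio, so the crank must be turned 40/N times per division, with any fractional turn taken as a count of holes on a plate hole circle. The short sketch below is illustrative only; the function name and the particular set of hole circles are assumptions for the example, not taken from any specific handbook.

```python
from fractions import Fraction

def indexing_turns(divisions, worm_ratio=40, hole_circles=(15, 18, 21, 29, 33, 43)):
    """Crank setting for one division on a plain dividing head.

    Returns (whole_turns, holes, circle): turn the crank whole_turns full
    revolutions plus `holes` holes on the `circle`-hole plate circle.
    Assumes the common 40:1 worm ratio; the hole circles are illustrative.
    """
    turns = Fraction(worm_ratio, divisions)          # crank turns per division
    whole, rem = divmod(turns.numerator, turns.denominator)
    frac = Fraction(rem, turns.denominator)
    for circle in hole_circles:
        if (frac * circle).denominator == 1:         # exactly realizable here
            return whole, int(frac * circle), circle
    raise ValueError(f"no listed hole circle can index {divisions} divisions")

# Example: a hexagon (6 divisions) needs 40/6 = 6 2/3 turns,
# i.e. 6 full turns plus 10 holes on a 15-hole circle.
print(indexing_turns(6))
```

Run as written, this prints (6, 10, 15), matching the hand calculation; the point is that the worm ratio and hole plate physically guarantee the precision, with no measuring needed at each division.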
**Trifluralin** Trifluralin: Trifluralin is a commonly used pre-emergence herbicide. With about 14 million pounds (6,400 t) used in the United States in 2001, it is one of the most widely used herbicides. Trifluralin is generally applied to the soil to provide control of a variety of annual grass and broadleaf weed species. It inhibits root development by interrupting mitosis, and thus can control weeds as they germinate. Environmental Regulation: Trifluralin has been banned in the European Union since 20 March 2008, primarily due to high toxicity to aquatic life. Trifluralin is on the United States Environmental Protection Agency list of Hazardous Air Pollutants as a regulated substance under the Clean Air Act. Environmental behavior: Trifluralin undergoes an extremely complex fate in the environment and is transiently transformed into many different products as it degrades, ultimately being incorporated into soil-bound residues or converted to carbon dioxide (mineralized). Among the more unusual behaviors of trifluralin is inactivation in wet soils. This has been linked to transformation of the herbicide by reduced soil minerals, which in turn had been previously reduced by soil microorganisms using them as electron acceptors in the absence of oxygen. This environmental degradation process has been reported for many structurally related herbicides (dinitroanilines) as well as a variety of explosives like TNT and picric acid.
**Healthcare technician** Healthcare technician: A healthcare technician is a health professional who provides care to patients. A healthcare technician's primary role is to assist medical staff in completing tasks around their assigned unit or clinic and to accommodate patient needs. Healthcare technician: Healthcare technicians are typically found in specialty clinics, intensive care, emergency departments, or laboratory collection facilities. Technicians perform basic cardiology studies such as electrocardiograms and have a basic understanding of bodily function. The technician is an integral member of the unit-based healthcare team; they contribute to the continuity of care by decreasing fragmentation through decentralization of selected diagnostic and therapeutic treatment modalities. Role description: Healthcare technicians provide two levels of care, direct and indirect. Often, technicians are trained and qualified to complete specialty tasks, and this varies depending on clinic needs. Healthcare technicians play a key role in patient care and in the cleanliness of hospital units. Healthcare technicians (HCTs) are also known as patient care technicians (PCTs) or certified nursing assistants (CNAs). HCTs' objectives are to provide basic nursing care, use communication skills to assist patients in adapting to common health problems, provide continuity of care, demonstrate acceptance of responsibility for learning purposes, and demonstrate accountability. HCTs can assist the medical team while evaluating patients on a routine basis within primary care clinics. They can be trained to administer emotional health questionnaires and surveys to evaluate the standing of patients' mental health. This screening measure performed by HCTs can help create awareness and assist in the diagnosis and prevention of patient depression. Role description: Direct care Qualified members performing direct patient care have the opportunity to work directly with patients and assist with their care and well-being. This type of care usually involves the following duties: specimen collection (blood or bodily fluid); venipuncture procedures or IV insertion; dressing changes; electrocardiograms; obtaining vital signs; patient monitoring (including direct observation of psychiatric patients); oral suctioning; assisting patients with bathing and grooming; therapeutic maneuvering and repositioning of patients; and transport of patients for medical procedures. According to a recent article, the following tasks rank as the highest job priorities for healthcare technicians: blood draws, arterial gases, venous access through IV insertion, central line dressing changes, electrocardiograms, patient monitoring, oxygen requirements, breathing exercises, and vital signs. HCTs are at the forefront of healthcare and can assist in addressing issues or concerns within the healthcare structure. They can be relied on to point out gaps or strengths when addressing chronic pain patients within the healthcare system. The empathetic approach that HCTs use on the job can help medical teams better care for patients facing chronic health illnesses. Role description: Indirect care Aside from routine patient care, healthcare technicians complete many duties outside of direct care. The indirect care performed by healthcare technicians ensures continuity of care.
Technicians maintain clinics or units through: unit cleanliness; stocking of medical instruments; supply procurement; clerical duties; and transportation of patient belongings. Specialty duties Although healthcare technicians perform several direct and indirect tasks, they also perform specialty duties. These duties are delegated to ensure clinic-specific needs are met. They include: cleaning of duty-specific equipment; use of atypical equipment; completion of qualifications to provide specific care; and knowledge-based studies to enhance the work environment. Education requirements Most allied health programs are at the associate degree level or lead to state-issued certification. A potential student will need to complete a certified program and a clinical externship. The duration of most programs is 10–24 weeks, varying with credit load. Medical technician students complete the following courses: Anatomy/Physiology I & II; Clinical Competencies I & II; Medical Coding and various other administrative courses; Pharmacology; and Medical Terminology. Job field Upon graduation and certification, health technicians can be gainfully employed by hospitals, physicians' offices, and specialty clinics. According to the United States Department of Labor, medical assistants (HCT, CMA, MA) held approximately 560,800 jobs in 2012 with a median pay of $29,370 per year. The Bureau of Labor Statistics has also projected 29% employment growth from 2012 to 2022. There is a growing need for a cohort of competent technicians to support the range of occupational health service delivery and to release nurses from some of the more routine tasks of their day-to-day work. Daily tasks: Most duties performed while working are distributed among several technicians. Although many tasks are at times stressful, one must complete these jobs efficiently and without error. The most difficult part of the work is successfully managing multiple units (or offices) while remaining efficient. Throughout an 8-hour day a tech may be assigned to 1–4 different areas and provide direct or indirect care to patients. Duties often include lifting, prolonged standing, blood draws, and patient bathing needs. Each job performed has annual competencies completed to ensure patient safety standards are met. Daily tasks: Levels of priority Each technician is trained with levels of priority, that is, the time frame within which one completes certain tasks or duties. For example, a technician may be given blood cultures (a lab specimen collected to verify the growth of bacteria in the blood system), routine lab collections, glucose testing, and equipment processing (cleaning of medical equipment). Depending on staffing levels, the technician must determine which task needs to be completed first by priority level. This is established by the technician based upon patient needs or the volume of patients needing testing, staffing levels, and the efficiency with which they can complete the most work.
**Specialty food** Specialty food: A specialty food is a food that is typically considered a "unique and high-value food item made in small quantities from high-quality ingredients". Consumers typically pay higher prices for specialty foods, and may perceive them as having various benefits compared to non-specialty foods. Compared to staple foods, specialty foods may have higher prices due to more expensive ingredients and labor. Some food stores specialize in or predominantly purvey specialty foods. Several organizations exist that promote specialty foods and their purveyors. Definition: There is no standard definition for "specialty food". Specialty foods: Foods that have been described as specialty foods include: alici from the Gulf of Trieste near Barcola; artisanal foods; caviar; cheese and artisan cheese; craft beer; specialty coffee (sometimes referred to as artisanal coffee); high-quality chocolate; foie gras; Iberico, Serrano, and other artisanal dry-cured hams; morel, chanterelle, matsutake, and other rare mushrooms; mostarda; gourmet pet foods; edible seaweed; stinky tofu (Chinese: chòu dòufu), which has been described as a local specialty food in the Old City of Shanghai; truffles; and truffle oil. Taboo food and drink are also a good place to look for "other" specialty foods. Some specialty foods may be ethnic specialties. Foods that have been described as specialty foods because they do not precisely correspond to other food categories include: kimchi, olives, royal jelly, bee pollen, propolis, sauerkraut, sea vegetables, and umeboshi. By country: China In China, specialty foods have been described as having "important roles in the food culture..." Some Chinese recipes may be footnoted with a statement that ingredients may only be available in specialty food stores and Chinese markets. Vietnam Pho is a popular Vietnamese dish that is often considered the country's national dish. It is a flavorful soup consisting of broth, rice noodles, herbs, and meat, typically beef or chicken. By country: Jellyfish noodle soup, also known as jellyfish salad or jellyfish vermicelli, is a unique and popular dish in some Asian cuisines, particularly Southeast Asian cuisine. Nha Trang jellyfish noodles are a local delicacy in the city of Nha Trang in Vietnam. They are known not only for their unique flavor and reasonable price but also for their high nutritional value. Jellyfish, the primary ingredient in this dish, is a rich source of protein and essential nutrients that are necessary for the proper functioning of the body. By country: United States In the United States, specialty foods and their purveyors are regulated by both federal and state agencies. The Specialty Food Association's annual "State of the Specialty Food Industry 2014" report stated that in 2013 in the U.S., specialty food and beverage sales totaled $88.3 billion, representing an increase of 18.4% since 2011 and a record high for the fourth consecutive year. The report also stated that around 80% of specialty food sales occur at the retail level, and that seven out of ten specialty food retailers reported that the word "local" had the most importance as a product claim. By country: Bean-to-bar chocolate manufacturers As of March 2015 in the United States, the number of bean-to-bar chocolate manufacturers (companies that process cocoa beans into a product in-house, rather than melting chocolate from another manufacturer) had increased to at least 60.
The Fine Chocolate Industry Association (FCIA) stated that this represented "a tenfold increase in the past decade that's outpacing growth in Europe". In April 2020, the FCIA launched the campaign website Make Mine Fine in order to support small-scale farmers in tropical countries who rely on cocoa for their livelihoods and to highlight the work of chocolate manufacturers who buy their beans. By country: California In 2012 in the United States, the specialty foods market sector was experiencing significant growth, with an annual growth rate of 8–10%. In 2010, specialty foods comprised 13.1% of total retail food sales and totaled $55.9 billion in sales. In 2010 in Oakland, California, it was reported that abandoned industrial spaces previously occupied by large food producers were being inhabited by small specialty food companies. In 1998, the U.S. state of California had the second-highest amount of specialty and gourmet foods of all U.S. states. This has been attributed to the diverse variety of unique fruits and vegetables that can be grown in Southern California. Another explanation for the high quantity and diversity of specialty foods in California is that food innovations often occur in the state, as has happened in other sectors such as health food and organic produce. In 1991, the Los Angeles Times reported that city officials in Monterey Park, Los Angeles County, California, suspected that significant numbers of non-residents were visiting the city to shop at Asian markets there to obtain specialty foods. By country: Vermont In terms of food-place association perceptions, Vermont has been described as being associated with "homemade-style specialty items", along with maple syrup. Companies and stores: Some companies, grocery stores and food stores specialize in or predominantly purvey specialty foods. Some of these companies include: Asian markets and supermarkets; Boulder Specialty Brands Inc.; Centennial Specialty Foods Corp. (Centennial, Colorado); Innovative Food Holdings; Organic Food Brokers (Boulder, Colorado); The Seven Elements (Amsterdam, the Netherlands); and Whole Foods Market. Organizations: United States National Association for the Specialty Food Trade Also known as the Specialty Food Association, it is a non-profit trade association founded in 1952 in New York that has over 3,000 members. The organization also oversees its Specialty Food Foundation, a foundation that "works to reduce hunger and increase food recovery efforts via grantmaking, education and industry events". Organizations: Connecticut Connecticut Food Association (which has a specialty food division) and the Connecticut Specialty Food Association. Massachusetts Massachusetts Specialty Foods Association. Michigan Traverse Bay Specialty Foods. New York In New York's Finger Lakes region, the Worker Ownership Resource Center established the Specialty Food Network. The network was established to "help clients start or expand small food businesses" and to promote the businesses and products of its members. Establishment of the network was enabled in part by a grant from the John Merck Fund. In 1998, the network had 46 members. Organizations: South Carolina South Carolina Specialty Food Association. Vermont Vermont Specialty Food Association.
**Metabasis paradox** Metabasis paradox: The metabasis paradox is an instance in the received text of Aristotle's Poetics where, according to many scholars, he makes two incompatible statements. In chapter 13 of the book, Aristotle states that for tragedy to end in misfortune is "correct," yet in chapter 14 he judges a kind of tragedy "best" that does not end in misfortune. Since the 16th century, scholars in Classics have puzzled over this contradiction or have proposed solutions, of which there are three from the 21st century. Gotthold Lessing's solution has been the most influential, yet there is no consensus. Metabasis paradox: In chapter 13, Aristotle initially argues that tragedy should consist of a change of fortune from good to bad, and mentions toward the end of the chapter that "ending in misfortune" is "correct". In chapter 14, he identifies the incident that creates fear and pity, killing "among family," in which the killer could either kill or not, and either knowingly or unknowingly. Yet in chapter 14 Aristotle finds that in the "best" version, the killer recognizes the victim and does not kill. Since that narrative does not end in misfortune, scholars often conclude that chapter 14 seems to contradict chapter 13. Arata Takeda has written a detailed history of the problem from the Renaissance up to the late 20th century, omitting 21st-century work. Takeda, however, does not offer the standard, consensus description of the solutions of André Dacier, Gotthold Lessing, and Stephen Halliwell. Takeda proposed a name for the problem, "metabasis paradox," from metabasis, "change," Aristotle's term in the Poetics for change of fortune. The problem: In chapter 13, Aristotle discusses what combination of change of fortune, or μετάβασις (metabasis), and character will create fear and pity, which turns out to involve a change of fortune from good to bad. He first rules out all scenarios involving a totally good or totally bad man. Omitting the good man passing from bad to good fortune, he evaluates (1) the wholly good man changing from good to bad fortune (Poetics 1452b34-5), (2) the wholly bad man changing from bad to good fortune (1452b37), and (3) the wholly bad man changing from good to bad fortune (1453a1). Aristotle finds that none of these three creates both fear and pity, and instead, the tragic hero should be ethically like the average person—not completely good or bad but a mean between the two (1453a7), and suffer a change of fortune from good to bad (1453a15). He describes misfortune in tragedy, δυστυχία (dustuchia, "adversity, misfortune"), as "to suffer or inflict terrible disasters" (1453a21). Aristotle then mentions that "so many" of Euripides's "plays end in misfortune." And he notes that he has just previously shown--"as was said"--that this kind of ending is "ὀρθόν" (orthon, "correct") (1453a26). In chapter 14, Aristotle considers the most "dreadful or rather pitiable" deed, "when for instance brother kills brother, or son father, or mother son, or son mother—either kills or intends to kill, or does something of the kind, that is what we must look for" (1453b20-21). The problem: Aristotle notes four ways this incident may be treated. After naming them, he ranks them: The worst of these is to intend the action with full knowledge and not to perform it. That outrages the feelings and is not tragic, for there is no calamity. So nobody does that, except occasionally, as, for instance, Haemon and Creon in the Antigone. Next comes the doing of the deed.
It is better to act in ignorance and discover afterwards. Our feelings are not outraged and the discovery is startling. Best of all is the last; in the Cresphontes, for instance, Merope intends to kill her son and does not kill him but discovers; and in the Iphigeneia the case of the sister and brother; and in the Helle the son discovers just as he is on the point of giving up his mother (1453b37-54a8). Killing averted by recognition is considered incompatible with chapter 13's claim that it is "correct" for tragedy to "end in misfortune." Solutions: Piero Vettori Vettori did not try to solve the problem but was the first to publish about it, in his Latin commentary on the Poetics in 1560. André Dacier wrote more than a century later, as though unaware of Castelvetro's remarks on the problem: "The wise Victorius [Vettori] is the only one who has seen it; but since he did not know what was the concern in the Chapter, and that it is only by this that it can be solved, he has not attempted to clarify it." Lodovico Castelvetro Castelvetro engaged the problem in his translation and commentary of 1570. He held that Aristotle rightly established ending in misfortune in chapter 13, and wrongly broke this rule in chapter 14, praising as best a kind of tragedy that, as Castelvetro put it, lacks "passion." Castelvetro proposed that a laudable action must involve passion: "[B]y passione Castelvetro meant pathos, that is, suffering, not emotion--and more laudable is such action as involves more passion". And although for Castelvetro killing and recognizing later is no less ethical than killing averted by recognition, in the former the "passion is full and accomplished (piena e auenuta)," whereas in killing averted by recognition passion is "short and threatened (sciema e minacciata)." André Dacier In commentary accompanying his 1692 French edition of the Poetics, Dacier made the first known attempt to resolve the contradiction. As scholars normally understand Dacier, his theory was that Aristotle called Euripides' plays ending in misfortune "correct" because the authoritative, traditional versions of these stories end in misfortune. Dacier believed that, in chapter 14, Aristotle considered stories that are open to change, hence the option of avoiding a death. As Dacier understood him, Aristotle meant that if the killing within family cannot be avoided, then the playwright moves to the next best option, and so on. Dacier also shifted Aristotle's numbering by one. The altered numbering is (1) killing knowingly (third best), (2) killing and recognizing later (second best), (3) killing averted by recognition (best), and (4) failing to kill while knowing (worst). Solutions: Thus Astydamas used the second, when he brought Alcmaeon on the stage, who killed Eriphyle. He did not follow the first manner, as Aeschylus did in his Choephoroi, and Sophocles and Euripides in Electra. He chose the second, because the certainty of Eriphyle's death did not allow him to choose the third. But Euripides chose the third in his Cresphontes, as the uncertain tradition of Merope's action gave him the liberty to choose which he pleased. Solutions: Gotthold Lessing Lessing responded to Dacier in Hamburg Dramaturgy. He maintained that Aristotle's preference for ending in misfortune was not relative to tradition.
In Lessing's view, Aristotle meant that with regard to endings alone, misfortune is always better for tragedy. Lessing's own solution is that in chapter 13 Aristotle establishes the best plot structure, and in 14 the best treatment of pathos, or scene of suffering. Lessing argued that, regardless of the reason for Aristotle's judging it "best," the scene where death is prevented could occur well before the end of a play. He proposed that this removes Aristotle's contradiction, because to say it is the best terrible incident still leaves the drama open to ending in good or bad fortune, at least in theory. He wrote that "Change of fortune may occur in the middle of the play, and even if it continues thus to the end of the piece, it does not therefore constitute its end." Lessing acknowledged the difficulty of ending in misfortune or death after it has been prevented. Yet he believed it was possible to combine the best pathos and the best ending, which D.W. Lucas considered somewhat implausible. Lessing's solution has been the most influential, at least historically. Notable scholars have endorsed the idea during the 19th and 20th centuries, including Gustav Teichmüller, Johannes Vahlen, Daniel de Montmollin, Gerald Else (crediting Vahlen), and D.W. Lucas. Ingram Bywater was not persuaded by Lessing on this issue; instead, he believed Aristotle had changed his mind. Bywater thought that in chapter 14 Aristotle became more concerned with avoiding what is shocking, and that he ultimately regarded the act of killing followed by recognition to be shocking. According to Bywater, this is why the fourth way, "where a timely Discovery saves us from the rude shock to our moral feelings...is pronounced to be κράτιστον." Bywater wrote: In chap. 13 Aristotle was thinking only of the emotional effect of tragedy as produced by the most obvious means; he comes to see that the same effect may be produced in a finer form without their aid. It is his somewhat tardy recognition of the necessity of avoiding τὸ μιαρόν that has caused this change of view. Solutions: John Moles also believed that the contradiction was due to a change of mind, as many secondary sources on Moles note. Moles wrote that "once Aristotle had embarked on his more detailed comparison of the different ways of handling the πάθος, he was induced to change his preference because at that particular point his more detailed approach necessarily involved taking a more restricted perspective." Stephen Halliwell Halliwell maintains that Aristotle did not change his mind, and that neither is Lessing's view satisfactory. Halliwell suggests the contradiction is not to be avoided, and instead finds that Aristotle is torn between two commitments: in chapter 13 the "tragic vision of the poets" and in chapter 14 an ethical view against any inexplicable, undeserved misfortune. Halliwell also argues that for Aristotle the change of fortune from good to bad is more important than ending in misfortune, and he further attempts to explain Aristotle's choice in chapter 14. He claims that, for Aristotle, misfortune has tragic meaning only if it is avoidable and intelligible, and that recognizing before killing, among the four ways, best meets these criteria. In his interpretation of Halliwell, Takeda believed the main point is that Aristotle emphasized process over ending in misfortune. Others describe Halliwell's view as concerned with Aristotle's ethics as a criterion of tragedy.
Solutions: Sheila Murnaghan Murnaghan titled her essay on the problem "sucking the juice without biting the rind," borrowing Gerald Else's phrase characterizing the averted-death theme. Murnaghan argues that, rather than being a problem to be solved, Aristotle's contradiction expresses the ambivalence of many observers toward tragedy's violence. She finds that the death-avoiding incident Aristotle prefers reflects the essence of theater, since both allow us to confront death safely. Murnaghan also proposes that theater is ethically ambiguous, since although unreal it may encourage violence and desensitize the viewer. She compares the theme of escape from death with philosophy's dispassionate, distanced view of art, and with Aristotle's catharsis, since theories of catharsis often understand tragedy as a homeopathic cure. Solutions: Elsa Bouchard Through a close reading of Aristotle's expression of the two contrary opinions, Bouchard proposes that they refer to different types of audience. In Bouchard's view, the preference for misfortune in chapter 13 reflects the domain of the literary critic, while the judgment that killing averted by recognition is "best," "kratiston," is linked to the popular audience. Bouchard acknowledges that both these types enjoyed viewing drama in the theater at Athens, but she contends that the more intellectual type would have sought less "comfort" in a story's ending, and would even prefer an unhappy ending. As evidence, she notes that "kratiston," "strong," may be the counterpart of the emotionally "weak" popular audience that Aristotle references in chapter 13 in describing the popularity of double plots, in which the good are rewarded and the bad punished. In contrast, the context Aristotle gives (in Bouchard's translation) to the arguments for change of fortune from good to bad is more intellectual--"the most beautiful tragedy according to art." Malcolm Heath Another 21st-century view holds that Aristotle did not mean that "ending in misfortune is correct" absolutely, and instead that he privileges this type of ending because it avoids the inferior "double plot" of a more melodramatic type of tragedy. This solution is put forth by the classicist Malcolm Heath (not to be confused with the English cricketer Malcolm Heath). Toward the end of Poetics chapter 13, Aristotle mentions that Euripides stands out among tragedians because "so many of his tragedies end in misfortune." Aristotle then states that "this is, as was earlier said, correct" (1453a25-26). Malcolm Heath finds that this praise of ending in misfortune is meant to justify a single plot over a double one. As Elsa Bouchard writes, "According to Heath, the prescriptions of chapter 13 are to be understood as essentially preliminary and polemic: they are above all intended to refute the partisans of the double plot," i.e., the plot in which a good man is saved and a bad man is punished or dies. In other words, as Heath explains, ending in misfortune means the ending has only one quality, fitting one individual. Consequently, according to Heath, the contradiction between chapters 13 and 14 is removed.
Bouchard also accounts for Heath's explanation of chapter 14: "The reason Heath gives to explain [the] preference rests on the idea of technical purity: plays like Iphigenia in Tauris are devoid of acts of violence (pathos) and thus of the kind of sensational spectacle that Aristotle condemns at the beginning of chapter 14: 'Reliance on visual effect therefore becomes impossible in a plot of averted violence: the poet has to rely on the structure of the plot to achieve tragic effect.'"
**Return on modeling effort** Return on modeling effort: Return on modelling effort (ROME) is the benefit resulting from a (supplementary) effort to create and/or improve a model. Purpose: In engineering, modelling always serves a particular goal. For example, the lightning protection of aircraft can be modelled as an electrical circuit, in order to predict whether the protection will still work in 30 years, given the ageing of its electrical components. More and more effort can be put into making this model predict reality perfectly. However, this perfection comes at a price: researchers invest time and money in improving the model. Analogous to return on investment (ROI), ROME is a metric for the usefulness of further modelling. It may therefore serve as a 'stopping criterion' (see the sketch at the end of this article). Typically, researchers will pull towards continuing modelling, while management will pull towards stopping modelling. Being explicit about the costs and benefits of continued modelling may help to make informed decisions that are understood by both sides. Continuous communication between model developers and model users increases the probability of models actually being put to profitable use. Domains: ROME is a metric which can be evaluated wherever modelling is performed with a quantifiable goal. Examples include: modelling the impact of federal policy on social problems; modelling a marketing mix to statistically correlate a number of inputs (or independent variables), such as a marketing campaign, to outcomes (or dependent variables), such as sales or profits; modelling the links between enterprise actors to make an informed choice on splitting organizations; and modelling the coupling of an electromagnetic interference to a PCB to reduce its susceptibility by improving the routing of traces. Research: The initiative "Models at Work" studies the creation, management and use of domain models in scientific and industrial practice, aiming at a diversity of goals, varying from (as truthful as possible) representation of the conceptual structure of the domain that is modelled, via animation, simulation, execution and gamification, to automated (logic-based) reasoning.
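Because ROME is defined by analogy with ROI, a simple way to operationalize the stopping criterion is to compare the benefit expected from the next modelling increment with that increment's cost. The sketch below is purely illustrative: the function names, the ratio form, and the threshold are assumptions made for this example, not part of any published definition of ROME.

```python
def rome(marginal_benefit, marginal_cost):
    """Return on modelling effort for one increment of model improvement.

    Expressed like ROI: the net extra benefit gained per unit of extra
    modelling cost. Illustrative only; units (euros, person-days, ...)
    just have to be consistent between the two arguments.
    """
    return (marginal_benefit - marginal_cost) / marginal_cost

def keep_modelling(marginal_benefit, marginal_cost, threshold=0.0):
    # A hypothetical stopping criterion: continue only while the latest
    # increment of modelling effort still pays for itself.
    return rome(marginal_benefit, marginal_cost) > threshold

# Example: the last refinement cost 10 person-days and is expected to
# save 12 person-days through better predictions -> ROME = 0.2 > 0,
# so one more modelling iteration is still worthwhile.
print(rome(12, 10), keep_modelling(12, 10))
```

In practice the benefit term is itself an estimate (for example, the value of improved prediction accuracy), which is exactly why the article stresses communication between model developers and model users.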
**Automatic acquisition of sense-tagged corpora** Automatic acquisition of sense-tagged corpora: The knowledge acquisition bottleneck is perhaps the major impediment to solving the word sense disambiguation (WSD) problem. Unsupervised learning methods rely on knowledge about word senses, which is barely formulated in dictionaries and lexical databases. Supervised learning methods depend heavily on the existence of manually annotated examples for every word sense, a requisite that can so far be met only for a handful of words for testing purposes, as is done in the Senseval exercises. Existing methods: Therefore, one of the most promising trends in WSD research is using the largest corpus ever accessible, the World Wide Web, to acquire lexical information automatically. WSD has traditionally been understood as an intermediate language engineering technology which could improve applications such as information retrieval (IR). In this case, however, the reverse is also true: Web search engines implement simple and robust IR techniques that can be successfully used when mining the Web for information to be employed in WSD. The most direct way of using the Web (and other corpora) to enhance WSD performance is the automatic acquisition of sense-tagged corpora, the fundamental resource needed to feed supervised WSD algorithms. Although this is far from being commonplace in the WSD literature, a number of different and effective strategies to achieve this goal have already been proposed. Some of these strategies are: acquisition by direct Web searching (searches for monosemous synonyms, hypernyms, hyponyms, words of parsed glosses, etc.), the Yarowsky algorithm (bootstrapping), acquisition via Web directories, and acquisition via cross-language meaning evidence; the monosemous-relatives idea is illustrated in a short sketch at the end of this article. Summary: Optimistic results The automatic extraction of examples to train supervised learning algorithms has been, by far, the best-explored approach to mining the Web for word sense disambiguation. Some results are certainly encouraging: In some experiments, the quality of the Web data for WSD equals that of human-tagged examples. This is the case for the monosemous-relatives-plus-bootstrapping-with-Semcor-seeds technique and for the examples taken from the ODP Web directories. In the first case, however, Semcor-size example seeds are necessary (and only available for English), and it has only been tested with a very limited set of nouns; in the second case, the coverage is quite limited, and it is not yet clear whether it can be grown without compromising the quality of the examples retrieved. Summary: It has been shown that a mainstream supervised learning technique trained exclusively with Web data can obtain better results than all unsupervised WSD systems which participated in Senseval-2. Web examples made a significant contribution to the best Senseval-2 English all-words system. Difficulties There are, however, several open research issues related to the use of Web examples in WSD: High precision in the retrieved examples (i.e., correct sense assignments for the examples) does not necessarily lead to good supervised WSD results (i.e., the examples are possibly not useful for training). The most complete evaluation of Web examples for supervised WSD indicates that learning with Web data improves over unsupervised techniques, but the results are nevertheless far from those obtained with hand-tagged data, and do not even beat the most-frequent-sense baseline.
Summary: Results are not always reproducible; the same or similar techniques may lead to different results in different experiments. Compare, for instance, Mihalcea (2002) with Agirre and Martínez (2004), or Agirre and Martínez (2000) with Mihalcea and Moldovan (1999). Results with Web data seem to be very sensitive to small differences in the learning algorithm, to when the corpus was extracted (search engines change continuously), and to small heuristic issues (e.g., differences in filters to discard part of the retrieved examples). Summary: Results are strongly dependent on bias (i.e., on the relative frequencies of examples per word sense). It is unclear whether this is simply a problem of Web data, an intrinsic problem of supervised learning techniques, or just a problem of how WSD systems are evaluated (indeed, testing with rather small Senseval data may overemphasize sense distributions compared to sense distributions obtained from the full Web as corpus). Summary: In any case, Web data has an intrinsic bias, because queries to search engines directly constrain the context of the examples retrieved. There are approaches that alleviate this problem, such as using several different seeds/queries per sense or assigning senses to Web directories and then scanning the directories for examples; but this problem is nevertheless far from being solved. Once a Web corpus of examples is built, it is not entirely clear whether its distribution is safe from a legal perspective. Future Besides automatic acquisition of examples from the Web, there are some other WSD experiments that have profited from the Web: The Web as a social network has been successfully used for cooperative annotation of a corpus (OMWE, the Open Mind Word Expert project), which has already been used in three Senseval-3 tasks (English, Romanian and Multilingual). The Web has been used to enrich WordNet senses with domain information: topic signatures and Web directories, which have in turn been successfully used for WSD. Summary: Also, some research has benefited from the semantic information that Wikipedia maintains on its disambiguation pages. It is clear, however, that most research opportunities remain largely unexplored. For instance, little is known about how to use lexical information extracted from the Web in knowledge-based WSD systems; and it is also hard to find systems that use Web-mined parallel corpora for WSD, even though there are already efficient algorithms that use parallel corpora in WSD.
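As an illustration of the monosemous-relatives strategy mentioned earlier, the following sketch uses NLTK's WordNet interface to collect, for each sense of an ambiguous noun, related lemmas that have only one sense. Such lemmas can serve as unambiguous search queries, and the snippets they retrieve can be auto-tagged with the originating sense. This is a minimal sketch assuming NLTK and its WordNet data are installed; the Web querying and example-filtering steps, which the discussion above shows to be the delicate part, are omitted.

```python
# Minimal sketch of the "monosemous relatives" idea with NLTK's WordNet:
# for each sense of an ambiguous word, gather related lemmas that have
# exactly one sense; corpus hits for those lemmas can be auto-tagged
# with the original sense. Retrieval itself is left out.
from nltk.corpus import wordnet as wn

def monosemous_relatives(word, pos=wn.NOUN):
    relatives = {}
    for synset in wn.synsets(word, pos=pos):
        candidates = set(synset.lemma_names())
        for hyper in synset.hypernyms():
            candidates.update(hyper.lemma_names())
        for hypo in synset.hyponyms():
            candidates.update(hypo.lemma_names())
        # keep only lemmas with exactly one sense (monosemous)
        relatives[synset.name()] = sorted(
            c for c in candidates
            if c.lower() != word.lower() and len(wn.synsets(c, pos=pos)) == 1
        )
    return relatives

# e.g. monosemous_relatives("bank") maps each sense of "bank" to
# unambiguous query terms such as "riverbank" for the shore sense.
print(monosemous_relatives("bank"))
```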
**Don't Bite the Pavement** Don't Bite the Pavement: Don't Bite the Pavement is a series of contemporary art exhibitions showcasing installation art, expanded video, and experimental film, which toured the west coast of the United States, Canada, and the United Kingdom. Biography: The Don't Bite the Pavement series began in 1999 in Olympia, Washington and eventually became a project of the organisation ArtRod. Each installment was loosely focused around an idea or theme and showcased innovative work by artists from around the world within the field of video art, expanded media, microcinema, or installation art, providing a space for this work to be presented and creating dialogues between the site and the larger community. Around this time, artists and writers also began documenting and discussing this work in a recurring column likewise called Don't Bite the Pavement, which regularly featured interviews and articles in the journal Toby Room. With each installment of the video series, Don't Bite the Pavement sought to create exhibitions that "engaged and challenged the community", while providing space and resources where artists could exhibit new work and develop novel approaches and projects. Reception: Artforum contributor Emily Hall wrote that the project was "pulling together some of the most challenging and interesting work happening around Seattle", adding: "Let that be a modest but powerful lesson to all the naysayers and whiners who complain that innovative work isn't happening here." Artist Lauren Steinhart characterized the project as follows: "Each gathering of Don't Bite the Pavement was a chance for artists and viewers to gather, interact, and to show and discuss their work both completed and in-progress. In this way, DBtP became an integral and vital part of the arts community." Early events included work by artists Wynne Greenwood (Tracy + the Plastics), Denise Baggett (Smith), Jared Pappas-Kelley, Tim Sullivan, Anna Jordan Huff (Anna Oxygen), Cathy de la Cruz, Nathan Howdeshell (from the band the Gossip), April Levy, Michael Lent, Jason Gutz, Devon Damonte, Lauren Steinhart, Bryan Connolly, and Bridget Irish; the series also included work by David Blandy and George Kuchar, with the average screening consisting of pieces by emerging and established artists.
**Bond fluctuation model** Bond fluctuation model: The BFM (bond fluctuation model or bond fluctuation method) is a lattice model for simulating the conformation and dynamics of polymer systems. There are two versions of the BFM in use: the earlier version was first introduced by I. Carmesin and Kurt Kremer in 1988, and the later version by J. Scott Shaffer in 1994. Conversion between the models is possible. Model: Carmesin and Kremer version In this model the monomers are represented by cubes on a regular cubic lattice, with each cube occupying eight lattice positions. Each lattice position can be occupied by only one monomer, in order to model excluded volume. The monomers are connected by a bond vector, which is taken from a set of typically 108 allowed vectors. There are different definitions for this vector set. One example of a bond vector set is made up of the six base vectors below, using permutation and sign variation of the three vector components in each direction: B = P±(2,0,0) ∪ P±(2,1,0) ∪ P±(2,1,1) ∪ P±(2,2,1) ∪ P±(3,0,0) ∪ P±(3,1,0). The resulting bond lengths are 2, √5, √6, 3 and √10. The combination of bond vector set and monomer shape in this model ensures that polymer chains cannot cross each other, without an explicit test of the local topology. Model: The basic movement of a monomer cube takes place along the lattice axes, ΔB ∈ P±(1,0,0), so that each of the possible bond vectors can be realized. Model: Shaffer's version As in the case of the Carmesin-Kremer BFM, the Shaffer BFM is also constructed on a simple-cubic lattice. However, the lattice points, or vertices of each cube, are the sites that can be occupied by a monomer. Each lattice point can be occupied by one monomer only. Successive monomers along a polymer backbone are connected by bond vectors. Each allowed bond vector must be (a) a cube edge, (b) a face diagonal, or (c) a solid diagonal. The resulting bond lengths are 1, √2 and √3. In addition to the bond length constraint, polymers must not be allowed to cross. This is enforced most efficiently by the use of a secondary lattice which is twice as fine as the original lattice. The secondary lattice tracks the midpoints of the bonds in the system, and forbids the overlap of bond midpoints. This effectively disallows polymers from crossing each other. Monte Carlo step: In both versions of the BFM, a single attempt to move one monomer consists of the following steps, which are standard for Monte Carlo methods: select a monomer m and a direction ΔB ∈ P±(1,0,0) randomly; check the list of conditions (see below); if all conditions are fulfilled, perform the move. The conditions to perform a move can be subdivided into mandatory and optional ones. Mandatory conditions for the Carmesin–Kremer BFM: the four lattice sites next to monomer m in the direction ΔB are empty, and the move does not lead to bonds that are not contained in the bond vector set. Mandatory conditions for the Shaffer BFM: the lattice site to which the chosen monomer is going to be moved is empty, the move does not lead to bonds that are not contained in the bond vector set, and the move does not lead to overlapping of bond midpoints. Optional conditions: if the move leads to an energetic difference ΔU, for example due to an electric field or an adsorbing force at the walls, a Metropolis algorithm is applied: the Metropolis rate pM, defined as pM = exp(−ΔU/kBT), is compared to a random number r from the interval [0, 1). If the Metropolis rate is smaller than r, the move is rejected; otherwise it is accepted.
Monte Carlo step: The number of Monte Carlo steps of the total system is defined as the number of attempted monomer moves divided by the number of monomers: MCS = attempts / monomers.
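To make the Carmesin–Kremer move rules concrete, here is a minimal, self-contained sketch of single-monomer move attempts for one chain in the athermal case (no energy term, so no Metropolis test), with periodic boundaries. The data layout and helper names are illustrative choices for this example, not a reference implementation.

```python
import itertools, random

L = 32  # periodic cubic lattice edge length

def perms_with_signs(v):
    # all permutations and sign variations of a base vector
    return {tuple(s * c for s, c in zip(signs, p))
            for p in itertools.permutations(v)
            for signs in itertools.product((1, -1), repeat=3)}

# the 108 allowed bond vectors built from the six base vectors quoted above
BONDS = set().union(*(perms_with_signs(v) for v in
                      [(2,0,0), (2,1,0), (2,1,1), (2,2,1), (3,0,0), (3,1,0)]))
AXES = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def cube_sites(corner):
    # the 8 lattice sites covered by a monomer cube with this corner
    x, y, z = corner
    return {((x+dx) % L, (y+dy) % L, (z+dz) % L)
            for dx in (0,1) for dy in (0,1) for dz in (0,1)}

def bond_vector(a, b):
    # minimum-image difference under periodic boundaries
    return tuple((ai - bi + L//2) % L - L//2 for ai, bi in zip(a, b))

def attempt_move(chain, occupied):
    """One move attempt: random monomer, random lattice-axis direction."""
    i = random.randrange(len(chain))
    d = random.choice(AXES)
    new = tuple((c + dc) % L for c, dc in zip(chain[i], d))
    old_sites, new_sites = cube_sites(chain[i]), cube_sites(new)
    # excluded volume: the freshly covered sites must be empty
    if (new_sites - old_sites) & occupied:
        return False
    # bond restriction: bonds to both chain neighbours must stay allowed
    for j in (i - 1, i + 1):
        if 0 <= j < len(chain) and bond_vector(chain[j], new) not in BONDS:
            return False
    occupied.difference_update(old_sites - new_sites)
    occupied.update(new_sites - old_sites)
    chain[i] = new
    return True

# a straight 5-monomer chain with bond (2,0,0); one MCS = len(chain) attempts
chain = [(2 * k, 0, 0) for k in range(5)]
occupied = set().union(*(cube_sites(c) for c in chain))
accepted = sum(attempt_move(chain, occupied) for _ in range(len(chain)))
print(accepted, "of", len(chain), "attempts accepted")
```

Note how the two mandatory conditions appear directly as the excluded-volume check and the bond-set check; an energetic (optional) condition would add a Metropolis acceptance test before the state is updated.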
**Makaton** Makaton: Makaton is a communication tool with speech, signs, and symbols to enable people with disabilities or learning disabilities to communicate. Makaton supports the development of essential communication skills such as attention, listening, comprehension, memory and expressive speech and language. The Makaton language programme has been used with individuals who have cognitive impairments, autism, Down syndrome, specific language impairment, multisensory impairment and acquired neurological disorders that have negatively affected the ability to communicate, including stroke and dementia patients. The name "Makaton" is derived from the names of three members of the original teaching team at Botleys Park Hospital: Margaret Walker (the designer of the programme and speech therapist at Botleys Park), Katherine Johnston and Tony Cornforth (psychiatric hospital visitors from the Royal Association for Deaf People). Makaton is a registered trademark of the Makaton Charity, which was established in 2007 to replace the original charitable trust, the Makaton Vocabulary Development Project, established in 1983. The original trademark application for Makaton was filed in Britain on 28 August 1979, with registration approved as from that date under trademark registration no. 1119745. In 2004 the Oxford University Press included Makaton as a common-usage word in the Oxford English Dictionary. The entry states: "Makaton, n. Brit. A proprietary name for: a language programme integrating speech, manual signs, and graphic symbols, developed to help people for whom communication is very difficult, esp. those with learning disabilities." Programme: The Makaton Language Programme uses a multimodal approach to teach communication, language and, where appropriate, literacy skills, through a combination of speech, signs, and graphic symbols used concurrently, or through speech with signs only, or speech with graphic symbols only, as appropriate for the student's needs. It consists of a Core Vocabulary of roughly 450 concepts that are taught in a specific order (there are eight different stages). For example, stage one involves teaching vocabulary for immediate needs, like "eat" and "drink". Later stages contain more complex and abstract vocabulary such as time and emotions. Once basic communication has been established, the student can progress in their language use, using whatever modes are most appropriate. Also, although the programme is organised in stages, it can be modified and tailored to the individual's needs. In addition to the Core Vocabulary, there is a Makaton Resource Vocabulary of over 11,000 concepts which are illustrated with signs and graphic symbols. Development: Original research was conducted by Margaret Walker in 1972/73, and resulted in the design of the Makaton Core Vocabulary based on functional need. This research was conducted with institutionalised deaf cognitively impaired adults resident at Botleys Park Hospital in Chertsey, Surrey (which closed in 2008). The aim was to enable them to communicate using signs from British Sign Language. Fourteen deaf and cognitively impaired adults participated in the pilot study, and all were able to learn to use manual signs; improved behaviour was also noted.
Shortly after, the Core Vocabulary was revised to include both children and adults with severe communication difficulties (including individuals who could hear), and was used in many schools throughout Britain in order to stimulate communication and language. In the early stages of development, Makaton used only speech and manual signs (without symbols). By 1985, work had begun to include graphic symbols in the Makaton Language Programme, and a version including graphic symbols was published in 1986. The Core Vocabulary was revised in 1986 to include additional cultural concepts. Development: The Makaton Vocabulary Development Project was founded in 1976 by Margaret Walker, who worked in a voluntary capacity as director until her retirement in October 2008. The first Makaton training workshop was held in 1976, and supporting resources and further training courses were, and continue to be, developed. In 1983 the Makaton Vocabulary Development Project became a charitable trust, and in 2007 it changed its status to become the Makaton Charity. Use: The Makaton Language Programme is used extensively across Britain and has been adapted for use in different countries; signs from each country's deaf community are used, along with culturally relevant Makaton symbols. For example, within Britain, Makaton uses signs from British Sign Language; the signs are mainly from the London and South East England regional dialect. Makaton has also been adapted for use in over 40 countries, including France, Greece, Japan, Kuwait and the Gulf, Russia, South Africa and Switzerland. Using signs from each country's own existing sign language ensures that they reflect each country's unique culture, and also provides a bank of further signs if required for use with the Makaton Language Programme. Use: In 1991 the Makaton Charity produced a video/DVD of children's familiar nursery rhymes, signed, spoken and sung by a well-known children's TV presenter, Dave Benson Phillips, who had previously used Makaton with poems and rhymes in the Children's BBC show Playdays. The aim was for it to be enjoyed by children with developmental disabilities and their peers and siblings. Following this major success, in 2003 it became a significant part of the BBC's Something Special programmes on the CBeebies programme thread, presented by Justin Fletcher, which has won numerous awards and is now into its thirteenth series. Use: On 16 November 2018, comedian Rob Delaney read a book on the BBC's children's channel CBeebies entirely in Makaton and English; he had used Makaton to communicate with his late son Henry, who was rendered unable to talk after a tracheotomy.
**Dynamic circuit network** Dynamic circuit network: A dynamic circuit network (DCN) is an advanced computer networking technology that combines traditional packet-switched communication based on the Internet Protocol, as used in the Internet, with circuit-switched technologies that are characteristic of traditional telephone network systems. This combination allows user-initiated ad hoc dedicated allocation of network bandwidth for high-demand, real-time applications and network services, delivered over an optical fiber infrastructure. Implementation: Dynamic circuit networks were pioneered by the Internet2 advanced networking consortium. The experimental Internet2 HOPI infrastructure, decommissioned in 2007, was a forerunner to the current SONET-based Ciena Network underlying the Internet2 DCN. The Internet2 DCN began operation in late 2007 as part of the larger Internet2 network. It provides advanced networking capabilities and resources to the scientific and research communities, such as the Large Hadron Collider (LHC) project. The Internet2 DCN is based on open-source, standards-based software, the Inter-domain Controller (IDC) protocol, developed in cooperation with ESnet and GÉANT2. The entire software set is known as the Dynamic Circuit Network Software Suite (DCN SS). Implementation: Inter-domain Controller protocol The Inter-domain Controller protocol manages the dynamic provisioning of network resources participating in a dynamic circuit network across multiple administrative domain boundaries. It is a SOAP-based XML messaging protocol, secured by Web Services Security (v1.1) using the XML Digital Signature standard. It is transported over HTTP Secure (HTTPS) connections.
**Fruit sours** Fruit sours: Fruit sours are a confectionery normally sold in bulk. Each piece is spherical and about 15 mm in diameter. They come in a variety of colors, typically red (strawberry), orange, yellow (lemon), green (apple or lime), and purple (berry or black currant). Fruit sours are comparable to jelly beans in texture, with a soft candy center and a glazed outer shell. They are also mildly tart and tangy in flavor, due to citric acid and malic acid, which sometimes coat the sweets as crystals. Gourmet varieties have a more prominent fruit flavoring added. Fruit sours: A 4 oz. serving of fruit sours contains about 400 calories with 105 grams of carbohydrates.
**CIM Schema** CIM Schema: CIM Schema is a computer specification, part of the Common Information Model standard, created by the Distributed Management Task Force. It is a conceptual diagram made of classes, attributes, relations between these classes, and inheritances, defined in the world of software and hardware. This set of objects and their relations is a conceptual framework for describing computer elements and organizing information about the managed environment. This schema is the basis of other DMTF standards such as WBEM, SMASH, and SMI-S for storage management. Extensibility: The CIM schema is object-based and extensible, allowing manufacturers to represent their equipment using the elements defined in the core classes of the CIM schema. For this, manufacturers provide software extensions called providers, which supplement existing classes by deriving them and adding new attributes. Examples of common core classes: CIM_ComputerSystem: computer host; CIM_DataFile: computer file; CIM_Directory: file directory; CIM_DiskPartition: disk partition; CIM_FIFOPipeFile: named pipe; CIM_OperatingSystem: operating system; CIM_Process: computer process; CIM_SqlTable: database table; CIM_SqlTrigger: database trigger.
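As a rough sketch of how such a schema is consumed in practice, the following Python example uses the open-source pywbem library to enumerate instances of one of the core classes listed above over WBEM. The server address, credentials, and namespace are placeholder assumptions, not values from the article.

```python
# Sketch: querying CIM classes over WBEM with pywbem (placeholder server).
import pywbem

conn = pywbem.WBEMConnection(
    "https://cimserver.example.org",   # hypothetical WBEM server
    ("user", "password"),              # placeholder credentials
    default_namespace="root/cimv2",    # common, but implementation-specific
)

# Enumerate instances of a core class. A vendor "provider" may return
# instances of a derived subclass carrying extra, vendor-specific attributes.
for system in conn.EnumerateInstances("CIM_ComputerSystem"):
    print(system.classname, system["Name"])
```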
**Naxos syndrome** Naxos syndrome: Naxos disease (also known as "diffuse non-epidermolytic palmoplantar keratoderma with woolly hair and cardiomyopathy" or "diffuse palmoplantar keratoderma with woolly hair and arrhythmogenic right ventricular cardiomyopathy", first described on the island of Naxos by Nikos Protonotarios) is a cutaneous condition characterized by a palmoplantar keratoderma. The prevalence of the syndrome is up to 1 in every 1000 people in the Greek islands. It has been associated with mutations in the genes encoding the proteins desmoplakin, plakoglobin, desmocollin-2, and SRC-interacting protein (SIP). Naxos disease has the same cutaneous phenotype as the Carvajal syndrome. Symptoms: Between 80 and 99% of those with Naxos disease will display some of the following symptoms: disease of the heart muscle; thickening of the palms and soles; sudden increased heart rate; dizzy spells; kinked hair.
**Wing of ilium** Wing of ilium: The wing (ala) of ilium is the large expanded portion of the ilium, the bone which bounds the greater pelvis laterally. It presents for examination two surfaces—an external and an internal—a crest, and two borders—an anterior and a posterior. External surface: The external surface, known as the dorsum ossis ilium, is directed backward and lateralward behind, and downward and lateralward in front. It is smooth, convex in front, deeply concave behind; bounded above by the crest, below by the upper border of the acetabulum, in front and behind by the anterior and posterior borders. This surface is crossed in an arched direction by three lines—the posterior, anterior, and inferior gluteal lines. The posterior gluteal line (superior curved line), the shortest of the three, begins at the crest, about 5 cm in front of its posterior extremity; it is at first distinctly marked, but as it passes downward to the upper part of the greater sciatic notch, where it ends, it becomes less distinct, and is often altogether lost. Behind this line is a narrow semilunar surface, the upper part of which is rough and gives origin to a portion of the gluteus maximus; the lower part is smooth and has no muscular fibers attached to it. The anterior gluteal line (middle curved line), the longest of the three, begins at the crest, about 4 cm behind its anterior extremity, and, taking a curved direction downward and backward, ends at the upper part of the greater sciatic notch. The space between the anterior and posterior gluteal lines and the crest is concave, and gives origin to the gluteus medius. Near the middle of this line a nutrient foramen is often seen. The inferior gluteal line (inferior curved line), the least distinct of the three, begins in front at the notch on the anterior border, and, curving backward and downward, ends near the middle of the greater sciatic notch. The surface of bone included between the anterior and inferior gluteal lines is concave from above downward, convex from before backward, and gives origin to the gluteus minimus. Between the inferior gluteal line and the upper part of the acetabulum is a rough, shallow groove, from which the reflected tendon of the rectus femoris arises. Internal surface of the ala: The internal surface of the ala is bounded above by the crest, below, by the arcuate line; in front and behind, by the anterior and posterior borders. It presents a large, smooth, concave surface, called the iliac fossa, which gives origin to the iliacus and is perforated at its inner part by a nutrient canal; and below this a smooth, rounded border, the arcuate line, which runs downward, forward, and medialward. Behind the iliac fossa is a rough surface, divided into two portions, an anterior and a posterior. The anterior surface (auricular surface), so called from its resemblance in shape to the ear, is coated with cartilage in the fresh state, and articulates with a similar surface on the side of the sacrum. The posterior portion, known as the iliac tuberosity, is elevated and rough, for the attachment of the posterior sacroiliac ligaments and for the origins of the sacrospinalis and multifidus. Below and in front of the auricular surface is the preauricular sulcus, more commonly present and better marked in the female than in the male; to it is attached the pelvic portion of the anterior sacroiliac ligament. 
Crest of the ilium: The crest of the ilium is convex in its general outline but is sinuously curved, being concave inward in front, concave outward behind. It is thinner at the center than at the extremities, and ends in the anterior and posterior superior iliac spines. The surface of the crest is broad, and divided into external and internal lips, and an intermediate line. About 5 cm behind the anterior superior iliac spine there is a prominent tubercle on the outer lip. To the external lip are attached the tensor fasciæ latæ, obliquus externus abdominis, and latissimus dorsi, and along its whole length the fascia lata; to the intermediate line the obliquus internus abdominis; to the internal lip, the fascia iliaca, the transversus abdominis, quadratus lumborum, sacrospinalis, and iliacus. Anterior border of the ala: The anterior border of the ala is concave. It presents two projections, separated by a notch. Anterior border of the ala: Of these, the uppermost, situated at the junction of the crest and anterior border, is called the anterior superior iliac spine; its outer border gives attachment to the fascia lata, and the tensor fasciæ latæ, its inner border, to the iliacus; while its extremity affords attachment to the inguinal ligament and gives origin to the sartorius. Beneath this eminence is a notch from which the sartorius takes origin and across which the lateral femoral cutaneous nerve passes. Anterior border of the ala: Below the notch is the anterior inferior iliac spine, which ends in the upper lip of the acetabulum; it gives attachment to the straight tendon of the rectus femoris and to the iliofemoral ligament of the hip-joint. Medial to the anterior inferior spine is a broad, shallow groove, over which the iliacus and psoas major pass. This groove is bounded medially by an eminence, the iliopectineal eminence, which marks the point of union of the ilium and pubis. Posterior border of the ala: The posterior border of the ala, shorter than the anterior, also presents two projections separated by a notch, the posterior superior iliac spine and the posterior inferior iliac spine. The former serves for the attachment of the oblique portion of the posterior sacroiliac ligaments and the multifidus; the latter corresponds with the posterior extremity of the auricular surface. Below the posterior inferior spine is a deep notch, the greater sciatic notch.
**Driver circuit** Driver circuit: In electronics, a driver is a circuit or component used to control another circuit or component, such as a high-power transistor, a liquid crystal display (LCD), stepper motors, SRAM memory, and numerous others. Driver circuit: Drivers are typically used to regulate the current flowing through a circuit or to control other components or devices connected to it. The term is often used, for example, for a specialized integrated circuit that controls high-power switches in switched-mode power converters. An amplifier can be considered a driver for loudspeakers, and a voltage regulator can be considered a driver that keeps an attached component operating within a broad range of input voltages. Driver circuit: Typically the driver stage(s) of a circuit require different characteristics from other circuit stages. For example, in a transistor power amplifier circuit, the driver circuit typically requires current gain, often the ability to discharge the following transistor bases rapidly, and low output impedance to avoid or minimize distortion. In SRAM memory, driver circuits are used to rapidly discharge the necessary bit lines from a precharge level to the write margin or below.
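To make the current-gain requirement concrete, here is a back-of-the-envelope sketch (not from the article) for sizing the base resistor of a simple BJT driver stage; all component values are illustrative assumptions.

```python
# Back-of-the-envelope sizing for a BJT driver stage: the driver must supply
# enough base current to hold the switching transistor in saturation.
# All values below are illustrative assumptions, not from the article.

V_DRIVE = 5.0    # driver output voltage (V), hypothetical logic level
V_BE = 0.7       # typical base-emitter drop for a silicon BJT (V)
I_LOAD = 0.5     # collector (load) current to be switched (A)
BETA_MIN = 50    # assumed minimum current gain of the transistor
OVERDRIVE = 3    # overdrive factor for hard saturation, a common rule of thumb

i_base = OVERDRIVE * I_LOAD / BETA_MIN   # required base current (A)
r_base = (V_DRIVE - V_BE) / i_base       # base resistor value (ohms)

print(f"base current: {i_base * 1e3:.1f} mA, base resistor: {r_base:.0f} ohms")
```

With these placeholder numbers the sketch yields about 30 mA of base current and a roughly 143-ohm base resistor, illustrating why a driver stage needs current gain and low output impedance compared with small-signal stages.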
**Proofs involving the addition of natural numbers** Proofs involving the addition of natural numbers: This article contains mathematical proofs for some properties of addition of the natural numbers: the additive identity, commutativity, and associativity. These proofs are used in the article Addition of natural numbers. Definitions: This article will use the Peano axioms for the definition of natural numbers. With these axioms, addition is defined from the constant 0 and the successor function S(a) by the two rules

[A1] a + 0 = a
[A2] a + S(b) = S(a + b)

For the proof of commutativity, it is useful to give the name "1" to the successor of 0; that is, 1 = S(0). For every natural number a, one has

a + 1 = a + S(0) = S(a + 0) = S(a),

by [A2] and then [A1]. Proof of associativity: We prove associativity by first fixing natural numbers a and b and applying induction on the natural number c. For the base case c = 0,

(a + b) + 0 = a + b = a + (b + 0).

Each equation follows by definition [A1]; the first with a + b, the second with b. Now, for the induction. We assume the induction hypothesis, namely we assume that for some natural number c,

(a + b) + c = a + (b + c).

Then it follows that

(a + b) + S(c) = S((a + b) + c) [by A2] = S(a + (b + c)) [by the induction hypothesis] = a + S(b + c) [by A2] = a + (b + S(c)) [by A2].

In other words, the induction hypothesis holds for S(c). Therefore, the induction on c is complete. Proof of identity element: Definition [A1] states directly that 0 is a right identity. We prove that 0 is a left identity by induction on the natural number a. For the base case a = 0, 0 + 0 = 0 by definition [A1]. Now we assume the induction hypothesis, that 0 + a = a. Then

0 + S(a) = S(0 + a) [by A2] = S(a) [by the induction hypothesis].

This completes the induction on a. Proof of commutativity: We prove commutativity (a + b = b + a) by applying induction on the natural number b. First we prove the base cases b = 0 and b = S(0) = 1 (i.e. we prove that 0 and 1 commute with everything). The base case b = 0 follows immediately from the identity element property (0 is an additive identity), which has been proved above: a + 0 = a = 0 + a. Proof of commutativity: Next we will prove the base case b = 1, that 1 commutes with everything, i.e. for all natural numbers a, we have a + 1 = 1 + a. We will prove this by induction on a (an induction proof within an induction proof). We have proved that 0 commutes with everything, so in particular, 0 commutes with 1: for a = 0, we have 0 + 1 = 1 + 0. Now, suppose a + 1 = 1 + a. Then

S(a) + 1 = S(a) + S(0) [by 1 = S(0)] = S(S(a) + 0) [by A2] = S(S(a)) [by A1] = S(a + 1) [since a + 1 = S(a)] = S(1 + a) [by the induction hypothesis] = 1 + S(a) [by A2].

This completes the induction on a, and so we have proved the base case b = 1. Now, suppose that for all natural numbers a, we have a + b = b + a. We must show that for all natural numbers a, we have a + S(b) = S(b) + a. We have

a + S(b) = a + (b + 1) [since S(b) = b + 1] = (a + b) + 1 [by associativity] = (b + a) + 1 [by the induction hypothesis] = b + (a + 1) [by associativity] = b + (1 + a) [by the base case b = 1] = (b + 1) + a [by associativity] = S(b) + a [since b + 1 = S(b)].

This completes the induction on b.
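These definitions and the associativity proof mechanize almost line-for-line in a proof assistant. The following Lean 4 sketch is our own illustration (the inductive type and function names are not from the article); it encodes [A1] and [A2] as the defining equations of addition and replays the induction on c:

```lean
-- A sketch in Lean 4 of the definitions above; names are our own choices.
inductive N : Type where
  | zero : N
  | succ : N → N

open N

def add : N → N → N
  | a, zero   => a                -- [A1]: a + 0 = a
  | a, succ b => succ (add a b)   -- [A2]: a + S(b) = S(a + b)

-- Associativity, proved by induction on c exactly as in the text.
theorem add_assoc (a b c : N) : add (add a b) c = add a (add b c) := by
  induction c with
  | zero => rfl                   -- base case: both sides reduce by [A1]
  | succ c ih =>                  -- inductive step: unfold by [A2], then
      show succ (add (add a b) c) = succ (add a (add b c))
      rw [ih]                     -- apply the induction hypothesis
```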
**IL13RA2** IL13RA2: Interleukin-13 receptor subunit alpha-2 (IL-13Rα2), also known as CD213A2 (cluster of differentiation 213A2), is a membrane-bound protein that in humans is encoded by the IL13RA2 gene. Function: IL-13Rα2 is closely related to IL-13Rα1, a subunit of the interleukin-13 receptor complex. This protein binds IL-13 with high affinity, but lacks any significant cytoplasmic domain, and does not appear to function as a signal mediator. It is, however, able to regulate the effects of both IL-13 and IL-4, despite the fact that it is unable to bind directly to the latter. It is also reported to play a role in the internalization of IL-13. Clinical Significance: IL-13Rα2 has been found to be over-expressed in a variety of cancers, including pancreatic and ovarian cancers, melanomas, and malignant gliomas.
**Biomedical Optics Express** Biomedical Optics Express: Biomedical Optics Express is a monthly peer-reviewed scientific journal published by Optica. The journal's scope encompasses fundamental research and technology development of optics applied to biomedical studies and clinical applications. The founding and first editor-in-chief was Joseph A. Izatt (Duke University). The current editor-in-chief is Ruikang (Ricky) Wang at the University of Washington, USA. Abstracting and indexing: The journal is abstracted and indexed by Science Citation Index Expanded; Current Contents/Engineering, Computing & Technology; Chemical Abstracts Service/CASSI; and PubMed. According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.562.
**CFHR5** CFHR5: Complement factor H-related protein 5 is a protein that in humans is encoded by the CFHR5 gene. Function: CFHR5 is structurally related to complement factor H, which plays an important role in the regulation of a branch of the innate immune system called the alternative complement pathway. Like complement factor H, CFHR5 is able to bind to complement C3. Clinical Significance: A mutation in CFHR5 was found in patients with the disease CFHR5 nephropathy, which is a common cause of renal disease in Cyprus. The mutated form of the protein found in patients with this disease has an impaired ability to bind to complement C3, suggesting that CFHR5 is important in protecting the kidneys from attack by the complement system.
**1-alkylglycerophosphocholine O-acyltransferase** 1-alkylglycerophosphocholine O-acyltransferase: In enzymology, a 1-alkylglycerophosphocholine O-acyltransferase (EC 2.3.1.63) is an enzyme that catalyzes the chemical reaction acyl-CoA + 1-alkyl-sn-glycero-3-phosphocholine ⇌ CoA + 2-acyl-1-alkyl-sn-glycero-3-phosphocholine. Thus, the two substrates of this enzyme are acyl-CoA and 1-alkyl-sn-glycero-3-phosphocholine, whereas its two products are CoA and 2-acyl-1-alkyl-sn-glycero-3-phosphocholine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:1-alkyl-sn-glycero-3-phosphocholine O-acyltransferase. This enzyme participates in ether lipid metabolism.
**Rare-earth element** Rare-earth element: The rare-earth elements (REE), also called the rare-earth metals or rare earths or, in context, rare-earth oxides, and sometimes the lanthanides (although yttrium and scandium, which do not belong to this series, are usually included as rare earths), are a set of 17 nearly indistinguishable lustrous silvery-white soft heavy metals. Compounds containing rare earths have diverse applications in electrical and electronic components, lasers, glass, magnetic materials, and industrial processes. Rare-earth element: Scandium and yttrium are considered rare-earth elements because they tend to occur in the same ore deposits as the lanthanides and exhibit similar chemical properties, but have different electronic and magnetic properties. The term 'rare-earth' is a misnomer because they are not actually scarce, although historically it took a long time to isolate these elements. These metals tarnish slowly in air at room temperature and react slowly with cold water to form hydroxides, liberating hydrogen. They react with steam to form oxides and ignite spontaneously at a temperature of 400 °C (752 °F). These elements and their compounds have no biological function other than in several specialized enzymes, such as in lanthanide-dependent methanol dehydrogenases in bacteria. The water-soluble compounds are mildly to moderately toxic, but the insoluble ones are not. All isotopes of promethium are radioactive, and it does not occur naturally in the earth's crust, except for a trace amount generated by spontaneous fission of uranium-238. They are often found in minerals with thorium, and less commonly uranium. Rare-earth element: Though rare-earth elements are technically relatively plentiful in the entire Earth's crust (cerium being the 25th-most-abundant element at 68 parts per million, more abundant than copper), in practice this is spread thin across trace impurities, so to obtain rare earths at usable purity requires processing enormous amounts of raw ore at great expense, thus the name "rare" earths. Rare-earth element: Because of their geochemical properties, rare-earth elements are typically dispersed and not often found concentrated in rare-earth minerals. Consequently, economically exploitable ore deposits are sparse. The first rare-earth mineral discovered (1787) was gadolinite, a black mineral composed of cerium, yttrium, iron, silicon, and other elements. This mineral was extracted from a mine in the village of Ytterby in Sweden; four of the rare-earth elements bear names derived from this single location. List: A table listing the 17 rare-earth elements, their atomic number and symbol, the etymology of their names, and their main uses (see also Applications of lanthanides) is provided here. Some of the rare-earth elements are named after the scientists who discovered them, or elucidated their elemental properties, and some after the geographical locations where discovered. A mnemonic for the names of the sixth-row elements in order is "Lately college parties never produce sexy European girls that drink heavily even though you look". Discovery and early history: Rare earths were mainly discovered as components of minerals. Yttrium was found in the "ytterbite" (renamed to gadolinite in 1800) discovered by Lieutenant Carl Axel Arrhenius in 1787 at a quarry in the village of Ytterby, Sweden, and termed "rare" because it had never yet been seen.
Arrhenius's "ytterbite" reached Johan Gadolin, a Royal Academy of Turku professor, and his analysis yielded an unknown oxide ("earth" in the geological parlance of the day), which he called yttria. Anders Gustav Ekeberg isolated beryllium from the gadolinite but failed to recognize other elements in the ore. After this discovery in 1794, a mineral from Bastnäs near Riddarhyttan, Sweden, which was believed to be an iron–tungsten mineral, was re-examined by Jöns Jacob Berzelius and Wilhelm Hisinger. In 1803 they obtained a white oxide and called it ceria. Martin Heinrich Klaproth independently discovered the same oxide and called it ochroia. It took another 30 years for researchers to determine that other elements were contained in the two ores ceria and yttria (the similarity of the rare-earth metals' chemical properties made their separation difficult). Discovery and early history: In 1839 Carl Gustav Mosander, an assistant of Berzelius, separated ceria by heating the nitrate and dissolving the product in nitric acid. He called the oxide of the soluble salt lanthana. It took him three more years to separate the lanthana further into didymia and pure lanthana. Didymia, although not further separable by Mosander's techniques, was in fact still a mixture of oxides. Discovery and early history: In 1842 Mosander also separated the yttria into three oxides: pure yttria, terbia, and erbia (all the names are derived from the town name "Ytterby"). The earth giving pink salts he called terbium; the one that yielded yellow peroxide he called erbium. In 1842 the number of known rare-earth elements had reached six: yttrium, cerium, lanthanum, didymium, erbium, and terbium. Discovery and early history: Nils Johan Berlin and Marc Delafontaine tried also to separate the crude yttria and found the same substances that Mosander obtained, but Berlin named (1860) the substance giving pink salts erbium, and Delafontaine named the substance with the yellow peroxide terbium. This confusion led to several false claims of new elements, such as the mosandrium of J. Lawrence Smith, or the philippium and decipium of Delafontaine. Due to the difficulty in separating the metals (and determining the separation is complete), the total number of false discoveries was dozens, with some putting the total number of discoveries at over a hundred. Discovery and early history: Spectroscopic identification There were no further discoveries for 30 years, and the element didymium was listed in the periodic table of elements with a molecular mass of 138. In 1879, Delafontaine used the new physical process of optical flame spectroscopy and found several new spectral lines in didymia. Also in 1879, Paul Émile Lecoq de Boisbaudran isolated the new element samarium from the mineral samarskite. Discovery and early history: The samaria earth was further separated by Lecoq de Boisbaudran in 1886, and a similar result was obtained by Jean Charles Galissard de Marignac by direct isolation from samarskite. They named the element gadolinium after Johan Gadolin, and its oxide was named "gadolinia". Further spectroscopic analysis between 1886 and 1901 of samaria, yttria, and samarskite by William Crookes, Lecoq de Boisbaudran and Eugène-Anatole Demarçay yielded several new spectral lines that indicated the existence of an unknown element. The fractional crystallization of the oxides then yielded europium in 1901. Discovery and early history: In 1839 the third source for rare earths became available. 
This is a mineral similar to gadolinite called uranotantalum (now called "samarskite"), an oxide of a mixture of elements such as yttrium, ytterbium, iron, uranium, thorium, calcium, niobium, and tantalum. This mineral from Miass in the southern Ural Mountains was documented by Gustav Rose. The Russian chemist R. Hermann proposed that a new element he called "ilmenium" should be present in this mineral, but later, Christian Wilhelm Blomstrand, Galissard de Marignac, and Heinrich Rose found only tantalum and niobium (columbium) in it. Discovery and early history: The exact number of rare-earth elements that existed was highly unclear, and a maximum number of 25 was estimated. The use of X-ray spectra (obtained by X-ray crystallography) by Henry Gwyn Jeffreys Moseley made it possible to assign atomic numbers to the elements. Moseley found that the exact number of lanthanides had to be 15, but that element 61 had not yet been discovered. (This is promethium, a radioactive element whose most stable isotope has a half-life of just 17.7 years.) Using these facts about atomic numbers from X-ray crystallography, Moseley also showed that hafnium (element 72) would not be a rare-earth element. Moseley was killed in World War I in 1915, years before hafnium was discovered. Hence, the claim of Georges Urbain that he had discovered element 72 was untrue. Hafnium is an element that lies in the periodic table immediately below zirconium, and hafnium and zirconium have very similar chemical and physical properties. Sources and purification: During the 1940s, Frank Spedding and others in the United States (during the Manhattan Project) developed chemical ion-exchange procedures for separating and purifying rare-earth elements. This method was first applied to the actinides for separating plutonium-239 and neptunium from uranium, thorium, actinium, and the other actinides in the materials produced in nuclear reactors. Plutonium-239 was very desirable because it is a fissile material. Sources and purification: The principal sources of rare-earth elements are the minerals bastnäsite (RCO3F, where R is a mixture of rare-earth elements), monazite (XPO4, where X is a mixture of rare-earth elements and sometimes thorium), and loparite ((Ce,Na,Ca)(Ti,Nb)O3), and the lateritic ion-adsorption clays. Despite their high relative abundance, rare-earth minerals are more difficult to mine and extract than equivalent sources of transition metals (due in part to their similar chemical properties), making the rare-earth elements relatively expensive. Their industrial use was very limited until efficient separation techniques were developed, such as ion exchange, fractional crystallization, and liquid–liquid extraction during the late 1950s and early 1960s. Some ilmenite concentrates contain small amounts of scandium and other rare-earth elements, which could be analysed by XRF. Classification: Before the time that ion-exchange methods and elution were available, the separation of the rare earths was primarily achieved by repeated precipitation or crystallization. In those days, the first separation was into two main groups, the cerium earths (scandium, lanthanum, cerium, praseodymium, neodymium, and samarium) and the yttrium earths (yttrium, dysprosium, holmium, erbium, thulium, ytterbium, and lutetium).
Europium, gadolinium, and terbium were either considered as a separate group of rare-earth elements (the terbium group), or europium was included in the cerium group, and gadolinium and terbium were included in the yttrium group. The reason for this division arose from the difference in solubility of rare-earth double sulfates with sodium and potassium. The sodium double sulfates of the cerium group are poorly soluble, those of the terbium group slightly, and those of the yttrium group are very soluble. Sometimes, the yttrium group was further split into the erbium group (dysprosium, holmium, erbium, and thulium) and the ytterbium group (ytterbium and lutetium), but today the main grouping is between the cerium and the yttrium groups. Today, the rare-earth elements are classified as light or heavy rare-earth elements, rather than in cerium and yttrium groups. Classification: Light versus heavy classification The classification of rare-earth elements is inconsistent between authors. The most common distinction between rare-earth elements is made by atomic numbers; those with low atomic numbers are referred to as light rare-earth elements (LREE), those with high atomic numbers are the heavy rare-earth elements (HREE), and those that fall in between are typically referred to as the middle rare-earth elements (MREE). Commonly, rare-earth elements with atomic numbers 57 to 61 (lanthanum to promethium) are classified as light and those with atomic numbers 62 and greater are classified as heavy rare-earth elements. Increasing atomic numbers between light and heavy rare-earth elements and decreasing atomic radii throughout the series cause chemical variations. Europium is exempt from this classification as it has two valence states: Eu2+ and Eu3+. Yttrium is grouped as a heavy rare-earth element due to chemical similarities. The break between the two groups is sometimes put elsewhere, such as between elements 63 (europium) and 64 (gadolinium). The actual metallic densities of these two groups overlap, with the "light" group having densities from 6.145 (lanthanum) to 7.26 (promethium) or 7.52 (samarium) g/cc, and the "heavy" group from 6.965 (ytterbium) to 9.32 (thulium), as well as including yttrium at 4.47. Europium has a density of 5.24. Origin: Rare-earth elements, except scandium, are heavier than iron and thus are produced by supernova nucleosynthesis or by the s-process in asymptotic giant branch stars. In nature, spontaneous fission of uranium-238 produces trace amounts of radioactive promethium, but most promethium is synthetically produced in nuclear reactors. Due to their chemical similarity, the concentrations of rare earths in rocks are only slowly changed by geochemical processes, making their proportions useful for geochronology and dating fossils. Compounds: Rare-earth elements occur in nature in combination with phosphate (monazite), carbonate-fluoride (bastnäsite), and oxygen anions. Compounds: In their oxides, most rare-earth elements only have a valence of 3 and form sesquioxides (cerium forms CeO2). Five different crystal structures are known, depending on the element and the temperature. The X-phase and the H-phase are only stable above 2000 K. At lower temperatures, there are the hexagonal A-phase, the monoclinic B-phase, and the cubic C-phase, which is the stable form at room temperature for most of the elements. The C-phase was once thought to be in space group I213 (no. 199), but is now known to be in space group Ia3 (no. 206).
The structure is similar to that of fluorite or cerium dioxide (in which the cations form a face-centred cubic lattice and the anions sit inside the tetrahedra of cations), except that one-quarter of the anions (oxygen) are missing. The unit cell of these sesquioxides corresponds to eight unit cells of fluorite or cerium dioxide, with 32 cations instead of 4. This is called the bixbyite structure, as it occurs in a mineral of that name ((Mn,Fe)2O3). Geological distribution: As seen in the chart, rare-earth elements are found on Earth at similar concentrations to many common transition metals. The most abundant rare-earth element is cerium, which is actually the 25th most abundant element in Earth's crust, having 68 parts per million (about as common as copper). The exception is promethium: this highly unstable and radioactive rare earth is quite scarce. The longest-lived isotope of promethium has a half-life of 17.7 years, so the element exists in nature in only negligible amounts (approximately 572 g in the entire Earth's crust). Promethium is one of the two elements that do not have stable (non-radioactive) isotopes and are followed by (i.e. with higher atomic number) stable elements (the other being technetium). Geological distribution: The rare-earth elements are often found together. During the sequential accretion of the Earth, the dense rare-earth elements were incorporated into the deeper portions of the planet. Early differentiation of molten material largely incorporated the rare earths into mantle rocks. The high field strength and large ionic radii of rare earths make them incompatible with the crystal lattices of most rock-forming minerals, so REE will undergo strong partitioning into a melt phase if one is present. REE are chemically very similar and have always been difficult to separate, but the gradual decrease in ionic radius from light REE (LREE) to heavy REE (HREE), called the lanthanide contraction, can produce a broad separation between light and heavy REE. The larger ionic radii of LREE make them generally more incompatible than HREE in rock-forming minerals, and they will partition more strongly into a melt phase, while HREE may prefer to remain in the crystalline residue, particularly if it contains HREE-compatible minerals like garnet. The result is that all magma formed from partial melting will always have greater concentrations of LREE than HREE, and individual minerals may be dominated by either HREE or LREE, depending on which range of ionic radii best fits the crystal lattice. Among the anhydrous rare-earth phosphates, it is the tetragonal mineral xenotime that incorporates yttrium and the HREE, whereas the monoclinic monazite phase incorporates cerium and the LREE preferentially. The smaller size of the HREE allows greater solid solubility in the rock-forming minerals that make up Earth's mantle, and thus yttrium and the HREE show less enrichment in Earth's crust relative to chondritic abundance than do cerium and the LREE. This has economic consequences: large ore bodies of LREE are known around the world and are being exploited. Ore bodies for HREE are rarer, smaller, and less concentrated. Most of the current supply of HREE originates in the "ion-absorption clay" ores of Southern China. Some versions provide concentrates containing about 65% yttrium oxide, with the HREE being present in ratios reflecting the Oddo–Harkins rule: even-numbered REE at abundances of about 5% each, and odd-numbered REE at abundances of about 1% each.
Similar compositions are found in xenotime or gadolinite. Well-known minerals containing yttrium, and other HREE, include gadolinite, xenotime, samarskite, euxenite, fergusonite, yttrotantalite, yttrotungstite, yttrofluorite (a variety of fluorite), thalenite, and yttrialite. Small amounts occur in zircon, which derives its typical yellow fluorescence from some of the accompanying HREE. The zirconium mineral eudialyte, such as is found in southern Greenland, contains small but potentially useful amounts of yttrium. Of the above yttrium minerals, most played a part in providing research quantities of lanthanides during the discovery days. Xenotime is occasionally recovered as a byproduct of heavy-sand processing, but is not as abundant as the similarly recovered monazite (which typically contains a few percent of yttrium). Uranium ores from Ontario have occasionally yielded yttrium as a byproduct. Well-known minerals containing cerium, and other LREE, include bastnäsite, monazite, allanite, loparite, ancylite, parisite, lanthanite, chevkinite, cerite, stillwellite, britholite, fluocerite, and cerianite. Monazite (marine sands from Brazil, India, or Australia; rock from South Africa), bastnäsite (from Mountain Pass rare earth mine, or several localities in China), and loparite (Kola Peninsula, Russia) have been the principal ores of cerium and the light lanthanides. Enriched deposits of rare-earth elements at the surface of the Earth, carbonatites and pegmatites, are related to alkaline plutonism, an uncommon kind of magmatism that occurs in tectonic settings where there is rifting or that are near subduction zones. In a rift setting, the alkaline magma is produced by very small degrees of partial melting (<1%) of garnet peridotite in the upper mantle (200 to 600 km depth). This melt becomes enriched in incompatible elements, like the rare-earth elements, by leaching them out of the crystalline residue. The resultant magma rises as a diapir, or diatreme, along pre-existing fractures, and can be emplaced deep in the crust, or erupted at the surface. Typical REE-enriched deposit types forming in rift settings are carbonatites, and A- and M-type granitoids. Near subduction zones, partial melting of the subducting plate within the asthenosphere (80 to 200 km depth) produces a volatile-rich magma (high concentrations of CO2 and water), with high concentrations of alkaline elements, and high element mobility that the rare earths are strongly partitioned into. This melt may also rise along pre-existing fractures, and be emplaced in the crust above the subducting slab or erupted at the surface. REE-enriched deposits forming from these melts are typically S-type granitoids. Alkaline magmas enriched with rare-earth elements include carbonatites, peralkaline granites (pegmatites), and nepheline syenite. Carbonatites crystallize from CO2-rich fluids, which can be produced by partial melting of hydrous-carbonated lherzolite to produce a CO2-rich primary magma, by fractional crystallization of an alkaline primary magma, or by separation of a CO2-rich immiscible liquid from a silicate melt. These liquids most commonly form in association with very deep Precambrian cratons, like the ones found in Africa and the Canadian Shield.
Ferrocarbonatites are the most common type of carbonatite to be enriched in REE, and are often emplaced as late-stage, brecciated pipes at the core of igneous complexes; they consist of fine-grained calcite and hematite, sometimes with significant concentrations of ankerite and minor concentrations of siderite. Large carbonatite deposits enriched in rare-earth elements include Mount Weld in Australia, Thor Lake in Canada, Zandkopsdrift in South Africa, and Mountain Pass in the USA. Peralkaline granites (A-type granitoids) have very high concentrations of alkaline elements and very low concentrations of phosphorus; they are deposited at moderate depths in extensional zones, often as igneous ring complexes, or as pipes, massive bodies, and lenses. These fluids have very low viscosities and high element mobility, which allows for the crystallization of large grains, despite a relatively short crystallization time upon emplacement; their large grain size is why these deposits are commonly referred to as pegmatites. Economically viable pegmatites are divided into Lithium-Cesium-Tantalum (LCT) and Niobium-Yttrium-Fluorine (NYF) types; NYF types are enriched in rare-earth minerals. Examples of rare-earth pegmatite deposits include Strange Lake in Canada and Khaladean-Buregtey in Mongolia. Nepheline syenite (M-type granitoids) deposits are 90% feldspar and feldspathoid minerals. They are deposited in small, circular massifs and contain high concentrations of rare-earth-bearing accessory minerals. For the most part, these deposits are small, but important examples include Illimaussaq-Kvanefeld in Greenland and Lovozera in Russia. Rare-earth elements can also be enriched in deposits by secondary alteration, either by interactions with hydrothermal fluids or meteoric water, or by erosion and transport of resistate REE-bearing minerals. Argillization of primary minerals enriches insoluble elements by leaching out silica and other soluble elements, recrystallizing feldspar into clay minerals such as kaolinite, halloysite, and montmorillonite. In tropical regions where precipitation is high, weathering forms a thick argillized regolith; this process is called supergene enrichment and produces laterite deposits. Heavy rare-earth elements are incorporated into the residual clay by adsorption. This kind of deposit is only mined for REE in Southern China, where the majority of global heavy rare-earth element production occurs. REE-laterites do form elsewhere, including over the carbonatite at Mount Weld in Australia. REE may also be extracted from placer deposits if the sedimentary parent lithology contains REE-bearing, heavy resistate minerals. In 2011, Yasuhiro Kato, a geologist at the University of Tokyo who led a study of Pacific Ocean seabed mud, published results indicating the mud could hold rich concentrations of rare-earth minerals. The deposits, studied at 78 sites, came from "[h]ot plumes from hydrothermal vents pull[ing] these materials out of seawater and deposit[ing] them on the seafloor, bit by bit, over tens of millions of years. One square patch of metal-rich mud 2.3 kilometers wide might contain enough rare earths to meet most of the global demand for a year, Japanese geologists report in Nature Geoscience." "I believe that rare[-]earth resources undersea are much more promising than on-land resources," said Kato. "[C]oncentrations of rare earths were comparable to those found in clays mined in China.
Some deposits contained twice as much heavy rare earths such as dysprosium, a component of magnets in hybrid car motors." Notes on rare-earth elements geochemistry: The REE geochemical classification is usually done on the basis of their atomic weight. One of the most common classifications divides REE into 3 groups: light rare earths (LREE - from 57La to 60Nd), intermediate (MREE - from 62Sm to 67Ho) and heavy (HREE - from 68Er to 71Lu). REE usually appear as trivalent ions, except for Ce and Eu, which can take the form of Ce4+ and Eu2+ depending on the redox conditions of the system. Consequently, REE are characterized by a substantial identity in their chemical reactivity, which results in a serial behaviour during geochemical processes rather than being characteristic of a single element of the series. Sc, Y, and Lu can be electronically distinguished from the other rare earths because they do not have f valence electrons, whereas the others do, but the chemical behaviour is almost the same. Notes on rare-earth elements geochemistry: A distinguishing factor in the geochemical behaviour of the REE is linked to the so-called "lanthanide contraction", which represents a higher-than-expected decrease in the atomic/ionic radius of the elements along the series. This is determined by the variation of the shielding effect towards the nuclear charge due to the progressive filling of the 4f orbital, which acts against the electrons of the 6s and 5d orbitals. The lanthanide contraction has a direct effect on the geochemistry of the lanthanides, which show a different behaviour depending on the systems and processes in which they are involved. The effect of the lanthanide contraction can be observed in the REE behaviour both in a CHARAC-type geochemical system (CHArge-and-RAdius-Controlled), where elements with similar charge and radius should show coherent geochemical behaviour, and in non-CHARAC systems, such as aqueous solutions, where the electron structure is also an important parameter to consider, as the lanthanide contraction affects the ionic potential. A direct consequence is that, during the formation of coordination bonds, the REE behaviour gradually changes along the series. Furthermore, the lanthanide contraction causes the ionic radius of Ho3+ (0.901 Å) to be almost identical to that of Y3+ (0.9 Å), justifying the inclusion of the latter among the REE. Notes on rare-earth elements geochemistry: Geochemistry applications The application of rare-earth elements to geology is important to understanding the petrological processes of igneous, sedimentary and metamorphic rock formation. In geochemistry, rare-earth elements can be used to infer the petrological mechanisms that have affected a rock due to the subtle atomic size differences between the elements, which cause preferential fractionation of some rare earths relative to others depending on the processes at work. Notes on rare-earth elements geochemistry: The geochemical study of the REE is not carried out on absolute concentrations – as is usually done with other chemical elements – but on normalized concentrations in order to observe their serial behaviour. In geochemistry, rare-earth elements are typically presented in normalized "spider" diagrams, in which the concentrations of rare-earth elements are normalized to a reference standard and are then expressed as the logarithm to the base 10 of the value.
Notes on rare-earth elements geochemistry: Commonly, the rare-earth elements are normalized to chondritic meteorites, as these are believed to be the closest representation of unfractionated solar system material. However, other normalizing standards can be applied depending on the purpose of the study. Normalization to a standard reference value, especially of a material believed to be unfractionated, allows the observed abundances to be compared to the initial abundances of the element. Normalization also removes the pronounced 'zig-zag' pattern caused by the differences in abundance between even and odd atomic numbers. Normalization is carried out by dividing the analytical concentration of each element of the series by the concentration of the same element in a given reference standard, according to the equation:

$[\mathrm{REE}]_n = \frac{[\mathrm{REE}]_{\mathrm{sam}}}{[\mathrm{REE}]_{\mathrm{ref}}}$

where the subscript n indicates the normalized concentration, $[\mathrm{REE}]_{\mathrm{sam}}$ the analytical concentration of the element measured in the sample, and $[\mathrm{REE}]_{\mathrm{ref}}$ the concentration of the same element in the reference material. It is possible to observe the serial trend of the REE by reporting their normalized concentrations against the atomic number. The trends that are observed in "spider" diagrams are typically referred to as "patterns", which may be diagnostic of petrological processes that have affected the material of interest. According to the general shape of the patterns, or thanks to the presence (or absence) of so-called "anomalies", information regarding the system under examination and the occurring geochemical processes can be obtained. The anomalies represent enrichment (positive anomalies) or depletion (negative anomalies) of specific elements along the series and are graphically recognizable as positive or negative "peaks" along the REE patterns. The anomalies can be numerically quantified as the ratio between the normalized concentration of the element showing the anomaly and the value predicted from the average of the normalized concentrations of the two elements in the previous and next positions in the series, according to the equation:

$\mathrm{anomaly}(\mathrm{REE}_i) = \frac{[\mathrm{REE}_i]_n}{\left([\mathrm{REE}_{i-1}]_n + [\mathrm{REE}_{i+1}]_n\right)/2}$

where $[\mathrm{REE}_i]_n$ is the normalized concentration of the element whose anomaly has to be calculated, and $[\mathrm{REE}_{i-1}]_n$ and $[\mathrm{REE}_{i+1}]_n$ are the normalized concentrations of the previous and next elements along the series, respectively.
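As a worked illustration of the two equations above, the sketch below normalizes a hypothetical sample and computes a europium anomaly. The chondrite reference values are rough placeholders for illustration only; real studies use published chondritic compositions.

```python
# Sketch of REE normalization and anomaly calculation. All concentrations
# (ppm) below are illustrative placeholders, not published reference data.
chondrite_ppm = {"La": 0.24, "Ce": 0.61, "Sm": 0.15, "Eu": 0.058, "Gd": 0.20}
sample_ppm    = {"La": 30.0, "Ce": 60.0, "Sm": 5.5,  "Eu": 1.1,   "Gd": 5.0}

# [REE]_n = [REE]_sam / [REE]_ref
normalized = {el: sample_ppm[el] / chondrite_ppm[el] for el in sample_ppm}

# Eu anomaly: observed Eu_n over the value predicted from the average of
# its neighbours in the series (Sm and Gd).
predicted_eu = (normalized["Sm"] + normalized["Gd"]) / 2
eu_anomaly = normalized["Eu"] / predicted_eu

for el, value in normalized.items():
    print(f"{el}: {value:.1f} x chondrite")
print(f"Eu anomaly = {eu_anomaly:.2f} (<1: depletion, >1: enrichment)")
```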
Notes on rare-earth elements geochemistry: The rare-earth element patterns observed in igneous rocks are primarily a function of the chemistry of the source where the rock came from, as well as the fractionation history the rock has undergone. Fractionation is in turn a function of the partition coefficients of each element. Partition coefficients are responsible for the fractionation of trace elements (including rare-earth elements) between the solid phase (the mineral) and the liquid phase (the melt/magma). If an element preferentially remains in the solid phase it is termed 'compatible', and if it preferentially partitions into the melt phase it is described as 'incompatible'. Each element has a different partition coefficient, and therefore fractionates into solid and liquid phases distinctly. These concepts are also applicable to metamorphic and sedimentary petrology. Notes on rare-earth elements geochemistry: In igneous rocks, particularly in felsic melts, the following observations apply: anomalies in europium are dominated by the crystallization of feldspars. Hornblende controls the enrichment of MREE compared to LREE and HREE. Depletion of LREE relative to HREE may be due to the crystallization of olivine, orthopyroxene, and clinopyroxene. On the other hand, the depletion of HREE relative to LREE may be due to the presence of garnet, as garnet preferentially incorporates HREE into its crystal structure. The presence of zircon may also cause a similar effect. In sedimentary rocks, rare-earth elements in clastic sediments are a representation of provenance. The rare-earth element concentrations are not typically affected by sea and river waters, as rare-earth elements are insoluble and thus have very low concentrations in these fluids. As a result, when sediment is transported, rare-earth element concentrations are unaffected by the fluid and instead the rock retains the rare-earth element concentration from its source. Sea and river waters typically have low rare-earth element concentrations. However, aqueous geochemistry is still very important. In oceans, rare-earth elements reflect input from rivers, hydrothermal vents, and aeolian sources; this is important in the investigation of ocean mixing and circulation. Rare-earth elements are also useful for dating rocks, as some radioactive isotopes display long half-lives. Of particular interest are the 138La-138Ce, 147Sm-143Nd, and 176Lu-176Hf systems. Production: Until 1948, most of the world's rare earths were sourced from placer sand deposits in India and Brazil. Through the 1950s, South Africa was the world's rare earth source, from a monazite-rich reef at the Steenkampskraal mine in Western Cape province. Through the 1960s until the 1980s, the Mountain Pass rare earth mine in California made the United States the leading producer. Today, the Indian and South African deposits still produce some rare-earth concentrates, but they are dwarfed by the scale of Chinese production. In 2017, China produced 81% of the world's rare-earth supply, mostly in Inner Mongolia, although it had only 36.7% of reserves. Australia was the second and only other major producer, with 15% of world production. All of the world's heavy rare earths (such as dysprosium) come from Chinese rare-earth sources such as the polymetallic Bayan Obo deposit. The Browns Range mine, located 160 km south east of Halls Creek in northern Western Australia, is currently under development and is positioned to become the first significant dysprosium producer outside of China. Increased demand has strained supply, and there is growing concern that the world may soon face a shortage of the rare earths. From 2009 it was projected that within several years worldwide demand for rare-earth elements would exceed supply by 40,000 tonnes annually unless major new sources were developed. In 2013, it was stated that the demand for REEs would increase due to the dependence of the EU on these elements, the fact that rare-earth elements cannot be substituted by other elements, and the fact that REEs have a low recycling rate. Furthermore, due to the increased demand and low supply, future prices are expected to increase, and there is a chance that countries other than China will open REE mines. Demand for REEs is increasing because they are essential for new and innovative technologies: products that need REEs include high-technology equipment such as smartphones, digital cameras, computer parts, and semiconductors. In addition, these elements are prevalent in the following industries: renewable energy technology, military equipment, glass making, and metallurgy.
Production: China These concerns have intensified due to the actions of China, the predominant supplier. Specifically, China has announced regulations on exports and a crackdown on smuggling. On September 1, 2009, China announced plans to reduce its export quota to 35,000 tons per year in 2010–2015 to conserve scarce resources and protect the environment. On October 19, 2010, China Daily, citing an unnamed Ministry of Commerce official, reported that China will "further reduce quotas for rare-earth exports by 30 percent at most next year to protect the precious metals from over-exploitation." The government in Beijing further increased its control by forcing smaller, independent miners to merge into state-owned corporations or face closure. At the end of 2010, China announced that the first round of export quotas in 2011 for rare earths would be 14,446 tons, which was a 35% decrease from the previous first round of quotas in 2010. China announced further export quotas on 14 July 2011 for the second half of the year, with total allocation at 30,184 tons and total production capped at 93,800 tonnes. In September 2011, China announced the halt in production of three of its eight major rare-earth mines, responsible for almost 40% of China's total rare-earth production. In March 2012, the US, EU, and Japan confronted China at the WTO about these export and production restrictions. China responded with claims that the restrictions had environmental protection in mind. In August 2012, China announced a further 20% reduction in production. Production: The United States, Japan, and the European Union filed a joint lawsuit with the World Trade Organization in 2012 against China, arguing that China should not be able to deny such important exports. In response to the opening of new mines in other countries (Lynas in Australia and Molycorp in the United States), prices of rare earths dropped. Production: The price of dysprosium oxide was US$994/kg in 2011, but dropped to US$265/kg by 2014. On August 29, 2014, the WTO ruled that China had broken free-trade agreements, and the WTO said in the summary of key findings that "the overall effect of the foreign and domestic restrictions is to encourage domestic extraction and secure preferential use of those materials by Chinese manufacturers." China declared that it would implement the ruling on September 26, 2014, but would need some time to do so. By January 5, 2015, China had lifted all quotas from the export of rare earths, but export licenses are still required. In 2019, China supplied between 85% and 95% of the global demand for the 17 rare-earth powders, half of them sourced from Myanmar. After the 2021 military coup in that country, future supplies of critical ores were possibly constrained. Additionally, it was speculated that the PRC could again reduce rare-earth exports to counteract economic sanctions imposed by the US and EU countries. Rare-earth metals serve as crucial materials for electric vehicle manufacturing and high-tech military applications. Production: Myanmar (Burma) Kachin State in Myanmar is the world's largest source of rare earths. In December 2021 alone, China imported US$200 million of rare earths from Myanmar, exceeding 20,000 tonnes. Rare earths were discovered near Pangwa in Chipwi Township along the China–Myanmar border in the late 2010s. As China has shut down domestic mines due to the detrimental environmental impact, it has largely outsourced rare-earth mining to Kachin State.
Chinese companies and miners illegally set up operations in Kachin State without government permits, and instead circumvent the central government by working with a Border Guard Force militia under the Tatmadaw, formerly known as the New Democratic Army – Kachin, which has profited from this extractive industry. As of March 2022, 2,700 mining collection pools scattered across 300 separate locations were found in Kachin State, together covering an area the size of Singapore, an exponential increase from 2016. Land has also been seized from locals to conduct mining operations. Production: Other countries As a result of the increased demand and tightening restrictions on exports of the metals from China, some countries are stockpiling rare-earth resources. Searches for alternative sources in Australia, Brazil, Canada, South Africa, Tanzania, Greenland, and the United States are ongoing. Mines in these countries were closed when China undercut world prices in the 1990s, and it will take a few years to restart production as there are many barriers to entry. Significant sites under development outside China include Steenkampskraal in South Africa, the world's highest-grade rare-earth and thorium mine, which closed in 1963 but has been gearing up to go back into production; over 80% of its infrastructure is already complete. Other mines include the Nolans Project in Central Australia, the Bokan Mountain project in Alaska, the remote Hoidas Lake project in northern Canada, and the Mount Weld project in Australia. The Hoidas Lake project has the potential to supply about 10% of the $1 billion of REE consumption that occurs in North America every year. Vietnam signed an agreement in October 2010 to supply Japan with rare earths from its northwestern Lai Châu Province; however, the deal was never realized due to disagreements. The largest rare-earth deposit in the U.S. is at Mountain Pass, California, sixty miles south of Las Vegas. Originally opened by Molycorp, the deposit has been mined, off and on, since 1951. A second large deposit of REEs at Elk Creek in southeast Nebraska is under consideration by NioCorp Development Ltd, which hopes to open a niobium, scandium, and titanium mine there. That mine may be able to produce as much as 7200 tonnes of ferroniobium and 95 tonnes of scandium trioxide annually, although, as of 2022, financing is still in the works. In the UK, Pensana has begun construction of its US$195 million rare-earth processing plant, which secured funding from the UK government's Automotive Transformation Fund. The plant will process ore from the Longonjo mine in Angola and other sources as they become available. The company is targeting production in late 2023, before ramping up to full capacity in 2024. Pensana aims to produce 12,500 metric tons of separated rare earths, including 4,500 tons of magnet metal rare earths. Also under consideration for mining are sites such as Thor Lake in the Northwest Territories, and various locations in Vietnam. Additionally, in 2010, a large deposit of rare-earth minerals was discovered in Kvanefjeld in southern Greenland. Pre-feasibility drilling at this site has confirmed significant quantities of black lujavrite, which contains about 1% rare-earth oxides (REO). The European Union has urged Greenland to restrict Chinese development of rare-earth projects there, but as of early 2013, the government of Greenland had said that it has no plans to impose such restrictions.
Many Danish politicians have expressed concerns that other nations, including China, could gain influence in thinly populated Greenland, given the number of foreign workers and investment that could come from Chinese companies in the near future because of the law passed in December 2012. In central Spain, Ciudad Real Province, the proposed rare-earth mining project 'Matamulas' may provide, according to its developers, up to 2,100 tonnes per year (33% of the annual EU demand). However, this project has been suspended by regional authorities due to social and environmental concerns. Adding to potential mine sites, ASX-listed Peak Resources announced in February 2012 that their Tanzanian-based Ngualla project contained not only the sixth-largest deposit by tonnage outside of China, but also the highest grade of rare-earth elements of the six. North Korea has been reported to have exported rare-earth ore to China, about US$1.88 million worth during May and June 2014. In May 2012, researchers from two universities in Japan announced that they had discovered rare earths in Ehime Prefecture, Japan. On 12 January 2023, Swedish state-owned mining company LKAB announced that it had discovered a deposit of over 1 million tonnes of rare earths in the country's Kiruna area, which would make it the largest such deposit in Europe. Production: Malaysian refining plans In early 2011, Australian mining company Lynas was reported to be "hurrying to finish" a US$230 million rare-earth refinery on the eastern coast of Peninsular Malaysia's industrial port of Kuantan. The plant would refine ore — lanthanides concentrate from the Mount Weld mine in Australia. The ore would be trucked to Fremantle and transported by container ship to Kuantan. Within two years, Lynas was said to expect the refinery to be able to meet nearly a third of the world's demand for rare-earth materials, not counting China. The Kuantan development brought renewed attention to the Malaysian town of Bukit Merah in Perak, where a rare-earth mine operated by a Mitsubishi Chemical subsidiary, Asian Rare Earth, closed in 1994 and left continuing environmental and health concerns. In mid-2011, after protests, Malaysian government restrictions on the Lynas plant were announced. At that time, citing subscription-only Dow Jones Newswire reports, a Barrons report said the Lynas investment was $730 million, and the projected share of the global market it would fill was put at "about a sixth." An independent review initiated by the Malaysian Government, and conducted by the International Atomic Energy Agency (IAEA) in 2011 to address concerns of radioactive hazards, found no non-compliance with international radiation safety standards. However, the Malaysian authorities confirmed that as of October 2011, Lynas had not been given any permit to import any rare-earth ore into Malaysia. On February 2, 2012, the Malaysian AELB (Atomic Energy Licensing Board) recommended that Lynas be issued a temporary operating license subject to meeting a number of conditions. On 2 September 2014, Lynas was issued a 2-year full operating stage license by the AELB. Production: Other sources Mine tailings Significant quantities of rare-earth oxides are found in tailings accumulated from 50 years of uranium ore, shale, and loparite mining at Sillamäe, Estonia. Due to the rising prices of rare earths, extraction of these oxides has become economically viable. The country currently exports around 3,000 tonnes per year, representing around 2% of world production.
Similar resources are suspected in the western United States, where gold rush-era mines are believed to have discarded large amounts of rare earths, because they had no value at the time. Production: Ocean mining In January 2013 a Japanese deep-sea research vessel obtained seven deep-sea mud core samples from the Pacific Ocean seafloor at 5,600 to 5,800 meters depth, approximately 250 kilometres (160 mi) south of the island of Minami-Tori-Shima. The research team found a mud layer 2 to 4 meters beneath the seabed with concentrations of up to 0.66% rare-earth oxides. A potential deposit might compare in grade with the ion-absorption-type deposits in southern China that provide the bulk of Chinese REO mine production, which grade in the range of 0.05% to 0.5% REO. Production: Waste Another recently developed source of rare earths is electronic waste and other wastes that have significant rare-earth components. Advances in recycling technology have made the extraction of rare earths from these materials less expensive. Recycling plants operate in Japan, where an estimated 300,000 tons of rare earths are found in unused electronics. In France, the Rhodia group is setting up two factories, in La Rochelle and Saint-Fons, that will produce 200 tons of rare earths a year from used fluorescent lamps, magnets, and batteries. Coal and coal by-products are a potential source of critical elements, including rare-earth elements (REE), with estimated amounts in the range of 50 million metric tons. Production: Methods One study mixed fly ash with carbon black and then sent a 1-second current pulse through the mixture, heating it to 3,000 °C (5,430 °F). The fly ash contains microscopic bits of glass that encapsulate the metals. The heat shatters the glass, exposing the rare earths. Flash heating also converts phosphates into oxides, which are more soluble and extractable. Using hydrochloric acid at concentrations less than 1% of those in conventional methods, the process extracted twice as much material. Properties: According to chemistry professor Andrea Sella, rare-earth elements differ from other elements in that, when looked at analytically, they are virtually inseparable, having almost the same chemical properties. In terms of their electronic and magnetic properties, however, each one occupies a unique technological niche that nothing else can fill. For example, "the rare-earth elements praseodymium (Pr) and neodymium (Nd) can both be embedded inside glass and they completely cut out the glare from the flame when one is doing glass-blowing." Uses: The uses, applications, and demand for rare-earth elements have expanded over the years. Globally, most REEs are used for catalysts and magnets. In the USA, more than half of REEs are used for catalysts; ceramics, glass, and polishing are also main uses. Other important uses of rare-earth elements are in the production of high-performance magnets, alloys, glasses, and electronics. Ce and La are important as catalysts, and are used for petroleum refining and as diesel additives. Nd is important in magnet production in traditional and low-carbon technologies. Rare-earth elements in this category are used in the electric motors of hybrid and electric vehicles, generators in some wind turbines, hard disc drives, portable electronics, microphones, and speakers. Ce, La, and Nd are important in alloy making, and in the production of fuel cells and nickel-metal hydride batteries.
Ce, Ga, and Nd are important in electronics and are used in the production of LCD and plasma screens, fiber optics, and lasers, and in medical imaging. Additional uses for rare-earth elements are as tracers in medical applications, in fertilizers, and in water treatment. REEs have been used in agriculture to increase plant growth, productivity, and stress resistance, seemingly without negative effects for human and animal consumption. REEs are used in agriculture through REE-enriched fertilizers, a widely used practice in China. In addition, REEs are used as feed additives for livestock, which has resulted in increased production, such as larger animals and a higher output of eggs and dairy products. However, this practice has resulted in REE bioaccumulation within livestock and has impacted vegetation and algae growth in these agricultural areas. Additionally, while no ill effects have been observed at current low concentrations, the effects of long-term exposure and accumulation over time are unknown, prompting some calls for more research into their possible effects. Environmental considerations: REEs are naturally found in very low concentrations in the environment. Mines are often in countries where environmental and social standards are very low, leading to human rights violations, deforestation, and contamination of land and water. Near mining and industrial sites, the concentrations of REEs can rise to many times the normal background levels. Once in the environment, REEs can leach into the soil, where their transport is determined by numerous factors such as erosion, weathering, pH, precipitation, and groundwater. Acting much like metals, they can speciate depending on the soil condition, being either mobile or adsorbed to soil particles. Depending on their bioavailability, REEs can be absorbed into plants and later consumed by humans and animals. The mining of REEs, the use of REE-enriched fertilizers, and the production of phosphorus fertilizers all contribute to REE contamination. Furthermore, strong acids are used during the extraction process of REEs, which can then leach out into the environment, be transported through water bodies, and result in the acidification of aquatic environments. REE mining also contributes cerium oxide (CeO2) to the environment: used as a diesel additive, it is released in exhaust during combustion, contributing heavily to soil and water contamination. Environmental considerations: Mining, refining, and recycling of rare earths have serious environmental consequences if not properly managed. Low-level radioactive tailings resulting from the occurrence of thorium and uranium in rare-earth ores present a potential hazard, and improper handling of these substances can result in extensive environmental damage. In May 2010, China announced a major, five-month crackdown on illegal mining in order to protect the environment and its resources. This campaign is expected to be concentrated in the South, where mines – commonly small, rural, and illegal operations – are particularly prone to releasing toxic waste into the general water supply. However, even the major operation in Baotou, in Inner Mongolia, where much of the world's rare-earth supply is refined, has caused major environmental damage.
China's Ministry of Industry and Information Technology estimated cleanup costs in Jiangxi province at $5.5 billion. It is, however, possible to filter out and recover any rare-earth elements that flow out with the wastewater from mining facilities, although such filtering and recovery equipment may not always be present on the outlets carrying the wastewater. Environmental considerations: Recycling and reusing REEs Potential methods The rare-earth elements (REEs) are vital to modern technologies and society and are amongst the most critical elements. Despite this, typically only around 1% of REEs are recycled from end-products, with the rest ending up as waste and being removed from the materials cycle. Recycling and reusing REEs play an important role in high-technology fields and in manufacturing environmentally friendly products around the world. REE recycling and reuse have received increasing attention in recent years. The main concerns include environmental pollution during REE recycling and increasing recycling efficiency. Literature published in 2004 suggests that, along with previously established pollution mitigation, a more circular supply chain would help mitigate some of the pollution at the extraction point. This means recycling and reusing REEs that are already in use or reaching the end of their life cycle. A study published in 2014 suggests a method to recycle REEs from waste nickel-metal hydride batteries, demonstrating a recovery rate of 95.16%. Rare-earth elements could also be recovered from industrial wastes, with practical potential to reduce the environmental and health impacts of mining, waste generation, and imports if known and experimental processes are scaled up. A study suggests that "fulfillment of the circular economy approach could reduce up to 200 times the impact in the climate change category and up to 70 times the cost due to the REE mining." In most of the reported studies reviewed by a scientific review, "secondary waste is subjected to chemical and or bioleaching followed by solvent extraction processes for clean separation of REEs." Two essential resources are currently considered for the secure supply of REEs. One is extraction from primary resources, such as mines harboring REE-bearing ores, regolith-hosted clay deposits, ocean-bed sediments, and coal fly ash; one study developed a green system for the recovery of REEs from coal fly ash using citrate and oxalate, which are strong organic ligands capable of complexing or precipitating with REEs. The other is recovery from secondary resources such as electronic, industrial, and municipal waste. E-waste contains a significant concentration of REEs and is thus the primary option for REE recycling at present. According to a study, approximately 50 million metric tons of electronic waste are dumped in landfills worldwide each year. Despite the fact that e-waste contains a significant amount of rare-earth elements (REE), only 12.5% of e-waste is currently being recycled for all metals. Environmental considerations: Challenges There remain obstacles to REE recycling and reuse. One major challenge is REE separation chemistry: the process of isolating and refining individual rare-earth elements (REE) is difficult because of their similar chemical properties.
In order to reduce the environmental pollution released during REE isolation and also to diversify their sources, there is a clear necessity for the development of novel separation technologies that can lower the cost of large-scale REE separation and recycling. To this end, the Critical Materials Institute (CMI) under the Department of Energy has devised a technique that uses Gluconobacter bacteria to metabolize sugars, producing acids that can dissolve and separate rare-earth elements (REE) from shredded electronic waste. Environmental considerations: Impact of REE contamination On vegetation The mining of REEs has caused the contamination of soil and water around production areas. This has impacted vegetation by decreasing chlorophyll production, which affects photosynthesis and inhibits plant growth. However, the impact of REE contamination on vegetation depends on the plants present in the contaminated environment: some plants retain and absorb REEs and some do not. The ability of the vegetation to take up REEs also depends on the type of REE present in the soil, so a multitude of factors influence this process. Agricultural plants are the main type of vegetation affected by REE contamination in the environment, the two plants with a higher chance of absorbing and storing REEs being apples and beets. Furthermore, there is a possibility that REEs can leach out into aquatic environments and be absorbed by aquatic vegetation, where they can bio-accumulate and potentially enter the human food chain if livestock or humans eat the vegetation. An example of this situation was the case of the water hyacinth (Eichhornia crassipes) in China, where the water was contaminated by a REE-enriched fertilizer used in a nearby agricultural area. The aquatic environment became contaminated with cerium, and the water hyacinth became three times more concentrated in cerium than its surrounding water. Environmental considerations: On human health REEs are a large group with many different properties and levels in the environment. Because of this, and because of limited research, it has been difficult to determine safe levels of exposure for humans. A number of studies have focused on risk assessment based on routes of exposure and divergence from background levels related to nearby agriculture, mining, and industry. It has been demonstrated that numerous REEs have toxic properties and are present in the environment or in workplaces. Exposure to these can lead to a wide range of negative health outcomes, such as cancer, respiratory issues, dental loss, and even death. However, REEs are numerous and present in many different forms and at different levels of toxicity, making it difficult to give blanket warnings on cancer risk and toxicity, as some of them are harmless while others pose a risk. What toxicity is shown appears at very high levels of exposure through ingestion of contaminated food and water, or through inhalation of dust or smoke particles, either as an occupational hazard or due to proximity to contaminated sites such as mines and cities. Therefore, the main issues residents near such sites would face are bioaccumulation of REEs and the impact on the respiratory system, though other short-term and long-term health effects are possible.
It was found that people living near mines in China had many times the levels of REEs in their blood, urine, bone, and hair compared to controls far from mining sites. This higher level was related to the high levels of REEs present in the vegetables they cultivated, the soil, and the water from the wells, indicating that the high levels were caused by the nearby mine. While REE levels varied between men and women, the group most at risk were children, because REEs can impact neurological development, affecting IQ and potentially causing memory loss. The rare-earth mining and smelting process can release airborne fluoride, which associates with total suspended particles (TSP) to form aerosols that can enter human respiratory systems and cause damage and respiratory diseases. Research from Baotou, China, shows that the fluoride concentration in the air near REE mines is higher than the WHO limit value, which can affect the surrounding environment and become a risk to those who live or work nearby. Residents blamed a rare-earth refinery at Bukit Merah for birth defects and eight leukemia cases within five years in a community of 11,000 — after many years with no leukemia cases. Seven of the leukemia victims died. Osamu Shimizu, a director of Asian Rare Earth, said "the company might have sold a few bags of calcium phosphate fertilizer on a trial basis as it sought to market byproducts; calcium phosphate is not radioactive or dangerous" in reply to a former resident of Bukit Merah who said that "The cows that ate the grass [grown with the fertilizer] all died." Malaysia's Supreme Court ruled on 23 December 1993 that there was no evidence that the local chemical joint venture Asian Rare Earth was contaminating the local environment. Environmental considerations: On animal health Experiments exposing rats to various cerium compounds have found accumulation primarily in the lungs and liver, resulting in various negative health outcomes associated with those organs. REEs have been added to livestock feed to increase body mass and milk production. They are most commonly used to increase the body mass of pigs, and it was discovered that REEs increase the digestibility and nutrient use of pigs' digestive systems. Studies point to a dose–response relationship when considering toxicity versus positive effects: while small doses from the environment or with proper administration seem to have no ill effects, larger doses have been shown to have negative effects, specifically in the organs where they accumulate. The process of mining REEs in China has resulted in soil and water contamination in certain areas; when transported into aquatic bodies, these contaminants could potentially bio-accumulate within aquatic biota. Furthermore, in some cases, animals that live in REE-contaminated areas have been diagnosed with organ or system problems. REEs have been used in freshwater fish farming because they protect the fish from possible diseases. One main reason they have been so widely used in livestock feeding is that they have produced better results than inorganic feed enhancers. Environmental considerations: Remediation after pollution After the 1982 Bukit Merah radioactive pollution, the mine site in Malaysia became the focus of a US$100 million cleanup that was proceeding in 2011.
After having accomplished the hilltop entombment of 11,000 truckloads of radioactively contaminated material, the project was expected to entail, in the summer of 2011, the removal of "more than 80,000 steel barrels of radioactive waste to the hilltop repository." In May 2011, after the Fukushima nuclear disaster, widespread protests took place in Kuantan over the Lynas refinery and radioactive waste from it. The ore to be processed has very low levels of thorium, and Lynas founder and chief executive Nicholas Curtis said, "There is absolutely no risk to public health." T. Jayabalan, a doctor who says he has been monitoring and treating patients affected by the Mitsubishi plant, "is wary of Lynas's assurances. The argument that low levels of thorium in the ore make it safer doesn't make sense, he says, because radiation exposure is cumulative." Construction of the facility was halted until an independent United Nations IAEA panel investigation was completed, which was expected by the end of June 2011. New restrictions were announced by the Malaysian government in late June. The IAEA panel investigation was completed, no construction was ultimately halted, and Lynas remained on budget and on schedule to start producing in 2011. The IAEA concluded in a report issued in June 2011 that it did not find any instance of "any non-compliance with international radiation safety standards" in the project. If the proper safety standards are followed, REE mining is relatively low-impact. Molycorp (before going bankrupt) often exceeded environmental regulations to improve its public image. In Greenland, there is a significant dispute on whether to start a new rare-earth mine in Kvanefjeld due to environmental concerns. Geopolitical considerations: China has officially cited resource depletion and environmental concerns as the reasons for a nationwide crackdown on its rare-earth mineral production sector. However, non-environmental motives have also been imputed to China's rare-earth policy. According to The Economist, "Slashing their exports of rare-earth metals… is all about moving Chinese manufacturers up the supply chain, so they can sell valuable finished goods to the world rather than lowly raw materials." Furthermore, China currently has an effective monopoly on the world's REE value chain (all of the refineries and processing plants that transform the raw ore into valuable elements). In the words of Deng Xiaoping, China's paramount leader from the late 1970s to the late 1980s, "The Middle East has oil; we have rare earths ... it is of extremely important strategic significance; we must be sure to handle the rare earth issue properly and make the fullest use of our country's advantage in rare-earth resources." One possible example of market control is the division of General Motors that deals with miniaturized magnet research, which shut down its US office and moved its entire staff to China in 2006 (China's export quota applied only to the metals, not to products made from them, such as magnets). Geopolitical considerations: It was reported, but officially denied, that China instituted an export ban on shipments of rare-earth oxides (but not alloys) to Japan on 22 September 2010, in response to the detainment of a Chinese fishing boat captain by the Japanese Coast Guard.
On September 2, 2010, a few days before the fishing boat incident, The Economist reported that "China...in July announced the latest in a series of annual export reductions, this time by 40% to precisely 30,258 tonnes." The United States Department of Energy, in its 2010 Critical Materials Strategy report, identified dysprosium as the element most critical in terms of import reliance. A 2011 report, "China's Rare-Earth Industry", issued by the US Geological Survey and US Department of the Interior, outlines industry trends within China and examines national policies that may guide the future of the country's production. The report notes that China's lead in the production of rare-earth minerals has accelerated over the past two decades. In 1990, China accounted for only 27% of such minerals. In 2009, world production was 132,000 metric tons; China produced 129,000 of those tons. According to the report, recent patterns suggest that China will slow the export of such materials to the world: "Owing to the increase in domestic demand, the Government has gradually reduced the export quota during the past several years." In 2006, China allowed 47 domestic rare-earth producers and traders and 12 Sino-foreign rare-earth producers to export. Controls have since tightened annually; by 2011, only 22 domestic rare-earth producers and traders and 9 Sino-foreign rare-earth producers were authorized. The government's future policies will likely keep in place strict controls: "According to China's draft rare-earth development plan, annual rare-earth production may be limited to between 130,000 and 140,000 [metric tons] during the period from 2009 to 2015. The export quota for rare-earth products may be about 35,000 [metric tons] and the Government may allow 20 domestic rare-earth producers and traders to export rare earths." The United States Geological Survey is actively surveying southern Afghanistan for rare-earth deposits under the protection of United States military forces. Since 2009 the USGS has conducted remote sensing surveys as well as fieldwork to verify Soviet claims that volcanic rocks containing rare-earth metals exist in Helmand Province near the village of Khanashin. The USGS study team has located a sizable area of rocks in the center of an extinct volcano containing light rare-earth elements, including cerium and neodymium. It has mapped 1.3 million metric tons of desirable rock, or about ten years of supply at current demand levels. The Pentagon has estimated its value at about $7.4 billion. It has been argued that the geopolitical importance of rare earths has been exaggerated in the literature on the geopolitics of renewable energy, which underestimates the power of economic incentives for expanded production. This especially concerns neodymium. Due to its role in permanent magnets used for wind turbines, it has been argued that neodymium will be one of the main objects of geopolitical competition in a world running on renewable energy. But this perspective has been criticized for failing to recognize that most wind turbines have gears and do not use permanent magnets. Rare Earth Elements in popular culture: The plot of Eric Ambler's now-classic 1967 international crime-thriller Dirty Story (aka This Gun for Hire, but not to be confused with the movie This Gun for Hire (1942)) features a struggle between two rival mining cartels to control a plot of land in a fictional African country, which contains rich minable rare-earth ore deposits.
Prices: The Institute of Rare Earths Elements and Strategic Metals is an informal network in the international raw-metal market. The main interest of the institute's customers is its database, available on a subscription basis with daily updated prices: in addition to the eponymous rare-earth elements, 900 pure metals and 4,500 other metallic products are listed there. The company's headquarters are located in Lucerne, Switzerland.
**Inverted bell curve** Inverted bell curve: In statistics, an inverted bell curve is a term used loosely or metaphorically to refer to a bimodal distribution that falls to a trough between two peaks, rather than (as in a standard bell curve) rising to a single peak and then falling off on both sides.
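For a concrete picture, the following Python sketch (an illustrative example assumed here, not taken from the article) samples a mixture of two well-separated normal distributions; a histogram of the draws rises to a peak near each mode and dips to a trough between them, which is the "inverted bell" region.

```python
import numpy as np

# Sample a bimodal mixture: half the draws from N(-2, 0.5), half from
# N(+2, 0.5). The density peaks near -2 and +2 and has a trough near 0.
rng = np.random.default_rng(0)
n = 100_000
component = rng.integers(0, 2, size=n)                 # pick a component per draw
samples = rng.normal(loc=np.where(component == 0, -2.0, 2.0), scale=0.5)

# A coarse text histogram makes the trough between the two peaks visible.
counts, edges = np.histogram(samples, bins=21, range=(-4, 4))
centers = (edges[:-1] + edges[1:]) / 2
for c, cnt in zip(centers, counts):
    print(f"{c:+.2f} {'#' * int(cnt // 500)}")
```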
**Brodmann area 11** Brodmann area 11: Brodmann area 11 is one of Brodmann's cytologically defined regions of the brain. It is in the orbitofrontal cortex, which is above the eye sockets (orbitae). It is involved in decision making, processing rewards, planning, encoding new information into long-term memory, and reasoning. In humans: Brodmann area 11, or BA11, is part of the frontal cortex in the human brain. BA11 is the part of the orbitofrontal cortex that covers the medial portion of the ventral surface of the frontal lobe. Prefrontal area 11 of Brodmann-1909 is a subdivision of the frontal lobe in the human defined on the basis of cytoarchitecture. Defined and illustrated in Brodmann-1909, it included the areas subsequently illustrated in Brodmann-10 as prefrontal area 11 and rostral area 12. In humans: Area 11 is a subdivision of the cytoarchitecturally defined frontal region of cerebral cortex of the human. As illustrated in Brodmann-10, it constitutes most of the orbital gyri, the gyrus rectus, and the most rostral portion of the superior frontal gyrus. It is bounded medially by the inferior rostral sulcus (H) and laterally approximately by the frontomarginal sulcus (H). Cytoarchitecturally it is bounded on the rostral and lateral aspects of the hemisphere by the frontopolar area 10, the orbital area 47, and the triangular area 45; on the medial surface it is bounded dorsally by the rostral area 12 and caudally by the subgenual area 25. In an earlier map, the area labeled 11, i.e., prefrontal area 11 of Brodmann-1909, was larger; it included the area now designated rostral area 12. In guenon monkeys: Brodmann area 11 is a subdivision of the frontal lobe of the guenon monkey defined on the basis of cytoarchitecture (Brodmann-1905). Distinctive features: area 11 lacks an internal granular layer (IV); larger pyramidal cells of sublayer 3b of the external pyramidal layer (III) merge with a denser self-contained collection of cells in the internal pyramidal layer (V); similar to area 10 of Brodmann-1909 is the presence in the multiform layer (VI) of trains of cells oriented parallel to the cortical surface separated by acellular fiber bundles; a thick molecular layer (I); a relatively narrow overall cortical thickness; and a gradual transition from the multiform layer (VI) to the subcortical white matter.
**Elastolefin** Elastolefin: Elastolefin is a fiber composed of at least 95% (by weight) of partially cross-linked macromolecules, made of ethylene and at least one other olefin. When stretched to one and a half times its original length, it recovers rapidly to its original length; it will therefore stretch up to 50% and recover. The EU fabric labelling directive was recently updated to include elastolefin in Annexes I and II. Low-crystallinity polyolefin elastomers with a cross-linked structure were developed by the Dow Chemical Company in 2002. The trade name of these elastolefin fibers is DOW XLA; the fibers, while under lower stress, have the ability to expand when larger strains are applied. The DOW XLA fibers were designed to have high thermal and chemical resistance, stretch performance, and durability.
**Kullback's inequality** Kullback's inequality: In information theory and statistics, Kullback's inequality is a lower bound on the Kullback–Leibler divergence expressed in terms of the large deviations rate function. If P and Q are probability distributions on the real line, such that P is absolutely continuous with respect to Q, i.e. P << Q, and whose first moments exist, then

$$D_{KL}(P \parallel Q) \ge \Psi_Q^*(\mu_1'(P)),$$

where $\Psi_Q^*$ is the rate function, i.e. the convex conjugate of the cumulant-generating function, of Q, and $\mu_1'(P)$ is the first moment of P. Kullback's inequality: The Cramér–Rao bound is a corollary of this result.

Proof: Let P and Q be probability distributions (measures) on the real line, whose first moments exist, and such that P << Q. Consider the natural exponential family of Q given by

$$Q_\theta(A) = \frac{\int_A e^{\theta x}\, Q(dx)}{M_Q(\theta)}$$

for every measurable set A, where $M_Q$ is the moment-generating function of Q. (Note that $Q_0 = Q$.) Then

$$D_{KL}(P \parallel Q) = D_{KL}(P \parallel Q_\theta) + \int_{\operatorname{supp} P} \left( \log \frac{\mathrm{d}Q_\theta}{\mathrm{d}Q} \right) \mathrm{d}P.$$

By Gibbs' inequality we have $D_{KL}(P \parallel Q_\theta) \ge 0$, so that

$$D_{KL}(P \parallel Q) \ge \int_{\operatorname{supp} P} \left( \log \frac{\mathrm{d}Q_\theta}{\mathrm{d}Q} \right) \mathrm{d}P.$$

Simplifying the right side, we have, for every real θ where $M_Q(\theta) < \infty$:

$$D_{KL}(P \parallel Q) \ge \mu_1'(P)\,\theta - \log M_Q(\theta),$$

where $\mu_1'(P)$ is the first moment, or mean, of P, and $\Psi_Q = \log M_Q$ is called the cumulant-generating function. Taking the supremum completes the process of convex conjugation and yields the rate function:

$$D_{KL}(P \parallel Q) \ge \sup_\theta \left\{ \mu_1'(P)\,\theta - \Psi_Q(\theta) \right\} = \Psi_Q^*(\mu_1'(P)).$$

Corollary: the Cramér–Rao bound: Start with Kullback's inequality. Let $X_\theta$ be a family of probability distributions on the real line indexed by the real parameter θ, and satisfying certain regularity conditions. Then

$$\lim_{h \to 0} \frac{D_{KL}(X_{\theta+h} \parallel X_\theta)}{h^2} \ge \lim_{h \to 0} \frac{\Psi_\theta^*(\mu_{\theta+h})}{h^2},$$

where $\Psi_\theta^*$ is the convex conjugate of the cumulant-generating function of $X_\theta$ and $\mu_{\theta+h}$ is the first moment of $X_{\theta+h}$.

Left side: The left side of this inequality can be simplified as follows:

$$\lim_{h \to 0} \frac{D_{KL}(X_{\theta+h} \parallel X_\theta)}{h^2} = \frac{1}{2}\, \mathcal{I}_X(\theta),$$

which is half the Fisher information of the parameter θ.

Right side: The right side of the inequality can be developed as follows:

$$\lim_{h \to 0} \frac{\Psi_\theta^*(\mu_{\theta+h})}{h^2} = \lim_{h \to 0} \frac{1}{h^2} \sup_t \left\{ \mu_{\theta+h}\, t - \Psi_\theta(t) \right\}.$$

This supremum is attained at a value of $t = \tau$ where the first derivative of the cumulant-generating function satisfies $\Psi_\theta'(\tau) = \mu_{\theta+h}$; but we have $\Psi_\theta'(0) = \mu_\theta$, so that

$$\Psi_\theta''(0) = \frac{\mathrm{d}\mu_\theta}{\mathrm{d}\theta} \lim_{h \to 0} \frac{h}{\tau}.$$

Moreover,

$$\lim_{h \to 0} \frac{\Psi_\theta^*(\mu_{\theta+h})}{h^2} = \frac{1}{2\, \Psi_\theta''(0)} \left( \frac{\mathrm{d}\mu_\theta}{\mathrm{d}\theta} \right)^2 = \frac{1}{2 \operatorname{Var}(X_\theta)} \left( \frac{\mathrm{d}\mu_\theta}{\mathrm{d}\theta} \right)^2.$$

Putting both sides back together: We have

$$\mathcal{I}_X(\theta) \ge \frac{1}{\operatorname{Var}(X_\theta)} \left( \frac{\mathrm{d}\mu_\theta}{\mathrm{d}\theta} \right)^2,$$

which can be rearranged as

$$\operatorname{Var}(X_\theta) \ge \frac{\left( \mathrm{d}\mu_\theta / \mathrm{d}\theta \right)^2}{\mathcal{I}_X(\theta)}.$$
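As a sanity check, the inequality can be verified numerically in a case where everything is available in closed form. The Python sketch below is an example chosen here, not part of the original text: it takes Q = N(0, 1), for which the cumulant-generating function is Ψ_Q(θ) = θ²/2 and hence Ψ*_Q(x) = x²/2, and P = N(μ, σ²), whose first moment is μ and whose divergence from Q has the standard Gaussian closed form. The bound is tight exactly when σ = 1, since the exponential family generated by tilting N(0, 1) consists of the N(θ, 1) distributions.

```python
import numpy as np

def kl_gaussian_vs_standard(mu, sigma):
    """D_KL( N(mu, sigma^2) || N(0, 1) ) in nats, using the closed form."""
    return 0.5 * (sigma**2 + mu**2 - 1.0 - np.log(sigma**2))

def rate_function_standard_normal(x):
    """Psi_Q*(x) = sup_theta (x*theta - theta^2/2) = x^2/2 for Q = N(0, 1)."""
    return 0.5 * x**2

for mu, sigma in [(1.0, 1.0), (1.0, 2.0), (-0.5, 0.3)]:
    lhs = kl_gaussian_vs_standard(mu, sigma)
    rhs = rate_function_standard_normal(mu)   # the first moment of P is mu
    assert lhs >= rhs - 1e-12                 # Kullback's inequality holds
    print(f"mu={mu:+.1f} sigma={sigma:.1f}  D_KL={lhs:.4f} >= Psi*={rhs:.4f}")
# Equality occurs at sigma = 1, where P itself lies in the tilted family of Q.
```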
**Supracondylar humerus fracture** Supracondylar humerus fracture: A supracondylar humerus fracture is a fracture of the distal humerus just above the elbow joint. The fracture is usually transverse or oblique and above the medial and lateral condyles and epicondyles. This fracture pattern is relatively rare in adults, but is the most common type of elbow fracture in children. In children, many of these fractures are non-displaced and can be treated with casting. Some are angulated or displaced and are best treated with surgery. In children, most of these fractures can be treated effectively with expectation for full recovery. Some of these injuries can be complicated by poor healing or by associated blood vessel or nerve injuries with serious complications. Signs and symptoms: A child will complain of pain and swelling over the elbow immediately post trauma, with loss of function of the affected upper limb. Late onset of pain (hours after injury) could be due to muscle ischaemia (reduced oxygen supply), which can lead to loss of muscle function. It is important to check the viability of the affected limb post trauma by assessing clinical parameters such as the temperature of the limb extremities (warm or cold), capillary refill time, oxygen saturation of the affected limb, the presence of distal pulses (radial and ulnar), the peripheral nerves (radial, median, and ulnar), and any wounds that would indicate an open fracture. Doppler ultrasonography should be performed to ascertain blood flow in the affected limb if the distal pulses are not palpable. The anterior interosseous branch of the median nerve is most often injured in postero-lateral displacement of the distal humerus, as the proximal fragment is displaced antero-medially. This is evidenced by weakness of the hand with a weak "OK" sign on physical examination (unable to do an "OK" sign; instead a pincer grasp is performed). The radial nerve may be injured if the distal humerus is displaced postero-medially, because the proximal fragment will then be displaced antero-laterally. The ulnar nerve is most commonly injured in the flexion type of injury because it crosses the elbow below the medial epicondyle of the humerus. A pucker, dimple, or ecchymosis of the skin just anterior to the distal humerus is a sign of difficult reduction, because the proximal fragment may have already penetrated the brachialis muscle and the subcutaneous layer of the skin. Signs and symptoms: Complications Volkmann's contracture Swelling and vascular injury following the fracture can lead to the development of compartment syndrome, which leads to the long-term complication of Volkmann's contracture (fixed flexion of the elbow, pronation of the forearm, flexion at the wrist, and extension of the metacarpophalangeal joints). Therefore, early surgical reduction is indicated to prevent this type of complication. Signs and symptoms: Malunion The distal humerus grows slowly post fracture (it contributes only 10 to 20% of the longitudinal growth of the humerus); therefore, there is a high rate of malunion if the supracondylar fracture is not corrected appropriately. Such malunion can result in cubitus varus deformity. Mechanism: The extension type of supracondylar humerus fracture typically results from a fall onto an outstretched hand, usually leading to a forced hyperextension of the elbow. The olecranon acts as a fulcrum which focuses the stress on the distal humerus (supracondylar area), predisposing the distal humerus to fracture.
The supracondylar area undergoes remodeling at the age of 6 to 7 years, making this area thin and prone to fractures. Important arteries and nerves (median nerve, radial nerve, brachial artery, and ulnar nerve) are located at the supracondylar area and can give rise to complications if these structures are injured. The structure most vulnerable to damage is the median nerve. Meanwhile, the flexion type of supracondylar humerus fracture is less common. It occurs by falling on the point of the elbow, or falling with the arm twisted behind the back. This causes anterior dislocation of the proximal fragment of the humerus. Diagnosis: A supracondylar humerus fracture is diagnosed by X-ray, and the injured limb is examined to assess the surrounding soft tissue and neurovascular status and to identify any other injuries to the affected area. Pain, swelling, and deformity near the elbow or arm area are common, and a bleed near the fracture may result in an effusion in the elbow joint. With severe displacement, there may be an anterior dimple from the proximal bone end trapped within the biceps muscle. The skin is usually intact. If there is a laceration that communicates with the fracture site, it is an open fracture, which increases infection risk. For fractures with significant displacement, the bone end can be trapped within the biceps muscle, with the resulting tension producing an indentation of the skin, which is called a "pucker sign". Diagnosis: Vascular system examination The vascular status must be assessed, including the warmth and perfusion of the hand, the capillary refill time, and the presence of a palpable radial pulse. Limb vascular status is categorized as "normal," "pulseless with a (warm, pink) perfused hand," or "pulseless–pale (nonperfused)" (see "neurovascular complications" below). Diagnosis: Sensory and motor nerve examination The neurologic status must be assessed, including the sensory and motor function of the radial, ulnar, and median nerves (see "neurovascular complications" below). Neurologic deficits are found in 10–20% of patients. The most commonly injured nerve is the median nerve (specifically, the anterior interosseous portion of the median nerve). Injuries to the ulnar and radial nerves are less common. Diagnosis: X-rays Diagnosis is confirmed by X-ray imaging. Antero-posterior (AP) and lateral views of the elbow joint should be obtained. Any other site of pain, deformity, or tenderness should warrant an X-ray of that area too. X-rays of the forearm (AP and lateral) should also be obtained because of the common association of supracondylar fractures with fractures of the forearm. Ideally, splintage should be used to immobilise the elbow at 20 to 30 degrees of flexion in order to prevent further injury of the blood vessels and nerves while doing X-rays. Splinting of the fracture site with full flexion or extension of the elbow is not recommended, as it can stretch the blood vessels and nerves over the bone fragments or can cause impingement of these structures into the fracture site. Depending on the child's age, parts of the bone will still be developing and, if not yet calcified, will not show up on the X-rays. The capitulum of the humerus is the first to ossify, at the age of one year. The head of the radius and the medial epicondyle of the humerus start to ossify at 4 to 5 years of age, followed by the trochlea of the humerus and the olecranon of the ulna at 8 to 9 years of age, and the lateral epicondyle of the humerus at 10 years of age.
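The ossification timetable just described lends itself to a simple lookup. The Python sketch below is an illustrative helper assumed here, not a clinical tool: it encodes the onset ages quoted above and lists which centers should already be visible on an X-ray at a given age, so that an absent center is not mistaken for a fracture fragment.

```python
# Onset ages (in years) of elbow ossification centers, as quoted above.
# Where the text gives a range (e.g. 4 to 5 years), the earlier bound is used.
OSSIFICATION_AGE_YEARS = {
    "capitulum of humerus": 1,
    "head of radius": 4,
    "medial epicondyle of humerus": 4,
    "trochlea of humerus": 8,
    "olecranon of ulna": 8,
    "lateral epicondyle of humerus": 10,
}

def expected_centers(age_years: float) -> list[str]:
    """Ossification centers that should already be visible at this age."""
    return [name for name, onset in OSSIFICATION_AGE_YEARS.items()
            if age_years >= onset]

print(expected_centers(6))
# ['capitulum of humerus', 'head of radius', 'medial epicondyle of humerus']
```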
Diagnosis: Anterior X-ray The carrying angle can be evaluated on the AP view of the elbow by looking at Baumann's angle, of which there are two definitions: The first definition of Baumann's angle is the angle between a line parallel to the longitudinal axis of the humeral shaft and a line drawn along the lateral epicondyle. The other definition, also known as the humeral-capitellar angle, is the angle between the line perpendicular to the long axis of the humerus and the growth plate of the lateral condyle. Reported normal values for Baumann's angle range between 9 and 26°. An angle of more than 10° is regarded as acceptable. Diagnosis: Lateral X-ray On the lateral view of the elbow, five radiological features should be looked for: the teardrop sign, the anterior humeral line, the coronoid line, the fish-tail sign, and the fat pad sign/sail sign (anterior and posterior). Tear drop sign - The teardrop sign is seen on a normal radiograph but is disturbed in a supracondylar fracture. Anterior humeral line - This is a line drawn down along the front of the humerus on the lateral view; it should pass through the middle third of the capitulum of the humerus. If it passes through the anterior third of the capitulum, it indicates posterior displacement of the distal fragment. Fat pad sign/sail sign - A non-displaced fracture can be difficult to identify, and a fracture line may not be visible on the X-rays. However, the presence of a joint effusion is helpful in identifying a non-displaced fracture. Bleeding from the fracture expands the joint capsule and is visualized on the lateral view as a darker area anteriorly and posteriorly; this is known as the sail sign. Coronoid line - A line drawn along the anterior border of the coronoid process of the ulna should touch the anterior part of the lateral condyle of the humerus. If the lateral condyle appears posterior to this line, it indicates posterior displacement of the lateral condyle. Fish-tail sign - The distal fragment is rotated away from the proximal fragment, so that the sharp ends of the proximal fragment look like the shape of a fish-tail. Management: Treatment options for supracondylar humerus fractures vary depending on whether the bone is displaced (out of position) or not displaced (see classification section above). Management: Gartland type I Undisplaced or minimally displaced fractures can be treated using an above-elbow splint at 90 degrees of flexion for 3 weeks. An orthopaedic cast and extreme flexion should be avoided to prevent compartment syndrome and vascular compromise. In case the varus of the fracture site is more than 10 degrees compared to the normal elbow, closed reduction and percutaneous pinning using an X-ray image intensifier in the operating theater are recommended. In one study of children who underwent percutaneous pinning, immobilisation using a posterior splint and an arm sling allowed earlier resumption of activity than immobilisation using a collar-and-cuff sling; both methods gave similar pain scores and activity levels at two weeks of treatment. Management: Gartland type II Gartland Type II fractures require closed reduction and casting at 90 degrees of flexion. Percutaneous pinning is required if more than 90 degrees of flexion is needed to maintain the reduction. Closed reduction with percutaneous pinning has low complication rates. Closed reduction can be done by applying traction along the long axis of the humerus with the elbow in slight flexion.
Full extension of the elbow is not recommended, because the neurovascular structures can hook around the proximal fragment of the humerus. If the proximal fragment is suspected to have pierced the brachialis muscle, gradual traction over the proximal humerus should be applied instead. After that, reduction can be achieved through hyperflexion of the elbow while the olecranon is pushed anteriorly. If the distal fragment is internally rotated, a reduction maneuver can be applied with extra stress over the medial elbow while pronating the forearm at the same time. Management: Gartland type III and IV Gartland III and IV fractures are unstable and prone to neurovascular injury. Therefore, closed or open reduction together with percutaneous pinning within 24 hours is the preferred method of management, with low complication rates. Straight-arm lateral traction can be a safe method of dealing with Gartland Type III fractures. Although Gartland Type III fractures with posteromedial displacement of the distal fragment can be reduced with closed reduction and casting, those with posterolateral displacement should preferably be fixed by percutaneous pinning. Management: Percutaneous pinning Percutaneous pins are usually inserted over the medial or lateral sides of the elbow under X-ray image intensifier guidance. There is a 1.8-times-higher risk of nerve injury when inserting both medial and lateral pins compared to lateral pin insertion alone. However, medial and lateral pin insertion stabilises the fracture better than lateral pins alone. Therefore, medial and lateral pin insertion should be done with care to prevent nerve injuries around the elbow region. Percutaneous pinning should be done when closed manipulation fails to achieve the reduction, when the fracture is unstable after closed reduction, when neurological deficits occur during or after the manipulation of the fracture, or when surgical exploration is required to determine the integrity of the blood vessels and nerves. In open fractures, surgical wound debridement should be performed to prevent any infection of the elbow joint. All Type II and III fractures requiring elbow flexion of more than 90° to maintain the reduction need to be fixed by percutaneous pinning. All Type IV supracondylar humerus fractures are unstable and therefore require percutaneous pinning. In addition, any polytrauma with multiple fractures of the same side requiring surgical intervention is another indication for percutaneous pinning. Management: Follow up For routine displaced supracondylar fractures requiring percutaneous pinning, radiographic evaluation and clinical assessment can be delayed until pin removal. Pins are removed only when there is no tenderness over the elbow region at 3 to 4 weeks. After pin removal, mobilisation of the elbow can begin. Management: Neurovascular complications Absence of the radial pulse is reported in 6 to 20% of supracondylar fracture cases. This is because the brachial artery is frequently injured in Gartland Type II and Type III fractures, especially when the distal fragment is displaced postero-laterally (with the proximal fragment displaced antero-medially). Open or closed reduction with percutaneous pinning would be the first line of management.
However, if there is no improvement in the pulse after the reduction, surgical exploration of the brachial artery and nerves is indicated, especially when there is persistent pain at the fracture site (indicating limb ischaemia), neurological deficits (paresthesia, tingling, numbness), or additional signs of poor perfusion (prolonged capillary refill time and bluish discolouration of the fingers). Meanwhile, for a pink, pulseless hand (absent radial pulse but with good perfusion of the extremities) after successful reduction and percutaneous pinning, the patient can still be observed until additional signs of ischaemia develop, which would warrant surgical exploration. Isolated neurological deficits occur in 10 to 20% of cases and can reach as high as 49% in Type III Gartland fractures. Neurapraxia (a temporary neurological deficit due to blockage of nerve conduction) is the most common cause of neurological deficits in supracondylar fractures. Such neurological deficits usually resolve in two to three months. However, if the deficit has not resolved within this time frame, surgical exploration is indicated. Epidemiology: Supracondylar humerus fractures are commonly found in children between 5 and 7 years of age (90% of cases) and rank after clavicle and forearm fractures in frequency. They occur more often in males, accounting for 16% of all pediatric fractures and 60% of all paediatric elbow fractures. The mechanism of injury is most commonly a fall on an outstretched hand. The extension type of injury (70% of all elbow fractures) is more common than the flexion type (1% to 11% of all elbow injuries). The injury often occurs on the non-dominant limb. Flexion-type injuries are more commonly found in older children. Open fractures can occur in up to 30% of cases.
**Explanatory combinatorial dictionary** Explanatory combinatorial dictionary: An explanatory combinatorial dictionary (ECD) is a type of monolingual dictionary designed to be part of a meaning-text linguistic model of a natural language. It is intended to be a complete record of the lexicon of a given language. As such, it identifies and describes, in separate entries, each of the language's lexemes (roughly speaking, each word or set of inflected forms based on a single stem) and phrasemes (roughly speaking, idioms and other multi-word fixed expressions). Among other things, each entry contains (1) a definition that incorporates a lexeme's semantic actants (for example, the definiendum of give takes the form X gives Y to Z, where its three actants are expressed — the giver X, the thing given Y, and the person given to, Z); (2) complete information on lexical co-occurrence (e.g. the entry for attack tells you that one of its collocations is launch an attack, the entry for party provides throw a party, and the entry for lecture provides deliver a lecture — enabling the user to avoid making an error like *deliver a party); (3) an extensive set of examples. The ECD is a production dictionary — that is, it aims to provide all the information needed for a foreign learner or automaton to produce perfectly formed utterances of the language. Since the lexemes and phrasemes of a natural language number in the hundreds of thousands, a complete ECD, in paper form, would occupy the space of a large encyclopaedia. Such a work has yet to be achieved; while ECDs of Russian and French have been published, each describes less than one percent of the vocabulary of the respective languages. Explanatory combinatorial dictionary: The ECD was proposed in the late 1960s by Aleksandr Žolkovskij and Igor Mel'čuk and was later further developed by Jurij Apresjan. Three ECDs are currently available in print: one for Russian and two for French. A dictionary of Spanish collocations—DICE (= Diccionario de colocaciones del español)—is under development. Characteristics of an ECD: A complete ECD of a language would provide an entry for every lexeme, construction, or idiom—referred to collectively as "Lexical Units" (LUs)—in use in the language. Entries in the ECD are based on the semantic definition of an LU, and each entry also contains a complete list of its collocations and lexical functions. Entries for historically related Lexical Units which are homophones and share a significant semantic component (i.e., meanings) are grouped into larger units called "vocables," thereby acknowledging polysemy while maintaining the distinct status of the independent items in question.
The English vocable improve, for example, includes six Lexical Units, each of which is provided a separate lexical entry:

IMPROVE, verb
IMPROVE I.1a: X improves ≡ ‘The value or the quality of X becomes higher’ [The weather suddenly improved; The system will improve over time]
IMPROVE I.1b: X improves Y ≡ ‘X causes₁ that Y improves I.1a’ [The most recent changes drastically improved the system]
IMPROVE I.2: X improves ≡ ‘The health of a sick person X improves I.1a’ [Jim is steadily improving]
IMPROVE I.3: X improves at Y ≡ ‘X’s execution of Y improves I.1a, which is caused₁ by X’s having practiced or practicing Y’ [Jim is steadily improving at algebra]
IMPROVE II: X improves Y by Z-ing ≡ ‘X voluntarily causes₂ that the market value of a piece of real estate Y becomes higher by doing Z-ing to Y’ [Jim improved his house by installing indoor plumbing]
IMPROVE III: X improves upon Y ≡ ‘X creates a new Y′ by improving I.1b Y’ [Jim has drastically improved upon Patrick’s translation]

The lexicographic numbers (given in bold after the entry word) reflect degrees or levels of semantic distance between Lexical Units within a vocable: Roman numerals mark the highest-level semantic groupings, while Arabic numerals mark the next highest level, and letters indicate the lowest-level distances. The four lexemes grouped under IMPROVE I, for example, are considered to be closer to each other than to IMPROVE II or IMPROVE III, because the meanings of IMPROVE I.1b and IMPROVE I.2 each actually include the meaning of IMPROVE I.1a. IMPROVE I.1a and IMPROVE I.1b are even more closely related because in English there are many pairs of words—specifically, labile or ambitransitive verbs—that are related by the semantic alternation ‘P’ ~ ‘cause₁ to P’ (as per above, ‘improve’ ~ ‘cause to improve’). Characteristics of an ECD: The subscript and superscript numbers attached to words in a definition refer to subsenses (subscripts) and homophonous entries (superscripts) for the word as given in the Longman Dictionary of Contemporary English — thus, “device¹₁” refers to the first entry for device in this dictionary, first subsense. Structure of the ECD entry: An ECD entry for a given Lexical Unit, let’s call it "L", is divided into three major sections or "zones": The semantic zone The semantic zone describes the semantic properties of L and consists of two sub-zones: 1) the definition of L, which fully specifies L’s meaning; and 2) L’s connotations (meanings that the language associates with L, but that are not part of its definition). Structure of the ECD entry: The phonological/graphematic zone The phonological/graphematic zone gives all of the data on L’s phonological properties. Here again we find two sub-zones: 1) L’s pronunciation, including its syllabification and any non-standard prosodic properties; and 2) orthographic information about L’s spelling variants, etc. The co-occurrence zone The co-occurrence zone presents all of the data on L’s combinatorial properties. It is organized into five sub-zones—morphological, syntactic, lexical, stylistic, and pragmatic. The morphological sub-zone contains inflectional data, including conjugation/declension class, irregular forms, missing forms, permitted alternations, etc. The syntactic sub-zone has two parts: a) the government pattern, which describes the elements that L can syntactically govern (arguments, complements, etc.); b) part of speech and syntactic features, which describe the constructions in which L can appear as a syntactic dependent.
The lexical sub-zone specifies the lexical functions that L participates in, covering both semantic derivations and collocations of L with other individual LUs or very small and irregular groups of LUs. The stylistic sub-zone specifies L’s speech register (informal, colloquial, vulgar, poetic, etc.), temporal (obsolescent, archaic) and geographical (British, Indian, Australian) variability, and the like. The pragmatic sub-zone describes the real-life situations in which a particular expression is appropriate or inappropriate.
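To make the three-zone entry structure concrete, here is a minimal Python sketch; this is a modeling assumption made for illustration, with field names following the zones described above and sample values invented rather than quoted from any published ECD.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticZone:
    definition: str                     # the definition, with semantic actants
    connotations: list[str] = field(default_factory=list)

@dataclass
class PhonologicalZone:
    pronunciation: str                  # pronunciation, incl. syllabification
    spelling_variants: list[str] = field(default_factory=list)

@dataclass
class CooccurrenceZone:
    morphological: dict[str, str] = field(default_factory=dict)
    syntactic: dict[str, str] = field(default_factory=dict)     # government pattern etc.
    lexical_functions: dict[str, list[str]] = field(default_factory=dict)
    stylistic: str = "neutral"          # register, temporal/geographical variability
    pragmatic: str = ""                 # situations of (in)appropriate use

@dataclass
class ECDEntry:
    lexical_unit: str                   # e.g. "IMPROVE I.1a" within the vocable IMPROVE
    semantic: SemanticZone
    phonological: PhonologicalZone
    cooccurrence: CooccurrenceZone

entry = ECDEntry(
    lexical_unit="IMPROVE I.1a",
    semantic=SemanticZone(
        definition="X improves = the value or the quality of X becomes higher"),
    phonological=PhonologicalZone(pronunciation="/im'pru:v/"),
    cooccurrence=CooccurrenceZone(
        syntactic={"government_pattern": "X = subject"},
        lexical_functions={"Magn": ["drastically", "greatly"]}),
)
print(entry.lexical_unit, "->", entry.semantic.definition)
```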
**Cuckold** Cuckold: A cuckold is the husband of an adulterous wife; the wife of an adulterous husband is a cuckquean. In biology, a cuckold is a male who unwittingly invests parental effort in juveniles who are not genetically his offspring. A husband who is aware of and tolerates his wife's infidelity is sometimes called a wittol or wittold. History of the term: The word cuckold derives from the cuckoo bird, alluding to its habit of laying its eggs in other birds' nests. The association is common in medieval folklore, literature, and iconography. History of the term: English usage first appears about 1250 in the medieval debate poem The Owl and the Nightingale. It was characterized as an overtly blunt term in John Lydgate's The Fall of Princes, c. 1440. Shakespeare's writing often referred to cuckolds, with several of his characters suspecting they had become one. The word often implies that the husband is deceived; that he is unaware of his wife's unfaithfulness and may not know until the arrival or growth of a child plainly not his (as with cuckoo birds). The female equivalent cuckquean first appears in English literature in 1562, adding a feminine suffix to cuck. History of the term: A related word, first appearing in 1520, is wittol, which substitutes wit (in the sense of knowing) for the first part of the word, referring to a man aware of and reconciled to his wife's infidelity. Cuck: An abbreviation of cuckold, the term cuck has been used by the alt-right to attack the masculinity of an opponent. It was originally aimed at other conservatives. Metaphor and symbolism: In Western traditions, cuckolds have sometimes been described as "wearing the horns of a cuckold" or just "wearing the horns". This is an allusion to the mating habits of stags, which forfeit their mates when they are defeated by another male. In Italy (especially in Southern Italy, where it is a major personal offence), the insult "cornuto" is often accompanied by the sign of the horns. In French, the term is "porter des cornes". In German, the term is "jemandem Hörner aufsetzen", or "Hörner tragen"; the husband is "der gehörnte Ehemann". Metaphor and symbolism: In Brazil and Portugal, the term used is "corno", meaning exactly "horned". The term is quite offensive, especially for men, and cornos are a common subject of jokes and anecdotes. Rabelais's Tiers Livre of Gargantua and Pantagruel (1546) portrays a horned fool as a cuckold. In Molière's L'École des femmes (1662), a man named Arnolphe (see below) who mocks cuckolds with the image of the horned buck (becque cornu) becomes one at the end. Metaphor and symbolism: In Chinese usage, the cuckold (or wittol) is said to be "戴綠帽子" dài lǜmàozi, translated into English as 'wearing the green hat'. The term is an allusion to the sumptuary laws used from the 13th to the 18th centuries that required males in households with prostitutes to wrap their heads in a green scarf (or later a hat). Associations: A saint Arnoul(t), Arnolphe, or Ernoul, possibly Arnold of Soissons, is often cited as the patron saint of cuckolded husbands, hence the name of Molière's character Arnolphe. The Greek hero Actaeon is often associated with cuckoldry: when he is turned into a stag, he becomes "horned". This is alluded to in Shakespeare's The Merry Wives of Windsor, Robert Burton's The Anatomy of Melancholy, and others.
Cross-cultural parallels: In Islamic cultures, the related term dayouth (Arabic: دَيُّوث) can be used to describe a person who is viewed as apathetic or permissive with regard to unchaste behaviour by female relatives or a spouse, or who lacks the demeanor (ghayrah) of paternalistic protectiveness. Variations on the spelling include dayyuth, dayuuth, or dayoos. The term has been criticised both for its use as a pejorative and for suggesting acceptance of vain paternalistic gender roles, stigmatization of sexuality, or overprotective, intrusive sexual gatekeeping. Cuckoldry as a fetish: Unlike the traditional definition of the term, in fetish usage a cuckold is complicit in their partner's sexual "infidelity" (the practice is known as a "cuckolding fetish"); the wife who enjoys "cuckolding" her husband is called a "cuckoldress" if the man is more submissive. The dominant man engaging with the cuckold's partner is called a "bull". If a couple can keep the fantasy in the bedroom, or come to an agreement where being cuckolded in reality does not damage the relationship, they may try it out in reality. This, like other sexual acts, can improve the sexual relationship between partners. However, the primary proponent of the fantasy is almost always the one being humiliated, or the "cuckold": the cuckold convinces his lover to participate in the fantasy for them, though other "cuckolds" may prefer their lover to initiate the situation instead. The fetish fantasy does not work at all if the cuckold is being humiliated against their will. Psychology regards cuckold fetishism as a variant of masochism, with the cuckold deriving pleasure from being humiliated. In his book Masochism and the Self, psychologist Roy Baumeister advanced a Self Theory analysis that cuckolding (or, specifically, all masochism) was a form of escape from self-awareness, at times when self-awareness becomes burdensome, such as with perceived inadequacy. According to this theory, the physical or mental pain from masochism brings attention away from the self, which would be desirable in times of "guilt, anxiety, or insecurity", or at other times when self-awareness is unpleasant.
**Incompressibility method** Incompressibility method: In mathematics, the incompressibility method is a proof method like the probabilistic method, the counting method or the pigeonhole principle. To prove that an object in a certain class (on average) satisfies a certain property, select an object of that class that is incompressible. If it does not satisfy the property, it can be compressed by computable coding. Since it can generally be proven that almost all objects in a given class are incompressible, the argument demonstrates that almost all objects in the class have the property involved (not just the average). Selecting an incompressible object is ineffective and cannot be done by a computer program. However, a simple counting argument usually shows that almost all objects of a given class cannot be compressed by more than a few bits (are incompressible). History: The incompressibility method depends on an objective, fixed notion of incompressibility. Such a notion was provided by the theory of Kolmogorov complexity, named for Andrey Kolmogorov. One of the first uses of the incompressibility method with Kolmogorov complexity in the theory of computation was to prove that the running time of a one-tape Turing machine is quadratic for accepting a palindromic language, and that sorting algorithms require of the order of n log n comparisons to sort n items. One of the early influential papers using the incompressibility method was published in 1980. The method was applied to a number of fields, and its name was coined in a textbook. Applications: Number theory: According to an elegant proof of Euclid, there are infinitely many prime numbers. Bernhard Riemann demonstrated that the number of primes less than a given number is connected with the zeros of the Riemann zeta function. Jacques Hadamard and Charles Jean de la Vallée-Poussin proved in 1896 that this number of primes is asymptotic to n/ln n; see the prime number theorem (here ln denotes the natural logarithm and log the binary logarithm). Using the incompressibility method, G. J. Chaitin argued as follows: each n can be described by its prime factorization n = p_1^{n_1} ⋯ p_k^{n_k} (which is unique), where p_1, …, p_k are the first k primes, each at most n, and the exponents are possibly 0. Each exponent is at most log n and can be described by log log n bits, so the description of n can be given in k log log n bits, provided we know the value of log log n (enabling one to parse the consecutive blocks of exponents); to describe log log n requires only log log log n bits. By the incompressibility of most positive integers, for each k > 0 there is a positive integer n of binary length l = log n which cannot be described in fewer than l bits. Comparing the two description lengths shows that the number π(n) of primes less than n satisfies, up to lower-order terms, π(n) ≥ log n / log log n. Applications: A more sophisticated approach, attributed to Piotr Berman (the present proof is partially by John Tromp), describes every incompressible n by k and n/p_k, where p_k is the largest prime dividing n. Since n is incompressible, the length of this description must exceed log n. To parse the first block of the description, k must be given in prefix form, which costs about log k + O(log log k) bits. From this it follows that log p_k ≤ log k + O(log log k), hence p_k ≤ k log^2 k for a special sequence of values n_1, n_2, …. This shows that the bound π(n) = Ω(n/log^2 n) holds for this special sequence, and a simple extension shows that it holds for every n > 0. Both proofs are presented in more detail in the literature.
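The counting argument invoked above can be made concrete. The following sketch (an editorial illustration, not taken from the cited sources) simply counts descriptions: there are 2^n binary strings of length n, but fewer than 2^(n−c+1) descriptions of length at most n−c, so less than a 2^(1−c) fraction of the strings can be compressed by c or more bits.

```python
# A minimal sketch of the counting argument behind incompressibility.
# There are 2^n binary strings of length n, but only
# 2^0 + 2^1 + ... + 2^(n-c) = 2^(n-c+1) - 1 descriptions of length <= n - c,
# so fewer than a 2^(1-c) fraction of the strings are compressible by >= c bits.

def fraction_compressible(n: int, c: int) -> float:
    """Upper bound on the fraction of n-bit strings describable in <= n - c bits."""
    descriptions = 2 ** (n - c + 1) - 1   # all binary descriptions of length <= n - c
    strings = 2 ** n
    return descriptions / strings

for c in (2, 8, 20):
    print(f"compressible by >= {c:2d} bits: fraction < {fraction_compressible(64, c):.2e}")
```

In particular, most strings cannot be compressed at all, which is the sense in which an incompressible object always exists even though no program can effectively exhibit one.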
Graph theory: A labeled graph G = (V, E) with n nodes can be represented by a string E(G) of n(n−1)/2 bits, where each bit indicates the presence or absence of an edge between the pair of nodes in that position. For an incompressible graph, K(E(G)) ≥ n(n−1)/2, and the degree d of each vertex then deviates from (n−1)/2 by at most O(√(n log n)). To prove this by the incompressibility method: if the deviation were larger, the description of G could be compressed below K(E(G)), providing the required contradiction. This theorem is required in a more complicated proof, where the incompressibility argument is used a number of times to show that the number of unlabeled graphs is ∼ 2^{n(n−1)/2}/n!. Applications: Combinatorics: A transitive tournament is a complete directed graph G = (V, E) such that if (i, j), (j, k) ∈ E, then (i, k) ∈ E. Consider the set of all tournaments on n nodes. Since a tournament is a labeled, directed complete graph, it can be encoded by a string E(G) of n(n−1)/2 bits, where each bit indicates the direction of the edge between the pair of nodes in that position. Using this encoding, it can be shown that every tournament contains a transitive subtournament on at least v(n) = 1 + ⌊log n⌋ vertices. Applications: Historically, this was the first problem treated this way; it is easily solved by the incompressibility method, as are the coin-weighing problem, the number of covering families and expected properties. For example, at least a fraction 1 − 1/n of all tournaments on n vertices have transitive subtournaments on not more than 1 + 2⌈log n⌉ vertices, for n large enough. Applications: If a number of events are independent (in the sense of probability theory) of one another, the probability that none of the events occurs can be easily calculated. If the events are dependent, the problem becomes difficult. The Lovász local lemma is a principle stating that if events are mostly independent of one another and have individually small probabilities, there is a positive probability that none of them will occur. It has been proven by the incompressibility method. Using the incompressibility method, several versions of expanders and superconcentrator graphs were also shown to exist. Applications: Topological combinatorics: In the Heilbronn triangle problem, one throws n points into the unit square and determines the maximum, over all possible arrangements, of the minimal area of a triangle formed by three of the points. The problem was solved for small arrangements, and much work was done on the asymptotic expression as a function of n. The original conjecture of Heilbronn, made during the early 1950s, was that this area is O(1/n^2). Paul Erdős proved that this bound is tight for n a prime number. The general problem remains unsolved, apart from the best-known lower bound Ω((log n)/n^2) (which is achievable; hence Heilbronn's conjecture is not correct for general n) and upper bound exp(c√(log n))/n^{8/7} (proven by Komlós, Pintz and Szemerédi in 1982 and 1981, respectively). Using the incompressibility method, the average case was studied: it was proven that if the area is too small (or too large), the arrangement can be compressed below the Kolmogorov complexity of a uniformly random arrangement (which has high Kolmogorov complexity). This proves that for the overwhelming majority of arrangements (and for the expectation), the area of the smallest triangle formed by three of n points thrown uniformly at random in the unit square is Θ(1/n^3). In this case, the incompressibility method proves both the lower and the upper bound of the property involved.
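To make the encodings above concrete, the sketch below (with illustrative function names) packs a labeled undirected graph on n nodes into the n(n−1)/2-bit string E(G) and recovers the edge set from it; the incompressibility arguments then reason about the Kolmogorov complexity of exactly this string.

```python
from itertools import combinations

# Sketch: encode a labeled undirected graph on n nodes as the n(n-1)/2-bit
# string E(G), one bit per vertex pair in a fixed order, and decode it back.

def encode(n: int, edges: set) -> str:
    """Return the n(n-1)/2-bit string with one bit per pair (i, j), i < j."""
    return "".join("1" if (i, j) in edges else "0" for i, j in combinations(range(n), 2))

def decode(n: int, bits: str) -> set:
    """Inverse of encode: recover the edge set from the bit string."""
    return {pair for pair, b in zip(combinations(range(n), 2), bits) if b == "1"}

edges = {(0, 1), (0, 3), (1, 2)}
s = encode(4, edges)        # pairs in order (0,1),(0,2),(0,3),(1,2),(1,3),(2,3)
assert len(s) == 4 * 3 // 2 and decode(4, s) == edges
print(s)                    # '101100'
```

A tournament is encoded the same way, with each bit giving the direction of the edge between a pair rather than its presence.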
Applications: Probability: The law of the iterated logarithm, the law of large numbers and the recurrence property were shown to hold using the incompressibility method together with Kolmogorov's zero–one law, with normal numbers (in the sense of É. Borel) expressed as binary strings, and using the distribution of 0s and 1s in binary strings of high Kolmogorov complexity. Applications: Turing-machine time complexity: The basic Turing machine, as conceived by Alan Turing in 1936, consists of a memory (a tape of potentially infinite cells on which a symbol can be written) and a finite control, with an attached read-write head that scans a cell on the tape. At each step, the read-write head can change the symbol in the cell being scanned and move one cell left, right, or not at all, according to instructions from the finite control. Turing machines with two tape symbols may be considered for convenience, but this is not essential. Applications: In 1968, F. C. Hennie showed that such a Turing machine requires time of order n^2 to recognize the language of binary palindromes in the worst case. In 1977, W. J. Paul presented an incompressibility proof showing that time of order n^2 is required in the average case as well. For every integer n, consider all words of that length; for convenience, consider words whose middle third consists of 0s. The accepting Turing machine ends in an accept state on the left (the beginning of the tape). A Turing-machine computation on a given word gives, for each location (the boundary between adjacent cells), a sequence of crossings from left to right and right to left, each crossing in a particular state of the finite control. Either all positions in the middle third of a candidate word have a crossing sequence of length Ω(n) (giving a total computation time of Ω(n^2)), or some position has a crossing sequence of length o(n). In the latter case, the word (if it is a palindrome) can be identified by that crossing sequence. Applications: If another palindrome (ending in an accepting state on the left) had the same crossing sequence, then a word consisting of a prefix (up to the position of the involved crossing sequence) of the original palindrome, concatenated with a suffix of the remaining length taken from the other palindrome, would be accepted as well. Taking a palindrome of Kolmogorov complexity Ω(n), its description by o(n) bits yields a contradiction. Applications: Since the overwhelming majority of binary palindromes have high Kolmogorov complexity, this gives a lower bound on the average-case running time. A much more difficult result along these lines shows that Turing machines with k+1 work tapes are more powerful than those with k work tapes in real time (here, one symbol per step). In 1984, W. Maass, and M. Li and P. M. B. Vitányi, showed that the simulation of two work tapes by one work tape of a Turing machine takes Θ(n^2) time deterministically (an optimal bound, solving a 30-year open problem) and time of order n^2/(log n log log n) nondeterministically. More results concerning tapes, stacks and queues, both deterministic and nondeterministic, were proven with the incompressibility method. Applications: Theory of computation: Heapsort is a sorting method, invented by J. W. J. Williams and refined by R. W. Floyd, which always runs in O(n log n) time. It was long questionable whether Floyd's method is better than Williams' on average, although it is better in the worst case.
Using the incompressibility method, it was shown that Williams' method runs on average in about 2n log n + O(n) time and Floyd's method in about n log n + O(n) time. The proof was suggested by Ian Munro. Applications: Shellsort, discovered by Donald Shell in 1959, is a comparison sort which divides the list to be sorted into sublists and sorts them separately; the sorted sublists are then merged, reconstituting a partially sorted list. This process repeats a number of times (the number of passes). The difficulty of analyzing the complexity of the sorting process is that it depends on the number n of keys to be sorted, on the number p of passes and on the increments governing the scattering in each pass; a sublist is the list of keys which are the increment parameter apart. Although this sorting method inspired a large number of papers, only the worst case was established. For the average running time, only the best case for a two-pass Shellsort and an upper bound of O(n^{23/15}) for a particular increment sequence for three-pass Shellsort were established. A general lower bound on the average p-pass Shellsort was given, which was the first advance in this problem in four decades. In every pass, the comparison sort moves each key to another place a certain distance (a path length). All these path lengths are logarithmically coded, for length, in the correct order (of passes and keys). This allows the reconstruction of the unsorted list from the sorted list. If the unsorted list is incompressible (or nearly so), then, since the sorted list has near-zero Kolmogorov complexity (and the path lengths together give a certain code length), the sum must be at least as large as the Kolmogorov complexity of the original list. The sum of the path lengths corresponds to the running time, which in this argument is lower-bounded by Ω(p·n^{1+1/p}). This was improved to a lower bound of Ω(n·Σ_{k=1}^{p} h_{k−1}/h_k), where h_0 = n. This implies, for example, the Jiang–Li–Vitányi lower bound for all p-pass increment sequences and improves that lower bound for particular increment sequences; the Janson–Knuth upper bound is matched by a lower bound for the increment sequence used, showing that three-pass Shellsort for this increment sequence uses Θ(n^{23/15}) inversions. Applications: Another example is as follows: for natural numbers n, r, s with log n ≤ r, s ≤ n/4, it was shown by the incompressibility method that for every n there is a Boolean n×n matrix in which every s×(n−r) submatrix has rank at least n/2. Applications: Logic: According to Gödel's first incompleteness theorem, every formal system with computably enumerable theorems (or proofs) that is strong enough to contain Peano arithmetic has true but unprovable statements (theorems). This can be proved by the incompressibility method: every formal system F can be described finitely (say, in f bits). In such a formal system we can express K(x) ≥ |x|, since it contains arithmetic. Given F and a natural number n ≫ f, one can search exhaustively for a proof that some string y of length n satisfies K(y) ≥ n. If such a proof existed, the first string y so found would be described by only about log n + f bits (the search procedure plus n and F), far fewer than n: a contradiction. Hence, although K(y) ≥ n holds for most strings y of length n, no such statement is provable in F for n ≫ f, and these are true but unprovable statements. Comparison with other methods: Although the probabilistic method generally shows the existence of an object with a certain property in a class, the incompressibility method tends to show that the overwhelming majority of objects in the class (the average, or the expectation) have that property. It is sometimes easy to turn a probabilistic proof into an incompressibility proof, or vice versa.
In some cases, it is difficult or impossible to turn a proof by incompressibility into a probabilistic or counting proof. In virtually all of the cases of Turing-machine time complexity cited above, the incompressibility method solved problems which had been open for decades; for these, no other proofs are known. Sometimes a proof by incompressibility can be turned into a proof by counting, as happened in the case of the general lower bound on the running time of Shellsort.
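As a concrete companion to the Shellsort discussion above, here is a standard p-pass Shellsort with the increment sequence as an explicit parameter (a minimal sketch, not the code analyzed in the cited results); the path lengths bounded in the argument correspond to the distances keys move inside the h-strided insertion sorts.

```python
# A p-pass Shellsort: each pass h-sorts the list, i.e. insertion-sorts the
# sublists of keys lying h positions apart. The number of passes p and the
# increments h are exactly the parameters the average-case analysis depends on.

def shellsort(a: list, increments: list) -> list:
    a = list(a)
    n = len(a)
    for h in increments:                    # one pass per increment, largest first
        for i in range(h, n):
            key = a[i]
            j = i
            while j >= h and a[j - h] > key:    # insertion sort with stride h
                a[j] = a[j - h]
                j -= h
            a[j] = key                      # the key moved (i - j) // h sublist steps
    return a

print(shellsort([5, 2, 9, 1, 7, 3, 8, 4, 6], [4, 2, 1]))   # a three-pass run
```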
**Semi-formal wear** Semi-formal wear: Semi-formal wear or half dress is a grouping of dress codes indicating the sort of clothes worn to events with a level of formality between informal wear and formal wear. In the modern era, the typical interpretation for men is black tie for evening wear and the black lounge suit for day wear, the counterpart for women being either a pant suit or an evening gown. Whether one would wear morning or evening semi-formal dress has traditionally been determined by whether the event commences before or after 6:00 p.m. Semi-formal wear: In addition, equivalent versions may be permitted, such as ceremonial dresses (including court dresses, diplomatic uniforms and academic dresses), religious clothing, national costumes, and military mess dress. Evening wear: "black tie" dinner suit: For evening wear (after 6 p.m.), the code is black tie. While black tie admits some substitution of colors in ties and accessories, in formal evening dress (white tie) this practice is much less common, since men's fashion tends to follow tradition more deeply as it becomes more formal. Evening wear: "black tie" dinner suit: The origins of evening semi-formal attire date back to the later 19th century, when Edward, Prince of Wales (subsequently Edward VII), wanted a more comfortable dinner attire than the swallowtail coat. In spring 1886, the Prince invited James Potter, a rich New Yorker, and his wife Cora to Sandringham House, the Prince's hunting estate in Norfolk. When Potter asked about the Prince's dinner dress code, the Prince sent him to his tailor, Henry Poole & Co. in London, where he was given a suit made to the Prince's specifications, with the dinner jacket. On returning to Tuxedo Park, New York, in 1886, Potter's dinner suit proved popular at the Tuxedo Park Club. Not long afterward, when a group of men from the club chose to wear such suits to a dinner at Delmonico's Restaurant in New York City, other diners were surprised. They were told that such clothing was popular at Tuxedo Park, and the particular cut then became known as the "tuxedo". From its creation into the 1920s, this dinner jacket was considered appropriate dress for dining in one's home or club, while the tailcoat remained in place as appropriate for public appearance. Supplementary alternatives: Mess dress: For formal dining, uniformed services officers and non-commissioned officers often wear mess dress equivalents of the civilian black tie and evening dress. Mess uniforms may vary according to the wearers' respective branches of the armed services, regiments, or corps, but usually include a short Eton-style coat reaching to the waist. Some include white shirts, black bow ties, and low-cut waistcoats, while others feature high collars that fasten around the neck and corresponding high-gorge waistcoats. Some nations' armed services have both black tie and white tie equivalent variants in their mess dress. Supplementary alternatives: Red Sea rig: In tropical areas, primarily in Western diplomatic and expatriate communities, Red Sea rig is sometimes worn, in which the jacket and waistcoat are omitted and a red cummerbund and trousers with red piping are worn instead.
**Left-brain interpreter** Left-brain interpreter: The left-brain interpreter is a neuropsychological concept developed by the psychologist Michael S. Gazzaniga and the neuroscientist Joseph E. LeDoux. It refers to the construction of explanations by the left brain hemisphere in order to make sense of the world by reconciling new information with what was known before. The left-brain interpreter attempts to rationalize, reason about and generalize new information it receives in order to relate the past to the present. Left-brain interpretation is a case of the lateralization of brain function, applying to "explanation generation" rather than other lateralized activities. Although the concept of the left-brain interpreter was initially based on experiments on patients with split brains, it has since been shown to apply to the everyday behavior of people at large. Discovery: The concept was first introduced by Michael Gazzaniga while he performed research on split-brain patients during the early 1970s with Roger Sperry at the California Institute of Technology. Sperry eventually received the 1981 Nobel Prize in Medicine for his contributions to split-brain research. In performing the initial experiments, Gazzaniga and his colleagues observed what happened when the left and right hemispheres in the split brains of patients were unable to communicate with each other. In these experiments, when patients were shown an image within the right visual field (which maps to the left brain hemisphere), an explanation of what was seen could be provided. However, when the image was presented only to the left visual field (which maps to the right brain hemisphere), the patients stated that they didn't see anything. Yet when asked to point to objects similar to the image, the patients succeeded. Gazzaniga interpreted this by postulating that although the right brain could see the image, it could not generate a verbal response to describe it. Experiments: Since the initial discovery, a number of more detailed experiments have been performed to further clarify how the left brain "interprets" new information in order to assimilate and justify it. These experiments have included the projection of specific images, ranging from facial expressions to carefully constructed word combinations, and functional magnetic resonance imaging (fMRI) tests. Many of the studies and experiments build on the initial approach of Gazzaniga, in which the right hemisphere is instructed to do things that the left hemisphere is unaware of, e.g. by providing the instructions within the visual field that is only accessible to the right brain. The left-brain interpreter will nonetheless construct a contrived explanation for the action, unaware of the instruction the right brain had received. Typical fMRI experiments have a "block mode", in which specific behavioral tasks are arranged into blocks and performed over a period of time; the fMRI responses from the blocks are then compared. In fMRI studies by Koutstaal, the level of sensitivity of the right visual cortex to the single exposure of an object (e.g. a table) on two occasions was measured against the display of two distinct tables at once.
This contrasted with the left hemisphere's lower level of sensitivity to such variations. A hierarchical organization of the lateral prefrontal cortex has been developed in which different regions are categorized according to different "levels" of explanation. The left lateral orbitofrontal cortex and ventrolateral prefrontal cortex generate causal inferences and explanations of events, which are then evaluated by the dorsolateral prefrontal cortex. The subjective evaluation of different internally generated explanations is then performed by the anterolateral prefrontal cortex. Reconciling the past with the present: The drive to seek explanations and provide interpretations is a general human trait, and the left-brain interpreter can be seen as the glue that attempts to hold the story together, in order to provide a sense of coherence to the mind. In reconciling the past and the present, the left-brain interpreter may confer a sense of comfort to a person by providing a feeling of consistency and continuity in the world. This may in turn produce feelings of security that the person knows how "things will turn out" in the future. However, the facile explanations provided by the left-brain interpreter may also enhance a person's opinion of themselves and produce strong biases, which prevent the person from seeing themselves in the light of reality and from recognizing repeating patterns of behavior that led to past failures. The explanations generated by the left-brain interpreter may be balanced by right-brain systems, which follow the constraints of reality to a closer degree. The suppression of the right hemisphere by electroconvulsive therapy leaves patients inclined to accept conclusions that are absurd but based on strictly-true logic; after electroconvulsive therapy to the left hemisphere, the same absurd conclusions are indignantly rejected. The checks and balances provided by the right brain hemisphere may thus avoid scenarios that eventually lead to delusion via the continued construction of biased explanations. In 2002, Gazzaniga stated that three decades of research in the field had taught him that the left hemisphere is far more inventive in interpreting facts than the right hemisphere's more truthful, literal approach to information management. Studies on the neurological basis of different defense mechanisms have revealed that the use of immature defense mechanisms, such as denial, projection, and fantasy, is tied to glucose metabolization in the left prefrontal cortex, while more mature defense mechanisms, such as intellectualization, reaction formation, compensation, and isolation, are associated with glucose metabolization in the right hemisphere. It has also been found that grey matter volume of the left lateral orbitofrontal cortex correlates with scores on measures of Machiavellian intelligence, while volume of the right medial orbitofrontal cortex correlates with scores on measures of social comprehension and declarative episodic memory. These studies illustrate the role of the left prefrontal cortex in exerting control over one's environment, in contrast to the role of the right prefrontal cortex in inhibition and self-evaluation.
Further development and similar models: While working on the model of the left-brain interpreter, Michael Gazzaniga came to the conclusion that the simple right-brain/left-brain model of the mind is a gross oversimplification, and that the brain is organized into hundreds, perhaps even thousands, of modular processing systems. Similar models (which also claim that the mind is formed from many little agents, i.e. that the brain is made up of a constellation of independent or semi-independent agents) were also described by: Thomas R. Blakeslee, who described a brain model similar to Gazzaniga's, renaming the interpreter module the self module. Further development and similar models: The Neurocluster Brain Model describes the brain as a massively parallel computing machine in which huge numbers of neuroclusters process information independently from each other. The neurocluster which most of the time has access to the actuators (i.e. the neurocluster which most of the time acts upon the environment using the actuators) is called the main personality. The term main personality is equivalent to Gazzaniga's term interpreter module. Further development and similar models: Michio Kaku described a brain model similar to Gazzaniga's using the analogy of a large corporation, in which the interpreter module is equivalent to the CEO of the corporation. Marvin Minsky's "Society of Mind" model claims that mind is built up from the interactions of simple parts called agents, which are themselves mindless. Robert E. Ornstein claimed that the mind is a squadron of simpletons. Ernest Hilgard described the neodissociationist theory, which claims that a "hidden observer" is created in the mind while hypnosis is taking place, and that this "hidden observer" has its own separate consciousness. Further development and similar models: George Ivanovich Gurdjieff taught his students in 1915 that man has no single, big I; man is divided into a multiplicity of small I's. Gurdjieff described a model similar to Gazzaniga's using an analogy in which man is compared to a house with a multitude of servants, the interpreter module being equivalent to the master of the servants. Further development and similar models: Julian Jaynes hypothesized the bicameral mind theory (which relies heavily on Gazzaniga's research on split-brain patients), in which the communication between Wernicke's area and its right-hemisphere analogue was the "bicameral" structure. This structure produced voices/images representing mostly warning and survival instructions, originating from the right brain and interpreted by the left brain as non-self communication to the self. The breakdown of this bicameral mentality — brought about by changes mainly from cultural forces — resulted in what we term consciousness.
**Toilet seat riser** Toilet seat riser: Toilet seat risers, toilet risers, or raised toilet seats are assistive technology devices that improve the accessibility of toilets for older people or those with disabilities. They can aid in transfers from wheelchairs and may help prevent falls, although inappropriately high risers may actually increase fall risk. Some people may find plastic risers unattractive or feel that they carry a stigma, and risers may also interfere with the toilet habits of other users.
**RTX Red Rock** RTX Red Rock: RTX Red Rock is an action-adventure game developed and published by LucasArts for the PlayStation 2. It was announced, and later canceled, for the GameCube. Plot: In the year 2113, aliens of unknown origin, known simply as LEDs (Light-Emitting Demons), launch an attack on Earth, resulting in heavy casualties on both sides. Earth comes out of the fighting victorious, but advanced US intelligence discovers that the LEDs have invaded Earth's colony on Mars. Believing that the LEDs intend to launch another attack on Earth, but unsure how to deal with the problem, the US army chief decides to send in an RTX (Radical Tactics Expert) to properly evaluate the situation. Plot: This brings him to Major Wheeler. Wheeler undertakes the mission despite his fear of Mars, and goes off along with his robotic sidekick IRIS. Reception: The game received "generally unfavorable reviews" according to the review aggregation website Metacritic. GameSpot's Giancarlo Varanini thought that the game had a promising concept but that the end result was disappointing: "It's just not a very fun game to play, and unless you're absolutely desperate for something to take up your time, it should probably be avoided." Kaiser Hwang of IGN found the game had "a lot of features and varied gameplay elements but fails to deliver where it counts", with "unfocused and bland" gameplay that made it hard to recommend. In a review for Eurogamer, Kristan Reed said "RTX tries hard to appeal to a broad audience, but feels so lacking in polish, you wonder how such a prestigious company could allow it to be released in such a state." He did concede, however, that if players could accept the game's limitations it was "actually a rather solid enjoyable, well paced adventure game."
**Babel (protocol)** Babel (protocol): The Babel routing protocol is a distance-vector routing protocol for Internet Protocol packet-switched networks that is designed to be robust and efficient on both wireless mesh networks and wired networks. Babel is described in RFC 8966. Babel is based on the ideas in Destination-Sequenced Distance Vector routing (DSDV), Ad hoc On-Demand Distance Vector routing (AODV), and Cisco's Enhanced Interior Gateway Routing Protocol (EIGRP), but uses different techniques for loop avoidance. Babel has provisions for using multiple dynamically computed metrics; by default it uses hop count on wired networks and a variant of the expected transmission count (ETX) on wireless links, but it can be configured to take radio diversity into account or to automatically compute a link's latency and include it in the metric. Babel operates on IPv4 and IPv6 networks. It has been reported to be a robust protocol and to have fast convergence properties. In October 2015, Babel was chosen as the mandatory-to-implement protocol by the IETF Homenet working group, albeit on an experimental basis. In June 2016, an IETF working group was created whose main goal was to produce a standard version of Babel. In January 2021, the working group produced a standard version of Babel, then proceeded to publish a number of extensions, including for authentication, source-specific routing, and routing of IPv4 through IPv6 routers. Implementations: Several implementations of Babel are freely available: the standalone "reference" implementation; a complete reimplementation integrated in the BIRD routing platform; a version integrated into the FRR routing suite (previously Quagga, from which Babel has since been removed); a tiny, stub-only subset implementation; a minimal, IPv6-only reimplementation in Python; and an independent implementation in Java, part of the freeRouter project. Both BIRD and the reference version have support for source-specific routing and for cryptographic authentication.
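As background to the wireless metric mentioned above, the sketch below computes the classic expected transmission count of De Couto et al., of which Babel's default wireless metric is a variant; the function name and the estimation details are illustrative and not taken from the Babel specification.

```python
# Sketch of the classic ETX (expected transmission count) link metric.
# df: probability that a data packet reaches the neighbour (forward delivery ratio)
# dr: probability that its acknowledgement comes back (reverse delivery ratio)
# A round trip succeeds with probability df * dr, so the expected number of
# transmissions is geometric: 1 / (df * dr).

def etx(df: float, dr: float) -> float:
    """Expected transmissions until a packet is delivered and acknowledged."""
    if df <= 0 or dr <= 0:
        return float("inf")        # link currently unusable
    return 1.0 / (df * dr)

# Delivery ratios are typically estimated from periodic Hello packets,
# e.g. the fraction of recently expected Hellos that were actually received.
print(etx(1.0, 1.0))   # 1.0 -> behaves like a lossless wired hop
print(etx(0.5, 0.8))   # 2.5 -> a lossy link costs more
```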
**TCP/IP Illustrated** TCP/IP Illustrated: TCP/IP Illustrated is the name of a series of three books written by W. Richard Stevens. Unlike traditional books, which explain the RFC specifications, Stevens goes into great detail, using actual network traces to describe the protocols; hence the 'Illustrated' title. The first book in the series, "Volume 1: The Protocols", is cited by hundreds of technical papers in ACM journals. Volumes: Volume 1: The Protocols. After a brief introduction to TCP/IP, Stevens takes a bottom-up approach, describing the protocols from the link layer and working up the protocol stack. The second edition was published on 15 November 2011. Volume 2: The Implementation. 500 illustrations, combined with 15,000 lines of actual code from the 4.4BSD-Lite release, serve as concrete examples of the concepts covered in Volume 1. Volume 3: TCP for Transactions, HTTP, NNTP, and the UNIX Domain Protocols. This volume goes into detail on four topics: T/TCP (TCP for Transactions), HTTP (Hypertext Transfer Protocol), NNTP (Network News Transfer Protocol), and UNIX domain protocols (see Unix domain socket). As with Volume 2, examples from 4.4BSD-Lite are used.
**Adaptive design (medicine)** Adaptive design (medicine): In an adaptive design of a clinical trial, the parameters and conduct of the trial for a candidate drug or vaccine may be changed based on an interim analysis. Adaptive design typically involves advanced statistics to interpret a clinical trial endpoint. This is in contrast to traditional single-arm (i.e. non-randomized) clinical trials or randomized clinical trials (RCTs), which are static in their protocol and do not modify any parameters until the trial is completed. The adaptation process takes place at certain points in the trial, prescribed in the trial protocol. Importantly, this trial protocol is set before the trial begins, with the adaptation schedule and processes specified. Adaptations may include modifications to dosage, sample size, the drug undergoing trial, patient selection criteria and/or the "cocktail" mix. PANDA (A Practical Adaptive & Novel Designs and Analysis toolkit) provides not only a summary of different adaptive designs, but also comprehensive information on adaptive design planning, conduct, analysis and reporting. Purpose: The aim of an adaptive trial is to more quickly identify drugs or devices that have a therapeutic effect, and to zero in on patient populations for whom the drug is appropriate. When conducted efficiently, adaptive trials have the potential to find new treatments while minimizing the number of patients exposed to the risks of clinical trials. Specifically, adaptive trials can efficiently discover new treatments by reducing the number of patients enrolled in treatment groups that show minimal efficacy or higher adverse-event rates. Adaptive trials can adjust almost any part of the design, based on pre-set rules and the statistical design: for example, the sample size, the addition of new groups, the dropping of less effective groups, and the probability of being randomized to a particular group. History: In 2004, the Critical Path Initiative was introduced by the United States Food and Drug Administration (FDA) to modify the way drugs travel from lab to market. This initiative aimed at dealing with the high attrition levels observed in the clinical phase. It also attempted to offer investigators the flexibility to find the optimal clinical benefit without affecting the study's validity. Adaptive clinical trials initially came under this regime. The FDA issued draft guidance on adaptive trial design in 2010. In 2012, the President's Council of Advisors on Science and Technology (PCAST) recommended that the FDA "run pilot projects to explore adaptive approval mechanisms to generate evidence across the lifecycle of a drug from the pre-market through the post-market phase." While not specifically related to clinical trials, the council also recommended that the FDA "make full use of accelerated approval for all drugs meeting the statutory standard of addressing an unmet need for a serious or life-threatening disease, and demonstrating an impact on a clinical endpoint other than survival or irreversible morbidity, or on a surrogate endpoint, likely to predict clinical benefit." By 2019, the FDA had updated its 2010 recommendations and issued "Adaptive Design Clinical Trials for Drugs and Biologics Guidance". Characteristics: Traditionally, clinical trials are conducted in three steps: the trial is designed; the trial is conducted as prescribed by the design; and once the data are ready, they are analysed according to a pre-specified analysis plan.
Types: Overview: Any trial design that can change its design during active enrollment could be considered an adaptive clinical trial. There are a number of different types, and real-life trials may combine elements from these different trial types: in some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained. Types: Dose finding design: Phase I of clinical research focuses on selecting a particular dose of a drug to carry forward into future trials. Historically, such trials have had a "rules-based" (or "algorithm-based") design, such as the 3+3 design. However, these "A+B" rules-based designs are not appropriate for phase I studies and are inferior to adaptive, model-based designs. An example of a superior design is the continual reassessment method (CRM). Types: Group sequential design: Group sequential design is the application of sequential analysis to clinical trials. At each interim analysis, investigators use the current data to decide whether the trial should stop or should continue to recruit more participants. The trial might stop either because the evidence that the treatment is working is strong ("stopping for benefit") or weak ("stopping for futility"). Whether a trial may stop for futility only, benefit only, or either, is stated in advance. A design has "binding stopping rules" when the trial must stop once a particular threshold of (either strong or weak) evidence is crossed at a particular interim analysis. Otherwise it has "non-binding stopping rules", in which case other information, for example safety data, can be taken into account. The number of interim analyses is specified in advance, and can be anything from a single interim analysis (a "two-stage" design) to an interim analysis after every participant ("continuous monitoring"). Types: For trials with a binary (response/no response) outcome and a single treatment arm, a popular and simple group sequential design with two stages is the Simon design. In this design, there is a single interim analysis partway through the trial, at which point the trial either stops for futility or continues to the second stage. Mander and Thomson also proposed a design with a single interim analysis, at which point the trial could stop for either futility or benefit. For single-arm, single-stage binary outcome trials, a trial's success or failure is determined by the number of responses observed by the end of the trial. This means that it may be possible to know the conclusion of the trial (success or failure) with certainty before all the data are available. Planning to stop a trial once the conclusion is known with certainty is called non-stochastic curtailment; this reduces the sample size on average. Planning to stop a trial when the probability of success, based on the results so far, is either above or below a certain threshold is called stochastic curtailment; this reduces the average sample size even more than non-stochastic curtailment. Stochastic and non-stochastic curtailment can also be used in two-arm binary outcome trials, where a trial's success or failure is determined by the number of responses observed on each arm by the end of the trial (a small numerical sketch of stochastic curtailment follows below). Usage: The adaptive design method developed mainly in the early 21st century. In November 2019, the US Food and Drug Administration provided guidelines for using adaptive designs in clinical trials.
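The curtailment rules just described admit a compact illustration. The sketch below (the thresholds and design parameters are illustrative, not from any cited trial) applies stochastic curtailment to a single-arm, binary-outcome trial that succeeds if at least r of n patients respond: after each observation it computes the probability of still reaching r responses, and stops for futility when that probability falls below a small threshold.

```python
from math import comb

# Stochastic curtailment in a single-arm, binary-outcome trial: the trial
# "succeeds" if at least r of n patients respond. Given `responses` among
# `seen` patients, stop for futility when the probability of still reaching
# r responses (with assumed response rate p for the rest) drops below eps.

def success_prob(responses: int, seen: int, n: int, r: int, p: float) -> float:
    """P(final responses >= r | current data), remaining outcomes ~ Binomial."""
    remaining = n - seen
    need = max(r - responses, 0)
    if need > remaining:
        return 0.0        # success impossible: non-stochastic curtailment applies
    return sum(comb(remaining, k) * p**k * (1 - p)**(remaining - k)
               for k in range(need, remaining + 1))

n, r, p, eps = 40, 12, 0.3, 0.05          # illustrative design parameters
for responses, seen in [(2, 10), (2, 20), (8, 20)]:
    q = success_prob(responses, seen, n, r, p)
    print(f"{responses}/{seen} responses: P(success) = {q:.3f}",
          "-> stop for futility" if q < eps else "-> continue")
```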
Usage: 2020 COVID-19-related trials: In April 2020, the World Health Organization published an "R&D Blueprint (for the) novel Coronavirus" (Blueprint). The Blueprint documented a "large, international, multi-site, individually randomized controlled clinical trial" to allow "the concurrent evaluation of the benefits and risks of each promising candidate vaccine within 3–6 months of it being made available for the trial." The Blueprint listed a Global Target Product Profile (TPP) for COVID‑19, identifying favorable attributes of safe and effective vaccines under two broad categories: "vaccines for the long-term protection of people at higher risk of COVID‑19, such as healthcare workers", and other vaccines to provide rapid-response immunity for new outbreaks. The international TPP team was formed to 1) assess the development of the most promising candidate vaccines; 2) map candidate vaccines and their clinical trials worldwide, publishing a frequently updated "landscape" of vaccines in development; 3) rapidly evaluate and screen the most promising candidate vaccines simultaneously before they are tested in humans; and 4) design and coordinate a multiple-site, international randomized controlled trial (the "Solidarity trial" for vaccines) to enable the simultaneous evaluation of the benefits and risks of different vaccine candidates under clinical trial in countries with high rates of COVID‑19 disease, ensuring fast interpretation and sharing of results around the world. The WHO vaccine coalition will prioritize which vaccines should go into Phase II and III clinical trials, and determine harmonized Phase III protocols for all vaccines achieving the pivotal trial stage. The global "Solidarity" and European "Discovery" trials of hospitalized people with severe COVID‑19 infection apply adaptive design to rapidly alter trial parameters as results from the four experimental therapeutic strategies emerge. The US National Institute of Allergy and Infectious Diseases (NIAID) initiated an adaptive-design, international Phase III trial (called "ACTT") involving up to 800 hospitalized COVID‑19 patients at 100 sites in multiple countries. Usage: Breast cancer: An adaptive trial design enabled two experimental breast cancer drugs to deliver promising results after just six months of testing, far shorter than usual. Researchers assessed the results while the trial was in process and found that cancer had been eradicated in more than half of one group of patients. The trial, known as I-SPY 2, tested 12 experimental drugs. Usage: I-SPY 1: For its predecessor, I-SPY 1, 10 cancer centers and the National Cancer Institute (the NCI SPORE program and the NCI Cooperative groups) collaborated to identify response indicators that would best predict survival for women with high-risk breast cancer. During 2002–2006, the study monitored 237 patients undergoing neoadjuvant therapy before surgery. Iterative MRI and tissue samples monitored the biology of patients' response to chemotherapy given in a neoadjuvant, or presurgical, setting. Evaluating chemotherapy's direct impact on tumor tissue took much less time than monitoring outcomes in thousands of patients over long time periods. The approach helped to standardize the imaging and tumor-sampling processes, and led to miniaturized assays. Key findings included that tumor response was a good predictor of patient survival, and that tumor shrinkage during treatment was a good predictor of long-term outcome.
Importantly, the vast majority of tumors were identified as high risk by molecular signature. However, there was heterogeneity within this group of women, and measuring response within tumor subtypes was more informative than viewing the group as a whole. Within genetic signatures, the level of response to treatment appears to be a reasonable predictor of outcome. Additionally, the trial's shared database has furthered the understanding of drug response and generated new targets and agents for subsequent testing. Usage: I-SPY 2: I-SPY 2 is an adaptive clinical trial of multiple Phase 2 treatment regimens combined with standard chemotherapy. I-SPY 2 linked 19 academic cancer centers, two community centers, the FDA, the NCI, pharmaceutical and biotech companies, patient advocates and philanthropic partners. The trial is sponsored by the Biomarker Consortium of the Foundation for the NIH (FNIH), and is co-managed by the FNIH and QuantumLeap Healthcare Collaborative. I-SPY 2 was designed to explore the hypothesis that different combinations of cancer therapies have varying degrees of success for different patients. Conventional clinical trials that evaluate post-surgical tumor response require a separate trial, with long intervals and large populations, to test each combination. Instead, I-SPY 2 is organized as a continuous process. It efficiently evaluates multiple therapy regimens by relying on the predictors developed in I-SPY 1, which help quickly determine whether patients with a particular genetic signature will respond to a given treatment regimen. The trial is adaptive in that the investigators learn as they go and do not continue treatments that appear to be ineffective. All patients are categorized based on tissue and imaging markers collected early and iteratively (a patient's markers may change over time) throughout the trial, so that early insights can guide treatments for later patients. Treatments that show positive effects for a patient group can be ushered into confirmatory clinical trials, while those that do not can be rapidly sidelined. Importantly, confirmatory trials can serve as a pathway for FDA Accelerated Approval. I-SPY 2 can simultaneously evaluate candidates developed by multiple companies, escalating or eliminating drugs based on immediate results. Using a single standard arm as the comparison for all candidates in the trial saves significant costs compared with individual Phase 3 trials, and all data are shared across the industry. As of January 2016, I-SPY 2 was comparing 11 new treatments against standard therapy, with estimated completion in September 2017; by mid-2016 several treatments had been selected for later-stage trials. Usage: Alzheimer's: Researchers plan to use an adaptive trial design to help speed the development of Alzheimer's disease treatments, with a budget of 53 million euros. The first trial under the initiative was expected to begin in 2015 and to involve about a dozen companies. Bayesian designs: The adjustable nature of adaptive trials naturally suggests Bayesian statistical analysis, since Bayesian statistics inherently addresses the updating of information, such as that derived from the interim analyses of an adaptive trial. The problem of adaptive clinical trial design is more or less exactly the bandit problem as studied in the field of reinforcement learning.
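Because the design problem maps onto the multi-armed bandit, response-adaptive randomization is often implemented with Bayesian updating. Below is a minimal Thompson-sampling sketch (the arm names and response rates are invented for illustration): each arm keeps a Beta posterior on its response rate, and each new patient is allocated to the arm with the highest posterior draw, so allocation drifts toward better-performing arms.

```python
import random

# Thompson sampling for response-adaptive randomization: each arm keeps a
# Beta(successes + 1, failures + 1) posterior on its response rate; each new
# patient goes to the arm whose posterior draw is highest, so better-performing
# arms are gradually favoured (the bandit view mentioned above).

true_rates = {"control": 0.30, "treatment A": 0.45, "treatment B": 0.25}  # unknown in practice
posterior = {arm: [1, 1] for arm in true_rates}       # [alpha, beta] of the Beta prior

random.seed(0)
allocations = {arm: 0 for arm in true_rates}
for _ in range(300):                                  # 300 simulated patients
    draws = {arm: random.betavariate(a, b) for arm, (a, b) in posterior.items()}
    arm = max(draws, key=draws.get)                   # allocate to the best draw
    allocations[arm] += 1
    response = random.random() < true_rates[arm]      # observe the binary outcome
    posterior[arm][0 if response else 1] += 1         # Bayesian posterior update

print(allocations)   # most simulated patients end up on the best-performing arm
```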
Bayesian designs: According to FDA guidelines, an adaptive Bayesian clinical trial can involve: interim looks to stop or to adjust patient accrual; interim looks to assess stopping the trial early, either for success, futility or harm; reversing the hypothesis of non-inferiority to superiority, or vice versa; dropping arms or doses, or adjusting doses; and modification of the randomization rate to increase the probability that a patient is allocated to the most appropriate treatment (or arm, in the multi-armed bandit model). The Bayesian framework Continuous Individualized Risk Index, which is based on dynamic measurements from cancer patients, can be effectively used for adaptive trial designs. Platform trials rely heavily on Bayesian designs. Added complexity: The logistics of managing traditional, non-adaptive clinical trials may already be complex; in adaptive design clinical trials, adapting the design as results arrive adds to the complexity of design, monitoring, drug supply, data capture and randomization. Furthermore, it should be stated in the trial's protocol exactly what kind of adaptation will be permitted. Publishing the trial protocol in advance increases the validity of the final results, as it makes clear that any adaptation that took place during the trial was planned rather than ad hoc. According to PCAST, "One approach is to focus studies on specific subsets of patients most likely to benefit, identified based on validated biomarkers. In some cases, using appropriate biomarkers can make it possible to dramatically decrease the sample size required to achieve statistical significance—for example, from 1500 to 50 patients." Adaptive designs also have added statistical complexity compared to traditional clinical trial designs. For example, any multiple testing, whether from looking at multiple treatment arms or from looking at a single treatment arm multiple times, must be accounted for. Another example is statistical bias, which can be more likely when using adaptive designs, and again must be accounted for. Added complexity: While an adaptive design may be an improvement over a non-adaptive design in some respects (for example, expected sample size), it is not always the case that an adaptive design is a better choice overall: in some cases, the added complexity of the adaptive design may not justify its benefits. An example of this is when the trial is based on a measurement that takes a long time to observe, as this would mean holding an interim analysis when many participants have started treatment but cannot yet contribute to the interim results. Risks: Shorter trials may not reveal longer-term risks, such as a cancer's return. Resources (external links): "What are adaptive clinical trials?" (video). youtube.com. Medical Research Council Biostatistics Unit. 17 November 2022. Burnett, Thomas; Mozgunov, Pavel; Pallmann, Philip; Villar, Sofia S.; Wheeler, Graham M.; Jaki, Thomas (2020). "Adding flexibility to clinical trial designs: An example-based guide to the practical use of adaptive designs". BMC Medicine. 18 (1): 352. doi:10.1186/s12916-020-01808-2. PMC 7677786. PMID 33208155. Jennison, Christopher; Turnbull, Bruce (1999). Group Sequential Methods with Applications to Clinical Trials. ISBN 0849303168. Wason, James M. S.; Brocklehurst, Peter; Yap, Christina (2019). "When to keep it simple – adaptive designs are not always useful".
BMC Medicine. 17 (1): 152. doi:10.1186/s12916-019-1391-9. PMC 6676635. PMID 31370839. Wheeler, Graham M.; Mander, Adrian P.; Bedding, Alun; Brock, Kristian; Cornelius, Victoria; Grieve, Andrew P.; Jaki, Thomas; Love, Sharon B.; Odondi, Lang'o; Weir, Christopher J.; Yap, Christina; Bond, Simon J. (2019). "How to design a dose-finding study using the continual reassessment method". BMC Medical Research Methodology. 19 (1): 18. doi:10.1186/s12874-018-0638-z. PMC 6339349. PMID 30658575. Grayling, Michael John; Wheeler, Graham Mark (2020). "A review of available software for adaptive clinical trial design". Clinical Trials. 17 (3): 323–331. doi:10.1177/1740774520906398. PMC 7736777. PMID 32063024. S2CID 189762427. Sources: Kurtz, Esfahani, Scherer (July 2019). "Dynamic Risk Profiling Using Serial Tumor Biomarkers for Personalized Outcome Prediction". Cell. 178 (3): 699–713.e19. doi:10.1016/j.cell.2019.06.011. PMC 7380118. PMID 31280963. Sources: President's Council of Advisors on Science and Technology (September 2012). "Report To The President on Propelling Innovation in Drug Discovery, Development and Evaluation" (PDF). Executive Office of the President. Archived (PDF) from the original on 21 January 2017. Retrieved 4 January 2014. Sources: Brennan, Zachary (5 June 2013). "CROs Slowly Shifting to Adaptive Clinical Trial Designs". Outsourcing-pharma.com. Retrieved 5 January 2014. Spiegelhalter, David (April 2010). "Bayesian methods in clinical trials: Has there been any progress?" (PDF). Archived from the original (PDF) on 6 January 2014. Carlin, Bradley P. (25 March 2009). "Bayesian Adaptive Methods for Clinical Trial Design and Analysis" (PDF).
**Cosmic Cube** Cosmic Cube: The Cosmic Cube is a fictional object appearing in American comic books published by Marvel Comics. There are multiple Cubes in the Marvel Universe, all of which are depicted as containment devices that can empower whoever wields them. Although the first version, introduced in Tales of Suspense #79 (July 1966) and created by Stan Lee and Jack Kirby, originated on Earth as a weapon built by Advanced Idea Mechanics, most are of alien origins. Cosmic Cube: The Cube (renamed the Tesseract) plays a central role in several films of the Marvel Cinematic Universe, in which it is ultimately depicted as containing the Space Stone, one of the six Infinity Stones. Publication history: The first Cosmic Cube appeared in a story in Tales of Suspense #79–81 (July–Sept. 1966) and was created by Stan Lee and Jack Kirby. It was established as a device created by A.I.M. and capable of transforming any wish into reality, irrespective of the consequences. The Cube was also a plot device in a story that introduced the character of the Super-Adaptoid in Tales of Suspense #82–84 (Oct.–Dec. 1966). The Cube was also featured in a one-off story in Avengers #40 (1967), in which it is found and briefly wielded by Namor. Publication history: The Cube reappeared in Captain America #115–120 (July–Dec. 1969), and featured in an epic cosmic storyline that starred arch-villain Thanos in Daredevil #107 (Jan. 1974) and Captain Marvel #25–33 (March 1972–July 1974, bi-monthly). Retrieved after Thanos' defeat, this original Cube featured in several Project Pegasus stories in Marvel Two-in-One #42–43 (Aug.–Sept. 1978), Marvel Two-in-One #57–58 (Dec. 1979–Jan. 1980), and Marvel Team-Up Annual #5 (1982). Publication history: The creation of a second Cube was shown in Super-Villain Team-Up #16–17 (May 1979 and June 1980), but this Cube was initially powerless and did not gain any reality-altering ability until years after its creation. Publication history: A major element was added to the Cube's origin—that each is in fact an evolving sentient being—in Captain America Annual #7 (1983). The sentient Cube returned in Avengers #289–290 (March–April 1988) to end the threat of the Super-Adaptoid (itself originally empowered by a "shard" of a Cosmic Cube), and then in Fantastic Four #319 (Oct. 1988). This story revealed that the villain the Molecule Man had ties to the Cube, and introduced a new character. Publication history: The miniseries The Infinity War #1–6 (June–Nov. 1992) and Infinity Crusade #1–6 (June–Nov. 1993) established that the items actually exist in a variety of geometric forms called Cosmic Containment Units. A third Cosmic Cube was created during the "Taking A.I.M." storyline that ran through Avengers #386–388 (May–July 1995) and Captain America (vol. 2) #440–441 (June–July 1995). This unstable Cube has not been seen since it was sealed in a containment chamber at the conclusion of the storyline. Publication history: The previously powerless second Cosmic Cube finally gained the ability to alter reality in Captain America (vol. 2) #445–448 (November 1995–February 1996), but it was unstable and exploded at the end of that storyline. The second Cube's power reappeared in a storyline in Captain America (vol. 3) #14–19 (Feb.–July 1999), during which its power was internalized within the Red Skull, then stolen by Korvac and taken to an alternate 31st-century Earth before being returned to the Red Skull on the present-day Earth, after which it was seemingly destroyed again by exposure to anti-matter energy.
Publication history: Doctor Doom acquires the Cosmic Cube in the Fantastic Four miniseries The World's Greatest Comics Magazine (2001). Doom uses a time machine to retrieve the Cube from the ocean floor, into which it had dropped during a battle between the Red Skull and Captain America. A Cube—together with 11 other items from Marvel and DC Comics continuity—was used once again as a plot device in the intercompany crossover series JLA/Avengers #1–4 (Sept. 2003–April 2004, bi-monthly). Publication history: The Cube also shows up in Captain America (vol. 5). Aleksander Lukin wants the Cube and is willing to trade the Red Skull for it. The Red Skull claims he does not have it, but has spies out looking for it. Five years later, the Skull is in New York City and in possession of it. General Lukin sends the Winter Soldier to retrieve the Cube from the Skull, and to kill him. The Skull transfers his mind into the body of Lukin through the powers of the Cube. A fragment of a Cube empowered a new character featured in a single storyline in Marvel Team-Up (vol. 3) #20–24 (July–Nov. 2006), and a Cube also appeared in Guardians of the Galaxy (vol. 2) #19 (Dec. 2009). The item added a new aspect to the abilities of the villain the Absorbing Man in The Mighty Avengers #32–33 (Feb.–March 2010). A new Cosmic Cube was revealed in Avengers Assemble #5 (July 2012); it turned out to be a working facsimile with more limited powers than the "real thing". Fictional item history: The Cosmic Cubes are actually containment devices created by various civilizations throughout the Marvel Universe at various times, examples including the Skrulls (creators of the Cube that would eventually evolve into the Shaper of Worlds) and various other, unnamed civilizations (whose Cubes were gathered or stolen by unknown means by the Magus in the Infinity War story arc and the Goddess in the Infinity Crusade story arc). These matrices—which may or may not actually be shaped like a cube—are suffused with reality-warping energies of unknown composition that come from the realm of the Beyonders. Fictional item history: Unknown to almost everyone in the Marvel Universe, including its creators, the nature of the mysterious energies is such that, after a sufficient but undefined period of time, the matrix will become self-aware and evolve into an independent, free-willed being still possessed of the original Cube's tremendous powers; the new being's overall personality is psychically imprinted with the beliefs, desires, and personalities of those who wielded it as a Cube (for example, the Shaper of Worlds, wielded for a long time by an insane and warlike Skrull Emperor, immediately destroyed a large portion of the galaxy in which it was located once it became sentient). Fictional item history: On Earth, the Cosmic Cube containment matrix was developed and created by an evil society of paramilitary scientists known as A.I.M. to further their ultimate goal of world conquest. The object is revealed to be so powerful that it drove its co-creator MODOK to insanity. Master villain and former Nazi the Red Skull obtains the device after taking control of the mind of the A.I.M. agent holding it, using a handheld device. Although apparently now all-powerful, the Skull becomes overconfident and is tricked and defeated by the hero Captain America, who pretends to surrender and asks to be the Skull's slave, then knocks the Cube away, causing it to fall into the ocean.
It was found by Prince Namor after Hercules accidentally revealed it to him, but while battling the Avengers he lost contact with it, and it fell into the earth. The Mole Man found it, but threw it away, not realizing its true value. Later, a shard of the Cube is also used by A.I.M. to power the android known as the Super-Adaptoid, who is sent in an unsuccessful attempt to kill Captain America. The Red Skull eventually retrieves the Cube and toys with Captain America, but the Skull is defeated when A.I.M. uses an object called the "Catholite Block" to dissolve the Cube. The Cube was eventually found (apparently having reformed) by Thanos who, like the Red Skull, wishes to control the universe (this also attracts the amorous attention of the cosmic entity Death). Although opposed by superhero team the Avengers and the alien Kree warrior Captain Mar-Vell, Thanos becomes supreme when he wills the Cube to make him a part of—and therefore in control of—everything. Thanos discards the Cube, believing it to be drained of power, and is then stripped of the power by the dying hero Mar-Vell, who shatters the Cube and restores the universe. Brought to the research installation Project: Pegasus, the Cube was stolen by villain and cult leader Victorius, and is used to create the being Jude the Entropic Man. Both are neutralized when in simultaneous contact with the Cube (and the swamp monster the Man-Thing). The Cube is returned to Pegasus by Captain America and the Fantastic Four member the Thing, where it eventually transforms the alien Wundarr into the entity the Aquarian. A second Cube was created on the Island of Exiles by a team of scientists (including Arnim Zola) working for the Red Skull and the Hate-Monger. Planning to transfer his consciousness into the completed Cube, the Hate-Monger secretly arranged for a distraction in the form of a strike team from the spy organization S.H.I.E.L.D. attacking the island in an attempt to retrieve the Cube. However, the Red Skull was aware of his plans and had kept secret the fact that the Cube project had succeeded only in creating a perfect prison, but had failed to capture the mysterious, omnidimensional x-element which gives the Cubes their reality-warping power. As a result, the Hate-Monger's mind was left trapped in a powerless Cube in the Red Skull's possession. This Cube was one of the trophies that the Red Skull kept in his home, Skull House. During a battle to stop A.I.M. from using the Cube once again, Captain America witnesses the Cube evolve into the entity called Kubik, which becomes a student of the Shaper of Worlds. Kubik returns to Earth when attracted by an anomaly possessing a fraction of its power, revealed to be the Super-Adaptoid. The Super-Adaptoid uses its abilities to "copy" Kubik's abilities and banishes the entity, intent on creating a race in its own image. The Adaptoid, however, is tricked into shutting down by Captain America. Kubik returns and then removes the sliver of the original Cosmic Cube from the Adaptoid that gave the robot its abilities. Kubik also battles the renegade entity the Beyonder, and reveals to the entity and former Fantastic Four villain the Molecule Man that they are in fact both parts of an incomplete Cube (officially retconning the Beyonder's powers as shown in Secret Wars in the process), and convinces them to merge their powers.
This forms a new being called Kosmos, who becomes the pupil of Kubik. The character the Magus—an evil version of anti-hero Adam Warlock—acquires five Cosmic Cubes from neighboring universes, with each appearing in a different geometric form. The Magus uses mechanical aids to manipulate the Cubes, as their combined presence would quickly cause permanent brain damage. The character uses the Cubes to create evil doppelgangers of almost all of the Marvel heroes and then alters the universe, but is tricked and defeated when acquiring the Infinity Gauntlet, as the Reality Gem is revealed to be a fake, thus creating a gap in his powers. Although the Magus is defeated, Warlock's "good side"—the female Goddess—also appears and wishes to purge the universe of all evil. To do this, she collects 30 containment units, with each storing the power of a Cosmic Cube, and merges them into a "Cosmic Egg". Despite the fact that the Egg can fulfill the Goddess' wishes—although, unlike the Infinity Gauntlet, it has no power over the soul—the character is defeated by Warlock and Thanos. During this time, the two questioned Mephisto about the origins of the Cubes in exchange for giving a Cube to Mephisto, but they were able to cheat the deal by giving Mephisto a drained Cube, as he never specified that the Cube had to still be functional. A third Cosmic Cube was created by an Adaptoid-controlled faction of A.I.M. based on the island of Boca Caliente. This Cube was unstable, and its reality-warping ability began to leak out onto the surrounding island, creating Cube constructs of anyone who was in the thoughts of nearby people. An Avengers team attempted to stop the Cube, and the dying Captain America was willing to sacrifice himself to do so. In the end, it was an Adaptoid who had been accompanying Captain America and had been impressed by his heroic nature who ended the threat by willingly transforming itself into a non-sentient containment chamber for the Cube's energies. The second Cube was eventually recovered by the KubeKult, fanatical followers of the Hate-Monger, who spied upon the A.I.M. Adaptoids and discovered how to power it. Fearing how the Hate-Monger would punish him for his betrayal, the Red Skull allied himself with then-rogue S.H.I.E.L.D. agent Sharon Carter to kidnap the dying Captain America and restore him to health. Reluctantly working together, the trio invaded a KubeKult base to steal the erratically functioning Cube, but the Red Skull seized it and willed Captain America to be drawn inside it, into an artificial reality set during World War II in which Captain America and Bucky were on a mission to kill Hitler. The Red Skull believed that he would be able to wield the Cube's power only if Captain America killed Hitler's consciousness within the Cube. However, the Bucky within the Cube (actually a projection of Captain America's own mind) revealed what was really going on, and Captain America was able to will himself out of the Cube. Appearing before the Skull, Captain America threw his shield in such a way that it first severed the Skull's arm, causing him to drop the Cube, and then struck and shattered the Cube itself, causing an explosion that seemingly destroyed both the Cube and the Red Skull. Months later, the Red Skull reappears, now with the Cube's power internalized within his body.
He was approached by the time-traveler Kang the Conqueror (actually the disguised cosmic entity Korvac), who told him that the reason he had failed to completely control the Cube's power in the past was that his knowledge of the universe was incomplete. At the suggestion of "Kang", the Skull willed the starship of Galactus to travel to Earth so he could drain it of the needed information. At the same time, Korvac (now disguised as Uatu the Watcher) appeared to Captain America and Sharon Carter and managed to convince them that the only way to prevent the Skull from becoming unstoppable was for Captain America to kill him during a brief moment of vulnerability. Captain America did so, but as the Skull died, his body released the Cube energy, which flowed into "Uatu", who revealed his true identity and used his increased power to return to his alternate 31st-century Earth to conquer it. However, Captain America followed him and fought him repeatedly, with Korvac rebooting the 31st-century reality each time Captain America disturbed his perfectly ordered machine world. Eventually, Captain America managed to convince Korvac that the reason he was able to achieve anything at all against Korvac was that too much humanity had been left within Korvac when he acquired the Cube's power. Accordingly, Korvac transported himself and Captain America back to just before the Skull died, but this time Captain America did not strike the fatal blow. Vulnerable to the Skull's power, Korvac teleported himself, Captain America, and Carter aboard the starship, but the Skull soon found him and scattered Korvac across six dimensions. Soon afterwards, the Skull was tricked by Captain America into entering an anti-matter energy beam within the starship's engine room, which separated the Cube energy from him. Before the energy dissipated, Captain America and the Skull were each able to use its wish-granting ability to save themselves and Carter from death. A Cosmic Cube was one of the 12 items of power sought by the superhero teams the Avengers and the Justice League of America when they competed against each other in a game organized by Krona and the Grandmaster. It was the final item of the quest, found in the Savage Land, where both teams converged for a full-scale fight, during which Green Lantern Kyle Rayner was able to use the Cube as a substitute power source for his power ring when his usual battery had been stolen and the ring was running out of power. Quicksilver was finally able to gain the Cube, bringing the game to a stalemate, but to make sure Krona lost, Captain America helped Batman take it, because they were the only ones, aside from the Atom, who knew the true stakes of the game: Krona had forced the Grandmaster to take the Justice League as his representatives, so the League had to win in order to prevent Krona from destroying the Marvel Universe. Batman briefly attempted to use the Cube to end the game — having been filled in on its capabilities by Captain America — before the Grandmaster took it from him to tally up the score. Enraged at his loss, Krona attacked the Grandmaster, who then used the Cube along with all the other items of power to temporarily fuse the Marvel and DC Universes and imprison Krona in the intersection, hoping he would be unable to destroy a universe if his own existence were linked to it. The Red Skull finally creates a new Cube using pieces of the previous Cubes, and Aleksander Lukin wants it just as badly.
The Red Skull is assassinated by the one person that Lukin was willing to trade for the Cube—the Winter Soldier. As he is assassinated, the Skull uses the Cube's power to transfer his mind into the body of Lukin for some time. A youth called Curtis Doyle becomes the hero Freedom Ring when he finds a fragment of the original Cube in the form of a ring, which allows its wearer to alter reality within a very limited area of 15 feet. The character dies in battle saving Captain America, Spider-Man, Spider-Woman, and Wolverine from the villain Iron Maniac. The ring is later found by a friend of Doyle, a Skrull who had settled on Earth and adopted the name the "Crusader". The powerful entity D'Spayre attempted to enhance his power by using a Cosmic Cube to draw on the grief of the general public in the aftermath of Captain America's assassination, only for his use of the Cube to have an apparently unintended side effect when it granted the "wish" of those who wanted Captain America back by drawing the Invaders into the present. He was defeated in a confrontation with the New Avengers when Echo proved immune to his powers due to her deafness, allowing her to take the Cube from him. The Cube is then used by Paul Anslem, a World War II soldier who had traveled with the Invaders against his will. Anslem's intention to save his friends, who had died during an assault on a Nazi stronghold, allows the Red Skull of the World War II era to gain enough power to take over Earth. Anslem again regains control of the Cube with super-powered assistance and restores the timeline to what it should have been. A Cube is also given to Guardians of the Galaxy member Star-Lord by the time-traveling villain Kang the Conqueror to use against Adam Warlock's evil alter ego, the Magus. However, the Magus altered perception to make it seem like the Cube's power was used up. Star-Lord then used the Cube's last bit of energy for real to subdue the reborn Thanos, rendering it a "cosmic paperweight". The Absorbing Man becomes capable of assimilating the abilities of a fraction of a Cube. He is stopped by criminal mastermind Norman Osborn, who uses a magical sword (provided by the Asgardian god Loki) to neutralize the Absorbing Man's abilities. A new Cosmic Cube is later revealed to have been created by the U.S. government. It is stolen by members of the Zodiac at the behest of Thanos. Thanos' plot is later foiled by the combined might of the Avengers and the Guardians of the Galaxy. During the Avengers: Standoff! storyline, Maria Hill and the rest of S.H.I.E.L.D. used pieces of a Cosmic Cube to create Kobik, a near-omnipotent child, originally conceived by Longshot and the Cosmic Cube "Miss Grapples". With Kobik's help, S.H.I.E.L.D. began brainwashing supervillains into becoming mild-mannered civilians, who were then imprisoned in a gated community called Pleasant Hill. When the villains rebel, Kobik decides to bring Steve Rogers, then reduced to an old man due to the breakdown of his Super-Soldier serum, back to his physical peak, but due to the Red Skull's influence over the Cube from which Kobik was made, she unknowingly replaces Rogers with a covert HYDRA loyalist version of him, believing that to be the "right" version of him.
This results in the real Rogers' consciousness becoming trapped within the Cube until he finds Kobik and encourages her to set things right by showing her the atrocities his doppelgänger had committed in the name of HYDRA during the Secret Empire storyline; she then brings him back to the real world, with some help from the heroes outside. Mephisto was later revealed to have apparently reenergized, with a devilish red hue, the Cosmic Cube that Warlock and Thanos had given him years earlier, naming it the Pandemonium Cube, otherwise referred to as the Hellahedron. Mephisto then gave the Cube to Phil Coulson, who used it to remake the Marvel Universe into the Heroes Reborn Universe, where Coulson became President of the United States, the Squadron Supreme replaced the Avengers, and Mephisto was worshipped like a god. The Cube was not perfect, however: somehow, Blade the Vampire Hunter of Earth-616 was unaffected by the reality warp and began gathering the Avengers to change reality back to its previous state. As their perfect universe began to collapse on itself and the Avengers slowly started to see the cracks in the fraudulent reality, Coulson decided to use the Pandemonium Cube again as a last-ditch effort to save everything he created, only to witness his version of the world come to an end at the hands of Earth's Mightiest Heroes once the Cube was apparently destroyed. This defeat did not seem to sting Mephisto at all; in fact, once reality had been set right again, Mephisto, holding the Pandemonium Cube with Phil Coulson trapped inside, revealed that while his minion had been defeated, the whole reality warp had only been a means of bringing together 615 different Mephistos from across the multiverse and demonstrating how much reality could be changed by just one Mephisto. Other versions: "Heroes Reborn" In the "Heroes Reborn" miniseries, Phil Coulson and Mephisto used the Pandemonium Cube, or Hellahedron, to change reality so that the Squadron Supreme of America are Earth's mightiest heroes instead of the Avengers. After the latter group slowly reform and fight to change reality back, however, Coulson attempts to use the Pandemonium Cube to defeat them. Despite his best efforts, Captain America steals it from Coulson and gives it to Echo and Star Brand to change reality back. Following this, Mephisto traps Coulson's soul in the Pandemonium Cube. Other versions: Ultimate Marvel In the Ultimate Marvel imprint's alternate-universe title Ultimate Fantastic Four, Mister Fantastic builds a "cuboid volitional lattice" courtesy of a deliberate, subconscious suggestion from the Ultimate version of the Titan Thanos. Another version of the Cube exists as a creation of A.I.M. under the employment of the Red Skull, built from blueprints stolen from the Fantastic Four's recently abandoned Baxter Building. A version of the Cosmic Cube is seen in Project Pegasus alongside the Watcher and the Infinity Gauntlet. In other media: Television The Cosmic Cube appears in The Avengers: Earth's Mightiest Heroes. In the episode "Everything is Wonderful", A.I.M. creates it for HYDRA, though the former's leader MODOK secretly intends to swindle the latter. Upon discovering the cube's potential for their own plans, however, A.I.M. returns HYDRA's money and claims the project was a failure so they can use the cube for themselves. However, HYDRA leader Baron Strucker sees through MODOK's deception, and a battle ensues between the two groups for possession of the cube in the subsequent episode "Hail Hydra".
As a result, the Avengers intervene, defeat both groups, and claim the cube. Amidst the battle, Captain America holds it and unconsciously uses its power to revive his fallen comrade Bucky Barnes at the moment of his death. In other media: The Cosmic Cube, referred to as the Tesseract, appears in Avengers Assemble. In the episode "By the Numbers", the Avengers and the Cabal race to claim the Tesseract, with the latter succeeding in doing so. In "Exodus", the Red Skull builds a machine powered by the Tesseract to send the Cabal through various portals and conquer other worlds, but Iron Man foils the plot and turns the Cabal against him. Despite this, the Red Skull uses the Tesseract's power to become the Cosmic Skull and seek vengeance against the Avengers in "The Final Showdown". While he is defeated by the heroes and the Cabal, he escapes and presents the Tesseract to Thanos. In other media: Marvel Cinematic Universe An adapted version of the Cosmic Cube, referred to as the Tesseract, appears in media set in the Marvel Cinematic Universe (MCU). Sources outside of the films reveal that it was originally safeguarded by the Asgardians before it ended up on Earth. Introduced in the mid-credits scene of the live-action film Thor (2011), the Tesseract is shown to be in the custody of Nick Fury and S.H.I.E.L.D. In the live-action film Captain America: The First Avenger (2011), the Tesseract is found by Hydra during World War II and used to create advanced weaponry. At the end of the war, it falls into the Arctic Ocean after transporting the Red Skull to space when he grabs it. It is later recovered by Howard Stark. In the live-action film The Avengers (2012), Loki steals the Tesseract from S.H.I.E.L.D. and uses it to create a portal to allow an invading army of Chitauri to attack the Earth, but they are defeated by the Avengers. After the battle, Thor takes the Tesseract back to Asgard. While the Tesseract does not appear in the live-action film Thor: The Dark World (2013), it is stated in this and the live-action film Avengers: Age of Ultron (2015) that it contains the Space Stone, one of six Infinity Stones. The Tesseract makes a brief appearance in the live-action film Thor: Ragnarok (2017), in which Loki steals it while helping Thor evacuate Asgard's population from Hela's wrath. In the live-action film Avengers: Infinity War (2018), Thanos attacks the Asgardians' escape ship and nearly kills Thor, forcing Loki to give the Tesseract to Thanos to save his brother's life. Thanos crushes the cube to free the Space Stone and place it in his Infinity Gauntlet before eventually initiating the Blip once he finds the remaining five Stones. In the live-action film Captain Marvel (2019), which takes place in the 1990s, Project Pegasus scientist Dr. Wendy Lawson attempts to use the Tesseract to build a light-speed engine. During a test run, however, the engine explodes, granting Carol Danvers cosmic powers. In other media: As of the live-action film Avengers: Endgame (2019), Thanos has destroyed the Infinity Stones to prevent the Avengers from undoing the Blip. When the heroes discover time travel five years later, they use it to retrieve past versions of the Stones and build their own Infinity Gauntlet. Tony Stark and Scott Lang attempt to collect the Tesseract in the aftermath of the Avengers' battle with Loki in 2012, but inadvertently lose it to the 2012 Loki, who uses it to open a portal and escape from the Avengers' custody.
In response, Stark and Steve Rogers travel to S.H.I.E.L.D. headquarters in 1970 and successfully obtain an earlier version of the Tesseract. Once the Avengers undo the Blip and defeat an alternate-timeline version of Thanos who followed them to their time, Rogers returns the time-displaced Stones to their proper places in the timeline. In other media: In the Disney+ live-action series Loki episode "Glorious Purpose", the alternate 2012 Loki who escaped with his version of the Tesseract is captured by the Time Variance Authority, with the cube being depowered as the Infinity Stones do not work outside the multiverse. In the Disney+ animated series What If...? episode "What If... Captain Carter Were the First Avenger?", the Tesseract appears in an alternate timeline wherein Peggy Carter receives the Super Soldier Serum instead of Rogers. In other media: Video games The MCU's Tesseract appears in Lego Marvel Super Heroes. Originally kept in Odin's vault on Asgard, it is stolen by Loki, who is defeated by Captain America, Thor, Wolverine, and the Human Torch. While the others are arguing over what they should do with the Tesseract, Wolverine grabs it and brings it to the X-Mansion in the hope that Professor X can use it to locate Magneto. However, it is stolen by Magneto during the Brotherhood of Mutants' attack on the mansion and given to Doctor Doom, who uses it to power a ray gun to defeat Galactus before the latter destroys the Earth, so that Doom can conquer it. Following Doom's defeat, Loki reveals that the ray gun is actually a mind-control device, which he uses on Galactus in an attempt to destroy both Earth and Asgard. However, he is foiled by an alliance of heroes and villains who send Loki and Galactus through a wormhole. In the process, Thor destroys Loki's mind-control device, and the Tesseract is claimed by S.H.I.E.L.D. for safeguarding. In other media: The Cosmic Cube appears in Marvel's Avengers. This version is a containment device, codenamed "Project Omega", built by Monica Rappaccini of A.I.M. to prevent a future Kree invasion of Earth. When she used it, however, the Cube froze her, Nick Fury, and nearby S.H.I.E.L.D. and A.I.M. agents and Kree Sentries in time, while the rest of the world was eventually destroyed by a nuclear war and collapsed into chaos. The first two DLC expansion packs for the game, "Operation: Kate Bishop - Taking A.I.M." and "Operation: Hawkeye - Future Imperfect", focus on the Avengers trying to avert this apocalyptic future after learning about what happened from a time-travelling Hawkeye. In other media: Miscellaneous A flawed Cosmic Cube appears in Steven A. Roman's Chaos Engine novel series, with the object passing between Doctor Doom, Magneto, and the Red Skull. As each of them uses it to create his own unique version of a perfect world, a team of X-Men who were operating outside of their reality when the initial change occurred work to stop them. They eventually realize that the cube "superimposes" another alternate reality over the X-Men's world of origin, temporarily merging them with their counterparts while draining the wish-maker's life energy. The crisis concludes when one of the Red Skull's lieutenants, who joined the Skull's group unaware of the scale of his evil, sacrifices himself to use the Cube to restore everything to normal. In other media: The Cosmic Cube appears in Marvel Universe Live!. This version is said to have the ability to corrupt any who attempt to use it. As such, Thor attempts to destroy it with Mjolnir.
However, Loki uses a fragment of the cube to duplicate it for his own use, forcing the Avengers to retrieve the other fragments from Hydra, A.I.M., and the Sinister Six to stop him.
**Ossicular replacement prosthesis** Ossicular replacement prosthesis: In medicine, an ossicular replacement prosthesis is a device implanted for the functional reconstruction of segments of the ossicles, facilitating the conduction of sound waves from the tympanic membrane to the inner ear. There are two common types of ossicular replacement prostheses: the total ossicular replacement prosthesis (TORP) and the partial ossicular replacement prosthesis (PORP). A TORP replaces the entire ossicular chain, while a PORP replaces only the incus and malleus but not the stapes. Indications for use of an ossicular replacement prosthesis include:
- Chronic middle ear disease
- Otosclerosis
- Congenital fixation of the stapes
- Secondary surgical intervention to correct a significant and persistent conductive hearing loss from prior otologic surgery
- Surgically correctable injury to the middle ear from trauma
**Cherry cola** Cherry cola: Cherry cola is a soft drink made by mixing cherry-flavored syrup into cola. It is a popular mixture that has been available at old-fashioned soda fountains for years. Several major soda manufacturers market their own version of the beverage, including Coca-Cola Cherry, Pepsi Wild Cherry and Cherry RC. There are also alcoholic drinks called cherry cola, typically made by mixing Coca-Cola with vodka and grenadine.
**Logic of argumentation** Logic of argumentation: The logic of argumentation (LA) is a formalised description of the ways in which humans reason and argue about propositions. It is used, for example, in computer artificial intelligence systems in the fields of medical diagnosis and prognosis, and research chemistry. Origin of term: Krause et al. appear to have been the first authors to use the term "logic of argumentation" in a paper about their model for using argumentation for qualitative reasoning under uncertainty, although the approach had been used earlier in prototype computer applications to support medical diagnosis. Their ideas have been developed further, and used in applications for predicting chemical toxicity and xenobiotic metabolism, for example. Implementations: In LA, arguments for and arguments against a proposition are distinct; an argument for a proposition contributes nothing to the case against it, and vice versa. Among other things, this means that LA can support contradiction – proof that an argument is true and that it is false. Arguments supporting the case for and arguments supporting the case against are aggregated separately, leading to a single assessment of confidence in the case for and a single assessment of confidence in the case against. Then the two are resolved to provide a single measure of confidence in the proposition. Implementations: In most implementations of LA the default aggregated value is equal to the strongest value in the set of arguments for or against the proposition. Having more than one argument in agreement does not automatically increase confidence because it cannot be assumed that the arguments are independent when reasoning under uncertainty. If there is evidence that arguments are independent and there is a case for increased confidence when they agree, this is sometimes expressed in additional rules of the form "If A and B then ...". Implementations: The process of aggregation and resolution can be represented as follows:

T = Resolve[Max{For(C_{a,x}, C_{b,y}, ...)}, Max{Against(C_{a,x}, C_{b,y}, ...)}]

where T is the overall assessment of confidence in a proposition; Resolve[ ] is a function which returns the single confidence value which is the resolution of any pair of values; For and Against are the sets of arguments supporting and opposing the proposition, respectively; C_{a,x}, C_{b,y}, ... are the confidence values for those arguments; and Max{...} is a function which returns the strongest member of the set upon which it operates (For or Against). Implementations: Arguments may assign confidence to propositions that themselves influence confidence in other arguments, and one rule may be undercut by another. A computer implementation can recognize these interrelationships to construct reasoning trees automatically.
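As an illustration of the aggregation and resolution just described, the following is a minimal Python sketch. The names (Argument, aggregate, assess), the numeric [0, 1] confidence scale, and the subtractive Resolve step are assumptions made for the example; published LA implementations differ in how they represent confidence and resolve the two cases.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    supports: bool     # True: argument for the proposition; False: against
    confidence: float  # strength of this single argument, here in [0, 1]

def aggregate(confidences):
    """Default LA aggregation: a case is only as strong as its strongest
    argument, since arguments cannot be assumed to be independent."""
    return max(confidences, default=0.0)

def assess(arguments):
    """Aggregate the for- and against-cases separately, then resolve them:
    T = Resolve[Max{For}, Max{Against}]."""
    c_for = aggregate([a.confidence for a in arguments if a.supports])
    c_against = aggregate([a.confidence for a in arguments if not a.supports])
    # One possible Resolve(): net the two cases against each other.
    return c_for - c_against

# Two arguments for and one against a proposition; note that the second
# supporting argument does not raise the case for, because aggregation
# takes the maximum rather than summing.
print(assess([Argument(True, 0.8), Argument(True, 0.6), Argument(False, 0.3)]))
# -> 0.5
```

A rule of the form "If A and B then ..." would be modelled here simply as a further Argument added to the list when both premises hold, which is how an implementation can express increased confidence for independent agreeing arguments.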
**Light skin** Light skin: Light skin is a human skin color that has a base level of eumelanin pigmentation adapted to environments of low UV radiation. Light skin is most commonly found amongst the native populations of Europe and Northeast Asia as measured through skin reflectance. People with light skin pigmentation are often referred to as "white", although these usages can be ambiguous in some countries, where they are used to refer specifically to certain ethnic groups or populations. Humans with light skin pigmentation have skin with low amounts of eumelanin, and possess fewer melanosomes than humans with dark skin pigmentation. Light skin absorbs ultraviolet radiation more readily, which helps the body to synthesize higher amounts of vitamin D for bodily processes such as calcium development. On the other hand, light-skinned people who live near the equator, where there is abundant sunlight, are at an increased risk of folate depletion. As a consequence of folate depletion, they are at a higher risk of DNA damage, birth defects, and numerous types of cancers, especially skin cancer. Humans with darker skin who live further from the tropics have low vitamin D levels, which also can lead to health complications, both physical and mental, including a greater risk of developing schizophrenia. These two observations form the "vitamin D–folate hypothesis", which attempts to explain why populations that migrated away from the tropics into areas of low UV radiation evolved to have light skin pigmentation. The distribution of light-skinned populations is highly correlated with the low ultraviolet radiation levels of the regions inhabited by them. Historically, light-skinned populations almost exclusively lived far from the equator, in high-latitude areas with low sunlight intensity. Due to colonization, imperialism, and increased mobility of people between geographical regions in recent centuries, light-skinned populations today are found all over the world. Evolution: It is generally accepted that dark skin evolved as a protection against the effect of UV radiation; eumelanin protects against both folate depletion and direct damage to DNA. This accounts for the dark skin pigmentation of Homo sapiens during their development in Africa; the populations that undertook the major migrations out of Africa to colonize the rest of the world were also dark-skinned. It is widely supposed that light skin pigmentation developed due to the importance of maintaining vitamin D3 production in the skin. Strong selective pressure would be expected for the evolution of light skin in areas of low UV radiation. Lighter skin tones evolved independently in ancestral populations of north-west and north-east Eurasia, with the two populations diverging around 40,000 years ago. Studies have suggested that the two genes most associated with lighter skin colour in modern Europeans originated in the Middle East and the Caucasus about 22,000 to 28,000 years ago, and were present in Anatolia by 9,000 years ago, where their carriers became associated with the Neolithic Revolution and the spread of Neolithic farming across Europe. Lighter skin and blond hair also evolved in the Ancient North Eurasian population. A further wave of lighter-skinned populations across Europe (and elsewhere) is associated with the Yamnaya culture and the Indo-European migrations bearing Ancient North Eurasian ancestry and the KITLG allele for blond hair.
Furthermore, the SLC24A5 gene linked with light pigmentation in Europeans was introduced into East Africa from Europe over five thousand years ago. These alleles can now be found in the San, Ethiopians, and Tanzanian populations with Afro-Asiatic ancestry. In the San people, it was acquired from interactions with Eastern African pastoralists. Meanwhile, in the case of north-east Asia and the Americas, a variation of the MFSD12 gene is responsible for lighter skin colour. The modern association between skin tone and latitude is thus a relatively recent development. Some authors have expressed caution regarding skin pigmentation predictions. According to Ju et al. (2021), in a study addressing 40,000 years of modern human history, "we can assess the extent to which they carried the same light pigmentation alleles that are present today", but they explain that c. 40,000 BP Early Upper Paleolithic hunter-gatherers "may have carried different alleles that we cannot now detect", and as a result "we cannot confidently make statements about the skin pigmentation of ancient populations." According to Crawford et al. (2017), most of the genetic variants associated with light and dark pigmentation appear to have originated more than 300,000 years ago. African, South Asian and Australo-Melanesian populations also carry derived alleles for dark skin pigmentation that are not found in Europeans or East Asians. Huang et al. (2021) found the existence of "selective pressure on light pigmentation in the ancestral population of Europeans and East Asians", prior to their divergence from each other. Skin pigmentation was also found to be affected by directional selection towards darker skin among Africans, as well as lighter skin among Eurasians. Crawford et al. (2017) similarly found evidence for selection towards light pigmentation prior to the divergence of West Eurasians and East Asians. Geographic distribution; ultraviolet and vitamin D: In the 1960s, biochemist W. Farnsworth Loomis suggested that skin colour is related to the body's need for vitamin D. The major positive effect of UV radiation in land-living vertebrates is the ability to synthesize vitamin D3 from it. A certain amount of vitamin D helps the body to absorb more calcium, which is essential for building and maintaining bones, especially for developing embryos. Vitamin D production depends on exposure to sunlight. Humans living at latitudes far from the equator developed light skin to absorb more UV radiation and so produce more vitamin D. People with light (type II) skin can produce previtamin D3 in their skin at rates 5–10 times faster than dark-skinned (type V) people. In 1998, anthropologist Nina Jablonski and her husband George Chaplin collected spectrometer data to measure UV radiation levels around the world and compared it to published information on the skin color of indigenous populations of more than 50 countries. The results showed a very high correlation between UV radiation and skin color; the weaker the sunlight was in a geographic region, the lighter the indigenous people's skin tended to be. Jablonski points out that people living above latitudes of 50 degrees have the highest chance of developing vitamin D deficiency. She suggests that people living far from the equator developed light skin to produce adequate amounts of vitamin D during winters with low levels of UV radiation. Genetic studies suggest that light-skinned humans have been selected for multiple times.
Polar regions, vitamin D, and diet: Polar regions of the Northern Hemisphere receive little UV radiation, and even less vitamin D-producing UVB, for most of the year. These regions were uninhabited by humans until about 12,000 years ago. (In northern Fennoscandia at least, human populations arrived soon after deglaciation.) Areas like Scandinavia and Siberia have very low concentrations of ultraviolet radiation, and indigenous populations are all light-skinned. However, dietary factors may allow vitamin D sufficiency even in dark-skinned populations. Many indigenous populations across Northern Europe and Northern Asia survive by consuming reindeer, which they follow and herd. Reindeer meat, organs, and fat contain large amounts of vitamin D, which the reindeer get from eating substantial amounts of lichen. Some people of the polar regions, like the Inuit (Eskimos), retained their dark skin; they ate vitamin D-rich seafood, such as fish and sea mammal blubber. Furthermore, these people have been living in the far north for less than 7,000 years. As their founding populations lacked alleles for light skin colour, they may have had insufficient time for significantly lower melanin production to have been selected for by nature after being introduced by random mutations. "This was one of the last barriers in the history of human settlement," Jablonski states. "Only after humans learned fishing, and therefore had access to food rich in vitamin D, could they settle regions of high latitude." Additionally, in the spring, the Inuit would receive high levels of UV radiation reflected from the snow, and their relatively darker skin then protects them from the sunlight. Polar regions, vitamin D, and diet: Earlier hypotheses Two other main hypotheses have been put forward to explain the development of light skin pigmentation: resistance to cold injury, and genetic drift; both are now considered unlikely to be the main mechanism behind the evolution of light skin. The resistance to cold injury hypothesis claimed that dark skin was selected against in cold climates far from the equator and at higher altitudes, as dark skin was more affected by frostbite. It has been found that the reaction of the skin to extreme cold climates actually has more to do with other aspects, such as the distribution of connective tissue and fat and the responsiveness of peripheral capillaries to differences in temperature, than with pigmentation. The supposition that light skin evolved in the absence of selective pressure was put forward by the probable mutation effect hypothesis. The main factor initiating the development of light skin was seen as a consequence of genetic mutation without an evolutionary selective pressure. The subsequent spread of light skin was thought to be caused by assortative mating, with sexual selection contributing to an even lighter pigmentation in females. Doubt has been cast on this hypothesis, as more random patterns of skin coloration would be expected, in contrast to the observed structured light skin pigmentation in areas of low UV radiation. The clinal (gradual) distribution of skin pigmentation observable in the Eastern Hemisphere, and to a lesser extent in the Western Hemisphere, is one of the most significant characteristics of human skin pigmentation. Increasingly lighter-skinned populations are distributed across areas with incrementally lower levels of UV radiation.
Genetic associations: Variations in the KITL gene have been positively associated with about 20% of melanin concentration differences between African and non-African populations. One of the alleles of the gene has an 80% occurrence rate in Eurasian populations. The ASIP gene has a 75–80% variation rate among Eurasian populations compared to 20–25% in African populations. Variations in the SLC24A5 gene account for 20–25% of the variation between dark- and light-skinned populations of Africa, and appear to have arisen as recently as within the last 10,000 years. The Ala111Thr or rs1426654 polymorphism in the coding region of the SLC24A5 gene reaches fixation in Europe, but is found across the globe, particularly among populations in Northern Africa, the Horn of Africa, West Asia, Central Asia and South Asia. Biochemistry: Melanin is a derivative of the amino acid tyrosine. Eumelanin is the dominant form of melanin found in human skin. Eumelanin protects tissues and DNA from radiation damage by UV light. Melanin is produced in specialized cells called melanocytes, which are found in the lowest level of the epidermis. Melanin is produced inside small membrane-bound packages called melanosomes. Humans with naturally occurring light skin have varied amounts of smaller and sparsely distributed eumelanin and its lighter-coloured relative, pheomelanin. The concentration of pheomelanin varies highly within populations from individual to individual, but it is more commonly found among lightly pigmented Europeans, East Asians, and Native Americans. For the same body region, individuals, independently of skin colour, have the same number of melanocytes (although variation between different body parts is substantial), but the organelles which contain pigments, called melanosomes, are smaller and less numerous in light-skinned humans. For people with very light skin, the skin gets most of its colour from the bluish-white connective tissue in the dermis and from the haemoglobin of the blood cells circulating in the capillaries of the dermis. The colour associated with the circulating haemoglobin becomes more obvious, especially in the face, when arterioles dilate and become tumefied with blood as a result of prolonged physical exercise or stimulation of the sympathetic nervous system (usually embarrassment or anger). Up to 50% of UVA can penetrate deeply into the dermis in persons with light skin pigmentation, which has little protective melanin pigment. The combination of light skin, red hair, and freckling is associated with high amounts of pheomelanin and low amounts of eumelanin. This phenotype is caused by a loss-of-function mutation in the melanocortin 1 receptor (MC1R) gene. However, variations in the MC1R gene sequence only have considerable influence on pigmentation in populations where red hair and extremely light skin are prevalent. The gene variation's primary effect is to promote eumelanin synthesis at the expense of pheomelanin synthesis, although this contributes very little to the variation in skin reflectance between different ethnic groups. Melanocytes from light skin cocultured with keratinocytes give rise to a distribution pattern characteristic of light skin. Freckles usually only occur in people with very lightly pigmented skin. They vary from very dark to brown in colour and develop in a random pattern on the skin of the individual. Solar lentigines, the other type of freckles, occur among old people regardless of skin colour.
People with very light skin (types I and II) make very little melanin in their melanocytes, and have very little or no ability to produce melanin in response to UV radiation. This can result in frequent sunburns and more dangerous, but invisible, damage to the connective tissue and DNA underlying the skin. This can contribute to premature aging and skin cancer. The strongly red appearance of lightly pigmented skin as a response to high UV radiation levels is caused by the increased diameter, number, and blood flow of the capillaries. People with moderately pigmented skin (types III–IV) are able to produce melanin in their skin in response to UVR. Normal tanning is usually delayed, as it takes time for the melanins to move up in the epidermis. Heavy tanning does not approach the photoprotective effect against UVR-induced DNA damage of naturally occurring dark skin; however, it offers great protection against seasonal variations in UVR. A tan developed gradually in the spring prevents sunburns in the summer. This mechanism is almost certainly the evolutionary reason behind the development of tanning behaviour. Health implications: Skin pigmentation is an evolutionary adaptation to the various UV radiation levels around the world. There are health implications for light-skinned people living in environments of high UV radiation. Various cultural practices increase the health problems associated with light skin, for example sunbathing among the light-skinned. Health implications: Advantages in low sunlight Humans with light skin pigmentation living in low-sunlight environments experience increased vitamin D synthesis compared to humans with dark skin pigmentation, due to the ability to absorb more sunlight. Almost every part of the human body, including the skeleton, the immune system, and the brain, requires vitamin D. Vitamin D production in the skin begins when UV radiation penetrates the skin and interacts with a cholesterol-like molecule to produce pre-vitamin D3. This reaction only occurs in the presence of medium-length UVR, UVB. Most UVB and UVC rays are destroyed or reflected by ozone, oxygen, and dust in the atmosphere. UVB reaches the Earth's surface in the highest amounts when its path is straight and passes through a thin layer of atmosphere. Health implications: The farther a place is from the equator, the less UVB is received, and the potential to produce vitamin D is diminished. Some regions far from the equator do not receive UVB radiation at all between autumn and spring. Vitamin D deficiency does not kill its victims quickly, and generally does not kill at all. Rather, it weakens the immune system and the bones, and compromises the body's ability to fight uncontrolled cell division, which can result in cancer. A form of vitamin D is a potent cell growth inhibitor; thus chronic deficiencies of vitamin D seem to be associated with higher risk of certain cancers. This is an active topic of cancer research and is still debated. The vitamin D deficiency associated with dark skin leads to higher levels of schizophrenia in such populations residing in northerly latitudes. With the increase of vitamin D synthesis, there is a decreased incidence of conditions commonly linked to vitamin D deficiency among people with dark skin pigmentation living in environments of low UV radiation: rickets, osteoporosis, numerous cancer types (including colon and breast cancer), and immune system malfunctioning.
Vitamin D promotes the production of cathelicidin, which helps to defend the human body against fungal, bacterial, and viral infections, including flu. When exposed to UVB, the entire exposed area of a relatively light-skinned person's body is able to produce between 10,000 and 20,000 IU of vitamin D. Health implications: Disadvantages in high sunlight Light-skinned people living in high-sunlight environments are more susceptible to the harmful UV rays of sunlight because of the lack of melanin produced in the skin. The most common risk that comes with high exposure to sunlight is the increased risk of sunburn. This increased risk has come along with the cultural practice of sunbathing, which is popular among light-skinned populations. This practice of seeking tanned skin, if not regulated properly, can lead to sunburn, especially among very lightly-skinned humans. Overexposure to sunlight can also lead to basal cell carcinoma, which is a common form of skin cancer. Health implications: Another health implication is the depletion of folate within the body, where overexposure to UV light can lead to megaloblastic anemia. Folate deficiency in pregnant women can be detrimental to the health of their newborn babies in the form of neural tube defects, miscarriages, and spina bifida, a birth defect in which the backbone and spinal canal do not close before birth. Occurrences of neural tube defects peak in the May–June period in the Northern Hemisphere. Folate is needed for DNA replication in dividing cells, and deficiency can lead to failures of normal embryogenesis and spermatogenesis. Individuals with lightly pigmented skin who are repeatedly exposed to strong UV radiation experience faster aging of the skin, which shows in increased wrinkling and anomalies of pigmentation. Oxidative damage causes the degradation of protective tissue in the dermis, which gives the skin its strength. It has been postulated that white women may develop wrinkles faster after menopause than black women because they are more susceptible to cumulative sun damage over their lifetimes. Dr. Hugh S. Taylor, of the Yale School of Medicine, concluded that the study could not prove the cause of this difference, though sun damage is suspected as an underlying factor. Light-coloured skin has been suspected to be one of the contributing factors that promote wrinkling.
**American Institute of Chemists Gold Medal** American Institute of Chemists Gold Medal: The American Institute of Chemists Gold Medal is the highest award of the American Institute of Chemists and has been awarded since 1926. It is presented annually to a person who has most encouraged the science of chemistry or the profession of chemist or chemical engineer in the United States of America, giving "exemplary service". Medal recipients: The following people have received the AIC Gold Medal:
**Glossary of Schenkerian analysis** Glossary of Schenkerian analysis: This is a glossary of Schenkerian analysis, a method of musical analysis of tonal music based on the theories of Heinrich Schenker (1868–1935). The method is discussed in the concerned article and no attempt is made here to summarize it. Similarly, the entries below whenever possible link to other articles where the concepts are described with more details (in several cases, the name of the entry links to a specialized article), and the definitions are kept here to a minimum.

A:
Anstieg See Initial ascent.
Arpeggiation (German: Brechung) Elementary elaboration of a harmony. See also Bass arpeggiation; First-order arpeggiation; Unfolding.
Ausfaltung See Unfolding.
Auskomponierung See Prolongation.
Außensatz See Fundamental structure.

B:
Background (German: Hintergrund) The structural level of the fundamental structure. See also Middleground and Foreground.
Bass arpeggiation (German: Bassbrechung) Bass pattern I-V-I forming the harmonic content of the background of tonal musical pieces; the concept belongs to the final version of Schenkerian theory, from 1930 onwards. See also Schenkerian analysis: The arpeggiation of the bass.
Bassbrechung See Bass arpeggiation.
Brechung See Arpeggiation.

C:
Chord of nature See Klang.
Composing out See Prolongation.
Compound melody See Unfolding.
Coupling (German: Koppelung) "The connection of two registers which lie an octave apart". It often results from a register transfer in which the transferred voice maintains a relation with its original register.
Cover tone (German: Deckton) "A tone of the inner voice which appears above the foreground diminution". It often results from an ascending register transfer or coupling, but "the main thread of melodic activity remains with the displaced voice while the voice that does the displacing functions as a 'cover'".

D:
Deckton See Cover tone.
Diminution "The process by which an interval formed by notes of longer value is expressed in notes of smaller value".
Divider (German: Teiler) Consonant subdivision of a consonant interval: the octave can be divided at the fifth (fifth-divider, German: Quintteiler) and the fifth can be divided at the third (third-divider, German: Terzteiler). Schenker had also imagined a divider at the fourth (or lower fifth), but he apparently abandoned the concept after 1926, probably because the upper fourth does not belong to the divided triad. See also Schenkerian analysis: The arpeggiation of the bass and the divider at the fifth.

F:
Fernhören See Structural hearing.
First-order arpeggiation Arpeggiated motion leading to the primary tone of the fundamental line. The term has been proposed by Forte & Gilbert. See also Schenkerian analysis: Initial ascent, initial arpeggiation.
Foreground (German: Vordergrund) See Structural level.
Free Composition (German: freier Satz) Title of the American translation of Schenker's Der freie Satz.
Fundamental line (German: Urlinie) The melodic aspect of the fundamental structure, a stepwise descent from one of the triad notes to the tonic, with the bass arpeggiation being the harmonic aspect. The notion of the descending fundamental line belongs to the final version of Schenkerian theory, from 1930 onwards; fundamental (or, better, "primal") lines in Schenker's earlier writings at times were ascending. The first note of the fundamental line is its primary tone. See also Schenkerian analysis: The fundamental line.
Fundamental structure (German: Ursatz) "The background in music is represented by a contrapuntal structure which I have designated the fundamental structure". It consists in the fundamental line counterpointed by the bass arpeggiation, together forming a counterpoint of the outer lines (German: Außensatz).

H:
Headnote See Primary tone.
Hintergrund See Background.
Höherlegung See Register transfer.

I:
Initial arpeggiation See First-order arpeggiation.
Initial ascent (German: Anstieg) Ascending motion leading to the primary tone of the fundamental line.

K:
Klang The complex sound consisting of the first five notes of the harmonic series, suggesting a model for the major triad. For a discussion of the meaning of this concept, see Klang (music).
Kopfton See Primary tone.
Koppelung See Coupling.

L:
Level See Structural level.
Linear progression (German: Auskomponierungszug or Zug) A passing note elaboration involving stepwise melodic motion in one direction between two harmonic tones.

M:
Middleground (German: Mittelgrund) See Structural level.
Mischung See Mixture.
Mittelgrund See Middleground.
Mixture (German: Mischung) Change of mode of the tonic (major to minor, minor to major).

N:
Neighbour note (German: Nebennote) Nonchord tone that passes, usually stepwise, from a chord tone directly above or below it (which frequently causes the NN to create dissonance with the chord) and resolves to the same chord tone. See Neighbor tone. See also Schenkerian analysis.

O:
Octave transfer See Register transfer.
Obligate Lage See Obligatory register.
Obligatory register (German: obligate Lage) "No matter how far the composing-out may depart from its basic register [...], it nevertheless retains an urge to return to that register". This urge is often fulfilled, but not always.

P:
Primal line, Primal structure The use of "Fundamental" as a translation of Ur- in Urlinie or Ursatz has been questioned. For more details, see Fundamental structure: Terminology.
Primary tone (German: Kopfton) The first tone of the Fundamental line. One of the three notes of the tonic triad (scale degree 3, 5, or 8). See Schenkerian analysis: The fundamental line.
Prolongation (German: Auskomponierung), Composing-out, Elaboration The process in tonal music through which a pitch, interval, or consonant triad is able to govern spans of music when not physically sounding. Schenker himself appears to have used the German term Prolongation mainly to describe extensions of the laws of strict counterpoint to freer writing: see Prolongation in Heinrich Schenker. Auskomponierung can be literally translated as "composing-out"; the German word is coined on the model of Ausarbeitung, "elaboration".

R:
Reaching over (German: Übergreifen) Elaboration by which a descending inner voice is placed above the (descending) upper voice by a register transfer. Successive reaching-over lines may produce an ascending motion. See List of Schenker's references to reaching over.
Register transfer Ascending (German: Höherlegung) or descending (German: Tieferlegung) motion of one or several voices into a different octave (i.e. into a different register).

S:
Scale-step (German: Stufe) "The scale-step is a higher and more abstract unit [than that of triad]. At times it may even comprise several harmonies [...]; in other words: even if, under certain circumstances, a certain number of harmonies look like independent triads or seventh-chords, they may nonetheless add up, in their totality, to one single triad [...] and they would have to be subsumed under the concept of this triad [...] as a scale-step."
Schicht See Structural level.
Stimmtausch See Voice exchange.
Strata (German: Schichten) Term used by John Rothgeb to translate Schicht (see Structural level) in Oswald Jonas' Introduction to the Theory of Heinrich Schenker.
Structural hearing Title of the influential book by Felix Salzer. The expression may derive from that of "long-distance hearing" (German: Fernhören), which Schenker used in Der Tonwille 1 and 2 (1921 and 1922) and which Furtwängler quoted in his paper "Heinrich Schenker. Ein zeitgemäßes Problem" of 1947.
Structural level (German: Schicht) Schenker uses the term "level" mainly in the expression "voice-leading level", denoting the successive levels through which the fundamental structure develops to form the foreground. The expression "Structural level" appears to have been coined by Allen Forte.
Stufe See Scale-step.

T:
Teiler See Divider.
Tieferlegung See Register transfer.
Tonal space One of the most general principles of Schenkerian analysis: the intervals between the notes of the tonic triad form a tonal space that is filled with passing and neighbour notes, producing new triads and new tonal spaces, open for further elaborations until the surface of the work (the score) is reached.

U:
Übergreifen See Reaching over.
Unfolding (German: Ausfaltung) The transformation of a single chord into a horizontal succession (see Arpeggiation), either when a tone of the upper voice and one of the inner voice are interconnected, or when a similar connection takes place in a succession of several chords. See Coupling. See also Schenkerian analysis: Unfolding.
Urlinie See Fundamental line.
Urlinietafel "Graph of the Urlinie", a rhythmic reduction of the score with which Schenker often began his analyses. See also Schenkerian analysis: Schenkerian notation.
Ursatz See Fundamental structure.

V:
Voice exchange (German: Stimmtausch) "A pattern that involves two and only two voices, a pattern in which the voices literally exchange their pitches." See also Schenkerian analysis: Voice exchange.
Voice leading "The study of voice leading is the study of the principles that govern the progression of the component voices of a composition both separately and in combination. In the Schenkerian tradition, this study begins with strict species counterpoint."
Vordergrund See Foreground.
**Open IPTV Forum** Open IPTV Forum: The Open IPTV Forum (OIPF) was a non-profit consortium and standards organization focused on defining and publishing open, end-to-end Internet Protocol television (IPTV) standards. Founded in March 2007 by nine companies, it was later joined by several others. Since June 2014, OIPF has been part of the Hybrid Broadcast Broadband TV (HbbTV) association, a similar industry organisation for hybrid broadcast and broadband TV services formed in 2009, which worked closely with OIPF on browser and media specifications for network-connected televisions and set-top boxes. History: In March 2007, AT&T, Ericsson, Orange, Panasonic, Philips, Samsung, Siemens, Sony and Telecom Italia formed the group. In September 2010, the consortium released the second version of its specification. The HbbTV standard, which has been adopted by many broadcasters across Europe, is based on the specifications created by the Open IPTV Forum. The OIPF and HbbTV announced a joint initiative for testing and certification in 2012. In June 2014, OIPF merged with HbbTV; the two initiatives were combined under the HbbTV banner because the markets for IPTV, OTT and hybrid broadcast and broadband TV are converging.
**Allotment (travel industry)** Allotment (travel industry): Allotments in the tourism industry designate a block of pre-negotiated carrier seats or hotel rooms which have been bought out and held by a travel organizer with large buying power, such as a wholesaler, tour operator or hotel consolidator, and more rarely by a retail travel agent. Allotments can be purchased for a specific period of time, such as a whole season, part of a season or single dates, and then resold to travel partners and final customers around the globe. A couple of days prior to carrier departure/hotel check-in, any unsold seats/rooms may be released back to the supplier, if such an agreement exists between the two parties. The release-back period is also negotiated as part of the allotment contract (e.g. four days prior to check-in/departure). Negotiating allotments: Allotments can be negotiated between a tour operator and a travel service supplier, such as an airline or hotel chain, or between two travel organizers, such as a tour operator and a retail travel agent. Either way, the buyer needs to prove a consistent level of business, because allotments are rarely granted without a previous sales history. Rooms or seats that have not been contracted between the travel company and the product supplier are handled as 'on-request': each booking of an airline seat or hotel room must be confirmed with the supplier before being confirmed with the client. The allotment or allocation contract: The number of rooms/seats specified in the allotment contract reflects the volume of sales the tour operator estimates during the negotiation. Tour operators book a certain number of rooms in hotels or seats on carriers and have the right to use them by a given date, known as the release date, usually some days prior to the tourist's arrival (hotels) or departure (carriers). The allotment contract reduces the supplier's risk of unsold products and grants a relative price advantage to the travel organizer, helping it stay competitive on the market by offering extra discounts. The discounts tour operators obtain through allotment or commitment contracts depend primarily on firm size and bargaining power; they can vary from 10% to 50% according to the period of the year, the destination, and the quantity and quality of services contracted. Some big tour operators are able to obtain discounts of up to 70%.
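The contract mechanics described above are easy to illustrate in miniature. The following sketch is purely hypothetical (the class and field names are invented, not any real reservation system's API); it shows how a travel organizer might track an allotment, derive the negotiated release-back date, and decide how many unsold units can still be handed back to the supplier.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Allotment:
    """Hypothetical model of a pre-negotiated block of rooms or seats."""
    total_units: int    # rooms/seats bought out and held by the organizer
    sold_units: int     # units already resold to partners or customers
    checkin: date       # hotel check-in (or carrier departure) date
    release_days: int   # negotiated release-back period, e.g. 4 days
    discount: float     # negotiated discount, typically 0.10 to 0.50

    @property
    def release_date(self) -> date:
        # Last day on which unsold units may be returned to the supplier.
        return self.checkin - timedelta(days=self.release_days)

    def releasable_units(self, today: date) -> int:
        # Unsold units can be released back only up to the release date,
        # assuming the contract contains a release-back clause at all.
        return self.total_units - self.sold_units if today <= self.release_date else 0

# Example: a 30-room block with a 4-day release period and a 25% discount.
block = Allotment(total_units=30, sold_units=22,
                  checkin=date(2024, 7, 15), release_days=4, discount=0.25)
print(block.release_date)                         # 2024-07-11
print(block.releasable_units(date(2024, 7, 10)))  # 8 -> can go back to the supplier
print(block.releasable_units(date(2024, 7, 12)))  # 0 -> too late; organizer bears the risk
```

Anything outside such a block would be handled 'on-request', i.e. confirmed with the supplier booking by booking.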
**Tittibhasana** Tittibhasana: Tittibhasana (Sanskrit: टिट्टिभासन Ṭiṭṭibhāsana) or Firefly pose is an arm-balancing asana with the legs stretched out forwards in hatha yoga and modern yoga as exercise. Variants include Bhujapidasana, with the legs crossed at the ankle, and Eka Hasta Bhujasana, with one leg stretched out forwards. Etymology and origins: The name Tittibhasana comes from Sanskrit: Ṭiṭṭibha, "small insect, firefly", and āsana, "posture" or "seat". Indian folklore tells the story of a pair of Tittibha birds that nested by the sea; the ocean swept away their eggs, and the birds complained to Vishnu, asking for the eggs to be returned. The god gave the order, and the sea gave the eggs back. The success of the small, weak birds is said to serve as a symbol of yoga, which can likewise overcome the power of illusion in the world. The name Bhujapidasana (Sanskrit: भुजपीडासन; IAST: Bhujapīḍāsana) comes from Bhuja (Sanskrit: भुज) meaning "arm" or "shoulder", and Pīḍa (Sanskrit: पीडा) meaning "pressure". The pose is described and illustrated in the 19th century Sritattvanidhi as Mālāsana, garland pose; however, that name is given to a different asana in Light on Yoga. In the 20th century, the pose was described in Krishnamacharya's 1935 Yoga Makaranda, and it was taken up by his pupils Pattabhi Jois in his Ashtanga Vinyasa Yoga and B. K. S. Iyengar in his Light on Yoga. Description: Tittibhasana is described in Light on Yoga as being entered from Dvi Pada Sirsasana, a difficult sitting pose with the legs crossed behind the head, which in Iyengar's words "requires practice", by uncrossing the ankles, stretching the legs straight up, and pushing down on the hands to balance. It is an intermediate level asana in Ashtanga vinyasa yoga. Variations: Bhujapidasana, Shoulder Pressing Pose, is similar, with the thighs resting on the upper arms, but the legs crossed at the ankle in front of the body. Eka Hasta Bhujasana, Elephant's Trunk Pose or One Leg Over Arm Balance, has one leg stretched out straight forwards between the supporting arms. Sources: Iyengar, B. K. S. (1979) [1966]. Light on Yoga: Yoga Dipika. Unwin Paperbacks. ISBN 978-1855381667. Sjoman, Norman E. (1999). The Yoga Tradition of the Mysore Palace. Abhinav Publications. ISBN 81-7017-389-2.
**Alimemazine** Alimemazine: Alimemazine (INN), also known as trimeprazine (brand names Nedeltran, Panectyl, Repeltin, Therafene, Theraligene, Theralen, Theralene, Vallergan, Vanectyl, and Temaril), commonly provided as a tartrate salt, is a phenothiazine derivative that is used as an antipruritic (it prevents itching from causes such as eczema or poison ivy, by acting as an antihistamine). It also acts as a sedative, hypnotic, and antiemetic for prevention of motion sickness. Although it is structurally related to drugs such as chlorpromazine, it is not used as an antipsychotic. In the Russian Federation, it is marketed under the brand name Teraligen for the treatment of anxiety disorders (including GAD), organic mood disorders, sleep disturbances, personality disorders accompanied by asthenia and depression, somatoform autonomic dysfunction and various neuroses. Alimemazine is not approved for use in humans in the United States. The combination of alimemazine and prednisolone (commonly sold under the brand name Temaril-P) is licensed as an antipruritic and antitussive in dogs.
**Agrinierite** Agrinierite: Agrinierite (chemical formula K2(Ca,Sr)(UO2)3O3(OH)2·5H2O) is a mineral often found in the oxidation zone of uranium deposits. The IMA symbol is Agn. It is named for Henry Agrinier (1928–1971), an engineer for the Commissariat à l'Énergie Atomique.
**Bing metrization theorem** Bing metrization theorem: In topology, the Bing metrization theorem, named after R. H. Bing, characterizes when a topological space is metrizable. Formal statement: The theorem states that a topological space X is metrizable if and only if it is regular and T0 and has a σ-discrete basis. A family of sets is called σ-discrete when it is a union of countably many discrete collections, where a family F of subsets of a space X is called discrete when every point of X has a neighborhood that intersects at most one member of F. History: The theorem was proven by Bing in 1951 and was discovered independently of the Nagata–Smirnov metrization theorem, proved by Nagata (1950) and Smirnov (1951). The two results are often merged into the Bing–Nagata–Smirnov metrization theorem. It is a common tool for proving other metrization theorems; e.g. the Moore metrization theorem – a collectionwise normal Moore space is metrizable – is a direct consequence. Comparison with other metrization theorems: Unlike Urysohn's metrization theorem, which provides only a sufficient condition for metrization, this theorem gives a condition that is both necessary and sufficient for a topological space to be metrizable.
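For readers who prefer symbols, the definitions and the theorem can be restated compactly in standard notation (a conventional rendering, not a quotation from Bing's paper):

```latex
% A family F of subsets of X is discrete iff every point of X has an
% open neighborhood meeting at most one member of F:
\mathcal{F}\ \text{is discrete} \iff
  \forall x \in X\ \exists\, U \ni x\ \text{open}:\;
  \#\{\, F \in \mathcal{F} : F \cap U \neq \emptyset \,\} \le 1

% F is sigma-discrete iff it is a countable union of discrete families:
\mathcal{F}\ \text{is } \sigma\text{-discrete} \iff
  \mathcal{F} = \bigcup_{n \in \mathbb{N}} \mathcal{F}_n,
  \quad \text{each } \mathcal{F}_n \text{ discrete}

% Bing metrization theorem (1951):
X\ \text{is metrizable} \iff
  X\ \text{is regular and}\ T_0\ \text{and has a } \sigma\text{-discrete base}
```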
**Bigeminal pulse** Bigeminal pulse: A bigeminal pulse is a medical finding that is easily confused with pulsus alternans. The two resemble each other in that both present alternating strong and weak beats. Unlike in pulsus alternans, however, the weak beat of a bigeminal pulse occurs prematurely (early): instead of arriving after the regular interval that separates beats in pulsus alternans, it follows close upon the normal strong beat.
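A schematic timing sketch makes the contrast concrete. The numbers below are illustrative only, not clinical data: in pulsus alternans the strong and weak beats remain evenly spaced, while in a bigeminal pulse each weak beat arrives early, close behind the strong beat, leaving a longer pause before the next pair.

```python
# Illustrative beat timings in seconds; "S" = strong beat, "w" = weak beat.
# All numbers are made up for illustration, not clinical measurements.

# Pulsus alternans: alternating amplitude, but evenly spaced beats.
alternans = [(0.0, "S"), (0.8, "w"), (1.6, "S"), (2.4, "w"), (3.2, "S")]

# Bigeminal pulse: the weak beat is premature, close to the strong beat,
# so the longer pause falls after the weak beat instead.
bigeminal = [(0.0, "S"), (0.35, "w"), (1.6, "S"), (1.95, "w"), (3.2, "S")]

def intervals(beats):
    """Return the successive inter-beat intervals in seconds."""
    return [round(t2 - t1, 2) for (t1, _), (t2, _) in zip(beats, beats[1:])]

print(intervals(alternans))  # [0.8, 0.8, 0.8, 0.8]      -> regular spacing
print(intervals(bigeminal))  # [0.35, 1.25, 0.35, 1.25]  -> short-long pattern
```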
**SIMes** SIMes: SIMes (or H2IMes) is an N-heterocyclic carbene. It is a white solid that dissolves in organic solvents. The compound is used as a ligand in organometallic chemistry. It is structurally related to the more common ligand IMes but has a saturated backbone (the S of SIMes indicates the saturated backbone). It is slightly more flexible than IMes and is a component of the second-generation Grubbs catalyst (Grubbs II). It is prepared by alkylation of 2,4,6-trimethylaniline with dibromoethane, followed by ring closure and dehydrohalogenation.
**Sex and sexuality in speculative fiction** Sex and sexuality in speculative fiction: Sexual themes are frequently used in science fiction or related genres. Such elements may include depictions of realistic sexual interactions in a science fictional setting, a protagonist with an alternative sexuality, a sexual encounter between a human and a fictional extraterrestrial, or exploration of the varieties of sexual experience that deviate from the conventional. Sex and sexuality in speculative fiction: Science fiction and fantasy have sometimes been more constrained than non-genre narrative forms in their depictions of sexuality and gender. However, speculative fiction (SF) and soft science fiction also offer the freedom to imagine alien or galactic societies different from real-life cultures, making it a tool to examine sexual bias, heteronormativity, and gender bias and enabling the reader to reconsider their cultural assumptions. Sex and sexuality in speculative fiction: Prior to the 1960s, explicit sexuality of any kind was not characteristic of genre speculative fiction due to the relatively high number of minors in the target audience. In the 1960s, science fiction and fantasy began to reflect the changes prompted by the civil rights movement and the emergence of a counterculture. New Wave and feminist science fiction authors imagined cultures in which a variety of gender models and atypical sexual relationships are the norm, and depictions of sex acts and alternative sexualities became commonplace. Sex and sexuality in speculative fiction: There is also science fiction erotica, which explores more explicit sexuality and the presentation of themes aimed at inducing arousal. Critical analysis: As genres of popular literature, science fiction and fantasy often seem even more constrained than non-genre literature by their conventions of characterization and the effects that these conventions have on depictions of sexuality and gender. Sex is often linked to disgust in science fiction and horror, and plots based on sexual relationships have mainly been avoided in genre fantasy narratives. On the other hand, science fiction and fantasy can also offer more freedom than do non-genre literatures to imagine alternatives to the default assumptions of heterosexuality and masculine superiority that permeate some cultures.In speculative fiction, extrapolation allows writers to focus not on the way things are (or were), as non-genre literature does, but on the way things could be different. It provides science fiction with a quality that Darko Suvin has called "cognitive estrangement": the recognition that what we are reading is not the world as we know it, but a world whose difference forces us to reconsider our own world with an outsider's perspective. When the extrapolation involves sexuality or gender, it can force the reader to reconsider their heteronormative cultural assumptions; the freedom to imagine societies different from real-life cultures makes science fiction an incisive tool to examine sexual bias. In science fiction, such estranging features include technologies that significantly alter sex or reproduction. In fantasy, such features include figures (for example, mythological deities and heroic archetypes) who are not limited by preconceptions of human sexuality and gender, allowing them to be reinterpreted. 
Science fiction has also depicted a plethora of alien methods of reproduction and sex. Uranian Worlds, by Eric Garber and Lyn Paleo, is an authoritative guide to science fiction literature featuring gay, lesbian, transgender, and related themes. The book covers science fiction literature published before 1990 (2nd edition), providing a short review and commentary on each piece. Critical analysis: Themes explored: Some of the themes explored in speculative fiction include:
- Sex with aliens, machines and sex robots
- Reproductive technology, including cloning, artificial wombs, parthenogenesis, and genetic engineering
- Sexual equality of men and women
- Male- and female-dominated societies, including single-gender worlds
- Polyamory
- Changing gender roles
- Homosexuality and bisexuality
- Androgyny and sex changes
- Sex in virtual reality
- Other advances in technology for sexual pleasure, such as teledildonics
- Asexuality
- Male pregnancy
- Sexual taboos and morality
- Sex in zero gravity
- Birth control and other, more radical measures to prevent overpopulation
SF literature: Proto SF: True History, a Greek-language tale by Assyrian writer Lucian (120-185 CE), has been called the first ever science fiction story. The narrator is suddenly enveloped by a typhoon and swept up to the Moon, which is inhabited by a society of men who are at war with the Sun. After the hero distinguishes himself in combat, the king gives him his son, the prince, in marriage. The all-male society reproduces (male children only) by giving birth from the thigh or by growing a child from a plant produced by planting the left testicle in the Moon's soil. In other proto-SF works, sex itself, of any type, was equated with base desires or "beastliness," as in Gulliver's Travels (1726), which contrasts the animalistic and overtly sexual Yahoos with the reserved and intelligent Houyhnhnms. Early works that showed sexually open characters to be morally impure include the first lesbian vampire story "Carmilla" (1872) by Sheridan Le Fanu (collected in In a Glass Darkly). The 1915 utopian novel Herland by Charlotte Perkins Gilman depicts the visit by three men to an all-female society in which women reproduce by parthenogenesis. SF literature: Pulp era (1920–30s): During the pulp era, explicit sexuality of any kind was not characteristic of genre science fiction and fantasy. The frank treatment of sexual topics of earlier literature was abandoned. For many years, the editors who controlled what was published, such as Kay Tarrant, assistant editor of Astounding Science Fiction, felt that they had to protect the adolescent male readership that they identified as their principal market. Although the covers of some 1930s pulp magazines showed scantily clad women menaced by tentacled aliens, the covers were often more lurid than the magazines' contents. Implied or disguised sexuality was as important as that which was openly revealed. In this sense, genre science fiction reflected the social mores of the day, paralleling common prejudices. This was particularly true of pulp fiction, more so than literary works of the time. H. P. Lovecraft's seminal short story, "The Call of Cthulhu", first published in the pulp magazine Weird Tales in 1928, launched what developed into the Cthulhu Mythos, a shared fictional universe taken up by various other writers and considerably affecting the entire field of fantasy. Bobby Derie's 2014 book Sex and the Cthulhu Mythos treats extensively the various sexual aspects of this Mythos: "H. P.
Lovecraft was one of the most asexual beings in history - at least by his own admission. Whether we accept this view of his own sexual instincts or not, there is no denying that sexuality - normal and aberrant - underlies a number of significant tales in the Lovecraft oeuvre. The impregnation of a human woman by Yog-Sothoth in "The Dunwich Horror" and the mating of humans with strange creatures from the sea in "The Shadow over Innsmouth" are only two such examples." Sex and the Cthulhu Mythos examines the significant uses of love, gender, and sex in the work of H. P. Lovecraft, moving on to some of his leading disciples and noting that "The work of such significant writers of the Lovecraft tradition as Robert E. Howard, Clark Ashton Smith, Ramsey Campbell, W. H. Pugmire, and Caitlín R. Kiernan, features far more explicit sexuality than anything Lovecraft could have imagined". Finally, Derie goes on to study sexual themes in other venues, such as Lovecraftian occultism, Japanese manga and anime, and even Lovecraftian fan fiction. In Aldous Huxley's dystopian novel Brave New World (1932), natural reproduction has been abolished, with human embryos being raised artificially in "hatcheries and conditioning centres." Recreational sex is promoted, often as a group activity; what passes for a religious ceremony consists of six men and six women meeting once a week to hold an "orgy porgy". To prepare for this life, little boys and girls are made to play sexual games with each other as part of the official curriculum in what passes for kindergartens and elementary schools. On the other hand, marriage, pregnancy, natural birth, and parenthood are considered too vulgar to be mentioned in polite conversation. An important part of the plot concerns the Savage Reservation in New Mexico, where marriage, pregnancy and motherhood are still practiced by the local Native Americans. Linda, a woman from 'civilized' London who was marooned there and stuck to her accustomed sexual mores, is persecuted as "a whore" by the local women whose husbands she seduced. John, Linda's son, who grew up on the Reservation but was always alienated from its society, comes to London and there falls deeply in love with a girl - seeking a happy loving consummation with her, and then violently assaulting her out of jealousy when she behaves according to her society's sexual mores. SF literature: One of the earliest examples of genre science fiction that involves a challenging amount of unconventional sexual activity is Odd John (1935) by Olaf Stapledon. John is a mutant with extraordinary mental abilities who will not allow himself to be bound by many of the rules imposed by the ordinary British society of his time. The novel strongly implies that he has consensual intercourse with his mother and that he seduces an older boy who becomes devoted to him but also suffers from the affront that the relationship creates to his own morals. John eventually concludes that any sexual interaction with "normal" humans is akin to bestiality. SF literature: War with the Newts, a 1936 satirical science fiction novel by Czech author Karel Čapek, concerns the discovery in the Pacific of a sea-dwelling race, an intelligent breed of newts, who are initially enslaved and exploited by humans and later rebel and go to war against them. The book includes a detailed appendix entitled 'The Sex Life of the Newts', which examines the newts' sexuality and reproductive processes in a pastiche of academese.
This is one of the first attempts to speculate on what form sex might take among non-human intelligent beings. SF literature: C. L. Moore's 1934 story "Shambleau" begins in what seems a classical damsel-in-distress situation: the protagonist, space adventurer Northwest Smith, sees a "sweetly-made girl" pursued by a lynch mob intent on killing her, and intervenes to save her. But once he takes her to his room, she turns out to be a disguised alien creature who spreads her inhumanly long tendrils of hair, trapping Smith in a kind of psychic bondage and drawing out his life; had his partner not arrived and killed her, it would have ended in his death. The story has little explicit sex, and no physical contact other than that of the hair of the "girl" with Smith's body; yet it clearly explores sexual themes in a way highly daring for its time. SF literature: In C. S. Lewis's That Hideous Strength, a prominent place is given among the cast of villains to a monstrous lesbian – Miss Hardcastle, Security Chief of the satanic "Institute" which quite literally intends to take over the world. Hardcastle is presented as an inveterate sadist who takes pleasure in torturing "fluffy" young women and inflicting burns on them with a lighted cigarette. SF literature: Golden Age (1940–50s): As the readership for science fiction and fantasy began to age in the 1950s, writers were able to introduce more explicit sexuality into their work. SF literature: In 1949, William Tenn wrote Venus and the Seven Sexes – featuring the Plookhs, natives of the planet Venus, who require the participation of seven different sexes in order to reproduce and who get corrupted by human film director Hogan Shlestertrap. The rather satirical story might be the first case of an author speculating about creatures having more than two sexes, an idea later taken up by various others. SF literature: Philip José Farmer wrote The Lovers (1952), arguably the first science fiction story to feature sex as a major theme, and Strange Relations (1960), a collection of five stories about human/alien sexual relations. In his novel Flesh (1960), a hypermasculine antlered man ritually impregnates legions of virgins in order to counter declining male fertility. Theodore Sturgeon wrote many stories that emphasised the importance of love regardless of the current social norms, such as "The World Well Lost" (1953), a classic tale involving alien homosexuality, and the novel Venus Plus X (1960), in which a contemporary man awakens in a futuristic place where the people are hermaphrodites. SF literature: Robert A. Heinlein's time-travel short story "All You Zombies" (1959) chronicles a young man (later revealed to be intersex) taken back in time and tricked into impregnating his younger, female self before he underwent a sex change. He then turns out to be the offspring of that union, with the paradoxical result that he is both his own mother and father. SF literature: When Heinlein's "The Puppet Masters" was originally published, it was censored by the publisher to remove various references to sex. The opening scene, where the protagonist is called urgently to HQ at an early morning hour, was rewritten to remove all mention of his being in bed with a girl he had casually picked up.
The published version did mention that the book's alien invaders cause human beings whose bodies they take over to lose sexual feeling – but removed a later section mentioning that after some time on Earth the invaders "discovered sex" and started engaging in wild orgies, even broadcasting them on TV in areas under their control. Thirty years later, with changing mores, Heinlein published the book's full, unexpurgated text. SF literature: In "Time Enough for Love" (1973), Heinlein's recurring protagonist Lazarus Long – who never grows old and has an extremely long and eventful life – travels backward in time to the period of his own childhood. As an unintentional result, he falls in love with his own mother. He has no guilt feelings about pursuing and eventually consummating that relationship – considering her simply as an extremely attractive young woman named Maureen who just happens to have given birth to him thousands of years ago (as far as his personal timeline is concerned). The sequel, "To Sail Beyond the Sunset", takes place after Maureen has discovered the true identity of her lover – and shows that, for her part, she was more amused than shocked or angry. Poul Anderson's 1958 novel "War of the Wing-Men" centers on a species of winged intelligent creatures, and sexual differences are central to its plot. Of the two mutually hostile societies featured in the book, one practices monogamous marriage, while the other holds several days of wild, indiscriminate orgy every spring – and complete celibacy for the rest of the year. Ironically, both societies alike consider themselves chaste and the other depraved: "We keep faithful to our mates while they fuck around indiscriminately – disgusting!"; "We keep sex where it belongs, to one week per year where you are not really yourself. They do it all over the year – disgusting!". Humans who land on the planet intervene in the centuries-long war by showing members of the two societies that they are not all that different from each other. SF literature: Another Poul Anderson novel of the same period, Virgin Planet (1959), deals in a straightforward manner with homosexuality and polyamory on an exclusively female world. The plot twist is that the protagonist is the only male on a world of women, and though quite a few of them are interested in sex with him, it is never consummated during his sojourn on the planet. SF literature: A mirror image was presented by A. Bertram Chandler in Spartan Planet (1969), featuring an exclusively male world, where by definition homosexual relations are the normal (and only) sexual relations. The plot revolves around the explosive social upheaval resulting when the planet is discovered by a spaceship from the wider galaxy, whose crew includes both men and women. Until the late 1960s, few other writers depicted alternative sexuality or revised gender roles, or openly investigated sexual questions. More conventionally, A. Bertram Chandler's books include numerous episodes of free-fall sex, with his characters (male and female alike) strongly prone to extramarital relations and tending to while away the boring months-long deep-space voyages by forming complicated love triangles. Plots and themes: New Wave era (1960–70s): By the late 1960s, science fiction and fantasy began to reflect the changes prompted by the civil rights movement and the emergence of a counterculture.
Within the genres, these changes were incorporated into a movement called "the New Wave," a movement more skeptical of technology, more liberated socially, and more interested in stylistic experimentation. New Wave writers were more likely to claim an interest in "inner space" instead of outer space. They were less shy about explicit sexuality and more sympathetic to reconsiderations of gender roles and the social status of sexual minorities. Notable authors who often wrote on sexual themes included Joanna Russ, Thomas M. Disch, John Varley, James Tiptree, Jr., and Samuel R. Delany. Under the influence of New Wave editors and authors such as Michael Moorcock (editor of the influential New Worlds magazine) and Ursula K. Le Guin, sympathetic depictions of alternative sexuality and gender multiplied in science fiction and fantasy, becoming commonplace. In Brian Aldiss's 1960 novel The Interpreter (published in the US as Bow Down to Nul), Earth is a backwater colony planet in the galactic empire of the Nuls, a giant, three-limbed, civilised alien race. The plot, dealing with complicated relations between humans and their Nul rulers, touches among other things on Nul sex. The Nuls wear no clothes, but their equivalent of hands and arms are wide membranes which are normally held in a fixed position before the body, not moving even when the "fingers" are manipulating a tool. Only in a sexual context are the hands moved aside, to reveal the genital organs behind – the equivalent of humans undressing. In one scene, the human protagonist is able to tune in to an erotic (or pornographic) Nul sensory device, made for internal Nul consumption and not intended for humans, which replicates the wild ecstasy felt by Nuls when daring to move aside their membrane hands and reveal their bodies to each other – similar in some ways to human sexual arousal but also very different. Plots and themes: Robert A. Heinlein's Stranger in a Strange Land (1961) and The Moon Is a Harsh Mistress (1966) both depict heterosexual group marriages and public nudity as desirable social norms, while in Heinlein's Time Enough for Love (1973), the main character argues strongly for the future liberty of homosexual sex. Heinlein's character Lazarus Long, travelling back in time to the period of his own childhood, discovers, to his surprise and (initial) shame, a sexual desire for his own mother – but overcoming this initial shame, he comes to think of her simply as "Maureen", an attractive young woman who is far from indifferent to him. Plots and themes: Samuel R. Delany's Nebula Award-winning short story "Aye, and Gomorrah" (1967) posits the development of neutered human astronauts, and then depicts the people who become sexually oriented toward them. By imagining a new gender and resultant sexual orientation, the story allows readers to reflect on the real world while maintaining an estranging distance. In his 1975 science fiction novel Dhalgren, Delany colors his large canvas with characters of a wide variety of sexualities. Once again, sex is not the focus of the novel, although it does contain some of the first explicitly described scenes of gay sex in science fiction. Delany depicts, mostly with affection, characters with a wide variety of motivations and behaviours, with the effect of revealing to the reader the fact that these kinds of people exist in the real world. In later works, Delany blurs the line between science fiction and gay pornography.
Delany faced resistance from book distribution companies for his treatment of these topics. In 1968, Anne McCaffrey's Dragonflight launched the Dragonriders of Pern series, depicting the lives of humans living in close partnership with dragons. In a key scene, the young golden Dragon Queen takes off on her mating flight, pursued by the male dragons – until finally one of them catches up with her and they engage in passionate mating high up in the air, their necks and wings curled around each other. On the ground, the woman and man who are these dragons' riders share their passion telepathically – and inevitably wildly embrace and kiss, embarking on a parallel human mating. Plots and themes: Ursula K. Le Guin explores radically alternative forms of sexuality in The Left Hand of Darkness (1969) and again in "Coming of Age in Karhide" (1995), which imagine the sexuality of an alien "human" species in which individuals are neither "male" nor "female," but undergo a monthly sexual cycle in which they randomly experience the activation of either male or female sexual organs and reproductive abilities; this makes them in a sense bisexual, and in other senses androgynous or hermaphroditic. It is common for an individual of that species to undergo pregnancy and birth-giving at some point in life, while at another time taking the male role and impregnating somebody else. In the novel, the Gethenian political leader, who appears externally male, becomes pregnant. Plots and themes: Le Guin has written considerations of her own work in two essays, "Is Gender Necessary?" (1976) and "Is Gender Necessary? Redux" (1986), which respond to feminist and other criticism of The Left Hand of Darkness. In these essays, she makes it clear that the novel's assumption that Gethenians would automatically find a mate of the gender opposite to the gender they were becoming produced an unintended heteronormativity. Le Guin has subsequently written many stories that examine the possibilities science fiction allows for non-traditional sexuality, such as the sexual bonding between clones in "Nine Lives" (1968) and the four-way marriages in "Mountain Ways" (1996). Plots and themes: The complicated plot of Roger Zelazny's 1970 fantasy novel The Guns of Avalon includes the protagonist Corwin meeting and making love to Dara, who seems a normal (and very attractive) young woman. But at the end of the book, as she walks the powerfully magical "Pattern", she changes: her hair "crackling with static electricity", she seems to grow horns and hoofs, then becomes an enormous cat, then "a bright winged thing of indescribable beauty", followed by "a tower of ashes". Finally, she again becomes a recognizable Dara, but "tall and magnificent, both beautiful and somehow horrible at the same time, her arms raised in exultation and inhuman laughter flowing from her lips". Shocked, Corwin wonders: "had I truly held, caressed, made love to - that?". While Corwin struggles with feeling mightily repelled and simultaneously attracted as never before, the changed Dara declares herself his mortal enemy and nemesis - and disappears. Zelazny's publishers had no problem with this final scene and its ambiguous sexual connotations, but they did object to an earlier sex scene - straightforward but explicit by 1970s American publishing standards - between Corwin and the seemingly normal Dara. Zelazny was amused when the book's editor asked him to remove it "so that sales to libraries would not be jeopardized".
That deleted scene has never appeared with the novel, even in later editions when mores had become more elastic, but was printed for the first time in The Collected Stories of Roger Zelazny, Volume 3: This Mortal Mountain. In his 1972 novel The Gods Themselves, Isaac Asimov describes an alien race with three sexes, all of them necessary for sexual reproduction. One sex produces a form of sperm, another sex provides the energy needed for reproduction, and members of the third sex bear and raise the offspring. All three genders are included in sexual and social norms of expected and acceptable behavior. In this same novel, the hazards and problems of sex in microgravity are described: while people born on the Moon are proficient at it, people from Earth are not. Similarly, Poul Anderson's Three Worlds to Conquer depicts centaur-like beings living on Jupiter who have three genders: female, male and "demi-male". In order to conceive, a female must have sex with both a male and a demi-male within a short time of each other. In the society of the protagonist, there are stable, harmonious three-way families, in effect a formalized ménage à trois, with the three partners on equal terms with each other. An individual in that society feels a strong attachment to all three parents – mother, father and demi-father – who all take part in bringing up the young. Conversely, among the harsh invaders who threaten to destroy the protagonist's homeland and culture, males are totally dominant over both females and demi-males; the latter are either killed at birth or kept in subjugation for reproduction – which the protagonist regards as a barbaric aberration. Plots and themes: In Anderson's satirical story A Feast for the Gods, the Greek god Hermes visits modern America and has casual sex with an American woman, who tells him that she is "on the pill" and does not take seriously Hermes' warning that "The Embrace of a God is always fertile". She ends up pregnant and destined to give birth to a modern demigod. Plots and themes: Feminist science fiction authors imagined cultures in which homo- and bisexuality and a variety of gender models are the norm. Joanna Russ's award-winning short story "When It Changed" (1972), portraying a female-only lesbian society that flourished without men, and her novel The Female Man (1975), were enormously influential. Russ was largely responsible for introducing radical lesbian feminism into science fiction. The bisexual female writer Alice Bradley Sheldon, who used James Tiptree, Jr. as her pen name, explored the sexual impulse as her main theme. Some stories by Tiptree portray humans becoming sexually obsessed with aliens, such as "And I Awoke and Found Me Here on the Cold Hill's Side" (1972), or aliens being sexually abused. The Girl Who Was Plugged In (1973) is an early precursor of cyberpunk that depicts a relationship via a cybernetically controlled body. In her award-winning novella Houston, Houston, Do You Read? (1976), Tiptree presents a female-only society after the extinction of men from disease. The society lacks stereotypically "male" problems such as war but is stagnant. The women reproduce via cloning, and consider men to be comical. Plots and themes: In Robert Silverberg's novelette The Way to Spook City, the protagonist meets and has an affair with a woman named Jill, who seems completely human – convincingly, passionately female.
Increasingly in love with her, he still has a nagging suspicion that she is in fact a disguised member of the mysterious extraterrestrial species known as "Spooks", who had invaded and taken over a large part of the United States. Until the end, he repeatedly grapples with two questions: Is she human or a Spook? And if she is a Spook, could the two of them nevertheless build a life together? In the centuries-long, futile space war described in Joe Haldeman's The Forever War, the protagonist's increasing feeling of alienation is manifested, among other things, when he is appointed as the commanding officer of a "strike force" whose soldiers are exclusively homosexual, and who resent being commanded by a heterosexual. Later in the book, he finds that while he was fighting in space, humanity had begun to clone itself, resulting in a new, collective species calling itself simply Man. Luckily for the protagonist, Man has established several colonies of old-style, heterosexual humans, just in case the evolutionary change proves to be a mistake. In one of these colonies, the protagonist is happily reunited with his long-lost beloved, and they embark upon monogamous marriage and on having children through sexual reproduction and female pregnancy – an incredibly archaic and old-fashioned way of life for most of that time's humanity. Plots and themes: Elizabeth A. Lynn's science fiction novel A Different Light (1978) features a same-sex relationship between two men and inspired the name of the LGBT bookstore chain A Different Light. Lynn's The Chronicles of Tornor (1979–80) series of novels, the first of which won the World Fantasy Award, were among the first fantasy novels to include gay relationships as an unremarkable part of the cultural background. Lynn also wrote novels depicting sadomasochism. Plots and themes: John Varley, who also came to prominence in the 1970s, is another writer who examined sexual themes in his work. In his "Eight Worlds" suite of stories and novels, humanity has achieved the ability to change sex quickly, easily and completely reversibly – leading to a casual attitude in which people change their sex back and forth as the sudden whim takes them. Homophobia is shown as initially inhibiting the uptake of this technology, as it engenders drastic changes in relationships, with bisexuality becoming the default mode for society. Sexual themes are central to the story "Options": a married woman, Cleo, living in King City, undergoes a change to male despite her husband's objections. As "Leo" she finds out what it means to be a man in her society and even becomes her husband's best friend. She also learns that people are adopting new names that are historically neither male nor female. She eventually returns to female as "Nile". Varley's Gaea trilogy (1979-1984) features lesbian protagonists. Plots and themes: Female characters in science fiction films, such as Barbarella (1968), often continued to be portrayed as simple sex kittens. Modern SF (post-New Wave): After the pushing back of boundaries in the 1960s and 70s, sex in genre science fiction gained wider acceptance and was often incorporated into otherwise conventional science fiction stories with little comment. Plots and themes: In 1968, Jack Vance introduced the Planet of Adventure, inhabited by four different alien races, each with its own distinct society and culture.
One of these – the predatory, part feline, part bird-like Dirdir – is described as having a very complex sexuality, with many different genders that lead to many different combinations of gender compatibility when it comes to sex and breeding, though each breeding still seems to involve only two individuals. Plots and themes: Jack L. Chalker's Well World series, launched in 1977, depicts a world – designed by the super science of a vanished extraterrestrial race, the Markovians – which is divided into numerous "hexes", each inhabited by a different sentient race. Anyone entering one of these hexes is transformed into a member of the local race. This plot device gives wide scope for exploring the divergent biology and cultures of the various species – including their sex lives. For example, a human entering a hex inhabited by an insectoid intelligent race is transformed into a female of that species, feels sexual desire for a male and mates with him. Too late, she discovers that in this species pregnancy is fatal – the mother is devoured from the inside by her larvae. Plots and themes: In a later part, a very macho villain gains control of a supercomputer whose powers include the ability to "redesign" people's bodies to almost any specification. He uses the computer to give himself a "super-virile" body, capable of a virtually unlimited number of erections and ejaculations – and then proceeds to transform his male enemies into beautiful women and induce in them a strong sexual desire towards himself. However, a computer breakdown restores to these captives their normal minds. Though they are still in women's bodies, these bodies were designed with great strength and stamina, so as to enable them to undergo repeated sexual encounters. Thus, they are well equipped to chase, catch and suitably punish their abuser. Plots and themes: In Frederik Pohl's Jem, humans exploring the eponymous planet Jem discover by experience that the local beings emit a milt which has a strong aphrodisiac effect on humans. Characters who were hitherto not at all drawn to each other find themselves suddenly involved in wild, uncontrollable sex. In the ironic ending, their descendants who colonize the planet and build up a distinctive society and culture develop the custom of celebrating Christmas by deliberately stimulating the local beings into emitting the milt, and then taking off their clothes and engaging in a wild indiscriminate orgy – their copulations accompanied by a chorus of the planet's enslaved indigenous beings, who were taught to sing "Good King Wenceslas", the song's Christian significance long forgotten. Plots and themes: Also set on an alien planet, Octavia E. Butler's acclaimed short story "Bloodchild" (1984) depicts the complex relationship between human refugees and the insect-like aliens who keep them in a preserve to protect them, but also to use them as hosts for breeding their young. Sometimes called Butler's "pregnant man story," "Bloodchild" won the Nebula Award, Hugo Award, and Locus Award.
Other works of Butler's explore miscegenation, non-consensual sex, and hybridity. In Robert Silverberg's 1982 novella Homefaring, the protagonist enters the mind of an intelligent lobster of the very far future and experiences all aspects of lobster life, including sex: "He approached a female, knowing precisely which one was the appropriate one, and sang to her, and she acknowledged his song with a song of her own, and raised her third pair of legs for him, and let him plant his gametes beside her oviducts. There was no apparent pleasure in it, as he remembered pleasure from his time as a human. Yet it brought him a subtle but unmistakable sense of fulfillment, of the completion of biological destiny, that had a kind of orgasmic finality about it, and left him calm and anchored at the absolute dead center of his soul". When finally returning to his human body and his human lover, he keeps longing for the lobster life, for "his mate and her millions of larvae". Plots and themes: Quentin and Alice, the extremely shy and insecure protagonists of Lev Grossman's fantasy novel The Magicians, spend years as fellow students at a School of Magic without admitting to being deeply in love with each other. Only the experience of being magically turned into foxes enables them at last to break through their reserve: "Increasingly, Quentin noticed one scent more than the others. It was a sharp, acrid, skunky musk that probably would have smelled like cat piss to a human being, but to a fox it was like a drug. He tackled the source of the smell, buried his snuffling muzzle in her fur, because he had known all along, with what was left of his consciousness, that what he was smelling was Alice. Vulpine hormones and instincts were powering up, taking over, manhandling what was left of his rational human mind." The next sequence depicts animal sex: "He locked his teeth in the thick fur of her neck. It didn't seem to hurt her any, or at least not in a way that was easily distinguishable from pleasure. He caught a glimpse of Alice's wild, dark fox eyes rolling with terror and then half shutting with pleasure. Their tiny quick breaths puffed white in the air and mingled and disappeared. Her white fox fur was coarse and smooth at the same time, and she made little yipping sounds every time he pushed himself deeper inside her. He never wanted to stop". When resuming their human bodies, Quentin and Alice are initially even more shy and awkward with each other, and only after going through some harrowing magical experiences are they finally able to have human sex. Plots and themes: Lois McMaster Bujold explores many areas of sexuality in the multiple award-winning novels and stories of her Vorkosigan Saga (1986-ongoing), which are set in a fictional universe influenced by the availability of uterine replicators and significant genetic engineering. These areas include an all-male society, promiscuity, monastic celibacy, hermaphroditism, and bisexuality. Plots and themes: In the Mythopoeic Award-winning novel Unicorn Mountain (1988), Michael Bishop includes a gay male AIDS patient among the carefully drawn central characters who must respond to an irruption of dying unicorns at their Colorado ranch. The death of the hedonistic gay culture, and the safe-sex campaign resulting from the AIDS epidemic, are explored both literally and metaphorically. Sex has a major role in Harry Turtledove's 1990 novel A World of Difference, taking place on the planet Minerva (a more habitable analogue of Mars).
Minervan animals (including the sentient Minervans) are radially symmetrical, with six-fold symmetry. This means that they have six eyes spaced equally all around, see in all directions, and have no "back" where somebody could sneak up on them unnoticed. Females (referred to as "mates" by the Minervans) give birth to litters that consist of one male and five females, and the "mates" always die after reproducing, of torrential bleeding from the places where the six fetuses were attached; this gives a fivefold population multiplication per generation if all females live to adolescence and reproduce. Plots and themes: Females reach puberty while still hardly out of childhood, and typically experience sex only once in their lifetime – leading to pregnancy and death in giving birth. Thus, in Minervan society male dominance seems truly determined by a biological imperative – though it takes different forms in various Minervan societies: in some, females are considered expendable and traded as property; in others, they are cherished and their tragic fate mourned – but still, their dependent status is taken for granted. The American women arriving on Minerva and discovering this situation consider it intolerable; a major plot element is their efforts, using the resources of Earth medical science, to find a way of saving the Minervan females and letting them survive birth-giving. At the end, they do manage to save a particularly sympathetic Minervan female – potentially opening the way for a complete upheaval in Minervan society. Plots and themes: Sex is also an important ingredient in another of Harry Turtledove's works, the Worldwar series of alternative history, based on the premise of reptilian extraterrestrials, nicknamed "The Lizards", invading Earth in 1942, forcing humans to terminate the Second World War and unite against this common enemy. As depicted by Turtledove, the "Lizards" have no concept whatever that sex ought to be private, and they engage in it in public as in any other activity. This leads to human beings in areas occupied by them feeling shocked and outraged by the "immorality" of their new masters - especially as the invaders, preferring hot climates, prioritize conquest of the Arab and Islamic countries. For their part, the invaders are genuinely puzzled by the humans' insistence on having privacy for sex and their outrage when reptile warriors walk in on them when engaged in it. As gradually becomes clear, on their home planet the "Lizards" have a clearly defined mating season, when normal activity ceases and they engage in a days-long, indiscriminate orgy; as their young can fend for themselves from the moment of hatching from the egg, there is no parental care, they have no marriages or families, and thus there is no reason to establish paternity. Outside the mating season, sex does not occur among them and does not concern them. However, when arriving on Earth they soon discover that ginger, an innocuous spice to humans, acts as a powerful narcotic on the invaders' physiology - and that it causes their females to become sexually active and emit pheromones out of the normal season. This causes an unaccustomed disruption of their daily activity, with females who had taken ginger suddenly becoming sexual, and males and females then feeling compelled to immediately engage in mating before they can resume their daily work.
This also gives rise to the phenomenon of females deliberately taking ginger in return for payment - prostitution having been completely unknown in their society before the arrival on Earth. Plots and themes: In the far-future human colony of Frederik Pohl's The World at the End of Time, the common way to produce new humans is for a geneticist to take DNA samples from two or more "parents" – regardless of their being male or female. The DNA is then combined in a laboratory, and the parents arrive to pick up the baby nine months later. The few couples who prefer to do it in the old-fashioned way, a man sexually impregnating a woman, are considered strange but harmless eccentrics. Plots and themes: Glory Season (1993) by David Brin is set on the planet Stratos, inhabited by a strain of human beings designed to conceive clones in winter, and normal children in summer. All clones are female, because males cannot reproduce themselves individually. Further, males and females have opposed seasons of sexual receptivity; women are sexually receptive in winter, and men in summer. (This unusual heterogamous reproductive cycle is known to be evolutionarily advantageous for some species of aphids.) The novel treats themes of separatist feminism and biological determinism. Plots and themes: Elizabeth Bear's novel Carnival (2006) revisits the trope of the single-gender world, as a pair of gay male ambassador-spies attempt to infiltrate and subvert the predominately lesbian civilization of New Amazonia, whose matriarchal rulers have all but enslaved their men. The fantasy world of Scott Lynch's 2007 Red Seas Under Red Skies offers a new variation on the long-established genre of pirate literature – depicting a pirate ship which is run on the basis of complete gender equality. The pirate crew is composed of a roughly equal number of men and women, and crew members may freely engage in sex – homosexual or heterosexual, as they choose – when off duty. Since shipboard life offers little chance of privacy, the sound of people having a noisy orgasm is a normal part of the night-time routine on board the Poison Orchid. Plots and themes: However, any attempt at a sexual act without the other person's consent is punished immediately and severely. The formidable Captain Zamira Drakasha is raising her two children aboard, and is well able to combine being a deadly fighter and strict disciplinarian with her role as a loving and doting mother – but having children aboard is a privilege reserved to the Captain alone; other female pirates who get pregnant must leave their children on shore. Plots and themes: The plot of The Tamír Triad by Lynn Flewelling has a major transsexual element. To begin with, the protagonist, Prince Tobin, is to all appearances male – both in his own perception and in that of others. Boys who swim naked together with Tobin have no reason to doubt his male anatomy. Yet, for magical reasons which are an important part of the plot, in underlying, essential identity Tobin had always been a disguised girl. In the series' cataclysmic scene of magical change, this becomes an evident physical fact, and Prince Tobin becomes Queen Tamír, shedding the male body and gaining a fully functioning female one. Yet it takes Tamír considerable time and effort to come to terms with her female sexuality.
Plots and themes: In Lateral Magazine's "The freedom of a genre: Sexuality in speculative fiction": 'In another twist of today's society, Nontraditional Love by Rafael Grugman (2008) puts together an upside-down society where heterosexuality is outlawed, and homosexuality is the norm. A 'traditional' family unit consists of two dads with a surrogate mother. Alternatively, two mothers, one of whom bears a child. In a nod to the always-progressive Netherlands, this country is the only country progressive enough to allow opposite-sex marriage. This is perhaps the most obvious example of cognitive estrangement. It puts the reader in the shoes of the oppressed by modelling an entire world of opposites around a fairly "normal" everyday heterosexual protagonist. A heterosexual reader would not only be able to identify with the main character but be immersed in a world as oppressive and bigoted as the real world has been for homosexuals and the queer community throughout history.' The 2018 fantasy novel Stone Unturned, set in Lawrence Watt-Evans' magical world of Ethshar, begins with the young wizard Morvash of the Shadows discovering that some of the statues in his uncle's house were real people turned to stone; he sets out to do the right thing. What Morvash considered the most disturbing of these statues "was hidden away in a sort of marble grotto in the garden behind the house, and depicted a young man and a young woman in what might politely be called an intimate embrace, or a compromising position. Plots and themes: They were not in the sort of elegant pose that artists use for erotica, with graceful lines displaying the female's curves and the male's muscles. They were in an earthier position. The woman — a girl, really — was on her back, with her knees drawn up to her chest and her head raised as her blank stone eyes stared perpetually at the man's belly. Her mouth was open as if panting. Her partner was kneeling between her legs, leaning forward over her, one hand grabbing her shoulder, the other occupied elsewhere. His eyes were closed, but his mouth was also open; Morvash thought it was more of a moan than a pant. He could almost smell the sweat. Neither wore any clothing whatsoever, nor were there any artfully-placed draperies or fig leaves to obscure the details. Had the wizard responsible for this petrifaction timed it deliberately, or had he caught them in this position by accident?" Eventually, it turns out that the couple were Prince Marek of Melitha and Darissa the Witch's Apprentice, who had fallen deeply in love with each other during a war that threatened their kingdom and who sought to celebrate victory with a bout of intensive love-making in the privacy of the Prince's bedchamber – but were surprised and turned into stone by a wizard in the employ of the Prince's envious sister, who sought to seize the throne. Plots and themes: Afterwards, the couple spent forty petrified years, dimly conscious, perpetually caught in their sexual act and forming a prized item in Lord Landessin's sculpture collection. When the wizard Morvash finally manages to bring them back to life, they find themselves lying on the floor in a big hall, surrounded by various other people who were also revived from petrifaction, and hasten to disengage and look for something to cover their nakedness. After various other adventures, they finally get married and, fully clothed, mount the throne of Melitha as King and Queen.
**Fox hunting** Fox hunting: Fox hunting is a traditional activity involving the tracking, chase and, if caught, the killing of a fox, normally a red fox, by trained foxhounds or other scent hounds. A group of unarmed followers, led by a "master of foxhounds" (or "master of hounds"), follow the hounds on foot or on horseback. Fox hunting with hounds, as a formalised activity, originated in England in the sixteenth century, in a form very similar to that practised until February 2005, when a law banning the activity in England and Wales came into force. A ban on hunting in Scotland had been passed in 2002, but the activity remains within the law in Northern Ireland and several other jurisdictions, including Australia, Canada, France, the Republic of Ireland and the United States. The sport is controversial, particularly in the United Kingdom. Proponents of fox hunting view it as an important part of rural culture and useful for reasons of conservation and pest control, while opponents argue it is cruel and unnecessary. History: The use of scenthounds to track prey dates back to Assyrian, Babylonian, and ancient Egyptian times, and was known as venery. Europe Many Greek- and Roman-influenced countries have long traditions of hunting with hounds. Hunting with Agassaei hounds was popular in Celtic Britain even before the Romans arrived and introduced the Castorian and Fulpine hound breeds, which they used to hunt. Norman hunting traditions were brought to Britain when William the Conqueror arrived, along with the Gascon and Talbot hounds. History: Foxes were referred to as beasts of the chase by medieval times, along with the red deer (hart & hind), martens, and roes, but the earliest known attempt to hunt a fox with hounds was in Norfolk, England, in 1534, when farmers began chasing foxes down with their dogs for the purpose of pest control. The last wolf in England was killed in the early sixteenth century during the reign of Henry VII, leaving the English fox with no threat from larger predators. The first use of packs specifically trained to hunt foxes was in the late 1600s, with the oldest fox hunt probably being the Bilsdale in Yorkshire. By the end of the seventeenth century, deer hunting was in decline. The Inclosure Acts brought fences to separate formerly open land into many smaller fields, deer forests were being cut down, and arable land was increasing. With the onset of the Industrial Revolution, people began to move out of the country and into towns and cities to find work. Roads, railway lines, and canals all split hunting countries, but at the same time they made hunting accessible to more people. Shotguns were improved during the nineteenth century and the shooting of gamebirds became more popular. Fox hunting developed further in the eighteenth century when Hugo Meynell developed breeds of hound and horse to address the new geography of rural England. In Germany, hunting with hounds (which tended to be deer or boar hunting) was first banned on the initiative of Hermann Göring on 3 July 1934. In 1939, the ban was extended to cover Austria after Germany's annexation of the country. Bernd Ergert, the director of Germany's hunting museum in Munich, said of the ban, "The aristocrats were understandably furious, but they could do nothing about the ban given the totalitarian nature of the regime."
United States According to the Masters of Foxhounds Association of America, Englishman Robert Brooke was the first man to import hunting hounds to what is now the United States, bringing his pack of foxhounds to Maryland in 1650, along with his horses. Also around this time, numbers of European red foxes were introduced into the Eastern seaboard of North America for hunting. The first organised hunt for the benefit of a group (rather than a single patron) was started by Thomas, sixth Lord Fairfax, in 1747. In the United States, George Washington and Thomas Jefferson both kept packs of fox hounds before and after the American Revolutionary War. History: Australia In Australia, the European red fox was introduced solely for the purpose of fox hunting in 1855. Native animal populations have been very badly affected, with the extinction of at least 10 species attributed to the spread of foxes. Fox hunting with hounds is mainly practised in the east of Australia. In the state of Victoria there are thirteen hunts, with more than 1,000 members between them. Fox hunting with hounds results in around 650 foxes being killed annually in Victoria, compared with over 90,000 shot over a similar period in response to a State government bounty. History: The Adelaide Hunt Club traces its origins to 1840, just a few years after the colonization of South Australia. Current status: United Kingdom Fox hunting is prohibited in Great Britain by the Protection of Wild Mammals (Scotland) Act 2002 and the Hunting Act 2004 (England and Wales), but remains legal in Northern Ireland. The passing of the Hunting Act was notable in that it was implemented through the use of the Parliament Acts 1911 and 1949, after the House of Lords refused to pass the legislation despite the Commons passing it by a majority of 356 to 166. After the ban on fox hunting, hunts in Great Britain switched to legal alternatives, such as drag hunting and trail hunting. The Hunting Act 2004 also permits some previously unusual forms of hunting wild mammals with dogs to continue, such as "hunting... for the purpose of enabling a bird of prey to hunt the wild mammal". Opponents of hunting, such as the League Against Cruel Sports, claim that some of these alternatives are a smokescreen for illegal hunting or a means of circumventing the ban. Supporters of fox hunting, meanwhile, claim that the number of foxes killed has increased since the Hunting Act came into force, both by the hunts (through lawful methods) and by landowners, and that hunts have reported an increase in membership. Tony Blair wrote in A Journey, his memoirs published in 2010, that the Hunting Act of 2004 is 'one of the domestic legislative measures I most regret'. Current status: United States In America, fox hunting is also called "fox chasing", as it is the practice of many hunts not to actually kill the fox (the red fox is not regarded as a significant pest). Some hunts may go without catching a fox for several seasons, despite chasing two or more foxes in a single day's hunting. Foxes are not pursued once they have "gone to ground" (taken refuge in a hole). American fox hunters undertake stewardship of the land, and endeavour to maintain fox populations and habitats as much as possible. In many areas of the eastern United States, the coyote, a natural predator of the red and grey fox, is becoming more prevalent and threatens fox populations in a hunt's given territory.
In some areas, coyotes are considered fair game when hunting with foxhounds, even if they are not the intended species being hunted. Current status: In 2013, the Masters of Foxhounds Association of North America listed 163 registered packs in the US and Canada. This number does not include the non-registered (also known as "farmer" or "outlaw") packs. Baily's Hunting Directory lists 163 foxhound or draghound packs in the US and 11 in Canada. In some arid parts of the Western United States, where foxes in general are more difficult to locate, coyotes are hunted and, in some cases, bobcats. Current status: Other countries The other main countries in which organized fox hunting with hounds is practised are Ireland (which has 41 registered packs), Australia, France (where hunting with hounds is also used for other animals such as deer, wild boar, hare or rabbit), Canada and Italy. There is one pack of foxhounds in Portugal, and one in India. Although there are 32 packs for the hunting of foxes in France, hunting there tends to take place mainly on a small scale and on foot, with mounted hunts tending to hunt red or roe deer, or wild boar. In Portugal fox hunting is permitted (Decree-Law no. 202/2004), but there have been popular protests and initiatives to abolish it, including a petition with more than 17,500 signatures handed over to the Assembly of the Republic on 18 May 2017 and a parliamentary hearing in 2018. In Canada, the Masters of Foxhounds Association of North America lists seven registered hunt clubs in the province of Ontario, one in Quebec, and one in Nova Scotia. Ontario issues licenses to registered hunt clubs, authorizing their members to pursue, chase or search for foxes, although the primary target of the hunts is coyotes. Quarry animals: Red fox The red fox (Vulpes vulpes) is the normal prey animal of a fox hunt in the US and Europe. A small omnivorous predator, the fox lives in burrows called earths, and is predominantly active around twilight (making it a crepuscular animal). Adult foxes tend to range around an area of between 5 and 15 square kilometres (2–6 square miles) in good terrain, although in poor terrain their range can be as much as 20 square kilometres (7.7 sq mi). The red fox can run at up to 48 km/h (30 mph). The fox is also variously known as a tod (an old English word for fox), Reynard (the name of an anthropomorphic character in European literature from the twelfth century), or Charlie (named for the Whig politician Charles James Fox). American red foxes tend to be larger than European forms, but according to foxhunters' accounts, they have less cunning, vigour and endurance in the chase than European foxes. Quarry animals: Coyote, grey fox, and other quarry Species other than the red fox may be the quarry for hounds in some areas. The choice of quarry depends on the region and the numbers available. Quarry animals: The coyote (Canis latrans) is a significant quarry for many hunts in North America, particularly in the west and southwest, where there are large open spaces. The coyote is an indigenous predator that did not range east of the Mississippi River until the latter half of the twentieth century. The coyote is faster than a fox, running at up to 65 km/h (40 mph), and also wider-ranging, with a territory of up to 283 square kilometres (109 sq mi), so a much larger hunt territory is required to chase it. However, coyotes tend to be less challenging intellectually, as they offer a straight-line hunt instead of the convoluted fox line.
Coyotes can be challenging opponents for the dogs in physical confrontations, despite the size advantage of a large dog. Coyotes have larger canine teeth and are generally more practised in hostile encounters. The grey fox (Urocyon cinereoargenteus), a distant relative of the European red fox, is also hunted in North America. It is an adept climber of trees, making it harder to hunt with hounds. The scent of the grey fox is not as strong as that of the red, so more time is needed for the hounds to take the scent. Unlike the red fox, which during the chase will run far ahead of the pack, the grey fox will speed toward heavy brush, making it more difficult to pursue. Also unlike the red fox, which occurs more prominently in the northern United States, the more southern grey fox is rarely hunted on horseback, due to its densely covered habitat preferences. Quarry animals: Hunts in the southern United States sometimes pursue the bobcat (Lynx rufus). In countries such as India, and in other areas formerly under British influence, such as Iraq, the golden jackal (Canis aureus) is often the quarry. During the British Raj, British sportsmen in India would hunt jackals on horseback with hounds as a substitute for the fox hunting of their native England. Unlike foxes, golden jackals were documented to be ferociously protective of their pack mates, and could seriously injure hounds. Jackals were not hunted often in this manner, as they were slower than foxes and could scarcely outrun greyhounds after 200 yards. Alternatives to hunting live prey: Following the ban on fox hunting in Great Britain, hunts switched to legal alternatives in order to preserve their traditional practices, although many had previously claimed this would be impossible and that their hound packs would have to be destroyed. Most hunts turned primarily to trail hunting, which anti-hunt organisations claim is just a smokescreen for illegal hunting. Some anti-hunting campaigners have urged hunts to switch to the established sport of drag hunting instead, as this involves significantly less risk of wild animals being accidentally caught and killed. Alternatives to hunting live prey: Trail hunting A controversial alternative to hunting animals with hounds. A trail of animal urine (most commonly fox) is laid in advance of the 'hunt', and then tracked by the hound pack and a group of followers on foot, on horseback, or both. Because the trail is laid using animal urine, and in areas where such animals naturally occur, hounds often pick up the scent of live animals, sometimes resulting in them being caught and killed. Alternatives to hunting live prey: Drag hunting An established sport which dates back to the nineteenth century. Hounds follow an artificial scent, usually aniseed, laid along a set route which is already known to the huntsmen. A drag hunt course is set in a similar manner to a cross-country course, following a route over jumps and obstacles. Because it is predetermined, the route can be tailored to keep hounds away from sensitive areas known to be populated by animals which could be confused for prey. Alternatives to hunting live prey: Hound trailing Similar to drag hunting, but in the form of a race, usually of around 10 mi (16 km) in length. Unlike other forms of hunting, the hounds are not followed by humans. Clean boot hunting Clean boot hunting uses packs of bloodhounds to follow the natural trail of a human's scent.
Animals of the hunt: Hounds and other dogs Fox hunting is usually undertaken with a pack of scent hounds, and, in most cases, these are specially bred foxhounds. Animals of the hunt: These dogs are trained to pursue the fox based on its scent. The two main types of foxhound are the English Foxhound and the American Foxhound. It is possible to use a sight hound such as a Greyhound or lurcher to pursue foxes, though this practice is not common in organised hunting, and these dogs are more often used for coursing animals such as hares. There is also one pack of beagles in Virginia that hunts foxes; it is unique in being the only hunting beagle pack in the US to be followed on horseback. English Foxhounds are also used for hunting mink. Animals of the hunt: Hunts may also use terriers to flush or kill foxes that are hiding underground, as they are small enough to pursue the fox through narrow earth passages. This is not practised in the United States, where once the fox has gone to ground and is accounted for by the hounds, it is left alone. Animals of the hunt: Horses The horses, called "field hunters" or hunters, ridden by members of the field, are a prominent feature of many hunts, although others are conducted on foot (and those hunts with a field of mounted riders will also have foot followers). Horses on hunts can range from specially bred and trained field hunters to casual hunt attendees riding a wide variety of horse and pony types. Draft and Thoroughbred crosses are commonly used as hunters, although purebred Thoroughbreds and horses of many different breeds are also used. Animals of the hunt: Some hunts with unique territories favour certain traits in field hunters; for example, when hunting coyote in the western US, a faster horse with more stamina is required to keep up, as coyotes are faster than foxes and inhabit larger territories. Hunters must be well-mannered, have the athletic ability to clear large obstacles such as wide ditches, tall fences, and rock walls, and have the stamina to keep up with the hounds. In English foxhunting, the horses are often half or quarter Irish Draught, with the remainder English Thoroughbred. Depending on terrain, and to accommodate different levels of ability, hunts generally have alternative routes that do not involve jumping. The field may be divided into two groups: the First Field, which takes a more direct but demanding route involving jumps over obstacles, and the Second Field (also called Hilltoppers or Gaters), which takes longer but less challenging routes that utilise gates or other types of access on the flat. Animals of the hunt: Birds of prey In the United Kingdom, since the introduction of the hunting ban, a number of hunts have employed falconers to bring birds of prey to the hunt, due to the exemption in the Hunting Act for falconry. Many experts, such as the Hawk Board, deny that any bird of prey can reasonably be used in the British countryside to kill a fox which has been flushed by (and is being chased by) a pack of hounds. Procedure: Main hunting season The main hunting season usually begins in early November in the northern hemisphere, and in May in the southern hemisphere. Procedure: A hunt begins when the hounds are put, or cast, into a patch of woods or brush where foxes are known to lay up during daylight hours, known as a covert (pronounced "cover"). If the pack manages to pick up the scent of a fox, they will track it for as long as they are able.
Scenting can be affected by temperature, humidity, and other factors. If the hounds lose the scent, a check occurs. Procedure: The hounds pursue the trail of the fox and the riders follow, by the most direct route possible. This may involve very athletic skill on the part of horse and rider, and fox hunting has given birth to some traditional equestrian sports, including steeplechase and point-to-point racing. The hunt continues until either the fox goes to ground (evades the hounds and takes refuge in a burrow or den) or is overtaken and usually killed by the hounds. Procedure: Social rituals are important to hunts, although many have fallen into disuse. One of the most notable was the act of blooding. In this ceremony, the master or huntsman would smear the blood of the fox onto the cheeks or forehead of a newly initiated hunt follower, often a young child. Another practice of some hunts was to cut off the fox's tail (brush), the feet (pads) and the head (mask) as trophies, with the carcass then thrown to the hounds. Both of these practices were widely abandoned during the nineteenth century, although isolated cases may have continued into modern times. Cubbing: In the autumn of each year, hunts accustom the young hounds, which by now are full-size (although not yet sexually mature), to hunting and killing foxes through the practice of cubbing (also called cub hunting or autumn hunting). Cubbing also aims to teach hounds to restrict their hunting to foxes. The activity sometimes incorporates the practice of holding up, where hunt supporters, riders and foot followers surround a covert and drive back foxes attempting to escape, before then drawing the covert with the puppies and some more experienced hounds, allowing them to find and kill foxes within the surrounded wood. A young hound is considered to be "entered" into the pack once he or she has successfully joined in a hunt in this fashion. Cubbing: Weak foxhounds, and those that do not show sufficient aptitude, may be destroyed or drafted to other packs, including minkhound packs. In the US, it is sometimes the practice to have some fox cubs chased but allowed to escape, in order for them to learn evasion techniques and so that they can be tracked again in the future. The Burns Inquiry, established in 1999, reported that an estimated 10,000 fox cubs were killed annually during the cub-hunting season in Great Britain. Cub hunting is now illegal in Great Britain, although anti-hunt associations maintain that the practice remains widespread. People: Hunt staff and officials As a social ritual, participants in a fox hunt fill specific roles, the most prominent of which is the master; there is often more than one, in which case they are called masters or joint masters. These individuals typically take much of the financial responsibility for the overall management of the sporting activities of the hunt, the care and breeding of the hunt's foxhounds, and the control and direction of its paid staff. People: The Master of Foxhounds (who uses, and may also be referred to by, the post-nominal MFH) or Joint Master of Foxhounds operates the sporting activities of the hunt, maintains the kennels, works with (and sometimes is) the huntsman, and spends the money raised by the hunt club. (Often the master or joint masters are the largest financial contributors to the hunt.) The master has the final say over all matters in the field. People: Honorary secretaries are volunteers (usually one or two) who look after the administration of the hunt.
The Treasurer collects the cap (money) from guest riders and manages the hunt finances. A kennelman looks after hounds in kennels, ensuring that all tasks are completed when pack and staff return from hunting. The huntsman, who may be a professional, is responsible for directing the hounds. The huntsman usually carries a horn to communicate with the hounds, followers and whippers-in. Some huntsmen also fill the role of kennelman (and are therefore known as the kennel huntsman). In some hunts the master is also the huntsman. People: Whippers-in (or "whips") are assistants to the huntsman. Their main job is to keep the pack together, especially to prevent the hounds from straying or 'rioting', a term which refers to the hunting of animals other than the hunted fox or trail line. To help them control the pack, they carry hunting whips (and in the United States they sometimes also carry .22 revolvers loaded with snake shot or blanks). The role of whipper-in has inspired parliamentary systems (including the Westminster system and the US Congress) to use "whip" for a member who enforces party discipline and ensures the attendance of other members at important votes. People: The terrier man carries out fox control. Most hunts where the object is to kill the fox will employ a terrier man, whose job it is to control the terriers which may be used underground to corner or flush out the fox. Voluntary terrier men will often follow the hunt as well. In the UK and Ireland, they often ride quadbikes with their terriers in boxes on their bikes. In addition to members of the hunt staff, a committee may run the Hunt Supporters Club to organise fundraising and social events, and in the United States many hunts are incorporated and have parallel lines of leadership. People: The United Kingdom, Ireland, and the United States each have a Masters of Foxhounds Association (MFHA), which consists of current and past masters of foxhounds. This is the governing body for all foxhound packs, and deals with disputes about boundaries between hunts as well as regulating the activity. People: Attire Mounted hunt followers typically wear traditional hunting attire. A prominent feature of hunts operating during the formal hunt season (usually November to March in the northern hemisphere) is hunt members wearing 'colours'. This attire usually consists of the traditional red coats worn by huntsmen, masters, former masters, whippers-in (regardless of sex), other hunt staff members, and male members who have been invited by masters to wear colours and hunt buttons as a mark of appreciation for their involvement in the organization and running of the hunt. People: Since the Hunting Act in England and Wales, only masters and hunt servants tend to wear red coats or the hunt livery whilst out hunting. Gentleman subscribers tend to wear black coats, with or without hunt buttons. In some countries, women generally wear coloured collars on their black or navy coats; these help them stand out from the rest of the field. People: The traditional red coats are often misleadingly called "pinks". Various theories about the derivation of this term have been given, ranging from the colour of a weathered scarlet coat to the name of a purportedly famous tailor. Some hunts, including most harrier and beagle packs, wear green rather than red jackets, and some hunts wear other colours such as mustard. The colour of breeches varies from hunt to hunt and is generally of one colour, though two or three colours throughout the year may be permitted.
Boots are generally English dress boots (no laces). For the men they are black with brown leather tops (called tan tops), and for the women black with a patent black leather top of similar proportion to the men's. Additionally, the number of buttons is significant: the master wears a scarlet coat with four brass buttons, while the huntsman and other professional staff wear five. Amateur whippers-in also wear four buttons. People: Another differentiation in dress between the amateur and professional staff is found in the ribbons at the back of the hunt cap. The professional staff wear their hat ribbons down, while amateur staff and members of the field wear their ribbons up. Those members not entitled to wear colours dress in a black hunt coat with unadorned black buttons, for both men and women, generally with pale breeches. Boots are all English dress boots with no other distinctive features. Some hunts further restrict the wearing of formal attire to weekends and holidays, wearing ratcatcher (tweed jacket and tan breeches) at all other times. People: Other members of the mounted field follow strict rules of clothing etiquette. For example, in some hunts, those under eighteen (or sixteen in some cases) will wear ratcatcher all season. Those over eighteen (or, in the case of some hunts, all followers regardless of age) will wear ratcatcher during autumn hunting from late August until the Opening Meet, normally around 1 November. From the Opening Meet they will switch to formal hunting attire, where entitled members wear scarlet and the rest black or navy. People: The highest honour is to be awarded the hunt button by the Hunt Master. This sometimes means one can then wear scarlet if male, or the hunt collar if female (colour varies from hunt to hunt), and buttons with the hunt crest on them. For non-mounted packs, or non-mounted members where formal hunt uniform is not worn, the buttons are sometimes worn on a waistcoat. All members of the mounted field should carry a hunting whip (it should not be called a crop). These have a horn handle at the top and a long leather lash (2–3 yards) ending in a piece of coloured cord. Generally all hunting whips are brown, except those of hunt servants, whose whips are white. Controversy: The nature of fox hunting, including the killing of the quarry animal, the pursuit's strong associations with tradition and social class, and its practice for sport, have made it a source of great controversy within the United Kingdom. In December 1999, the then Home Secretary, Jack Straw MP, announced the establishment of a Government inquiry (the Burns Inquiry) into hunting with dogs, to be chaired by the retired senior civil servant Lord Burns. The inquiry was to examine the practical aspects of different types of hunting with dogs and its impact, how any ban might be implemented, and the consequences of any such ban. Amongst its findings, the Burns Inquiry committee analysed opposition to hunting in the UK and reported that: There are those who have a moral objection to hunting and who are fundamentally opposed to the idea of people gaining pleasure from what they regard as the causing of unnecessary suffering. There are also those who perceive hunting as representing a divisive social class system. Others, as we note below, resent the hunt trespassing on their land, especially when they have been told they are not welcome. They worry about the welfare of the pets and animals and the difficulty of moving around the roads where they live on hunt days.
Finally there are those who are concerned about damage to the countryside and other animals, particularly badgers and otters. Controversy: Anti-hunting activists who choose to take action in opposing fox hunting can do so through lawful means, such as campaigning for fox hunting legislation and monitoring hunts for cruelty; some use unlawful means. The main anti-hunting campaign organisations include the RSPCA and the League Against Cruel Sports. In 2001, the RSPCA took high court action to prevent pro-hunt activists joining in large numbers to change the society's policy of opposing hunting. Outside of campaigning, some activists choose to engage in direct intervention, such as sabotage of the hunt. Hunt sabotage is unlawful in a majority of US states, and some tactics used in it (such as trespass and criminal damage) are offences there and in other countries. Fox hunting with hounds has been practised in Europe since at least the sixteenth century, and strong traditions have built up around the activity, as have related businesses, rural activities, and hierarchies. For this reason, there are large numbers of people who support fox hunting, for a variety of reasons. Controversy: Pest control The fox is referred to as vermin in some countries. Some farmers fear the loss of their smaller livestock, while others consider the fox an ally in controlling rabbits, voles, and other rodents, which eat crops. A key reason for dislike of the fox by pastoral farmers is its tendency to commit acts of surplus killing toward animals such as chickens: having killed many, it eats only one. Some anti-hunt campaigners maintain that, provided it is not disturbed, the fox will remove all of the chickens it kills and conceal them in a safer place. Opponents of fox hunting claim that the activity is not necessary for fox control, arguing that the fox is not a pest species despite its classification and that hunting does not and cannot make a real difference to fox populations. They compare the number of foxes killed in the hunt to the many more killed on the roads. They also argue that the wildlife management goals of the hunt can be met more effectively by other methods, such as lamping (dazzling a fox with a bright light, then shooting by a competent shooter using an appropriate weapon and load). There is scientific evidence that fox hunting has no effect on fox populations, at least in Britain, calling into question the idea that it is a successful method of culling. In 2001 there was a one-year nationwide ban on fox hunting because of an outbreak of foot-and-mouth disease; this ban was found to have had no measurable impact on fox numbers in randomly selected areas. Prior to the fox hunting ban in the UK, hounds contributed to the deaths of 6.3% of the 400,000 foxes killed annually. The hunts claim to provide and maintain a good habitat for foxes and other game, and, in the US, have fostered conservation legislation and put land into conservation easements. Anti-hunting campaigners cite the widespread existence of artificial earths, and the historic practice by hunts of introducing foxes, as indicating that hunts do not believe foxes to be pests. It is also argued that hunting with dogs has the advantage of weeding out old, sick, and weak animals, because the strongest and healthiest foxes are those most likely to escape. Therefore, unlike other methods of controlling the fox population, it is argued that hunting with dogs resembles natural selection.
The counter-argument is that hunting cannot weed out old foxes, because foxes have a natural death rate of 65% per annum. In Australia, where foxes have played a major role in the decline in the number of species of wild animals, the Government's Department of the Environment and Heritage concluded that "hunting does not seem to have had a significant or lasting impact on fox numbers." Instead, control of foxes relies heavily on shooting, poisoning and fencing. Controversy: Economics As well as the economic defence of fox hunting (that it is necessary to control the fox population, lest foxes impose economic costs on farmers), it is also argued that fox hunting is a significant economic activity in its own right, providing recreation and jobs for those involved in the hunt and supporting it. The Burns Inquiry identified that between 6,000 and 8,000 full-time jobs depend on hunting in the UK, of which about 700 result from direct hunt employment and 1,500 to 3,000 result from direct employment on hunting-related activities. Since the ban in the UK, there has been no evidence of significant job losses, and hunts have continued to operate along limited lines, either trail hunting or claiming to use exemptions in the legislation. Controversy: Animal welfare and animal rights Many animal welfare groups, campaigners and activists believe that fox hunting is unfair and cruel to animals. They argue that the chase itself causes fear and distress, and that the fox is not always killed instantly as is claimed. Animal rights campaigners also object to hunting (including fox hunting) on the grounds that animals should enjoy some basic rights, such as the right to freedom from exploitation and the right to life. In the United States and Canada, pursuing quarry for the purpose of killing is strictly forbidden by the Masters of Foxhounds Association. According to article 2 of the organisation's code: The sport of fox hunting as it is practised in North America places emphasis on the chase and not the kill. It is inevitable, however, that hounds will at times catch their game. Death is instantaneous. A pack of hounds will account for their quarry by running it to ground, treeing it, or bringing it to bay in some fashion. The Masters of Foxhounds Association has laid down detailed rules to govern the behaviour of Masters of Foxhounds and their packs of hounds. Controversy: There are times when a fox that is injured or sick is caught by the pursuing hounds, but hunts say that such kills are exceptionally rare. Supporters of hunting maintain that when foxes or other prey (such as coyotes in the western USA) are hunted, the quarry is either killed relatively quickly (instantly or in a matter of seconds) or escapes uninjured. Similarly, they say that the animal rarely endures hours of torment and pursuit by hounds, and research by Oxford University shows that the fox is normally killed after an average of 17 minutes of chase. They further argue that, while hunting with hounds may cause suffering, controlling fox numbers by other means is even more cruel. Depending on the skill of the shooter, the type of firearm used, the availability of good shooting positions and luck, shooting foxes can cause either an instant kill or lengthy periods of agony for wounded animals, which can die of the trauma within hours, or of secondary infection over a period of days or weeks.
Research from wildlife hospitals, however, indicates that it is not uncommon for foxes with minor shot wounds to survive. Hunt supporters further say that it is a matter of humanity to kill foxes rather than allow them to suffer malnourishment and mange. Other methods include the use of snares, trapping and poisoning, all of which also cause considerable distress to the animals concerned, and may affect other species. This was considered in the Burns Inquiry (paras 6.60–11), whose tentative conclusion was that lamping using rifles fitted with telescopic sights, if carried out properly and in appropriate circumstances, had fewer adverse welfare implications than hunting. The committee believed that lamping was not possible without vehicular access, and hence said that the welfare of foxes in upland areas could be affected adversely by a ban on hunting with hounds, unless dogs could be used to flush foxes from cover (as is permitted in the Hunting Act 2004). Controversy: Some opponents of hunting criticise the fact that the animal suffering in fox hunting takes place for sport, citing either that this makes such suffering unnecessary and therefore cruel, or else that killing or causing suffering for sport is immoral. The Court of Appeal, in considering the British Hunting Act, determined that the legislative aim of the Hunting Act was "a composite one of preventing or reducing unnecessary suffering to wild mammals, overlaid by a moral viewpoint that causing suffering to animals for sport is unethical." Anti-hunting campaigners have also criticised UK hunts over the destruction of hounds: the Burns Inquiry estimated that foxhound packs put down around 3,000 hounds per year, and the hare hunts around 900, in each case after the hounds' working lives had come to an end. In June 2016, three people associated with the South Herefordshire Hunt (UK) were arrested on suspicion of causing suffering to animals, in response to claims that live fox cubs were used to train hounds to hunt and kill. The organisation Hunt Investigation Team, supported by the League Against Cruel Sports, obtained video footage of an individual carrying a fox cub into a large kennel where the hounds can clearly be heard baying. A dead fox was later found in a rubbish bin. The individuals arrested were suspended from Hunt membership. In August, two more people were arrested in connection with the investigation. Controversy: Civil liberties It is argued by some hunt supporters that no law should curtail the right of a person to do as they wish, so long as it does not harm others. The philosopher Roger Scruton has said, "To criminalise this activity would be to introduce legislation as illiberal as the laws which once deprived Jews and Catholics of political rights, or the laws which outlawed homosexuality". In contrast, the liberal philosopher John Stuart Mill wrote, "The reasons for legal intervention in favour of children apply not less strongly to the case of those unfortunate slaves and victims of the most brutal parts of mankind—the lower animals." The UK's most senior court, the House of Lords, decided that a ban on hunting, in the form of the Hunting Act 2004, does not contravene the European Convention on Human Rights; the European Court of Human Rights reached the same conclusion. Controversy: Trespass In its submission to the Burns Inquiry, the League Against Cruel Sports presented evidence of over 1,000 cases of trespass by hunts. These included trespass on railway lines and into private gardens.
Trespass can occur because the hounds cannot recognise human-created boundaries they are not allowed to cross, and may therefore follow their quarry wherever it goes unless successfully called off. However, in the United Kingdom, trespass is a largely civil matter when committed accidentally. Controversy: Nonetheless, in the UK, the criminal offence of 'aggravated trespass' was introduced in 1994 specifically to address the problems caused to fox hunts and other field sports by hunt saboteurs. Hunt saboteurs trespass on private land to monitor or disrupt the hunt, as this is where the hunting activity takes place. For this reason, the hunt saboteur tactics manual presents detailed information on legal issues affecting this activity, especially the Criminal Justice Act. Some hunt monitors also choose to trespass whilst they observe the hunts in progress. The construction of the law means that hunt saboteurs' behaviour may result in charges of criminal aggravated trespass rather than the less severe offence of civil trespass. Since the introduction of legislation to restrict hunting with hounds, there has been a level of confusion over the legal status of hunt monitors or saboteurs when trespassing: if they disrupt a hunt whilst it is not committing an illegal act (and all hunts claim to be hunting within the law), then they commit an offence; however, if the hunt was conducting an illegal act, then the criminal offence of aggravated trespass may not have been committed. Controversy: Social life and class issues in Britain In Britain, and especially in England and Wales, supporters of fox hunting regard it as a distinctive part of British culture generally, the basis of traditional crafts, and a key part of social life in rural areas: an activity and spectacle enjoyed not only by the riders but also by unmounted followers, who may follow along on foot, by bicycle, or in 4x4 vehicles. They see the social aspects of hunting as reflecting the demographics of the area; the Home Counties packs, for example, are very different from those in North Wales and Cumbria, where the hunts are very much the activity of farmers and the working class. The Banwen Miners Hunt is such a working-class club, founded in a small Welsh mining village, although its membership is now by no means limited to miners, having a more cosmopolitan make-up. Oscar Wilde, in his play A Woman of No Importance (1893), famously described "the English country gentleman galloping after a fox" as "the unspeakable in full pursuit of the uneatable." Even before the time of Wilde, much of the criticism of fox hunting was couched in terms of social class. The argument was that while more "working-class" blood sports such as cock fighting and badger baiting were long ago outlawed, fox hunting persists, although this argument can be countered with the fact that hare coursing, a more "working-class" sport, was outlawed at the same time as fox hunting with hounds in England and Wales. The philosopher Roger Scruton has said that the analogy with cockfighting and badger baiting is unfair, because those sports were more cruel and did not involve any element of pest control. A series of "Mr. Briggs" cartoons by John Leech appeared in the magazine Punch during the 1850s, illustrating class issues. More recently, the British anarchist group Class War has argued explicitly for the disruption of fox hunts on class-warfare grounds, and has even published a book, The Rich at Play, examining the subject.
Other groups with similar aims, such as "Revolutions per minute", have also published papers which disparage fox hunting on the basis of the social class of its participants. Opinion polls in the United Kingdom have shown that the population is equally divided as to whether or not the views of hunt objectors are based primarily on class grounds. Some people have pointed to evidence of class bias in the voting patterns in the House of Commons during the voting on the hunting bill between 2000 and 2001, with traditionally working-class Labour members voting the legislation through against the votes of normally middle- and upper-class Conservative members. In popular culture: Fox hunting has inspired artists in several fields to create works which involve the sport. Examples of notable works which involve characters becoming involved with a hunt, or being hunted, are listed below. Films, television, and literature Victorian novelist R. S. Surtees wrote several popular humorous novels about fox hunting, of which the best known are Handley Cross and Mr. Sponge's Sporting Tour. Anthony Trollope, who was addicted to hunting, felt himself "deprived of a legitimate joy" when he could not introduce a hunting scene into one of his novels. The foxhunt is a prominent feature of the film The List of Adrian Messenger (1963). In Omen III: The Final Conflict (1981), there is a fox hunt, including a "blooding" by the Antichrist Damien Thorn (Sam Neill). Rita Mae Brown wrote a series of fox-hunting mysteries starring "Sister" Jane Arnold, starting with Outfoxed (2000); in real life, Brown is the master of the Oak Ridge Fox Hunt Club. In popular culture: Colin Dann's illustrated novel The Animals of Farthing Wood (1979) originated a multimedia franchise comprising the original children's book, a prequel book, six sequel books, and an animated Animals of Farthing Wood television series based on the books. These tell the story of a group of woodland animals whose home has been paved over by developers; their journey to the White Deer Park nature reserve, where they will be safe; their Oath, promising to protect one another and overcome their natural instincts until they reach their destination; and their adventures once they have reached White Deer Park. Their challenges include hunters and poachers. In popular culture: Arthur Conan Doyle's story "How the Brigadier Slew the Fox" has the French officer Brigadier Gerard join an English fox hunt and commit the unpardonable sin of slaying the fox with his sabre. Conan Doyle also wrote a non-series story about a fox hunt, "The King of the Foxes". In popular culture: Siegfried Sassoon wrote Memoirs of a Fox-Hunting Man (Faber and Faber, 1928), a semi-autobiographical account of growing up as minor gentry in rural England prior to the First World War. The main character, George Sherston, ends up as an infantry officer on the Western Front, which becomes the basis for the sequel, Memoirs of an Infantry Officer (Faber and Faber, 1930). In popular culture: In the season 7 episode of the animated sitcom Futurama called "31st Century Fox", Bender joins a fox hunt, only to find out that it is a robot fox hunt. A fox hunt is prominently featured in the first act of the Jerry Herman musical Mame, which premiered on Broadway in 1966. Fox hunting begins the plot of the Looney Tunes short "Foxy by Proxy". In popular culture: Daniel P. Mannix's novel The Fox and the Hound (1967) follows the story of a half-Bloodhound dog named Copper and a red fox named Tod.
This story was subsequently used by Walt Disney Pictures to create the animated feature-length film The Fox and the Hound (1981), although the film differs from the novel in that Copper and Tod befriend each other and survive as friends. In popular culture: David Rook's novel The Ballad of the Belstone Fox (1970), on a similar theme, was made into a 1973 James Hill film, The Belstone Fox, in which a baby fox, "Tag", is brought up as a pet in an English fox-hunting household and adopted by their hound "Merlin". Poet Laureate John Masefield wrote "Reynard the Fox", a poem about a fox hunt in rural England in which the title character escapes. The film Mary Poppins includes an animated fox hunt. Music Several musical artists have made references to fox hunting: Both Ray Noble and George Formby recorded "Tan Tan Tivvy Tally Ho!", a comic song about fox hunting, in 1932 and 1938 respectively. More recently, Dizzee Rascal used the concept of a fox hunt for his "Sirens" music video, showing a stylised urban hunt. Sting's song "The End of the Game" references a pair of foxes during a hunt. Taylor Swift's song "I Know Places" uses fox hunting as a metaphor for the paparazzi. Frank Turner covered Chris T-T's song "When the Huntsman Comes A-Marching", which criticises fox hunters on grounds of social class and the cruelty of fox hunting.