**Cellularization** Cellularization: In evolutionary biology, the term cellularization (cellularisation) has been used in theories to explain the evolution of cells: in the pre-cell theory, which deals with the evolution of the first cells on this planet, and in the syncytial theory, which attempts to explain the origin of the Metazoa from unicellular organisms. Processes of cell development in multinucleate cells (syncytium, plural syncytia) of animals and plants are also termed cellularization, often called syncytium cellularization.

The pre-cell theory: According to Otto Kandler's pre-cell theory, the (bio)chemical origin, early evolution of life and primordial metabolism (see the iron–sulfur world hypothesis according to Wächtershäuser) led to diversification through the evolution of a multiphenotypical population of pre-cells, from which the three founder groups A, B and C and then, from them, the precursor cells (here named proto-cells) of the three domains of life emerged successively. In this scenario the three domains of life did not originate from an ancestral, nearly complete "first cell", nor from a cellular organism often defined as the last universal common ancestor (LUCA), but from a population of evolving pre-cells. Kandler introduced the term cellularization for his concept of a successive evolution of cells by a process of evolutionary improvements. His concept may explain the quasi-random distribution of evolutionarily important features among the three domains and, at the same time, the existence of the most basic biochemical features (genetic code, set of protein amino acids etc.) in all three domains (unity of life), as well as the close relationship between the Archaea and the Eucarya. Kandler's pre-cell theory is supported by Wächtershäuser. According to Kandler, the protection of fragile primordial life forms from their environment by the invention of envelopes (i.e. membranes, walls) was an essential improvement. For instance, the development of rigid cell walls by the invention and elaboration of peptidoglycan in bacteria (domain Bacteria) may have been a prerequisite for their successful survival, radiation and colonisation of virtually all habitats of the geosphere and hydrosphere. A coevolution of the biosphere and the geosphere is suggested: “The evolving life could venture into a larger variety of habitats, even into microaerobic habitats in shallow, illuminated surface waters. The continuous changes in the physical environment on the aging and cooling Earth led to further diversification of habitats and favored opportunistic radiation of primitive life into numerous phenotypes on the basis of each of the different chemolithoautotrophies. Concomitantly, with the accumulation of organic matter derived from chemolithoautotrophic life, opportunistic and obligate heterotrophic life may also have developed” (pp. 155f).

The syncytial theory or ciliate-acoel theory: This theory, also known as the theory of cellularization, seeks to explain the origin of the Metazoa. The idea was proposed by Hadži (1953) and Hanson (1977). This cellularization (syncytial) theory states that metazoans evolved from a unicellular ciliate with multiple nuclei that went through cellularization. Firstly, the ciliate developed a ventral mouth for feeding and all nuclei moved to one side of the cell. Secondly, an epithelium was created by membranes forming barriers between the nuclei. In this way, a multicellular organism was created from one multinucleate cell (syncytium).
The syncytial theory or ciliate-acoel theory: Example and criticism, turbellarian flatworms. According to the syncytial theory, the ciliate ancestor evolved, through several cellularization processes, into the currently known turbellarian flatworms, which are therefore the most primitive metazoans. The theory of cellularization is based on the large similarities between ciliates and flatworms: both have cilia, are bilaterally symmetric, and are syncytial. The theory therefore assumes that bilateral symmetry is more primitive than radial symmetry. However, current biological evidence shows that the most primitive forms of metazoans show radial symmetry, and thus radially symmetrical animals like cnidaria cannot be derived from bilateral flatworms. By concluding that the first multicellular animals were flatworms, it is also suggested that simpler organisms such as sponges, ctenophores and cnidarians would have derived from more complex animals. However, most current molecular research has shown that sponges are the most primitive metazoans.

The syncytial theory or ciliate-acoel theory: Germ layers are formed simultaneously. The syncytial theory rejects the theory of germ layers. During the development of the turbellaria (Acoela), three regions are formed without the formation of germ layers. From this, it was concluded that the germ layers are formed simultaneously during the cellularization process. This is in contrast to germ layer theory, in which ectoderm, endoderm and mesoderm (in more complex animals) build up the embryo.

The syncytial theory or ciliate-acoel theory: The macronucleus and micronucleus of ciliates. There is a lot of evidence against ciliates being the metazoan ancestor. Ciliates have two types of nuclei: a micronucleus, which is used as the germline nucleus, and a macronucleus, which regulates vegetative growth. This division of nuclei is a unique feature of the ciliates and is not found in any other members of the animal kingdom. Therefore, it is unlikely that ciliates are indeed the ancestors of the metazoans. This is confirmed by molecular phylogenetic research: ciliates have never been found close to animals in any molecular phylogeny.

The syncytial theory or ciliate-acoel theory: Flagellated sperm. Furthermore, the syncytial theory cannot explain the flagellated sperm of metazoans. Since the ciliate ancestor does not have any flagella, and it is unlikely that flagella arose as a de novo trait in metazoans, the syncytial theory makes it almost impossible to explain the origin of flagellated sperm. Due to the lack of both molecular and morphological evidence for this theory, the alternative colonial theory of Haeckel is currently gaining widespread acceptance.

The syncytial theory or ciliate-acoel theory: For more theories, see the main article Multicellular organisms.

Cellularization in a syncytium (syncytium cellularization): The development of cells in a syncytium (a multinucleate cell) is termed syncytium cellularization. Syncytia are quite frequent in animals and plants; syncytium cellularization occurs, for instance, in the embryonic development of animals and in endosperm development of plants. Two examples follow.

Drosophila melanogaster development: In the embryonic development of Drosophila melanogaster, 13 nuclear divisions first take place, forming a syncytial blastoderm consisting of approximately 6000 nuclei. During the later gastrulation stage, membranes are formed between the nuclei, and cellularization is completed.
Cellularization in a syncytium (syncytium cellularization): Syncytium cellularization in plants. The term syncytium cellularization is used, for instance, for a process of cell development in the endosperm of the Poaceae, e.g. barley (Hordeum vulgare) and rice (Oryza sativa).
**Cyanobacterin** Cyanobacterin: Cyanobacterin is a chemical compound produced by the cyanobacterium Scytonema hofmanni. It is a photosynthesis inhibitor with algaecidal and herbicidal effects.
**Superman complex** Superman complex: A Superman complex is an unhealthy sense of responsibility, or the belief that everyone else lacks the capacity to successfully perform one or more tasks. Such a person may feel a constant need to "save" others and, in the process, takes on more work on their own. The expression seems to have first been used by Dr. Fredric Wertham in his 1954 book Seduction of the Innocent and his testimony before the Senate Subcommittee on Juvenile Delinquency. His initial theory focused less on the current allusion to the savior complex and more on people's propensity to find enjoyment in watching someone else beat up another person while they stand by unharmed. He claimed that children reading Superman comic books were exposed to "fantasies of sadistic joy in seeing other people punished over and over again while you yourself remain immune." In his discussion of the Superman complex, Wertham also blamed comic books for other social issues such as juvenile delinquency, homosexuality, and Communism.
**Action poetry** Action poetry: Action poetry is a form of poetry that is fierce, unpredictable, and full of energy. It is meant to be read aloud with emphasis and feeling to exhibit the true emotions of the poems. This form of poetry can be found in murals, books, and poetic competitions or recitals such as the poetry slam.

Forms: Murals. Action poetry can be exhibited through murals: whole poems or poem excerpts are painted on walls, often with associated artworks. An example is The Art Alley Mural Project, a project by Arts Etobicoke that painted a specially commissioned poem by the City of Toronto's Poet Laureate on a 1000 square foot wall in an alley in the city of Toronto. This mural is only one of a series of murals participating in the Urban Canvas project throughout Toronto, celebrating the 60th anniversary of the Universal Declaration of Human Rights.

Forms: Poetry slam. The poetry slam is another form of action poetry, a competition in which competitors read or recite original works while being judged by selected members of an audience.

Forms: Written form. Books that showcase different written works of action poetry are also available. One of these books is Action Poetry: Literary Tribes For The Internet Age by Levi Asher, Jamelah Earle, and Caryn Thurman.

Forms: Literary Kicks. Literary Kicks is a website dedicated to action poetry on which users post original work. Users can comment on, respond to, or add to works posted: "Once a bright eyed / young fool obsessed / with the sky, / Once a / punk kid, / with stars in / his eyes, / Now who is he? / The stars are fading / into sorrowful / lights that only see / the misery and woe / of the world / while the heart weeps." ("So Much Gone" by Poetpunk)

Quote: "All poetry is a call to action."
**Inflationary epoch** Inflationary epoch: In physical cosmology, the inflationary epoch was the period in the evolution of the early universe when, according to inflation theory, the universe underwent an extremely rapid exponential expansion. This rapid expansion increased the linear dimensions of the early universe by a factor of at least 10²⁶ (and possibly a much larger factor), and so increased its volume by a factor of at least 10⁷⁸. Expansion by a factor of 10²⁶ is equivalent to expanding an object 1 nanometer (10⁻⁹ m, about half the width of a molecule of DNA) in length to one approximately 10.6 light years (about 62 trillion miles) long.

Description: A vacuum state is a configuration of quantum fields representing a local minimum (but not necessarily a global minimum) of energy. Inflationary models propose that at approximately 10⁻³⁶ seconds after the Big Bang, the vacuum state of the Universe was different from the one seen at the present time: the inflationary vacuum had a much higher energy density. According to general relativity, any vacuum state with non-zero energy density generates a repulsive force that leads to an expansion of space. In inflationary models, the early high-energy vacuum state causes a very rapid expansion. This expansion explains various properties of the current universe that are difficult to account for without such an inflationary epoch. Most inflationary models propose a scalar field called the inflaton field, with properties necessary for having (at least) two vacuum states. It is not known exactly when the inflationary epoch ended, but it is thought to have been between 10⁻³³ and 10⁻³² seconds after the Big Bang. The rapid expansion of space meant that any potential elementary particles (or other "unwanted" artifacts, such as topological defects) remaining from the time before inflation were now distributed very thinly across the universe. When the inflaton field reconfigured itself into the low-energy vacuum state we currently observe, the huge difference of potential energy was released in the form of a dense, hot mixture of quarks, anti-quarks and gluons as the universe entered the electroweak epoch.

Detection via polarization of cosmic microwave background radiation: One approach to confirming the inflationary epoch is to directly measure its effect on the cosmic microwave background (CMB) radiation. The CMB is very weakly polarized (to a level of a few μK) in two different modes called E-mode and B-mode (by analogy to the E-field and B-field in electrostatics). The E-mode polarization comes from ordinary Thomson scattering, but the B-mode may be created by two mechanisms: from gravitational lensing of E-modes, or from gravitational waves arising from cosmic inflation. If B-mode polarization from gravitational waves can be measured, it would provide direct evidence supporting cosmic inflation and could eliminate or support various inflation models based on the level detected.

Detection via polarization of cosmic microwave background radiation: On 17 March 2014, astrophysicists of the BICEP2 collaboration announced the detection of B-mode polarization attributed to inflation-related gravitational waves, which seemed to support cosmological inflation and the Big Bang; however, on 19 June 2014 they lowered the confidence level that the B-mode measurements were actually from gravitational waves and not from contamination by dust. The Planck spacecraft has instruments that measure the CMB radiation to a high degree of sensitivity (57 nK).
After the BICEP finding, scientists from both projects worked together to further analyze the data from both projects. That analysis concluded to a high degree of certainty that the original BICEP signal can be entirely attributed to dust in the Milky Way and therefore does not provide evidence one way or the other to support the theory of the inflationary epoch.
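As a quick sanity check on the figures quoted in the Description above, the short Python sketch below (purely illustrative, using the round numbers from the text) verifies that a linear expansion factor of 10²⁶ corresponds to a volume factor of 10⁷⁸ and stretches 1 nm to roughly 10.6 light years (about 62 trillion miles).

```python
# Quick consistency check of the expansion figures quoted in the article.
# Round numbers from the text; this is not a cosmological calculation.

LINEAR_FACTOR = 1e26          # minimum linear expansion during inflation
NM = 1e-9                     # 1 nanometer in meters
LIGHT_YEAR_M = 9.4607e15      # one light year in meters
MILE_M = 1609.344             # one mile in meters

volume_factor = LINEAR_FACTOR ** 3    # volume scales as the cube of length
stretched_m = NM * LINEAR_FACTOR      # 1 nm after expansion, in meters

print(f"volume factor     : {volume_factor:.1e}")                               # ~1e78
print(f"1 nm stretched to : {stretched_m / LIGHT_YEAR_M:.1f} light years")      # ~10.6
print(f"                  = {stretched_m / MILE_M:.2e} miles")                  # ~6.2e13 (62 trillion)
```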
**Deer hunting** Deer hunting: Deer hunting is hunting for deer for meat and sport, an activity which dates back tens of thousands of years. Venison, the name for deer meat, is a nutritious and natural food source of animal protein that can be obtained through deer hunting. There are many different types of deer around the world that are hunted for their meat. For sport, hunters often try to take the deer with the largest antlers, which are scored in inches. There are two categories of antlers: typical and nontypical. Scoring measures tine length, beam length, and beam mass for each tine; adding these measurements gives the gross score, the score without deductions. Deductions are taken when a tine does not match the length of the corresponding tine on the opposite antler, and the result is the net (deducted) score. Hunting deer is a regulated activity in many territories. In the United States, a state government agency such as a Department of Fish and Wildlife (DFW) or Department of Natural Resources (DNR) oversees the regulations. In the United Kingdom, it is illegal to hunt with bows or with rifles chambered in bores smaller than .243 caliber (6 mm).

New Zealand: New Zealand has had 10 species of deer (Cervidae) introduced. From the 1850s, red deer were liberated, followed by fallow, sambar, wapiti, sika, rusa, and whitetail. The introduced herds of axis deer and moose failed to grow and have become extinct. In the absence of predators to control populations, deer were considered a pest because of their effect on native vegetation. From the 1950s, the government employed professional hunters to cull the deer population. Deer hunting is now a recreational activity, organised and advocated for at the national level by the New Zealand Deerstalkers' Association.

United States: The two main species of deer found in the United States are mule deer and white-tailed deer. Mule deer are mostly found west of the Rocky Mountains, but can also be found as far east as parts of North and South Dakota, while whitetails generally occur only to the east of the Rockies. Mule deer have a black-tipped tail which is proportionally smaller than that of the white-tailed deer. The male deer, or bucks, grow antlers annually. Mule deer have taller, skinnier tines on their antlers, whereas white-tailed deer typically have shorter, thicker tines. White-tailed bucks are slightly smaller than mule deer bucks. Both species lose their antlers in January and regrow them during the following summer, beginning in June. Although both species are found in the United States, their ranges differ greatly: mule deer are found in the western United States in the foothills of the mountains.

United States: As their antlers become fully developed, bucks start to shed their velvet. Velvet is vascularised, furry, skin-like tissue that covers the growing antlers. If the antlers are damaged while in velvet, nontypical features can result, because the growing antler tissue is soft. The velvet falls off when the antlers start to harden in late summer to early fall, in preparation for the mating season in the winter. If the velvet does not fall off on its own, the buck will make a "rub" on a small tree, rubbing its antlers both to mark territory and to remove the remaining velvet.

United States: The deer shed their velvet as they begin to get ready for the mating season, which is referred to as the "rut".
The rut generally occurs around the same time each year; the timing may differ slightly, but it takes place at the end of October and into November. During this period the bucks are up on their feet more, trying to "lock down" does. Big mature whitetails can be very difficult to kill, and the rut is often looked at as a great time to take these large bucks.

There are many different deer-hunting strategies, and they often depend on the time of year. One of the most successful early- and late-season strategies is hunting over a food source. Hunters must know their state's laws on baiting or feeding deer while hunting, as each state differs. Hunting over food can be done in several ways. One way that can be very successful, if done correctly, is growing a food plot; popular food plots for deer include clover, alfalfa, turnips, and radishes. Another way, as mentioned before, is baiting deer, often with corn or a mineral block. A further strategy is hunting a "rub" or "scrape" line; this strategy is usually used closer to the rut. Bucks often travel in the same areas over and over again, and this tactic is used to try to catch a buck on its feet moving from bedding to food or vice versa. Placing a tree stand on the trail or using a ground blind is another tactic used by bow and firearm hunters to camouflage themselves while hunting deer.

There are also many different ways to hunt deer. One is the stalking method, called still hunting. This method is difficult for a beginner as well as for an experienced hunter, and it takes considerable practice to become successful. All conditions must be right for it to work: the hunter must be quiet, the ground must be soft, there cannot be crunchy leaves underfoot, and so on. Hunters often use tracking when trying to stalk deer; this works best with fresh snow, which shows which deer tracks are recent. Another method is simply sitting and waiting for a deer, whether in a tree stand or a ground blind. This method is difficult for some people because the hunter must sit still, sometimes for hours. A combination of these methods, called "driving", is used by many hunters. In a drive, some people walk through the woods to push deer in a chosen direction, while others sit in designated places that the deer are expected to move through and try to shoot the deer as they pass. This method is very effective at getting deer on their feet and moving.

United States: State government regulation. Methods of pursuing game for wild meat, and the corresponding seasons, are subject to regulation by state governments and therefore vary from state to state. A state government agency such as a Department of Fish and Wildlife (DFW) or Department of Natural Resources (DNR) oversees the regulations.

United States: Deer hunting seasons vary across the United States. In game zone 3 in the state of South Carolina, deer hunting season starts August 15 and runs through January 1. Seasons in states such as Florida and Kentucky can start as early as September, and some, as in Texas, run all the way until February.
The length of the season is often based on the health and population of the deer herd, in addition to the number of hunters expected to be participating in the deer hunt. The duration of deer hunting seasons can also vary by county within a state, as in Kentucky. In the case of South Carolina, the season varies by SCDNR region. Each region has multiple counties. The DFW will also create specific time frames within the season where the number of hunters able to hunt is limited, which is known as a controlled hunt. United States: The DFW may also break the deer-hunting season into different time periods where only certain weapons are permitted: bows only (compound, recurve, and crossbows), modern firearms (rifles and shotguns) or black-powder muzzleloaders. (Some states, such as Kentucky, consider only compound and recurve bows as "bows" for hunting regulation purposes, and have special seasons for crossbows.) For example, during a bows-only season, in many areas a hunter would be limited to the use of a bow and the use of any firearm would be prohibited until that specific season opens, and in some areas a crossbow can only be used during a dedicated season for that weapon. Similarly, during a muzzleloader season, use of modern firearms is almost always prohibited. However, in many states, the archery season (at least for compound and recurve bows) completely overlaps all firearms seasons; in those locations, bowhunters may take deer during a firearms season. United States: Some states also have restrictions on hunting of antlered or antlerless deer. For example, Kentucky allows the taking of antlerless deer during any deer season in most of the state, but in certain areas allows only antlered deer to be taken during parts of deer season. United Kingdom and Republic of Ireland: The term "deer hunting" is used in North America for the shooting of deer, but in the United Kingdom and Ireland, the term generally refers to the pursuit of deer with scent hounds, with unarmed followers typically on horseback.There are six species of deer in the UK: red deer, roe deer, fallow deer, Sika deer, Reeves muntjac deer, and Chinese water deer, as well as hybrids of these deer. All are hunted to a degree reflecting their relative population either as sport or for culling. Closed seasons for deer vary by species. The practice of declaring a closed season in England dates back to medieval times, when it was called fence month and commonly lasted from June 9 to July 9, though the actual dates varied. It is illegal to use bows to hunt any wild animal in the UK under the Wildlife and Countryside Act 1981. United Kingdom and Republic of Ireland: UK deer stalkers, if supplying venison (in fur) to game dealers, butchers and restaurants, need to hold a Lantra level 2 large game meat hygiene certificate. Courses are run by organisations such as the British Association for Shooting and Conservation and this qualification is also included within the Level 1 deer stalking certificate. If supplying venison for public consumption (meat), the provider must have a fully functioning and clean larder that meets FSA standards and must register as a food business with the local authority. United Kingdom and Republic of Ireland: The vast majority of deer hunted in the UK are stalked. The phrase deer hunting is used to refer (in England and Wales) to the traditional practice of chasing deer with packs of hounds, currently illegal under the Hunting Act 2004. 
United Kingdom and Republic of Ireland: In the late nineteenth and twentieth centuries, there were several packs of staghounds hunting "carted deer" in England and Ireland. Carted deer were red deer kept in captivity for the sole purpose of being hunted and recaptured alive. More recently, there were three packs of staghounds hunting wild red deer of both sexes on or around Exmoor and the New Forest Buckhounds hunting fallow deer bucks in the New Forest, the latter disbanding in 1997. United Kingdom and Republic of Ireland: The practice of hunting with hounds, other than using two hounds to flush deer to be shot by waiting marksmen, has been banned in the UK since 2005; to date, two people have been convicted of breaking the law.There is one pack of stag hounds in the Republic of Ireland and one in Northern Ireland, the former operating under a licence to hunt carted deer. Australia: In Australia, there are six species of deer that are available to hunt. These are fallow deer, sambar, red deer, rusa, chital, and hog deer.
**3-MCPD** 3-MCPD: 3-MCPD (3-monochloropropane-1,2-diol or 3-chloropropane-1,2-diol) is an organic chemical compound with the formula HOCH2CH(OH)CH2Cl. It is a colorless liquid and a versatile multifunctional building block. The compound has attracted attention as the most common member of the chemical food contaminants known as chloropropanols. It is suspected to be carcinogenic in humans.

3-MCPD: It is produced in foods treated at high temperatures with hydrochloric acid to speed up protein hydrolysis. As a byproduct of this process, chloride can react with the glycerol backbone of lipids to produce 3-MCPD. 3-MCPD can also occur in foods that have been in contact with materials containing epichlorohydrin-based wet-strength resins, which are used in the production of some tea bags and sausage casings. In 2009, 3-MCPD was found in some East Asian and Southeast Asian sauces such as oyster sauce, Hoisin sauce, and soy sauce; using hydrochloric acid is far faster than traditional slow fermentation. A 2013 European Food Safety Authority report indicated margarine, vegetable oils (excluding walnut oil), preserved meats, bread, and fine bakery wares as major sources in Europe. 3-MCPD can also be found in many paper products treated with polyamidoamine-epichlorohydrin wet-strength resins.

Absorption and toxicity: The International Agency for Research on Cancer has classified 3-MCPD as Group 2B, "possibly carcinogenic to humans". 3-MCPD is carcinogenic in rodents via a non-genotoxic mechanism. It is able to cross the blood-testis barrier and blood–brain barrier. The oral LD50 of 3-chloro-1,2-propanediol in rats is 152 mg/kg bodyweight. 3-MCPD also has male antifertility effects and can be used as a rat chemosterilant.

Legal limits: The joint agency Food Standards Australia New Zealand (FSANZ) set a limit for 3-MCPD in soy sauce of 0.02 mg/kg, in line with European Commission standards which came into force in the EU in April 2002.

History: In 2000, a survey of soy sauces and similar products available in the UK was carried out by the Joint Ministry of Agriculture, Fisheries and Food/Department of Health Food Safety and Standards Group (JFSSG), which reported that more than half of the samples collected from retail outlets contained various levels of 3-MCPD. In 2001, the United Kingdom Food Standards Agency (FSA) found in tests of various oyster sauces and soy sauces that 22% of samples contained 3-MCPD at levels considerably higher than those deemed safe by the European Union. About two-thirds of these samples also contained a second chloropropanol called 1,3-dichloropropane-2-ol (1,3-DCP), which experts advise should not be present at any level in food. Both chemicals have the potential to cause cancer, and the Agency recommended that the affected products be withdrawn from shelves and avoided. In 2001, the FSA and Food Standards Australia New Zealand (FSANZ) singled out brands and products imported from Thailand, China, Hong Kong, and Taiwan. Brands named in the British warning include Golden Mountain, King Imperial, Pearl River Bridge, Golden Mark, Kimlan, Golden Swan, Sinsin, Tung Chun, and Wanjasham soy sauce. Knorr soy sauce was also implicated, as well as Uni-President Enterprises Corporation creamy soy sauce from Taiwan, Silver Swan soy sauce from the Philippines, Ta Tun soy bean sauce from Taiwan, Tau Vi Yeu seasoning sauce and soya bean sauce from Vietnam, Zu Miao Fo Shan soy superior sauce and mushroom soy sauce from China, and Golden Mountain and Lee Kum Kee chicken marinade.
History: Between 2002 and 2004, relatively high levels of 3-MCPD and other chloropropanols were found in soy sauce and other foods in China. In 2007, 3-MCPD was found at toxic levels in Vietnam. In 2004, the HCM City Institute of Hygiene and Public Health had found 33 of 41 samples of soy sauce with high levels of 3-MCPD, including six samples containing 11,000 to 18,000 times more 3-MCPD than permitted, up from 23 to 5,644 times the permitted level in 2001. The newspaper Thanh Nien Daily commented, "Health agencies have known that Vietnamese soy sauce, the country's second most popular sauce after fish sauce, has been chock full of cancer agents since at least 2001." In March 2008, in Australia, "carcinogens" were found in soy sauces, and Australians were advised to avoid soy sauce. In November 2008, Britain's Food Standards Agency reported a wide range of household-name food products, from sliced bread to crackers, beefburgers and cheese, with 3-MCPD above safe limits. Relatively high levels of the chemical were found in popular brands such as Mother's Pride, Jacobs crackers, John West, Kraft Dairylea and McVitie's Krackawheat. The same study also found relatively high levels in a range of supermarket own-brands, including Tesco char-grilled beefburgers, Sainsbury's Hot 'n Spicy Chicken Drumsticks and digestive biscuits from Asda. The highest level of 3-MCPD found in a non-soy-sauce product (crackers) was 134 μg per kg; the highest level found in soy sauce was 93,000 μg per kg, roughly 700 times higher. The legal limit for 3-MCPD due to come into force the following year was 20 μg per kg, while the safety guideline on daily intake is 120 μg per day for a 60 kg person. In 2016, the occurrence of 3-MCPD in selected paper products (coffee filters, tea bags, disposable paper hot beverage cups, milk paperboard containers, paper towels) sold on the Canadian and German market was reported, and the transfer of 3-MCPD from those products to beverages was investigated. Exposure to 3-MCPD from packaging material would likely constitute only a small percentage of overall dietary exposure when compared to the intake of processed oils/fats containing 3-MCPD equivalents (in the form of fatty acid esters), which are often present at levels of about 0.2–2 μg/g.
**Integration platform** Integration platform: An integration platform is software which integrates different applications and services. It differentiates itself from enterprise application integration, which has a focus on supply chain management. It uses the idea of system integration to create an environment for engineers. Integration platforms can be built from components, purchased as a pre-built product ready for installation, or procured from an integration Platform as a Service (iPaaS) company.

Overview: An integration platform tries to create an environment in which engineers can:
- Integrate data (information): ensure that they are using the same datasets and can share information. Data management with metadata information and versioning keeps the data consistent.
- Integrate many kinds of applications (independent of platform, programming language or resource) so they can be bound together in workflows and processes to work in conjunction. The different interfaces are hidden by the use of a uniform interface in the integration platform (process integration).
- Collaborate between distributed and scattered applications and engineers over the network.
- Achieve interoperability between different operating systems and programming languages through the use of similar interfaces.
- Take security considerations into account so that, for example, data is shared only with the right resources.
- Get visual guidance through interactive user interfaces and a common facade for all integrated applications.

Common components of an integration platform: An integration platform typically contains a set of functional components, such as:
- A message bus for enabling reliable messaging between enterprise applications.
- Adapters to transform messages from and to an application's proprietary protocol. Adapters often offer connectivity via common standards, like FTP or SFTP, or format support, like EDI.
- A transformation engine and visualized data mapping to transform messages or files from one format to another.
- A metadata repository for storing information separated from processes, like business parties.
- A process orchestration engine for orchestration design and execution. In this context an orchestration is a technical workflow that represents a business process or part of it.
- A technical dashboard for tracking messages in a message bus and viewing the execution history of orchestrations.
- A scheduler for scheduling orchestrations.
- A batch engine for controlling large file transfers, batch jobs, execution of external scripts and other non-messaging-based tasks.

Differentiation: An integration platform is designed by and intended to be helpful to engineers. It has no intention to map business processes or integrate tools for supply chain management, and is therefore not related to those systems.
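To make the component list above concrete, here is a minimal, illustrative Python sketch of a message bus with an adapter and a downstream handler. All class and function names are invented for this example and do not correspond to any real product's API; a real integration platform adds persistence, orchestration, monitoring and much more.

```python
# Minimal, illustrative sketch of two integration-platform components:
# a message bus and an adapter that converts a proprietary format into a
# canonical message. Names are hypothetical; this is not a real product API.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Message:
    """A message travelling on the bus, with a routing key and a payload."""
    key: str
    payload: dict


class MessageBus:
    """Very small in-memory publish/subscribe bus."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[Message], None]]] = {}

    def subscribe(self, key: str, handler: Callable[[Message], None]) -> None:
        self._subscribers.setdefault(key, []).append(handler)

    def publish(self, message: Message) -> None:
        for handler in self._subscribers.get(message.key, []):
            handler(message)


def csv_to_dict_adapter(line: str) -> Message:
    """Adapter: turn a proprietary CSV line into a canonical bus message."""
    order_id, amount = line.split(",")
    return Message(key="order.created", payload={"id": order_id, "amount": float(amount)})


def billing_handler(message: Message) -> None:
    """A downstream application bound into the workflow via the bus."""
    print(f"billing received order {message.payload['id']} for {message.payload['amount']:.2f}")


if __name__ == "__main__":
    bus = MessageBus()
    bus.subscribe("order.created", billing_handler)
    # The adapter transforms an external format, then the bus routes the message.
    bus.publish(csv_to_dict_adapter("A-1001,99.50"))
```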
**Winning isn't everything; it's the only thing** Winning isn't everything; it's the only thing: "Winning isn't everything; it's the only thing" is a well-known quotation in sports. It is attributed to UCLA Bruins football coach Henry Russell ("Red") Sanders, who is on record with at least two different versions of the quotation during his coaching career and is reputed to have used it as far back as the 1930s.

Red Sanders: In 1950, at a Cal Poly San Luis Obispo physical education workshop, Sanders told his group: "Men, I'll be honest. Winning isn't everything", then, following a long pause, "Men, it's the only thing!" In a three-part article on Red Sanders of December 7, 1953, by Bud Furillo of the Los Angeles Herald and Express, the phrase is quoted in the subhead. Furillo said in his unpublished memoirs that Sanders first made the statement to him after UCLA's loss to USC in 1949. In 1955, in a Sports Illustrated article preceding the 1956 Rose Bowl, he was quoted as saying "Sure, winning isn't every thing; it's the only thing." While at UCLA, another famous quote was attributed to Sanders regarding the UCLA–USC rivalry: "Beating 'SC is not a matter of life or death, it's more important than that." A form of this quote was later widely attributed to Bill Shankly, the Liverpool FC coach, from a 1981 television interview.

Others: The phrase is quoted in the 1953 film Trouble Along the Way by Sherry Jackson's character, Carol Williams. Screenwriter Melville Shavelson heard it from his agent, who also happened to represent Red Sanders, which is how it got into the script. The quotation is widely, but wrongly, attributed to American football coach Vince Lombardi, who probably heard the phrase from UCLA coach Sanders. Lombardi is on record using the quotation as early as 1959, in his opening talk on the first day of the Packers' training camp. The quotation captured the American public's attention during Lombardi's highly successful reign as coach of the Packers in the 1960s. Over time, the quotation took on a life of its own: the words graced the walls of locker rooms, ignited pre-game pep talks, and even made their way into the Richard Nixon campaign. According to the late James Michener's Sports in America, Lombardi claimed to have been misquoted; what he intended to say was "Winning isn't everything. The will to win is the only thing." However, Lombardi is on record repeating the original version of the quotation on several occasions.

Other related quotations: This credo has served as a counterpoint to the well-known sentiment by sports journalist Grantland Rice that "it's not that you won or lost but how you played the game", and to the modern Olympic creed expressed by its founder Pierre de Coubertin: "The most important thing . . . is not winning but taking part".
**Soft chemistry** Soft chemistry: Soft chemistry (also known as chimie douce) is a type of chemistry that uses reactions at ambient temperature in open reaction vessels, with reactions similar to those occurring in biological systems.

Aims: The aim of soft chemistry is to synthesize materials by drawing on the capacities of living beings, even quite simple ones, such as diatoms, which are able to produce glass from dissolved silicates. It is a branch of materials science that differs from conventional solid-state chemistry, with its intense energy requirements, by exploring the chemical inventiveness of the living world. The specialty emerged in the 1980s around the label "chimie douce", first published by the French chemist Jacques Livage in Le Monde on 26 October 1977. Following its success in French, the term soft chemistry has been employed as such in scientific publications, in English and other languages, since the early twenty-first century. Its mode of synthesis generally resembles the reactions involved in organic polymerizations: reactive solutions are prepared and polycondensation proceeds without any essential energy input. The fundamental interest of this kind of mineral polymerization, obtained at room temperature, is that it preserves the organic molecules or microorganisms one wishes to incorporate. The products obtained by means of so-called soft (sol-gel) chemistry fall into several types:
- mineral structures of various qualities (smoothness, uniformity, etc.)
- mixed structures combining mineral structures and organic molecules
- mineral structures enveloping complex molecules and even microorganisms, maintaining or optimizing their beneficial characteristics.
The early results have included the creation of glasses and ceramics with new properties. These more or less composite structures have attracted a wide range of applications, ranging from health to the needs of space exploration. Beyond its mode of synthesis, a compound bearing the soft chemistry label combines the advantages of the mineral world (resistance, transparency, repetition of patterns, etc.) with the potential of biochemistry and organic chemistry (interfaces with the organic world, reactivity, synthetic capability, etc.). According to its practitioners, soft chemistry is only at the beginning of its success and opens up vast prospects.
**Relaxation (physics)** Relaxation (physics): In the physical sciences, relaxation usually means the return of a perturbed system into equilibrium. Each relaxation process can be categorized by a relaxation time τ. The simplest theoretical description of relaxation as a function of time t is an exponential law exp(−t/τ) (exponential decay).

In simple linear systems: Mechanics: Damped unforced oscillator. Let the homogeneous differential equation m·d²y/dt² + γ·dy/dt + k·y = 0 model damped unforced oscillations of a weight on a spring. The displacement will then be of the form y(t) = A·e^(−t/T)·cos(μt − δ). The constant T (= 2m/γ) is called the relaxation time of the system and the constant μ is the quasi-frequency.

Electronics: RC circuit. In an RC circuit containing a charged capacitor and a resistor, the voltage decays exponentially: V(t) = V₀·e^(−t/(RC)). The constant τ = RC is called the relaxation time or RC time constant of the circuit. A nonlinear oscillator circuit which generates a repeating waveform by the repetitive discharge of a capacitor through a resistance is called a relaxation oscillator.

In condensed matter physics: In condensed matter physics, relaxation is usually studied as a linear response to a small external perturbation. Since the underlying microscopic processes are active even in the absence of external perturbations, one can also study "relaxation in equilibrium" instead of the usual "relaxation into equilibrium" (see fluctuation-dissipation theorem).

Stress relaxation: In continuum mechanics, stress relaxation is the gradual disappearance of stresses from a viscoelastic medium after it has been deformed.

Dielectric relaxation time: In dielectric materials, the dielectric polarization P depends on the electric field E. If E changes, P(t) reacts: the polarization relaxes towards a new equilibrium. This is important in dielectric spectroscopy. Very long relaxation times are responsible for dielectric absorption. The dielectric relaxation time is closely related to the electrical conductivity. In a semiconductor it is a measure of how long it takes for excess charge to be neutralized by the conduction process. The relaxation time is small in metals and can be large in semiconductors and insulators.

In condensed matter physics: Liquids and amorphous solids. An amorphous solid, such as amorphous indomethacin, displays a temperature dependence of molecular motion, which can be quantified as the average relaxation time for the solid in a metastable supercooled liquid or glass to approach the molecular motion characteristic of a crystal. Differential scanning calorimetry can be used to quantify the enthalpy change due to molecular structural relaxation.

In condensed matter physics: The term "structural relaxation" was introduced in the scientific literature in 1947/48 without any explanation, applied to NMR, and meaning the same as "thermal relaxation".

Spin relaxation in NMR: In nuclear magnetic resonance (NMR), various relaxations are among the properties that it measures.

Chemical relaxation methods: In chemical kinetics, relaxation methods are used for the measurement of very fast reaction rates. A system initially at equilibrium is perturbed by a rapid change in a parameter such as the temperature (most commonly), the pressure, the electric field or the pH of the solvent. The return to equilibrium is then observed, usually by spectroscopic means, and the relaxation time measured. In combination with the chemical equilibrium constant of the system, this enables the determination of the rate constants for the forward and reverse reactions.
Chemical relaxation methods: Monomolecular first-order reversible reaction. A monomolecular, first-order reversible reaction which is close to equilibrium can be visualized by the symbolic scheme A ⇌ B, with forward rate constant k and reverse rate constant k′. In other words, reactant A and product B are converted into one another according to the reaction rate constants k and k′. To solve for the concentration of A, recognize that the forward reaction (A → B, rate constant k) causes the concentration of A to decrease over time, whereas the reverse reaction (B → A, rate constant k′) causes the concentration of A to increase over time. Therefore d[A]/dt = −k[A] + k′[B], where brackets around A and B indicate concentrations.

Chemical relaxation methods: If we say that at t = 0, [A](t) = [A]₀, and apply the law of conservation of mass, we can say that at any time the sum of the concentrations of A and B must be equal to [A]₀, assuming the volume into which A and B are dissolved does not change: [A](t) + [B](t) = [A]₀. Substituting [B] = [A]₀ − [A](t) yields d[A]/dt = −k[A] + k′([A]₀ − [A]) = −(k + k′)[A] + k′[A]₀, which is a separable differential equation. It can be solved by substitution to yield [A](t) = [A]_eq + ([A]₀ − [A]_eq)·e^(−(k + k′)t), where [A]_eq = k′[A]₀/(k + k′) is the equilibrium concentration of A; the observed relaxation time is therefore τ = 1/(k + k′).

In atmospheric sciences: Desaturation of clouds. Consider a supersaturated portion of a cloud. Then shut off the updrafts, entrainment, and any other vapor sources/sinks and anything that would induce the growth of the particles (ice or water). Then wait for this supersaturation to reduce and become just saturation (relative humidity = 100%), which is the equilibrium state. The time it takes for the supersaturation to dissipate is called the relaxation time. It will happen as ice crystals or liquid water content grow within the cloud and thus consume the contained moisture. The dynamics of relaxation are very important in cloud physics for accurate mathematical modelling.

In atmospheric sciences: In water clouds, where the concentrations are larger (hundreds per cm³) and the temperatures are warmer (thus allowing for much lower supersaturation rates as compared to ice clouds), the relaxation times will be very low (seconds to minutes). In ice clouds the concentrations are lower (just a few per liter) and the temperatures are colder (very high supersaturation rates), so the relaxation times can be as long as several hours. The relaxation time is given as τ = 1/(4π·D·N·R·K), where: D = diffusion coefficient [m²/s], N = concentration (of ice crystals or water droplets) [m⁻³], R = mean radius of particles [m], K = capacitance [unitless].

In astronomy: In astronomy, relaxation time relates to clusters of gravitationally interacting bodies, for instance, stars in a galaxy. The relaxation time is a measure of the time it takes for one object in the system (the "test star") to be significantly perturbed by other objects in the system (the "field stars"). It is most commonly defined as the time for the test star's velocity to change by of order itself.

In astronomy: Suppose that the test star has velocity v. As the star moves along its orbit, its motion will be randomly perturbed by the gravitational field of nearby stars. The relaxation time can be shown to be t_relax = 0.34·σ³ / (G²·m·ρ·ln Λ) ≈ 0.95 × 10¹⁰ · (σ / 200 km s⁻¹)³ · (ρ / 10⁶ M☉ pc⁻³)⁻¹ · (m / M☉)⁻¹ · (ln Λ / 15)⁻¹ yr, where ρ is the mean density, m is the test-star mass, σ is the 1d velocity dispersion of the field stars, and ln Λ is the Coulomb logarithm.

In astronomy: Various events occur on timescales relating to the relaxation time, including core collapse, energy equipartition, and formation of a Bahcall-Wolf cusp around a supermassive black hole.
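To make the first-order chemical relaxation above concrete, the short Python sketch below (with arbitrary, illustrative rate constants) integrates d[A]/dt = −k[A] + k′([A]₀ − [A]) numerically and compares the result with the closed-form solution, confirming the exponential approach to equilibrium with relaxation time τ = 1/(k + k′).

```python
import math

# Illustrative rate constants and initial concentration (arbitrary units).
k_fwd, k_rev = 2.0, 1.0               # forward and reverse rate constants (1/s)
A0 = 1.0                              # initial concentration of A
tau = 1.0 / (k_fwd + k_rev)           # relaxation time from the derivation above
A_eq = k_rev * A0 / (k_fwd + k_rev)   # equilibrium concentration of A

def analytic(t):
    # Closed-form solution: [A](t) = [A]_eq + ([A]0 - [A]_eq) * exp(-t / tau)
    return A_eq + (A0 - A_eq) * math.exp(-t / tau)

# Explicit Euler integration of d[A]/dt = -k_fwd*[A] + k_rev*([A]0 - [A]).
dt = 1e-4
steps_per_tau = int(round(tau / dt))
A = A0
for step in range(5 * steps_per_tau + 1):
    if step % steps_per_tau == 0:          # print roughly once per relaxation time
        t = step * dt
        print(f"t/tau = {t / tau:3.0f}   numeric [A] = {A:.4f}   analytic [A] = {analytic(t):.4f}")
    A += dt * (-k_fwd * A + k_rev * (A0 - A))
```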
**Knowledge Acquisition and Documentation Structuring** Knowledge Acquisition and Documentation Structuring: Knowledge Acquisition and Documentation Structuring (KADS) is a structured way of developing knowledge-based systems (expert systems). It was developed at the University of Amsterdam as an alternative to an evolutionary approach and is now accepted as the European standard for knowledge based systems.Its components are: A methodology for managing knowledge engineering projects. A knowledge engineering workbench. A methodology for performing knowledge elicitation.KADS was further developed into CommonKADS. KADS methodology and the industrial development of expert systems: A study carried out in 1989 showed that the main reason why expert systems were not being used was an insufficiency of methods for development, especially in the construction of knowledge bases, e.g. the transfer of expertise.Knowledge Based Systems Analysis and Design Support (KADS) originating in the European ESPRIT project P1098 and representing 75 person-years of work, was one of the most highly developed KBs (Knowledge Based Systems) in the early 90s. This pioneering method provides two types of support for the production of KBs in an industrial approach: firstly, a lifecycle enabling a response to be made to technical and economic constraints (control of the production process, quality assurance of the system,...), and secondly a set of models which structure the production of the system, especially the tasks of analysis and the transformation of expert knowledge into a form exploitable by the machine.
**CloudBolt** CloudBolt: CloudBolt is a hybrid cloud management platform developed by CloudBolt Software for deploying and managing virtual machines (VMs), applications, and other IT resources, both in public clouds (e.g., AWS, MS Azure, GCP) and in private data centers (e.g., VMware, OpenStack).

History: The platform was developed by Alexandre Augusto "Auggy" da Rocha and Bernard Sanders. da Rocha began work on a prototype of the platform in 2010, calling it SmartCloud 1.0. Together they created a generalized solution, which was released in August 2011 as SmartCloud 2.0. The early version focused on simple installs and upgrades of virtual machines, and on building an extensible product that customers could use as a platform for integrating with other technologies. In 2012, they renamed their company to CloudBolt Software (to avoid a name conflict with an IBM offering), and in 2013 CloudBolt Command and Control (C2) was included as the cloud manager in Dell Cloud for US Government. In 2014, the product name "CloudBolt Command and Control (C2)" was simplified to "CloudBolt".

History: Product timeline:
- 2011 - 2.0 released, with support only for VMware & HP Server Automation (formerly Opsware)
- 2012 - 3.0 released, rebranded as CloudBolt, with support for AWS & OpenStack added
- 2013 - 4.0 released, with support for Microsoft Azure, Google Cloud Platform, Infoblox, vCO (now vRO), HP Operations Orchestration, and Cobbler
- 2014 - 4.5 released, including integration with Puppet & Chef, and ServiceNow
- 2015 - 5.0 released, with support for the Kubernetes container orchestrator and Razor bare-metal provisioning
- 2016 - 6.0 released, with support for Azure ARM, Ansible, and Oracle Public Cloud
- 2017 - 7.0 released, with 18 total public clouds and private virtualization systems supported, 4 configuration managers, and 2 external orchestrators
- 2018 - 8.0 released, with everything-as-a-service (XaaS) and enhanced containerization and Kubernetes support
- 2019 - 9.0 released, with support for infrastructure-as-code (Terraform), expanded self-service workload delivery, security, and multi-cloud management
The company received Series A funding from Insight Venture Partners in July 2018.

Industry recognition: SIIA CODiE Award for Best Cloud Management Solution, 2020. Gartner named CloudBolt a "Challenger" in its 2020 Magic Quadrant for Cloud Management Platforms. TechTarget's Impact Award for best private and hybrid cloud management product, 2016. WhatMatrix for Cloud Management, 2017, 2018.
**SDSS J1254+0846** SDSS J1254+0846: SDSS J1254+0846 is a face-on binary quasar pair which is in the process of merging. This binary quasar is the first resolved luminous pair to be observed in the act of merging. The pair is composed of two luminous radio-quiet quasars located at redshift z = 0.44: SDSS J125455.09+084653.9 (SDSS J1254+0846 A) and SDSS J125454.87+084652.1 (SDSS J1254+0846 B), or SDSS J1254+0846 collectively. These designations also refer to their host galaxies. The pair provides evidence for the theory that quasars are switched on by galactic collisions. The two quasars are separated on the sky by 3.6 arcseconds, corresponding to a projected physical separation of about 21 kpc. Tidal tails some 75 kpc long have been detected around the galaxies, indicating that the two galaxies involved are disc galaxies. The pair was first detected by the Sloan Digital Sky Survey, hence the "SDSS" designations. The tidal tails were first observed by the Magellan Telescopes. A computer simulation by Thomas Cox of the Carnegie Institute corroborated the hypothesis that these were two merging galaxies. At the time of its discovery in 2010, this was the lowest-redshift binary quasar then observed. Prior to this discovery, all quasars in merging binary pairs either involved one luminous quasar and a second obscured or dark nucleus, or a spatially unresolved pair of active nuclei.
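The quoted 21 kpc separation can be recovered from the 3.6 arcsecond angular separation and the redshift via the angular-diameter distance. Below is a small illustrative Python sketch using the astropy library; the Planck 2018 cosmology used here is an assumption, and the cosmology adopted in the discovery paper may give a slightly different value.

```python
# Convert the observed angular separation of SDSS J1254+0846 into a projected
# physical separation. The cosmology is an assumption (Planck 2018); the
# published ~21 kpc figure was derived with the discovery paper's own cosmology.
from astropy.cosmology import Planck18
import astropy.units as u

z = 0.44                  # redshift of the pair
theta = 3.6 * u.arcsec    # observed angular separation on the sky

# Proper transverse distance per unit angle at this redshift.
scale = Planck18.kpc_proper_per_arcmin(z)      # kpc per arcminute
separation = (theta * scale).to(u.kpc)         # projected separation

print(f"scale at z = {z}: {scale:.1f}")
print(f"projected separation: {separation:.1f}")   # roughly 21 kpc
```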
**ScreenFlow** ScreenFlow: ScreenFlow, from Telestream, Inc., is screencasting and video editing software for the macOS operating system. It can capture the audio and video from the computer, edit the captured video, add highlights or annotations, and output a number of different file types such as AIFF, GIF, M4V, MOV, and MP4. Version 5 added support for video and audio capture from a connected iPhone, iPod touch, or iPad. Version 9 of ScreenFlow was released on November 12, 2019 as a direct purchase from Telestream, Inc. and via the Mac App Store.

Awards: ScreenFlow won the Editors' Choice Award from Macworld in December 2012. ScreenFlow won a Macworld Eddy Award in December 2008.
**7α-Hydroxycholesterol** 7α-Hydroxycholesterol: 7α-Hydroxycholesterol is a precursor of bile acids, created by cholesterol 7α-hydroxylase (CYP7A1). Its formation is the rate-determining step in bile acid synthesis.
**Pseudarthrobacter chlorophenolicus** Pseudarthrobacter chlorophenolicus: Pseudarthrobacter chlorophenolicus is a species of bacteria capable of degrading high concentrations of 4-chlorophenol, hence its name. As such, it may be useful in bioremediation.
**A113** A113: A113 is a studio code, and it and its variants are an inside joke and Easter egg in media developed by alumni of the California Institute of the Arts, referring to the classroom used by graphic design and character animation students.

History: Students who have used the classroom include John Lasseter, Tim Burton, Michael Peraza, and Brad Bird. It has appeared in other Disney movies and almost every Pixar movie. Brad Bird first used it for a license plate number in the "Family Dog" episode of Amazing Stories: "I put it into every single one of my films, including my Simpsons episodes—it's sort of my version of caricaturist Al Hirschfeld's 'Nina'." It also appears in South Park, Aqua Teen Hunger Force, Family Guy, American Dad!, Doctor Who and the SPA Studios animated film Klaus (2019). The first Disney movie Bird used it in was The Brave Little Toaster (1987), which he worked on as an animator. It can be seen as The Master's apartment address when Toaster and his friends knock on the door.
**Pragmatic web** Pragmatic web: The Pragmatic Web consists of the tools, practices and theories describing why and how people use information. In contrast to the Syntactic Web and Semantic Web the Pragmatic Web is not only about form or meaning of information, but about social interaction which brings about e.g. understanding or commitments. The transformation of existing information into information relevant to a group of users or an individual user includes the support of how users locate, filter, access, process, synthesize and share information. Social bookmarking is an example of a group tool, end-user programmable agents are examples of individual tools. The Pragmatic Web idea is rooted in the Language/action perspective.
**Archard equation** Archard equation: The Archard wear equation is a simple model used to describe sliding wear, based on the theory of asperity contact. The Archard equation was developed much later than Reye's hypothesis (sometimes also known as the energy dissipative hypothesis), though both came to the same physical conclusion: the volume of the debris removed by wear is proportional to the work done by friction forces. Theodor Reye's model became popular in Europe and it is still taught in university courses of applied mechanics. Until recently, Reye's theory of 1860 has, however, been totally ignored in the English and American literature, where subsequent works by Ragnar Holm and John Frederick Archard are usually cited. In 1960, Mikhail Mikhailovich Khrushchov and Mikhail Alekseevich Babichev published a similar model as well. In modern literature the relation is therefore also known as the Reye–Archard–Khrushchov wear law. In 2022, the steady-state Archard wear equation was extended into the running-in regime using the bearing ratio curve representing the initial surface topography.

Equation: Q = K·W·L/H, where:
Q is the total volume of wear debris produced,
K is a dimensionless constant,
W is the total normal load,
L is the sliding distance,
H is the hardness of the softest contacting surface.
Note that W·L is proportional to the work done by the friction forces, as described by Reye's hypothesis. Also, K is obtained from experimental results and depends on several parameters, among them surface quality, chemical affinity between the materials of the two surfaces, surface hardening processes, heat transfer between the two surfaces, and others.

Derivation: The equation can be derived by first examining the behavior of a single asperity. The local load δW supported by an asperity, assumed to have a circular cross-section with a radius a, is δW = P·π·a², where P is the yield pressure for the asperity, which is assumed to be deforming plastically. P will be close to the indentation hardness, H, of the asperity.

Derivation: If the volume of wear debris δV for a particular asperity is a hemisphere sheared off from the asperity, it follows that δV = (2/3)·π·a³. This fragment is formed by the material having slid a distance 2a. Hence δQ, the wear volume of material produced from this asperity per unit distance moved, is δQ = δV/(2a) = π·a²/3 = δW/(3P) ≈ δW/(3H), making the approximation that P ≈ H. However, not all asperities will have had material removed after a sliding distance of 2a. Therefore, the total wear debris produced per unit distance moved, Q, will be lower than the ratio of W to 3H. This is accounted for by the addition of a dimensionless constant K, which also incorporates the factor 3 above. These operations produce the Archard equation as given above. Archard interpreted the K factor as the probability of forming wear debris from asperity encounters. Typically, for "mild" wear K ≈ 10⁻⁸, whereas for "severe" wear K ≈ 10⁻². Recently, it has been shown that there exists a critical length scale that controls wear debris formation at the asperity level. This length scale defines a critical junction size, where bigger junctions produce debris while smaller ones deform plastically.
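As a quick worked example (with assumed, purely illustrative material values rather than figures from the article), the Python sketch below evaluates Q = K·W·L/H for a nominal steel-on-steel contact in the mild- and severe-wear regimes, showing how strongly the wear coefficient K drives the predicted wear volume.

```python
# Illustrative evaluation of the Archard wear equation Q = K * W * L / H.
# The values below are rough, order-of-magnitude assumptions for a nominal
# steel-on-steel contact, chosen only to demonstrate the formula.

def archard_wear_volume(K: float, W: float, L: float, H: float) -> float:
    """Total wear volume Q [m^3] from wear coefficient K (dimensionless),
    normal load W [N], sliding distance L [m], and hardness H [Pa]."""
    return K * W * L / H

W = 100.0     # normal load, N
L = 1000.0    # total sliding distance, m
H = 2.0e9     # indentation hardness of the softer surface, Pa (~2 GPa)

for K, regime in [(1e-8, "mild wear"), (1e-2, "severe wear")]:
    Q = archard_wear_volume(K, W, L, H)
    print(f"{regime:12s}: K = {K:.0e}  ->  Q = {Q:.2e} m^3  ({Q * 1e9:.2e} mm^3)")
```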
**Offensive counter air** Offensive counter air: Offensive counter-air (OCA) is a military term for the suppression of an enemy's military air power, primarily through ground attacks targeting enemy air bases: disabling or destroying parked aircraft, runways, fuel facilities, hangars, air traffic control facilities and other aviation infrastructure. Ground munitions like bombs are typically less expensive than more sophisticated air-to-air munitions, and a single ground munition can destroy or disable multiple aircraft in a very short time, whereas aircraft already flying must typically be shot down one at a time. Enemy aircraft already flying also represent an imminent threat, as they can usually fire back, so destroying them before they can take off minimizes the risk to friendly aircraft. Offensive counter air: Air-to-air operations conducted by fighter aircraft with the objective of clearing an airspace of enemy fighters, known as combat air patrols, can also be offensive counter-air missions, but they are seen as a comparatively slow and expensive way of achieving the final objective: air superiority. The opposite term is defensive counter air, primarily referring to the protection of territory, personnel and/or materiel against incursion by enemy aircraft, usually with a combination of ground-based surface-to-air missiles and anti-aircraft artillery, but also through defensive combat air patrols. History: Offensive counter-air strikes have been used since World War I. By one measure the most successful single OCA mission to date was Operation Focus, the Israeli offensive that opened the Six-Day War of 1967, when the Heyl Ha'avir destroyed a large portion of the air power of Egypt, Syria, and Jordan, mostly on the ground, totaling roughly 600 airframes destroyed by a force of 200 aircraft. However, in sheer number of planes destroyed, the opening two weeks of Operation Barbarossa saw some 3,000 to 4,000 Russian planes destroyed in total. Other successful attacks include US counter-air operations in Korea in 1950 and 1953, French and British attacks during the Suez Crisis, and many others. However, there have also been notable failures, such as Operation Chengiz Khan initiated by Pakistan during the Indo-Pakistani War of 1971 and the Iraqi attacks on Iran. Although OCA missions are often carried out via air strikes, they are not limited to aerial action. The Teishin Shudan and Giretsu Kuteitai commandos carried out two notable OCA raids against B-29 bases in the Pacific theatre during World War II, as did the British Long Range Desert Group. The Vietcong successfully destroyed a number of American aircraft with mortar fire during the Vietnam War, and more recently a Taliban raid in Afghanistan destroyed eight AV-8B Harriers. History: The Swedish Air Force developed and used the Bas 60 and Bas 90 air base systems as a defensive measure against offensive counter-air operations during the Cold War. History: Weapons used During the 1950s, the Cold War strategy of both NATO and the Warsaw Pact called for OCA to be carried out with tactical nuclear weapons, but by the mid-1960s, new policies of 'proportional response' brought about a return to conventional tactics. Beginning shortly before the Six-Day War, specialized weapons were developed for disrupting runways, such as the BLU-107 Durandal anti-runway bomb.
Various such weapons continue to be fielded, notably the Hunting JP233 munition used by RAF Panavia Tornado aircraft during the 1991 Gulf War.
**Pilocarpine** Pilocarpine: Pilocarpine is a medication used to reduce pressure inside the eye and treat dry mouth. As an eye drop it is used to manage angle closure glaucoma until surgery can be performed, ocular hypertension, primary open angle glaucoma, and to constrict the pupil after dilation. However, due to its side effects it is no longer typically used for long-term management. Onset of effects with the drops is typically within an hour and lasts for up to a day. By mouth it is used for dry mouth as a result of Sjögren syndrome or radiation therapy.Common side effects of the eye drops include irritation of the eye, increased tearing, headache, and blurry vision. Other side effects include allergic reactions and retinal detachment. Use is generally not recommended during pregnancy. Pilocarpine is in the miotics family of medication. It works by activating cholinergic receptors of the muscarinic type which cause the trabecular meshwork to open and the aqueous humor to drain from the eye.Pilocarpine was isolated in 1874 by Hardy and Gerrard and has been used to treat glaucoma for more than 100 years. It is on the World Health Organization's List of Essential Medicines. It was originally made from the South American plant Pilocarpus. Medical uses: Pilocarpine stimulates the secretion of large amounts of saliva and sweat. It is used to prevent or treat dry mouth, particularly in Sjögren syndrome, but also as a side effect of radiation therapy for head and neck cancer.It may be used to help differentiate Adie syndrome from other causes of unequal pupil size.It may be used to treat a form of dry eye called aqueous deficient dry eye (ADDE) Surgery Pilocarpine is sometimes used immediately before certain types of corneal grafts and cataract surgery. It is also used prior to YAG laser iridotomy. In ophthalmology, pilocarpine is also used to reduce symptomatic glare at night from lights when the patient has undergone implantation of phakic intraocular lenses; the use of pilocarpine would reduce the size of the pupils, partially relieving these symptoms. The most common concentration for this use is pilocarpine 1%. Pilocarpine is shown to be just as effective as apraclonidine in preventing intraocular pressure spikes after laser trabeculoplasty. Medical uses: Presbyopia In 2021, the US Food and Drug Administration approved pilocarpine hydrochloride as an eyedrop treatment for presbyopia, age-related difficulty with near-in vision. It works by causing the pupils to constrict, increasing depth of field, similar to the effect of pinhole glasses. Marketed as Vuity, the effect lasts for more than 6 hours. Other Pilocarpine is used to stimulate sweat glands in a sweat test to measure the concentration of chloride and sodium that is excreted in sweat. It is used to diagnose cystic fibrosis. Adverse effects: Use of pilocarpine may result in a range of adverse effects, most of them related to its non-selective action as a muscarinic receptor agonist. Pilocarpine has been known to cause excessive salivation, sweating, bronchial mucus secretion, bronchospasm, bradycardia, vasodilation, and diarrhea. Eye drops can result in brow ache and chronic use in miosis. It can also cause temporary blurred vision or darkness of vision, temporary shortsightedness, hyphema and retinal detachment. Pharmacology: Pilocarpine is a drug that acts as a muscarinic receptor agonist. 
It acts on a subtype of muscarinic receptor (M3) found on the iris sphincter muscle, causing the muscle to contract - resulting in pupil constriction (miosis). Pilocarpine also acts on the ciliary muscle and causes it to contract. When the ciliary muscle contracts, it opens the trabecular meshwork through increased tension on the scleral spur. This action facilitates the rate that aqueous humor leaves the eye to decrease intraocular pressure. Paradoxically, when pilocarpine induces this ciliary muscle contraction (known as an accommodative spasm) it causes the eye's lens to thicken and move forward within the eye. This movement causes the iris (which is located immediately in front of the lens) to also move forward, narrowing the Anterior chamber angle. Narrowing of the anterior chamber angle increases the risk of increased intraocular pressure. Society and culture: Preparation Plants in the genus Pilocarpus are the only known sources of pilocarpine, and commercial production is derived entirely from the leaves of Pilocarpus microphyllus (Maranham Jaborandi). This genus grows only in South America, and Pilocarpus microphyllus is native to several states in northern Brazil.Pilocarpine is extracted from the powdered leaf material in a multi-step process. First the material is treated with ethanol acidified with hydrochloric acid, and the solvents removed under reduced pressure. The resultant aqueous residue is neutralized with ammonia and put aside until the resin has completely settled. It is then filtered and concentrated by sugar solution to a small volume, made alkaline with ammonia, and finally extracted with chloroform. The solvent is removed under reduced pressure. Society and culture: Trade names Pilocarpine is available under several trade names such as: Diocarpine (Dioptic), Isopto Carpine (Alcon), Miocarpine (CIBA Vision), Ocusert Pilo-20 and -40 (Alza), Pilopine HS (Alcon), Salagen (MGI Pharma), Scheinpharm Pilocarpine (Schein Pharmaceutical), Timpilo (Merck Frosst), and Vuity (AbbVie). Research: Pilocarpine is used to induce chronic epilepsy in rodents, commonly rats, as a means to study the disorder's physiology and to examine different treatments. Smaller doses may be used to induce salivation in order to collect samples of saliva, for instance, to obtain information about IgA antibodies. Veterinary: Pilocarpine is given in moderate doses (about 2 mg) to induce emesis in cats that have ingested foreign plants, foods, or drugs. One feline trial determined it was effective, even though the usual choice of emetic is xylazine.
**Pu'er tea** Pu'er tea: Pu'er or pu-erh (Wade-Giles p'u-erh) is a variety of fermented tea traditionally produced in Yunnan Province, China. In the context of traditional Chinese tea production terminology, fermentation refers to microbial fermentation (called 'wet piling'), and is typically applied after the tea leaves have been sufficiently dried and rolled. As the tea undergoes controlled microbial fermentation, it also continues to oxidize, which is also controlled, until the desired flavors are reached. This process produces tea known as 黑茶 hēichá (lit. 'black tea') (which is different from the English-language black tea that is called 红茶 hóngchá (lit. 'red tea') in Chinese). Pu'er falls under a larger category of fermented teas commonly translated as dark teas. Pu'er tea: Two main styles of pu'er production exist: a traditional, longer production process known as shēng (raw) pu'er; and a modern, accelerated production process known as shóu (ripe) pu'er. Pu'er traditionally begins with a raw product called "rough" (máo) chá (毛茶, lit. fuzzy/furry tea) and can be sold in this form or pressed into a number of shapes and sold as "shēng chá (生茶, lit. raw tea). Both of these forms then undergo the complex process of gradual fermentation and maturation with time. The wòduī (渥堆) fermentation process developed in 1973 by the Kunming Tea Factory created a new type of pu'er tea. This process involves an accelerated fermentation into shóu (or shú) chá (熟茶, lit. ripe tea) that is then stored loose or pressed into various shapes. The fermentation process was adopted at the Menghai Tea Factory shortly after and technically developed there. The legitimacy of shóu chá is disputed by some traditionalists when compared to the traditionally, longer-aged teas, such as shēng chá. Pu’er can be stored and permitted to age and to mature, like wine, in non-airtight containers before consumption. This is why it has long been standard practice to label all types of pu’er with the year and region of production. Name: Pu'er is the pinyin romanization of the Mandarin pronunciation of Chinese 普洱. Pu-erh is a variant of the Wade-Giles romanization (properly p‘u-êrh) of the same name. In Hong Kong the same Chinese characters are read as Bo-lei, and that is therefore a common alternative English term for this tea. The tea got its name from the ancient tea-trading town of Pu'er (普洱), which is today's Ning'er Town (宁洱镇) in Ning'er County, Pu'er prefecture-level city of Yunnan. Pu'er County had its name changed into Simao, after Simao Town, the new county seat in 1950 following the Communist victory. The County of Simao became a prefecture level City and had its name changed to Pu'er in 2007. Although the urban center of the modern Pu'er City remained in Simao, the whole Pu'er region is now sometimes considered the appellation for pu'er proper. Name: Pu'er (and all tea) terminology varies from language to language. For example, pu-er is known in Chinese as a type of 'dark tea' (heicha) while in Spanish it is considered ‘té rojo’ (red tea) and, conversely, what in Chinese is called 'red tea' (hongcha) is known in Spanish as 'té negro' (black tea). History: Fermented tea leaves has a long history among ethnic groups in Southwest China. These crude teas were of various origins and were meant to be low cost. 
Darkened tea, or hēichá, is still the major beverage for the ethnic groups in the southwestern borders and, until the early 1990s, was the third major tea category produced by China mainly for this market segment.There had been no standardized processing for the darkening of hēichá until the postwar years in the 1950s, when there was a sudden surge in demand in Hong Kong, perhaps because of the concentration of refugees from the mainland. In the 1970s the improved process was taken back to Yunnan for further development, which has resulted in the various production styles variously referred to as wòduī today. This new process produced a finished product in a matter of months that many thought tasted similar to teas aged naturally for 10–15 years and so this period saw a demand-driven boom in the production of hēichá by the artificial ripening method. History: In recent decades, demand has come full circle and it has become more common again for hēichá, including pu'er, to be sold as the raw product without the artificial accelerated fermentation process. History: Pu'er tea processing, although straightforward, is complicated by the fact that the tea itself falls into two distinct categories: the "raw" shēngchá and the "ripe" shóuchá. All types of pu'er tea are created from máochá (毛 茶), a mostly unoxidized green tea processed from Camellia sinensis var. assamica, which is the large leaf type of Chinese tea found in the mountains of southern and western Yunnan (in contrast to the small leaf type of tea used for typical green, oolong, black, and yellow teas found in the other parts of China). History: Máochá can be sold directly to market as loose leaf tea, compressed to produce "raw" shēngchá, naturally aged and matured for several years before being compressed to also produce "raw" shēngchá or undergo wòduī ripening for several months prior to being compressed to produce "ripe" shóuchá. While unaged and unprocessed, Máochá pǔ'ěr is similar to green tea. Two subtle differences worth noting are that pǔ'ěr is not produced from the small-leaf Chinese varietal but the broad-leaf varietal mostly found in the southern Chinese provinces and India. The second is that pǔ'ěr leaves are picked as one bud and 3-4 leaves whilst green tea is picked as one bud and 1-2 leaves. This means that older leaves contribute to the qualities of pǔ'ěr tea. History: Ripened or aged raw pǔ'ěr has occasionally been mistakenly categorized as a subcategory of black tea due to the dark red color of its leaves and liquor. However, pǔ'ěr in both its ripened and aged forms has undergone secondary oxidization and fermentation caused both by organisms growing in the tea and free-radical oxidation, thus making it a unique type of tea. This divergence in production style not only makes the flavor and texture of pu'er tea different but also results in a rather different chemical makeup of the resulting brewed liquor. History: The fermented dark tea, hēichá (黑茶), is one of the six classes of tea in China, and pǔ'ěr is classified as a dark tea (defined as fermented), something which is resented by some who argue for a separate category for pǔ'ěr tea. As of 2008, only the large-leaf variety from Yunnan can be called a pǔ'ěr. Processing: Pu'er is typically made through two steps. First, all leaves must be roughly processed into maocha to stop oxidation. From there it may be further processed by fermentation, or directly packaged. 
Summarising the steps:: 207  Maocha: Killing Green (杀青) → Rolling (揉捻) → Sun Drying (晒干). Green/raw (生普, sheng cha): pressed or packaged with no further fermentation step. Dark/ripe (熟普, shu cha): Piling (渥堆) → Drying (干燥). Both sheng and ripe pu'er can be shaped into cakes or bricks and aged with time. Processing: Maocha or rough tea The intent of the máochá stage (青 毛 茶 or 毛 茶; literally, "light green rough tea" or "rough tea" respectively) is to dry the leaves and keep them from spoiling. It involves minimal processing and there is no fermentation involved. Processing: The first step in making raw or ripened pu'er is picking appropriate tender leaves. Plucked leaves are handled gingerly to prevent bruising and unwanted oxidation. Wilting or withering the leaves after picking is optional and depends on the tea processor, as drying occurs at various stages of processing. If so, the leaves are spread out in the sun, weather permitting, or in a ventilated space to wilt and remove some of the water content. On overcast or rainy days, the leaves are wilted by light heating, a slight difference in processing that will affect the quality of the resulting maocha and pu'er. Processing: The leaves are then dry-roasted using a large wok in a process called "killing the green" (殺 青; pinyin: shā qīng), which arrests most enzyme activity in the leaf and prevents full oxidation.: 207  After pan-roasting, the leaves are rolled, rubbed, and shaped into strands through several steps to lightly bruise the tea and then left to dry in the sun. Unlike green tea produced in China, which is dried with hot air after the pan-frying stage to completely kill enzyme activity, leaves used in the production of pu'er are not air-dried after pan-roasting, which leaves a small amount of enzymes that contribute a minor amount of oxidation to the leaves during sun-drying. The bruising of the tea is also important in helping this minimal oxidation to occur, and both of these steps are significant in contributing to the unique characteristics of pu'er tea. Processing: Once dry, máochá can be sent directly to the factory to be pressed into raw pu'er, or undergo further processing to make fermented or ripened pu'er.: 208  Sometimes Mao Cha is sold directly as loose-leaf "raw" Sheng Cha, or it can be matured in loose-leaf form, requiring only two to three years due to the faster rate of natural fermentation in an uncompressed state. This tea is then pressed into numerous shapes and sold as a more matured "raw" Sheng Cha. Processing: Pressing To produce pu'er, many additional steps are needed prior to the actual pressing of the tea. First, a specific quantity of dry máochá or ripened tea leaves pertaining to the final weight of the bingcha is weighed out. The dry tea is then lightly steamed in perforated cans to soften it and make it more tacky. This allows it to hold together and not crumble during compression. A ticket, called a "nèi fēi" (内 飞), or additional adornments, such as colored ribbons, are placed on or in the midst of the leaves, which are then inverted into a cloth bag or wrapped in cloth. The pouch of tea is gathered inside the cloth bag and wrung into a ball, with the extra cloth tied or coiled around itself. This coil or knot is what produces the dimpled indentation on the reverse side of a tea cake when pressed. Depending on the shape of the pu'er being produced, a cotton bag may or may not be used. For instance, brick or square teas often are not compressed using bags. Pressing can be done by: A press.
In the past, hand lever presses were used, but were largely superseded by hydraulic presses. The press forces the tea into a metal form that is occasionally decorated with a motif in sunken-relief. Due to its efficiency, this method is used to make almost all forms of pressed pu'er. Tea can be pressed either with or without it being bagged, with the latter done by using a metal mould. Tightly compressed bǐng, formed directly into a mold without bags using this method are known as tié bǐng (鐵 餅, literally "iron cake/puck") due to its density and hardness. The taste of densely compressed raw pu'er is believed to benefit from careful aging for up to several decades. Processing: A large heavy stone, carved into the shape of a short cylinder with a handle, simply weighs down a bag of tea on a wooden board. The tension from the bag and the weight of the stone together give the tea its rounded and sometimes non-uniform edge. This method of pressing is often referred to as: "hand" or "stone-pressing", and is how many artisanal pu'er bǐng are still manufactured.Pressed pu'er is removed from the cloth bag and placed on latticed shelves, where they are allowed to air dry, which may take several weeks or months, depending on the wetness of the pressed cakes. The pu'er cakes are then individually wrapped by hand, and packed. Processing: Fermentation Pu'er is a microbially fermented tea obtained through the action of molds, bacteria and yeasts on the harvested leaves of the tea plant. It is thus truly a fermented tea, whereas teas known in the west as black teas (known in China as Red teas) have only undergone large-scale oxidation through naturally occurring tea plant enzymes. Mislabelling the oxidation process as fermentation and thus naming black teas, such as Assam, Darjeeling or Keemun, as fermented teas has long been a source of confusion. Only tea such as pu'er, that has undergone microbial processing, can correctly be called a fermented tea.Pu'er undergoes what is known as a solid-state fermentation where water activity is low to negligible. Both endo-oxidation (enzymes derived from the tea-leaves themselves) and exo-oxidation (microbial catalysed) of tea polyphenols occurs. The microbes are also responsible for metabolising the carbohydrates and amino acids present in the tea leaves. Although the microbes responsible have proved highly variable from region to region and even factory to factory, the key organism found and responsible for almost all pu'er fermentation has been identified in numerous studies as Aspergillus niger, with some highlighting the possibility of ochratoxins produced by the metabolism of some strains of A.niger having a potentially harmful effect through consumption of pu'er tea. This notion has recently been refuted through a systematic chromosome analysis of the species attributed to many East Asian fermentations, including those that involve pu'er, where the authors have reclassified the organisms involved as Aspergillus luchuensis. It is apparent that this species does not have the gene sequence for coding ochratoxin and thus pu'er tea should be considered safe for human consumption. Processing: Ripe and raw pu'er "Ripened" Shu Cha (熟茶) tea is pressed maocha that has been specially processed to imitate aged "raw" Sheng Cha tea. Although it is also known in English as cooked pu'er, the process does not actually employ cooking to imitate the aging process. The term may be due to inaccurate translation, as shóu (熟) means both "fully cooked" and "fully ripened". 
Processing: The process used to convert máochá into ripened pu'er manipulates conditions to approximate the result of the aging process by prolonged bacterial and fungal fermentation in a warm humid environment under controlled conditions, a technique called Wò Duī (渥 堆, "wet piling" in English), which involves piling, dampening, and turning the tea leaves in a manner much akin to composting. Studies have shown that the soil suitable for planting Pu'er tea trees is slightly acidic soil with loose soil, deep soil layer, good drainage and ventilation.The piling, wetting, and mixing of the piled máochá ensures even fermentation. The bacterial and fungal cultures found in the fermenting piles were found to vary widely from factory to factory throughout Yunnan, consisting of multiple strains of Aspergillus spp., Penicillium spp., yeasts, and a wide range of other microflora. Control over the multiple variables in the ripening process, particularly humidity and the growth of Aspergillus spp., is key in producing ripened pu'er of high quality. Poor control in fermentation/oxidation process can result in bad ripened pu'er, characterized by badly decomposed leaves and an aroma and texture reminiscent of compost. The ripening process typically takes between 45 and 60 days on average. Processing: The Wò Duī process was first developed in 1973 by Menghai Tea Factory and Kunming Tea Factory to imitate the flavor and color of aged raw pu'er, and was an adaptation of wet storage techniques used by merchants to artificially simulate ageing of their teas. Mass production of ripened pu'er began in 1975. It can be consumed without further aging, or it can be stored further to "air out" some of the less savory flavors and aromas acquired during fermentation. The tea is sold both in flattened and loose form. Some tea collectors believe "ripened" Shu Cha should not be aged for more than a decade. Processing: Wet pile fermented pu'er has higher levels of caffeine and much higher levels of gallic acid compared with traditionally aged raw pu'er. Additionally, traditionally aged pu'er has higher levels of the antioxidant and carcinogen-trapping epigallocatechin gallate as well as (+)-catechin, (–)-epicatechin, (–)-epigallocatechin, gallocatechin gallate, and epicatechin gallate than wet pile fermented pu'er. Finally, wet pile fermented puer has much lower total levels for all catechins than traditional pu'er and other teas except for black tea which also has low total catechins. Classification: Aside from vintage year, pu'er tea can be classified in a variety of ways: by shape, processing method, region, cultivation, grade, and season. Classification: Shape Pu'er is compressed into a variety of shapes. Other lesser seen forms include: stacked "melon pagodas", pillars, calabashes, yuanbao, and small tea bricks (2–5 cm in width). Pu'er is also compressed into the hollow centers of bamboo stems or packed and bound into a ball inside the peel of various citrus fruits (Xiaoqinggan) or sold as nuggets (Suiyinzi 碎银子 or fossilized tea 茶化石) or bundles made from tea at the center of wet piles (Laotoucha 老头茶). Classification: Process and oxidation Pu'er teas are often collectively classified in Western tea markets as post-fermentation, and in Eastern markets as black teas, but there is general confusion due to improper use of the terms "oxidation" and "fermentation". Typically black tea is termed "fully fermented", which is incorrect as the process used to create black tea is oxidation and does not involve microbial activity. 
Black teas are fully oxidized, green teas are unoxidized, and Oolong teas are partially oxidized to varying degrees. Classification: All pu'er teas undergo some oxidation during sun drying and then become either: Fully fermented with microbes during a processing phase which is largely anaerobic, i.e. without the presence of oxygen. This phase is similar to composting and results in Shu (ripened) pu'er Partly fermented by microbial action, and partly oxidized during the natural aging process resulting in Sheng (raw) pu'er. The aging process depends on how the sheng pu'er is stored, which determines the degree of fermentation and oxidization achieved.According to the production process, four main types of pu'er are commonly available on the market: Maocha, green pu'er leaves sold in loose form as the raw material for making pressed pu'er. Badly processed maocha will produce an inferior pu'er. Classification: Green/raw pu'er, pressed maocha that has not undergone additional processing; high quality green pu'er is highly sought by collectors. Ripened/cooked pu'er, maocha that has undergone an accelerated fermentation process lasting 45 to 60 days on average. Badly fermented maocha will create a muddy tea with fishy and sour flavors indicative of inferior aged pu'er. Aged raw pu'er, a tea that has undergone a slow secondary oxidation and microbial fermentation. Although all types of pu'er can be aged, the pressed raw pu'er is typically the most highly regarded, since aged maocha and ripened pu'er both lack a clean and assertive taste. Classification: Flavour Ripe pu'er is often described by its multiple layers of aroma: duiwei (堆味) or fermented flavour, cangwei (仓味) or storage flavour, xingwei (腥味) or fish flavour and meiwei (霉味) moldy flavour. The storage locations (Yunnan, Canton or Hong Kong) and storing conditions (wet versus dry storage) will result in distinct flavours. The aromas can be annotated as camphora (樟香), ginseng (参香), jujube (枣香), costus (木香), minty (荷香) or very aged (陈香). Raw pu'er is often distinguished by its floral (花香), grassy (草香), fresh (清香), herbal (药香), fruity (水果) or honey(蜜香)aroma. Classification: Some pu'er are flavour-infused. Sticky rice pu'er (nuomixiang, 糯米香) is infused with leaves of Semnostachya menglaensis, native to Mengla, which gives it a young rice flavour. Bamboo-roasted pu'er is encased in bamboo tubes and undergoes a smoking process. Tangerine pu'er (xiaoqinggan, 小青柑) is made with small green tangerines stuffed with tea. Flower-infused pu'er is made in the form of tea balls (龙珠) or tea cakes. Classification: Regions Yunnan Pu'er is produced in almost every county and prefecture in the province. Proper pu'er is sometimes considered to be limited to that produced in Pu'er City. Classification: Six Great Tea Mountains The best known pu'er areas are the Six Great Tea Mountains (Chinese: 六 大 茶 山; pinyin: liù dà chá shān), a group of mountains in Xishuangbanna, Yunnan, renowned for their climates and environments, which not only provide excellent growing conditions for pu'er, but also produce unique taste profiles (akin to terroir in wine) in the produced pu'er tea. 
Over the course of history, the designated mountains for the tea mountains have either been changed or listed differently.In the Qing dynasty government records for Pu'er (普洱府志), the oldest historically designated mountains were said to be named after six commemorative items left in the mountains by Zhuge Liang, and using the Chinese characters of the native languages (Hani and Tai) of the region. These mountains are all located northeast of the Lancang River (Mekong) in relatively close proximity to one another. The mountains' names, in the Standard Chinese character pronunciation are: Gedeng (革 登 山) Yiwu (易 武 山) Mangzhi (莽 枝 山) Manzhuan (蠻 磚 山) Yibang (倚 邦 山) Yōulè (攸 樂 山)Southwest of the river there are also nine lesser known tea mountains, which are isolated by the river. They are: Mengsong (勐宋): Pasha (帕沙): Jingmai (景迈): Nánnuò (南 糯): a varietal of tea grows here called zĭjuān (紫 娟, literally "purple lady") whose buds and bud leaves have a purple hue. Classification: Bada (巴达): Hekai (贺开): Bulangshan (布朗山): Mannuo (曼糯): Xiao mengsong (小勐宋):For various reasons, around the end of the Qing dynasty and at the beginning of the ROC period (the early twentieth century), tea production in these mountains dropped drastically, either due to large forest fires, overharvesting, prohibitive imperial taxes, or general neglect. To revitalize tea production in the area, the Chinese government in 1962 selected a new group of six great tea mountains that were named based on the more important tea-producing mountains at the time, including Youle mountain from the original six. Classification: Other areas of Yunnan Many other areas of Yunnan also produce pu'er tea. Yunnan prefectures that are major producers of pu'er include Lincang, Dehong, Simao, Xishuangbanna, and Wenshan. Other notable tea mountains famous in Yunnan include among others: Bāngwǎi (邦 崴 山) Bānzhāng (班 章): this is not a mountain but a Hani village in the Bulang Mountains, noted for producing powerful and complex teas that are bitter with a sweet aftertaste Yìwǔ (易 武 山) Bada (巴達山) Wuliang Ailuo Jinggu Baoshan YushouRegion is only one factor in assessing a pu'er tea, and pu'er from any region of Yunnan can be as prized as any from the Six Great Tea Mountains if it meets other criteria, such as being wild growth, hand-processed tea. Classification: Other provinces While Yunnan produces the majority of pu'er, other regions of China, including Hunan and Guangdong, have also produced the tea. The Guangyun Gong cake, for example, although the early productions were composed of pure Yunnan máochá, after the ‘60s the cakes featured a blend of Yunnan and Guangdong máochá, and the most recent production of these cakes contains mostly from the latter.In late 2008, the Chinese government approved a standard declaring pu'er tea as a "product with geographical indications", which would restrict the naming of tea as pu'er to tea produced within specific regions of the Yunnan province. The standard has been disputed, particularly by producers from Guangdong. Fermented tea in the pu'er style made outside of Yunnan is often branded as "dark tea" in light of this standard. Classification: Other regions In addition to China, border regions touching Yunnan in Vietnam, Laos, and Burma are also known to produce pu'er tea, though little of this makes its way to the Chinese or international markets. Classification: Cultivation The method of cultivation can have as much of an effect on the final product as region or grade. 
There are three widely used methods of cultivation for pu'er: Plantation bushes (guànmù, 灌 木; taídì, 台 地): Cultivated tea bushes, from the seeds or cuttings of wild tea trees and planted in relatively low altitudes and flatter terrain. The tea produced from these plants are often considered inferior due to the use of pesticides and chemical fertilizer in cultivation, the lack of pleasant flavors, and the presence of bitterness or astringency. Classification: "Wild arbor" trees (yěfàng, 野 放): Though often conflated with wild tree especially by producers, this method involves trees from older plantations that were cultivated in previous generations that have gone feral due to the lack of care. These trees are said to produce teas of better flavor due to the higher levels of secondary metabolites produced in the tea tree. Additionally, the trees are typically cared for using organic practices, which includes the scheduled pruning of the trees in a manner similar to pollarding. Despite the good quality of their produced teas, "wild arbor" trees are often not as prized as truly wild trees. Classification: Wild trees (gŭshù, 古 树; literally "old tree"): Teas from old wild trees, grown without human intervention, are typically the highest valued pu'er teas. Such teas are valued for having deeper and more complex flavors, often with camphor or "mint" notes, said to be imparted by the many camphor trees that grow in the same environment as the wild tea trees. Young raw pu'er teas produced from the leaf tips of these trees also lack overwhelming astringency and bitterness often attributed to young pu'er. Pu'er made from the distinct but closely related so-called wild species Camellia taliensis can command a much higher price than pu'er made from the more common Camellia sinensis.Determining whether or not a tea is wild is a challenging task, made more difficult through the inconsistent and unclear terminology and labeling in Chinese. Terms like yěshēng (野 生; literally "wild" or "uncultivated"), qiáomù (乔 木; literally "tall tree"), yěshēng qiáomù (野 生 乔 木; literally "uncultivated trees"), and gǔshù are found on the labels of cakes of both wild and "wild arbor" variety, and on blended cakes, which contain leaves from tea plants of various cultivations. These inconsistent and often misleading labels can easily confuse uninitiated tea buyers regardless of their grasp of the Chinese language. As well, the lack of specific information about tea leaf sources in the printed wrappers and identifiers that come with the pu'er cake makes identification of the tea a difficult task. Pu'er journals and similar annual guides such as The Profound World of Chi Tse, Pu-erh Yearbook, and Pu-erh Teapot Magazine contain credible sources for leaf information. Tea factories are generally honest about their leaf sources, but someone without access to tea factory or other information is often at the mercy of the middlemen or vendor. Many pu'er aficionados seek out and maintain relationships with vendors who they feel they can trust to help mitigate the issue of finding the "truth" of the leaves. Classification: Even in the best of circumstances, when a journal, factory information, and trustworthy vendor all align to assure a tea's genuinely wild leaf, fakes teas are common and make the issue even more complicated. Because collectors often doubt the reliability of written information, some believe certain physical aspects of the leaf can point to its cultivation. 
For example, drinkers cite the evidence of a truly wild old tree in a menthol effect ("camphor" in tea specialist terminology) supposedly caused by the Camphor laurel trees that grow amongst wild tea trees in Yunnan's tea forests. As well, the presence of thick veins and sawtooth-edged on the leaves along with camphor flavor elements are taken as signifiers of wild tea. Classification: Grade Pu'er can be sorted into ten or more grades. Generally, grades are determined by leaf size and quality, with higher numbered grades meaning older/larger, broken, or less tender leaves. Grading is rarely consistent between factories, and first grade tea leaves may not necessarily produce first grade cakes. Different grades have different flavors; many bricks blend several grades chosen to balance flavors and strength. Classification: Season Harvest season also plays an important role in the flavor of pu'er. Spring tea is the most highly valued, followed by fall tea, and finally summer tea. Only rarely is pu'er produced in winter months, and often this is what is called "early spring" tea, as harvest and production follows the weather pattern rather than strict monthly guidelines. Tea factories: Factories are generally responsible for the production of pu'er teas. While some individuals oversee small-scale production of high-quality tea, the majority of tea on the market is compressed by factories or tea groups. Until recently factories were all state-owned and under the supervision of the China National Native Produce & Animal Byproducts Import & Export Corporation (CNNP), Yunnan Tea Branch. Kunming Tea Factory, Menghai Tea Factory, Pu'er Tea Factory and Xiaguan Tea Factory are the most notable of these state-owned factories. While CNNP still operates today, few factories are state-owned, and CNNP contracts out much production to privately owned factories. Tea factories: Different tea factories have earned good reputations. Menghai Tea Factory and Xiaguan Tea Factory, which date from the 1940s, have enjoyed good reputations, but in the twenty-first century face competition from many of the newly emerging private factories. For example, Haiwan Tea Factory, founded by former Menghai Factory owner Zhou Bing Liang in 1999, has a good reputation, as do Changtai Tea Group, Mengku Tea Company, and other new tea makers formed in the 1990s. However, due to production inconsistencies and variations in manufacturing techniques, the reputation of a tea company or factory can vary depending on the year or the specific cakes produced during a year. Tea factories: The producing factory is often the first or second item listed when referencing a pu'er cake, the other being the year of production. Recipes: Tea factories, particularly formerly government-owned factories, produce many cakes using recipes for tea blends, indicated by a four-digit recipe number. The first two digits of recipe numbers represent the year the recipe was first produced, the third digit represents the grade of leaves used in the recipe, and the last digit represents the factory. The number 7542, for example, would denote a recipe from 1975 using fourth-grade tea leaf made by Menghai Tea Factory (represented by 2). Recipes: Factory numbers (fourth digit in recipe): Kunming Tea Factory Menghai Tea Factory aka Dayi Xiaguan Lan Cang Tea Factory or Feng Qing Tea Factory Pu-erh Tea Factory (now Pu-erh Tea group Co. 
Ltd), Six Famous Tea Mountain Factory, unknown / not specified, and Haiwan Tea Factory and Long Sheng Tea Factory. Tea of all shapes can be made by numbered recipe. Not all recipes are numbered, and not all cakes are made by recipe. The term "recipe," it should be added, does not always indicate consistency, as the quality of some recipes changes from year to year, as do the contents of the cake. Perhaps only the factories producing the recipes really know what makes them consistent enough to label by these numbers. Recipes: Occasionally, a three-digit code is attached to the recipe number by hyphenation. The first digit of this code represents the year the cake was produced, and the other two numbers indicate the production number within that year. For instance, the seven-digit sequence 8653-602 would indicate the second production in 2006 of factory recipe 8653. Some productions of cakes are valued over others because production numbers can indicate whether a tea was produced earlier or later in a season/year. This information allows one to single out tea cakes produced using a better batch of máochá. Tea packaging: Pu'er tea is specially packaged for trade, identification, and storage. These attributes are used by tea drinkers and collectors to determine the authenticity of the pu'er tea. Tea packaging: Individual cakes Pu'er tea cakes, or bǐngchá (饼茶 or 餅茶), are almost always sold with a: Wrapper: Made usually from thin cotton cloth or cotton paper, it shows the tea company/factory, the year of production, the region/mountain of harvest, the plant type, and the recipe number. The wrapper can also contain decals, logos and artwork. Occasionally, more than one wrapper will be used to wrap a pu'er cake. Tea packaging: Nèi fēi (内 飞 or 內 飛): A small ticket originally stuck on the tea cake but now usually embedded into the cake during pressing. It is usually used as proof, or a possible sign, of the authenticity of the tea. Some higher-end pu'er cakes have more than one nèi fēi embedded in the cake. The ticket usually indicates the tea factory and brand. Tea packaging: Nèi piào (内 票): A larger description ticket or flyer packaged loose under the wrapper. Both aid in assuring the identity of the cake. It usually indicates factory and brand. As well, many nèi piào contain a summary of the tea factory's history and any additional laudatory statements concerning the tea, from its taste and rarity to its ability to cure diseases and effect weight loss. Tea packaging: Bǐng (饼 or 餅): The tea cake itself. Tea cakes or other compressed pu'er can be made up of two or more grades of tea, typically with higher-grade leaves on the outside of the cake and lower grades or broken leaves in the center. This is done to improve the appearance of the tea cake and improve its sale. Predicting the grade of tea used on the inside takes some effort and experience in selection. However, the area in and around the dimple of the tea cake can sometimes reveal the quality of the inner leaves. Recently, nèi fēi have become more important in identifying and preventing counterfeits. Menghai Tea Factory in particular has begun microprinting and embossing its tickets in an effort to curb the growth of counterfeit teas found in the marketplace in the late 1990s and early 2000s. Some nèi fēi also include the vintage year and are production-specific to help identify the cake and prevent counterfeiting through a surfeit of different brand labels. Tea packaging: Counterfeit pu'er is common.
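The digit conventions described above can be made concrete with a short parser. The following sketch is illustrative only: the helper name and the returned dictionary keys are my own, not part of any standard tooling, and the century assumption for the recipe year is a guess flagged in the code.

```python
def parse_puer_recipe(code: str) -> dict:
    """Decode a pu'er recipe number such as '7542' or '8653-602'.

    Scheme described in the article: first two digits = year the recipe was
    first produced, third digit = leaf grade, fourth digit = factory code.
    An optional hyphenated suffix gives the production-year digit and the
    production number within that year.
    """
    recipe, _, production = code.partition("-")
    if len(recipe) != 4 or not recipe.isdigit():
        raise ValueError(f"expected a four-digit recipe number, got {recipe!r}")
    info = {
        "recipe_first_produced": f"19{recipe[:2]}",  # assumes a 20th-century recipe
        "leaf_grade": int(recipe[2]),
        "factory_code": int(recipe[3]),
    }
    if production:
        info["production_year_digit"] = int(production[0])
        info["production_number_in_year"] = int(production[1:])
    return info

print(parse_puer_recipe("7542"))      # e.g. a 1975 recipe, grade-4 leaf, factory code 2
print(parse_puer_recipe("8653-602"))  # second production that year of recipe 8653
```

Mapping factory codes to factory names is deliberately left out here, since, as the text notes, only the producing factories themselves can vouch for what the numbers guarantee in practice.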
The practices include claiming the tea is older than it actually is, misidentifying the origin of the leaf as Yunnan instead of a non-Yunnan region, labeling terrace tea as forest tea, and selling green tea instead of raw pu'er. The interpretation of the packing of pu'er is usually dependent on the consumer's knowledge and negotiation between the consumer and trader. Tea packaging: Wholesale When bought in large quantities, pu'er tea is generally sold in stacks, referred to as a tǒng (筒), which are wrapped in bamboo shoot husks, bamboo stem husks, or coarse paper. Some tongs of vintage pu'er will contain a tǒng piào (筒 票), or tong ticket, but it is less common to find them in productions past the year 2000. The number of bǐngchá in a tǒng varies depending on the weight of individual bǐngchá. For instance one tǒng can contain: Seven 357–500 g 'bǐngchá', Five 250 g mini-'bǐngchá' Ten 100 g mini-'bǐngchá'Twelve tǒng are referred to as being one jiàn (件), although some producers/factories vary how many tǒng equal one jiàn. A jiàn of tea, which is bound together in a loose bamboo basket, will usually have a large batch ticket (大 票; pinyin: dàpiào) affixed to its side that will indicate information such as the batch number of the tea in a season, the production quantities, tea type, and the factory where it was produced. Aging and storage: Pu'er teas of all varieties, shapes, and cultivation can be aged to improve their flavor, but the tea's physical properties will affect the speed of aging as well as its quality. These properties include: Leaf quality: Maocha that has been improperly processed will not age to the level of finesse as properly processed maocha. The grade and cultivation of the leaf also greatly affect its quality, and thus its aging. Aging and storage: Compression: The tighter a tea is compressed, the slower it will age. In this respect, looser hand- and stone-pressed pu'er teas will age more quickly than denser hydraulic-pressed pu'er. Aging and storage: Shape and size : The more surface area, the faster the tea will age. Bǐngchá and zhuancha thus age more quickly than golden melon, tuocha, or jincha. Larger bingcha age slower than smaller 'bǐngchá', and so forth.Just as important as the tea's properties, environmental factors for the tea's storage also affect how quickly and successfully a tea ages. They include: Air flow: Regulates the oxygen content surrounding the tea and removes odors from the aging tea. Dank, stagnant air will lead to dank, stale smelling aged tea. Wrapping a tea in plastic will eventually arrest the aging process. Aging and storage: Odors: Tea stored in the presence of strong odors will acquire them, sometimes for the duration of their "lifetime." Airing out pu'er teas can reduce these odors, though often not completely. Humidity : The higher the humidity, the faster the tea will age. Liquid water accumulating on tea may accelerate the aging process but can also cause the growth of mold or make the flavor of the tea less desirable. 60–85% humidity is recommended. There is an ongoing argument as to whether high fluctuations in humidity negatively impact tea quality. Sunlight: Tea that is exposed to sunlight dries out prematurely, and often becomes bitter. Aging and storage: Temperature: Teas should not be subjected to high heat since undesirable flavors will develop. However at low temperatures, the aging of pu'er tea will slow down drastically. 
It is argued whether tea quality is adversely affected if it is subjected to highly fluctuating temperature.When preserved as part of a tong, the material of the tong wrapper, whether it is made of bamboo shoot husks, bamboo leaves, or thick paper, can also affect the quality of the aging process. The packaging methods change the environmental factors and may even contribute to the taste of the tea itself. Aging and storage: Age is not the sole factor in determining pu'er quality. Similar to aging wine, the tea reaches a peak with age and can degrade in quality afterwards. Due to the many recipes and different processing methods used in the production of different batches of pu'er, the optimal age for each tea will vary. Some may take 10 years while others 20 or 30+ years. It is important to check the status of ageing for your teacakes to know when they have peaked so that proper care can be given to halt the ageing process. Aging and storage: Raw pu'er Over time, raw pu'er acquires an earthy flavor due to slow oxidation and other, possibly microbial processes. However, this oxidation is not analogous to the oxidation that results in green, oolong, or black tea, because the process is not catalyzed by the plant's own enzymes but rather by fungal, bacterial, or autooxidation influences. Pu'er flavors can change dramatically over the course of the aging process, resulting in a brew tasting strongly earthy but clean and smooth, reminiscent of the smell of rich garden soil or an autumn leaf pile, sometimes with roasted or sweet undertones. Because of its ability to age without losing "quality", well aged good pu'er gains value over time in the same way that aged roasted oolong does.Raw pu'er can undergo "wet storage" (shīcāng, 湿仓) and "dry storage" (gāncāng 干仓), with teas that have undergone the latter ageing more slowly, but thought to show more complexity. Dry storage involves keeping the tea in "comfortable" temperature and humidity, thus allowing the tea to age slowly. Wet or "humid" storage refers to the storage of pu'er tea in humid environments, such as those found naturally in Hong Kong, Guangzhou and, to a lesser extent, Taiwan. Aging and storage: The practice of "Pen Shui" 喷 水 involves spraying the tea with water and allowing it dry off in a humid environment. This process speeds up oxidation and microbial conversion, which only loosely mimics the quality of natural dry storage aged pu'er. "Pen Shui" 喷 水 pu'er not only does not acquire the nuances of slow aging, it can also be hazardous to drink because of mold, yeast, and bacteria cultures.Pu'er properly stored in different environments can develop different tastes at different rates due to environmental differences in ambient humidity, temperature, and odors. For instance, similar batches of pu'er stored in the different environments of Yunnan, Guangzhou and Hong Kong are known to age very differently. Because the process of aging pu'er is lengthy, and teas may change owners several times, a batch of pu'er may undergo different aging conditions, even swapping wet and dry storage conditions, which can drastically alter its flavor. Raw pu'er can be ruined by storage at very high temperatures, or exposure to direct contact with sunlight, heavy air flow, liquid water, or unpleasant smells. 
Aging and storage: Although low to moderate air flow is important for producing a good-quality aged raw pu'er, it is generally agreed by most collectors and connoisseurs that raw pu'er tea cakes older than 30 years should not be further exposed to "open" air since it would result in the loss of flavors or degradation in mouthfeel. The tea should instead be preserved by wrapping or hermetically sealing it in plastic wrapping or ideally glass. Aging and storage: Ripe pu'er Since the ripening process was developed to imitate aged raw pu'er, many arguments surround the idea of whether aging ripened pu'er is desirable. Mostly, the issue rests on whether aging ripened pu'er will, for better or worse, alter the flavor of the tea. Aging and storage: It is often recommended to age ripened pu'er to air out the unpleasant musty flavors and odors formed due to maocha fermentation. However, some collectors argue that keeping ripened pu'er longer than 10 to 15 years makes little sense, stating that the tea will not develop further and possibly lose its desirable flavors. Others note that their experience has taught them that ripened pu'er indeed does take on nuances through aging, and point to side-by-side taste comparisons of ripened pu'er of different ages. Aging the tea increases its value, but may be unprofitable. Aging and storage: Vintaging The common misconception is that all types of pu'er tea will improve in taste—and therefore gain in value—as they get older. Many different factors play into what makes a tea ideal for aging and the aging process itself. Further, ripe (shu) pu'er will not evolve as dramatically as raw (sheng) pu'er will over time due to secondary oxidation and fermentation. Aging and storage: As with aging wine, only finely made and properly stored teas will improve and increase in value. Similarly, only a small percentage of teas will improve over a long period of time. From 2008 Pu'er prices dropped dramatically. Investment-grade Pu'er did not drop as much as the more common varieties. Many producers made large losses, and some decided to leave the industry altogether. Preparation: Preparation of pu'er involves first separating a portion of the compressed tea for brewing. This can be done by flaking off pieces of the cake or by steaming the entire cake until it is soft from heat and hydration. A pu'erh knife, which is similar to an oyster knife or a rigid letter opener, is used to pry large horizontal flakes of tea off the cake to minimize leaf breakage. Smaller cakes such as tuocha or mushroom pu'erh are often steamed until they can be rubbed apart and then dried. In both cases, a vertical sampling of the cake should be obtained since the quality of the leaves in a cake usually varies between the surface and the center. Preparation: Pu'erh is commonly brewed in the Gongfu style using Yixing teaware or a gaiwan, a Chinese brewing vessel consisting of bowl, lid, and saucer. Optimum water temperatures are generally regarded to be in the range of 85-99 °C depending on the quality and processing of the pu'erh. The leaves are traditionally given one or more "rinses" before the first infusion, involving exposing them to hot water for 2–5 seconds and subsequently discarding the extract produced. This is done to saturate the leaf with water and allow it to decompress, as well as remove any small leaf particles that could adversely affect the outcome of the first infusion. 
The first infusion is steeped for 12 to 30 seconds, followed by later infusions repeatedly increasing by 2–10 seconds. The prolonged steeping sometimes used in the west can produce dark, bitter, and unpleasant brews. Quality aged pu'erh can yield many more infusions, with different flavor nuances when brewed in the traditional Gongfu method. Preparation: Because of the prolonged fermentation in ripened pu'erh and slow oxidization of aged raw pu'erh, these teas often lack the bitter, astringent properties of other teas, and can be brewed much stronger and repeatedly, with some claiming 20 or more infusions of tea from one pot of leaves. On the other hand, young raw pu'erh is known and expected to be strong and aromatic, yet very bitter and somewhat astringent when brewed, since these characteristics are believed to produce better aged raw pu'erh. Preparation: Judging quality Quality of the tea can be determined through inspecting the dried leaves, the tea liquor, or the spent tea leaves. The "true" quality of a specific batch of pu'erh can ultimately only be revealed when the tea is brewed and tasted. Although not concrete and sometimes dependent on preference, there are several general indicators of quality: Dried tea: There should be a lack of twigs, extraneous matter and white or dark mold spots on the surface of the compressed pu'erh. The leaves should ideally be whole, visually distinct, and not appear muddy. The leaves may be dry and fragile, but not powdery. Good tea should be quite fragrant, even when dry. Good pressed pu'er cakes often have a matte sheen on the surface, though this is not necessarily a sole indicator of quality. Preparation: Liquor: The tea liquor of both raw and ripe pu'erh should never appear cloudy. Well-aged raw pu'erh and well-crafted ripe pu'erh tea may produce a dark reddish liquor, reminiscent of a dried jujube, but in either case the liquor should not be opaque, "muddy," or black in color. The flavors of pu'erh liquors should persist and be revealed throughout separate or subsequent infusions, and never abruptly disappear, since this could be the sign of added flavorants. Preparation: Young raw pu'erh: The ideal liquors should be aromatic with a light but distinct odors of camphor, rich herbal notes like Chinese medicine, fragrance floral notes, hints of dried fruit aromas such as preserved plums, and should exhibit only some grassy notes to the likes of fresh sencha. Young raw pu'er may sometimes be quite bitter and astringent, but should also exhibit a pleasant mouthfeel and "sweet" aftertaste, referred to as gān (甘) and húigān (回甘). Preparation: Aged raw pu'erh: Aged pu'er should never smell moldy, musty, or strongly fungal, though some pu'erh drinkers consider these smells to be unoffensive or even enjoyable. The smell of aged pu'erh may vary, with an "aged" but not "stuffy" odor. The taste of aged raw pu'erh or ripe pu'erh should be smooth, with slight hints of bitterness, and lack a biting astringency or any off-sour tastes. The element of taste is an important indicator of aged pu'erh quality, the texture should be rich and thick and should have very distinct gān (甘) and húigān (回甘) on the tongue and cheeks, which together induces salivation and leaves a "feeling" in the back of the throat. Preparation: Spent tea: Whole leaves and leaf bud systems should be easily seen and picked out of the wet spent tea, with a limited amount of broken fragments. 
Twigs and the fruits of the tea plant should not be found in the spent tea leaves; however, animal (and human) hair, strings, rice grains and chaff may occasionally be included in the tea. The leaves should not crumble when rubbed, and the leaves of ripened pu'erh should not resemble compost. Aged raw pu'erh should have leaves that unfurl when brewed, while leaves of most ripened pu'erh will generally remain closed. Preparation: Practices In Cantonese, the tea is called po-lay (Cantonese Yale: bou2 nei2). It is often drunk during dim sum meals, as it is believed to help with digestion. It is not uncommon to add dried osmanthus flowers, pomelo rinds, or chrysanthemum flowers to brewing pu'er tea in order to add a light, fresh fragrance to the tea liquor. Pu'er with chrysanthemum is the most common pairing, and is referred to as guk pou or guk bou (菊 普; Cantonese Yale: guk1 pou2; pinyin: jú pǔ). Preparation: Sometimes wolfberries are brewed with the tea, plumping up in the process. Research: There is no evidence that consuming pu'er tea promotes loss of body weight in humans. Popular culture: In the Japanese manga Dragon Ball, the name of the character Pu'ar is a pun on pu'er tea.
**General line of merchandise** General line of merchandise: General line of merchandise or general merchandise is a term used in retail and wholesale business in reference to merchandise not limited to some particular category. General merchandise stores (general stores) address this sector of retail. General line of merchandise: According to the North American Industry Classification System 2002, the following types of general merchandise are excluded from the line carried by general stores: general line of building and home improvement materials (44411, Home Centres) general line of grocery items (44511, Supermarkets and Other Grocery (except Convenience) Stores) general line of used goods (45331, Used Merchandise Stores)Regardless of this classification system, general stores indeed carry basic grocery items, often limited produce, basic hardware and gardening tools, and other necessaries of rural life.
**Barrett–Crane model** Barrett–Crane model: The Barrett–Crane model is a model in quantum gravity, first published in 1998, which was defined using the Plebanski action. The B field in the action is supposed to be an so(3,1)-valued 2-form, i.e. taking values in the Lie algebra of a special orthogonal group. The term B^{ij} ∧ B^{kl} in the action has the same symmetries as are needed to reproduce the Einstein–Hilbert action. But the form of B^{ij} is not unique; it can take any of the forms ±e^i ∧ e^j or ±ε^{ij}_{kl} e^k ∧ e^l, where e^i is the tetrad and ε^{ijkl} is the antisymmetric symbol on the so(3,1) indices. Barrett–Crane model: When the constraint on the B field is dropped, the Plebanski action reduces to the BF model, a theory with no local degrees of freedom. John W. Barrett and Louis Crane imposed the analogous constraint on the summation over spin foams. Barrett–Crane model: The Barrett–Crane spin foam model quantizes the Plebanski action, but its path integral amplitude is dominated by degenerate B fields rather than the specific solution B^{ij} = e^i ∧ e^j, which formally satisfies Einstein's field equations of general relativity. Moreover, when analysed with the tools of loop quantum gravity, the Barrett–Crane model gives an incorrect long-distance limit [1], and so the model is not identical to loop quantum gravity.
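For orientation, the relationship between the Plebanski action, the constraint on B, and the admissible forms of B can be written out schematically. The display below is a sketch only: the sign and normalisation of the constraint term vary between conventions, and the Lagrange-multiplier field φ is the standard device used in Plebanski-type formulations rather than anything quoted from the text above.

```latex
% Schematic Plebanski-type action: a BF term plus a "simplicity" constraint
% enforced by a Lagrange multiplier \varphi (signs and factors vary by convention).
\[
  S[B,A,\varphi] \;=\; \int \Big( B^{ij}\wedge F_{ij}[A]
      \;-\; \tfrac{1}{2}\,\varphi_{ijkl}\,B^{ij}\wedge B^{kl} \Big)
\]
% Solving the constraint restricts B to the sectors quoted in the text:
\[
  B^{ij} = \pm\, e^{i}\wedge e^{j}
  \qquad\text{or}\qquad
  B^{ij} = \pm\, \epsilon^{ij}{}_{kl}\, e^{k}\wedge e^{l} .
\]
% Only the non-degenerate sectors built from a tetrad reproduce general
% relativity; the Barrett--Crane path integral is instead dominated by
% degenerate configurations of B.
```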
**Contrast (statistics)** Contrast (statistics): In statistics, particularly in analysis of variance and linear regression, a contrast is a linear combination of variables (parameters or statistics) whose coefficients add up to zero, allowing comparison of different treatments. Definitions: Let θ_1, …, θ_t be a set of variables, either parameters or statistics, and a_1, …, a_t be known constants. The quantity ∑_{i=1}^{t} a_i θ_i is a linear combination. It is called a contrast if ∑_{i=1}^{t} a_i = 0. Furthermore, two contrasts, ∑_{i=1}^{t} a_i θ_i and ∑_{i=1}^{t} b_i θ_i, are orthogonal if ∑_{i=1}^{t} a_i b_i = 0. Examples: Let us imagine that we are comparing four means, μ_1, μ_2, μ_3, μ_4. Three possible contrasts are given by the coefficient rows (1, −1, 0, 0), (0, 0, 1, −1) and (1, 1, −1, −1): the first contrast allows comparison of the first mean with the second, the second contrast allows comparison of the third mean with the fourth, and the third contrast allows comparison of the average of the first two means with the average of the last two. In a balanced one-way analysis of variance, using orthogonal contrasts has the advantage of completely partitioning the treatment sum of squares into non-overlapping additive components that represent the variation due to each contrast. Consider the numbers above: each of the rows sums up to zero (hence they are contrasts). If we multiply each element of the first and second row and add those up, this again results in zero; thus the first and second contrasts are orthogonal, and so on. Sets of contrast: Orthogonal contrasts are a set of contrasts in which, for any distinct pair, the sum of the cross-products of the coefficients is zero (assume sample sizes are equal). Although there are potentially infinite sets of orthogonal contrasts, within any given set there will always be a maximum of exactly k – 1 possible orthogonal contrasts (where k is the number of group means available). Sets of contrast: Polynomial contrasts are a special set of orthogonal contrasts that test polynomial patterns in data with more than two means (e.g., linear, quadratic, cubic, quartic, etc.). Orthonormal contrasts are orthogonal contrasts which satisfy the additional condition that, for each contrast, the sum of squares of the coefficients adds up to one. Background: A contrast is defined as the sum of each group mean multiplied by a coefficient for each group (i.e., a signed number, c_j). In equation form, L = c_1 X̄_1 + c_2 X̄_2 + ⋯ + c_k X̄_k ≡ ∑_j c_j X̄_j, where L is the weighted sum of group means, the c_j coefficients represent the assigned weights of the means (these must sum to 0 for the combination to be a contrast), and X̄_j represents the group means. Coefficients can be positive or negative, and fractions or whole numbers, depending on the comparison of interest. Linear contrasts are very useful and can be used to test complex hypotheses when used in conjunction with ANOVA or multiple regression. In essence, each contrast defines and tests for a particular pattern of differences among the means. Contrasts should be constructed "to answer specific research questions", and do not necessarily have to be orthogonal. A simple (not necessarily orthogonal) contrast is the difference between two means. A more complex contrast can test differences among several means (e.g.,
with four means, assigning coefficients of –3, –1, +1, and +3), or test the difference between a single mean and the combined mean of several groups (e.g., if you have four means, assign coefficients of –3, +1, +1, and +1), or test the difference between the combined mean of several groups and the combined mean of several other groups (i.e., with four means, assign coefficients of –1, –1, +1, and +1). The coefficients for the means to be combined (or averaged) must be the same in magnitude and direction, that is, equally weighted. When means are assigned different coefficients (either in magnitude or direction, or both), the contrast is testing for a difference between those means. A contrast may be any of: the set of coefficients used to specify a comparison; the specific value of the linear combination obtained for a given study or experiment; the random quantity defined by applying the linear combination to treatment effects when these are themselves considered as random variables. In the last context, the term contrast variable is sometimes used. Background: Contrasts are sometimes used to compare mixed effects. A common example is the difference between two test scores — one at the beginning of the semester and one at its end. Note that we are not interested in one of these scores by itself, but only in the contrast (in this case — the difference). Since this is a linear combination of independent variables, its variance equals the weighted sum of the summands' variances; in this case both weights are one. This "blending" of two variables into one might be useful in many cases such as ANOVA, regression, or even as descriptive statistics in its own right. Background: An example of a complex contrast would be comparing 5 standard treatments to a new treatment, hence giving each old treatment mean a weight of 1/5, and the new sixth treatment mean a weight of −1 (using the equation above). If this new linear combination has a mean of zero, this will mean that there is no evidence that the old treatments are different from the new treatment on average. If the sum of the new linear combination is positive, there is some evidence (the strength of the evidence is often associated with the p-value computed on that linear combination) that the combined mean of the 5 standard treatments is higher than the new treatment mean. Analogous conclusions obtain when the linear combination is negative. However, the sum of the linear combination is not a significance test; see Testing significance (below) to learn how to determine if the contrast computed from the sample is significant. Background: The usual results for linear combinations of independent random variables mean that the variance of a contrast is equal to the weighted sum of the variances. If two contrasts are orthogonal, estimates created by using such contrasts will be uncorrelated. If orthogonal contrasts are available, it is possible to summarize the results of a statistical analysis in the form of a simple analysis of variance table, in such a way that it contains the results for different test statistics relating to different contrasts, each of which is statistically independent. Linear contrasts can be easily converted into sums of squares: SS_contrast = n(∑_j c_j X̄_j)² / ∑_j c_j², with 1 degree of freedom, where n represents the number of observations per group. If the contrasts are orthogonal, the sum of the SS_contrast values equals SS_treatment. Testing the significance of a contrast requires the computation of SS_contrast.
Testing significance: SS_contrast also happens to be a mean square because all contrasts have 1 degree of freedom. Dividing MS_contrast by MS_error produces an F-statistic with 1 and df_error degrees of freedom; the statistical significance of F_contrast can be determined by comparing the obtained F-statistic with a critical value of F with the same degrees of freedom.
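To make the calculations above concrete, the short NumPy/SciPy sketch below computes a contrast value L, its sum of squares SS_contrast, and the corresponding F-test for a balanced one-way layout. The group data, the coefficient choice and the variable names are invented for illustration and are not taken from the article.

```python
import numpy as np
from scipy import stats

# Hypothetical balanced one-way layout: k = 4 groups, n = 5 observations each.
groups = [
    np.array([4.1, 5.0, 4.6, 4.9, 5.2]),
    np.array([6.3, 5.8, 6.1, 6.7, 6.0]),
    np.array([5.5, 5.9, 6.2, 5.4, 5.8]),
    np.array([7.1, 6.8, 7.4, 6.9, 7.2]),
]
n = len(groups[0])                      # observations per group
k = len(groups)
means = np.array([g.mean() for g in groups])

# Contrast comparing the average of the first two means with the last two.
c = np.array([1, 1, -1, -1])
assert np.isclose(c.sum(), 0), "coefficients of a contrast must sum to zero"

# Contrast value: L = sum_j c_j * mean_j
L = float(c @ means)

# SS_contrast = n * (sum_j c_j mean_j)^2 / sum_j c_j^2, with 1 degree of freedom.
ss_contrast = n * L**2 / float(c @ c)

# Error mean square from the pooled within-group variation.
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_error = k * (n - 1)
ms_error = ss_error / df_error

# F-test for the contrast: MS_contrast / MS_error with (1, df_error) d.f.
f_stat = ss_contrast / ms_error         # MS_contrast equals SS_contrast (1 d.f.)
p_value = stats.f.sf(f_stat, 1, df_error)

print(f"L = {L:.3f}, SS_contrast = {ss_contrast:.3f}, "
      f"F(1, {df_error}) = {f_stat:.2f}, p = {p_value:.4f}")
```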
**Blackjack Switch** Blackjack Switch: Blackjack Switch is a casino gambling game invented by Geoff Hall and patented in 2009. It is based on blackjack, but differs in that two hands, rather than one, are dealt to each playing position, and the player is initially allowed to exchange ("switch") the top two cards between hands. Natural blackjacks are paid 1:1 instead of the standard 3:2, and a dealer hard 22 pushes all player hands except a natural. History: Blackjack Switch was conceived after Hall - who was a card counter at the time - became frustrated at being dealt 2 weak hands when playing Blackjack that could be improved dramatically if the top two cards were allowed to be switched. Hall then developed this idea and exhibited the game at the G2E conference in Las Vegas in October 2000. Blackjack Switch was installed in Harvey’s Casino in Iowa in February 2001. After this Hall modified the game to include the ‘Push on 22’ rule in 2003. This modification led to the game being installed in Four Queens in December 2003. The game has since become widely available in offline casinos in Las Vegas and around the world and is offered online only by Playtech casinos. Further games developed by Hall that are found in Las Vegas casinos include Free Bet Blackjack, Zombie Blackjack, and Zappit. Play: Blackjack Switch is played with four, six or eight 52-card decks which are shuffled together. The shuffled cards are dealt from a dealing shoe or a shuffling machine. A semicircular card table with a similar layout to blackjack is used. Each playing position has two betting boxes, rather than one, and the initial wagers in these two boxes must be identical. However, each corresponds to a separate hand; during play they may be doubled and split independently, and are resolved separately. Play: In the initial deal, the dealer puts one card face up on each box of each playing position starting from his left, deals a face-up card to himself, and then a further card to each box left to right. After resolving any side bet, the dealer then consults each player in turn, initially asking them whether they wish to "switch" their top cards. For example, if the player is dealt 10-5 and 6-10, then the player may switch to transform the two hands into 10-10 and 6-5. After a player has made a decision whether or not to switch, the dealer offers him the chance to hit, stand or double, firstly for the hand on the player's right-hand box, then for the one on the left. As in blackjack, a player hand which exceeds 21 is "bust"; its cards are removed and its backing wager acquired by the house. Play: When all players have been consulted, the dealer plays out his hand according to blackjack-style drawing rules, with the difference that a dealer hand of 22 is not a bust but a push (a tie) against any surviving player hand; the only exception is a player blackjack which has not been obtained by switching or splitting. Play: The small variations in dealer drawing rules between casinos which are found in blackjack are also found in Blackjack Switch, such as whether the dealer must stand or hit on soft 17 (a hand totalling 17 but containing an ace counted as 11. A-6 or A-3-3, for example), whether even money/insurance is offered, whether a player may double after a split, and whether a player may hit split aces. Strategy: The strategy of Blackjack Switch covers both the switch decision and the subsequent decisions of whether to stand, double, or draw a further card which are familiar from blackjack strategy. 
Strategy: The switch decision The correct decision regarding whether to switch is sometimes obvious, particularly when there is the largest difference in advantage. However, borderline and counter-intuitive cases are relatively common, and switching strategy is hard to summarize. While an often-quoted rule of thumb is to choose the option that forms or preserves the best single hand (see the illustrative sketch below), this is unreliable; sometimes it is even correct to break up a natural by switching, for instance in the case AT + T[3-8] vs. dealer 7, 8 or 9. The correct switching choice depends on the dealer card in a significant minority of cases. Near-optimal schemes which can be learnt have been developed by several authors: Arnold Snyder presents a protocol for switching decisions based on four categories of hand, "winner", "push", "loser" and "chance", which he claims reduces the house edge to 0.25% under his ruleset. Cindy Liu presents a scheme based on assigning a point value to the dealt hands and those produced by switching. Strategy: Basic strategy after the switch decision Basic strategy for playing out Blackjack Switch hands, after the switching decision has been made, is tabulated below, for a game in which the dealer hits soft 17 and peeks for blackjack. Compared to traditional blackjack, in Blackjack Switch there are fewer occasions where doubling or splitting is rewarding, and more occasions where it is correct to hit at the risk of going bust. The differences originate from the push-on-dealer-22 rule. Strategy: Key: S = Stand H = Hit D = Double SP = Split Side bet: Blackjack Switch tables typically allow a side bet, called Super Match, which rewards pairs, three-of-a-kind, two pairs or four-of-a-kind among the four initial cards comprising the player's two hands. For a 6-deck game, the Super Match bet pays out 1 to 1 if a pair is present, 5 to 1 for three of a kind, 8 to 1 for two pairs and 40 to 1 for four of a kind. This seems to mitigate the adverse effect on the player of the case where the two top or bottom cards are identical, which robs the player of a meaningful switching decision, although, like most side bets, playing it increases the house edge.
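The "best single hand" rule of thumb referred to above can be written down in a few lines. The sketch below is a deliberately naive illustration of that heuristic only: it ignores the dealer upcard, splitting and doubling considerations, and the counter-intuitive cases discussed above, so it is neither Snyder's nor Liu's scheme and not optimal play. The card encoding, function names and example hands are assumptions made for the illustration.

```python
# Naive illustration of the "best single hand" switching rule of thumb.
# It ignores the dealer upcard, pair-splitting value, and the
# counter-intuitive cases discussed above, so it is not optimal play.

VALUES = {"A": 11, "K": 10, "Q": 10, "J": 10, "T": 10,
          "9": 9, "8": 8, "7": 7, "6": 6, "5": 5,
          "4": 4, "3": 3, "2": 2}

def hand_total(hand):
    """Total a two-card hand, counting an ace as 11 unless that would bust it."""
    total = sum(VALUES[card] for card in hand)
    if total > 21 and "A" in hand:
        total -= 10
    return total

def should_switch(hand1, hand2):
    """Return True if swapping the top (second) cards improves the best hand."""
    switched1 = [hand1[0], hand2[1]]
    switched2 = [hand2[0], hand1[1]]
    before = sorted((hand_total(hand1), hand_total(hand2)), reverse=True)
    after = sorted((hand_total(switched1), hand_total(switched2)), reverse=True)
    return after > before   # compare the better hand first, then the other

# Example from the article: 10-5 and 6-10 become 10-10 and 6-5 by switching.
print(should_switch(["T", "5"], ["6", "T"]))   # True: totals 20/11 beat 16/15
```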
**Swarmatron** Swarmatron: The Swarmatron is an analogue synthesizer developed by Dewanatron, a duo consisting of cousins Brian and Leon Dewan. Hand-made by the Dewans, it has eight oscillators controlled by a single ribbon controller. The oscillators are linked through a 'swarm' control, giving it a distinctive sound that flows in and out of unison, comparable to the noise a swarm of bees produces. Dewanatron sell their Swarmatron through Big City Music of Los Angeles. As of March 2016, only 83 have been produced. About: The distinctive feature of the Swarmatron is its use of eight individual oscillators controlled monophonically, switchable between sine and sawtooth waves. These oscillators produce an equidistant chord, where their spacing is under the player's immediate control. The oscillators may be switched on and off individually by a row of switches. As on fingerboard synthesizers like the Trautonium or Tannerin, a finger is placed or slid from side to side along a ribbon controller to control the pitch. This allows for easy glissandi, which can be used to achieve smooth rises and falls in the pitch of the instrument. Careful finger placement may be used to play discrete notes, similar to the operation of an unfretted or bowed string instrument such as a violin. Due to its adjustable nature, the ribbon controller is unlabeled, although some players compensate for this by marking note positions. An additional ribbon controller is used to control the spacing of the oscillators when in 'Swarm' mode. A large central knob on a rotary control may also be used to control the 'swarm'. By adjusting the range control, a non-equidistant swarmed chord can be produced. The oscillators may also be preset to common chord intervals of major thirds, minor thirds, and more. About: The ribbon controllers can also adjust a low-pass filter. Using the first ribbon allows for pitch tracking. An ADSR envelope control is also present. There are no facilities for MIDI. However, there are three analogue 0–10 V Control Voltage inputs for pitch, filter cutoff and swarm. With the use of a standard MIDI to CV converter, the instrument may be controlled from a sequencer or other MIDI controller. About: Previous Dewanatron instruments such as the Melody Gins and Dual Primate Console have also used the same principle of multiple, interacting oscillators. The Swarmatron's retro styling of a dark wood case with hammer-finish green paint also resembles these previous instruments, all showing a distinctly retro influence. Front panel labels are deliberately minimal, with the controls being labelled with single letters rather than words. Notable uses: Trent Reznor and Atticus Ross used a Swarmatron in their Academy Award-winning soundtrack for the 2010 film The Social Network, with the DVD including a bonus feature of Reznor discussing the instrument. Additionally, Reznor and Ross's band How to Destroy Angels used it on their eponymous first album. John Cale used the instrument on the track Time Stands Still on his Mercy album in 2023. Derived instruments: In 2014, music software company reFuse Software released "The Swarm" for free, a 32-bit 'drone synth' standalone application for MacOS and Windows, inspired by the Swarmatron and Reznor and Ross's soundtrack. It emulates the features of the Swarmatron as a MIDI instrument.
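As a rough numerical illustration of the swarm principle described above (eight oscillators spread evenly around a single pitch, with the spread under continuous control), the NumPy sketch below sums eight detuned sawtooth or sine oscillators. It is an illustrative signal-processing toy, not an emulation of the instrument; the sample rate, parameter names and example frequencies are assumptions.

```python
import numpy as np

SAMPLE_RATE = 44_100  # Hz, assumed

def swarm(center_hz, spread_hz, seconds=2.0, n_osc=8, wave="saw"):
    """Sum n_osc oscillators spaced evenly across +/- spread_hz/2 around center_hz."""
    t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
    # Equidistant detuning, loosely analogous to the swarm ribbon/knob.
    offsets = np.linspace(-spread_hz / 2, spread_hz / 2, n_osc)
    out = np.zeros_like(t)
    for f in center_hz + offsets:
        phase = (t * f) % 1.0
        if wave == "saw":
            out += 2.0 * phase - 1.0          # naive (non-band-limited) sawtooth
        else:
            out += np.sin(2 * np.pi * f * t)  # sine mode
    return out / n_osc                         # normalise the mix

# A 110 Hz swarm with a 6 Hz total spread: near-unison beating, like a swarm of bees.
signal = swarm(center_hz=110.0, spread_hz=6.0)
print(signal.shape, signal.min(), signal.max())
```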
**Tree care** Tree care: Tree care is the application of arboricultural methods like pruning, trimming, and felling/thinning in built environments. Road verge, greenways, backyard and park woody vegetation are at the center of attention for the tree care industry. Landscape architecture and urban forestry also set high demands on professional tree care. High safety standards against the dangers of tree care have helped the industry evolve. Felling in space-limited environments especially poses significant risks: the vicinity of power or telephone lines, insufficient protective gear (against falling dead wood, chainsaw wounds, etc.) and narrow felling zones with endangered nearby buildings, parked cars, etc. The required equipment and experience usually transcend private means and are often considered too costly as a permanent part of the public infrastructure. In singular cases, traditional tools like handsaws may suffice, but large-scale tree care usually calls for heavy machinery like cranes, bucket trucks, harvesters, and woodchippers. Tree care: Roadside trees are especially prone to abiotic stress from exhaust fumes, toxic road debris, soil compaction, and drought, which makes them susceptible to fungal infections and various plant pests. When tree removal is not an option because of road ecology considerations, the main challenge is to achieve road safety (visibility of road signs, blockage-free lanes, etc.) while maintaining tree health. Tree removal: While the perceived risk of death by falling trees (a part of the "tree risk" complex) is influenced by media and often hyped (the objective risk has been reported to be close to 1 in 10,000,000, almost as low as death by lightning), singular events have encouraged a "proactive" stance, so that even lightly damaged trees are likely to be removed in urban and public traffic surroundings. As a tree ages and nears the end of its safe useful life expectancy (SULE), its perceived amenity value decreases greatly. A risk assessment is normally carried out by the local council's arborist to determine the best course of action. As with all public green spaces, trees in green urban spaces and their careful conservation are sometimes in conflict with aggressive urban development, even though it is often understood how urban trees contribute to the liveability of suburbs and cities both objectively (reduction of the urban heat island effect, etc.) and subjectively. Tree planting programs implemented by a growing number of cities, local councils and organizations are mitigating the losses and in most cases increasing the number of trees in suburbia. Programs include the planting of two trees for every tree removed, while some councils pay landowners to keep trees instead of removing them for farming or construction. Standards: United States The voluntary industry consensus standards developed by TCIA resulted in the ANSI A300 standard, the generally accepted industry standard for the care of trees, shrubs, and other woody plants. It includes the following parts: Pruning Soil management Supplemental support systems Lightning protection systems Management Planting and transplanting Integrated vegetation management Root management standard Tree risk assessment Integrated pest management Professional associations: Tree Care Industry Association International Society of Arboriculture European Arboricultural Council
**Hyper-V** Hyper-V: Microsoft Hyper-V, codenamed Viridian, and briefly known before its release as Windows Server Virtualization, is a native hypervisor; it can create virtual machines on x86-64 systems running Windows. Starting with Windows 8, Hyper-V superseded Windows Virtual PC as the hardware virtualization component of the client editions of Windows NT. A server computer running Hyper-V can be configured to expose individual virtual machines to one or more networks. Hyper-V: Hyper-V was first released with Windows Server 2008, and has been available without additional charge since Windows Server 2012 and Windows 8. A standalone Windows Hyper-V Server is free, but has a command-line interface only. The last version of free Hyper-V Server is Hyper-V Server 2019, which is based on Windows Server 2019. History: A beta version of Hyper-V was shipped with certain x86-64 editions of Windows Server 2008. The finalized version was released on June 26, 2008 and was delivered through Windows Update. Hyper-V has since been released with every version of Windows Server.Microsoft provides Hyper-V through two channels: Part of Windows: Hyper-V is an optional component of Windows Server 2008 and later. It is also available in x64 SKUs of Pro and Enterprise editions of Windows 8, Windows 8.1, Windows 10 and Windows 11. History: Hyper-V Server: It is a freeware edition of Windows Server with limited functionality and Hyper-V component. History: Hyper-V Server Hyper-V Server 2008 was released on October 1, 2008. It consists of Windows Server 2008 Server Core and Hyper-V role; other Windows Server 2008 roles are disabled, and there are limited Windows services. Hyper-V Server 2008 is limited to a command-line interface used to configure the host OS, physical hardware, and software. A menu driven CLI interface and some freely downloadable script files simplify configuration. In addition, Hyper-V Server supports remote access via Remote Desktop Connection. However, administration and configuration of the host OS and the guest virtual machines is generally done over the network, using either Microsoft Management Consoles on another Windows computer or System Center Virtual Machine Manager. This allows much easier "point and click" configuration, and monitoring of the Hyper-V Server. History: Hyper-V Server 2008 R2 (an edition of Windows Server 2008 R2) was made available in September 2009 and includes Windows PowerShell v2 for greater CLI control. Remote access to Hyper-V Server requires CLI configuration of network interfaces and Windows Firewall. Also using a Windows Vista PC to administer Hyper-V Server 2008 R2 is not fully supported. Architecture: Hyper-V implements isolation of virtual machines in terms of a partition. A partition is a logical unit of isolation, supported by the hypervisor, in which each guest operating system executes. There must be at least one parent partition in a hypervisor instance, running a supported version of Windows Server (2008 and later). The virtualization software runs in the parent partition and has direct access to the hardware devices. The parent partition creates child partitions which host the guest OSs. A parent partition creates child partitions using the hypercall API, which is the application programming interface exposed by Hyper-V.A child partition does not have access to the physical processor, nor does it handle its real interrupts. 
Instead, it has a virtual view of the processor and runs in Guest Virtual Address, which, depending on the configuration of the hypervisor, might not necessarily be the entire virtual address space. Depending on VM configuration, Hyper-V may expose only a subset of the processors to each partition. The hypervisor handles the interrupts to the processor, and redirects them to the respective partition using a logical Synthetic Interrupt Controller (SynIC). Hyper-V can hardware accelerate the address translation of Guest Virtual Address-spaces by using second level address translation provided by the CPU, referred to as EPT on Intel and RVI (formerly NPT) on AMD. Architecture: Child partitions do not have direct access to hardware resources, but instead have a virtual view of the resources, in terms of virtual devices. Any request to the virtual devices is redirected via the VMBus to the devices in the parent partition, which will manage the requests. The VMBus is a logical channel which enables inter-partition communication. The response is also redirected via the VMBus. If the devices in the parent partition are also virtual devices, it will be redirected further until it reaches the parent partition, where it will gain access to the physical devices. Parent partitions run a Virtualization Service Provider (VSP), which connects to the VMBus and handles device access requests from child partitions. Child partition virtual devices internally run a Virtualization Service Client (VSC), which redirect the request to VSPs in the parent partition via the VMBus. This entire process is transparent to the guest OS. Architecture: Virtual devices can also take advantage of a Windows Server Virtualization feature, named Enlightened I/O, for storage, networking and graphics subsystems, among others. Enlightened I/O is a specialized virtualization-aware implementation of high level communication protocols, like SCSI, that allows bypassing any device emulation layer and takes advantage of VMBus directly. This makes the communication more efficient, but requires the guest OS to support Enlightened I/O. Architecture: Currently only the following operating systems support Enlightened I/O, allowing them therefore to run faster as guest operating systems under Hyper-V than other operating systems that need to use slower emulated hardware: Windows Server 2008 and later Windows Vista and later Linux with a 3.4 or later kernel FreeBSD System requirements: The Hyper-V role is only available in the x86-64 variants of Standard, Enterprise and Datacenter editions of Windows Server 2008 and later, as well as the Pro, Enterprise and Education editions of Windows 8 and later. On Windows Server, it can be installed regardless of whether the installation is a full or core installation. In addition, Hyper-V can be made available as part of the Hyper-V Server operating system, which is a freeware edition of Windows Server. Either way, the host computer needs the following. System requirements: CPU with the following technologies: NX bit x86-64 Hardware-assisted virtualization (Intel VT-x or AMD-V) Second Level Address Translation (in Windows Server 2012 and later) At least 2 GB memory, in addition to what is assigned to each guest machineThe amount of memory assigned to virtual machines depends on the operating system: Windows Server 2008 Standard supports up to 31 GB of memory for running VMs, plus 1 GB for the host OS. 
System requirements: Windows Server 2008 R2 Standard supports up to 32 GB, but the Enterprise and Datacenter editions support up to 2 TB. Hyper-V Server 2008 R2 supports up to 1 TB. System requirements: Windows Server 2012 supports up to 4 TB.The number of CPUs assigned to each virtual machine also depends on the OS: Windows Server 2008 and 2008 R2 support 1, 2, or 4 CPUs per VM; the same applies to Hyper-V Server 2008 R2 Windows Server 2012 supports up to 64 CPUs per VMThere is also a maximum for the number of concurrently active virtual machines. System requirements: Windows Server 2008 and 2008 R2 support 384 per server; Hyper-V Server 2008 supports the same Windows Server 2012 supports 1024 per server; the same applies to Hyper-V Server 2012 Windows Server 2016 supports 8000 per cluster and per node Supported guests: Windows Server 2008 R2 The following table lists supported guest operating systems on Windows Server 2008 R2 SP1. Supported guests: Fedora 8 or 9 are unsupported; however, they have been reported to run.Third-party support for FreeBSD 8.2 and later guests is provided by a partnership between NetApp and Citrix. This includes both emulated and paravirtualized modes of operation, as well as several HyperV integration services.Desktop virtualization (VDI) products from third-party companies (such as Quest Software vWorkspace, Citrix XenDesktop, Systancia AppliDis Fusion and Ericom PowerTerm WebConnect) provide the ability to host and centrally manage desktop virtual machines in the data center while giving end users a full PC desktop experience. Supported guests: Guest operating systems with Enlightened I/O and a hypervisor-aware kernel such as Windows Server 2008 and later server versions, Windows Vista SP1 and later clients and offerings from Citrix XenServer and Novell will be able to use the host resources better since VSC drivers in these guests communicate with the VSPs directly over VMBus. Non-"enlightened" operating systems will run with emulated I/O; however, integration components (which include the VSC drivers) are available for Windows Server 2003 SP2, Windows Vista SP1 and Linux to achieve better performance. Supported guests: Linux support On July 20, 2009, Microsoft submitted Hyper-V drivers for inclusion in the Linux kernel under the terms of the GPL. Microsoft was required to submit the code when it was discovered that they had incorporated a Hyper-V network driver with GPL-licensed components statically linked to closed-source binaries. Kernels beginning with 2.6.32 may include inbuilt Hyper-V paravirtualization support which improves the performance of virtual Linux guest systems in a Windows host environment. Hyper-V provides basic virtualization support for Linux guests out of the box. Paravirtualization support requires installing the Linux Integration Components or Satori InputVSC drivers. Xen-enabled Linux guest distributions may also be paravirtualized in Hyper-V. As of 2013 Microsoft officially supported only SUSE Linux Enterprise Server 10 SP1/SP2 (x86 and x64) in this manner, though any Xen-enabled Linux should be able to run. In February 2008, Red Hat and Microsoft signed a virtualization pact for hypervisor interoperability with their respective server operating systems, to enable Red Hat Enterprise Linux 5 to be officially supported on Hyper-V. 
Supported guests: Windows Server 2012 Hyper-V in Windows Server 2012 and Windows Server 2012 R2 changes the support list above as follows: Hyper-V in Windows Server 2012 adds support for Windows 8.1 (up to 32 CPUs) and Windows Server 2012 R2 (64 CPUs); Hyper-V in Windows Server 2012 R2 adds support for Windows 10 (32 CPUs) and Windows Server 2016 (64 CPUs). Supported guests: Minimum supported version of CentOS is 6.0. Minimum supported version of Red Hat Enterprise Linux is 5.7. Maximum number of supported CPUs for Windows Server and Linux operating systems is increased from four to 64. Windows Server 2012 R2 Hyper-V on Windows Server 2012 R2 added the Generation 2 VM. Backward compatibility: Hyper-V, like Microsoft Virtual Server and Windows Virtual PC, saves each guest OS to a single virtual hard disk file. It supports the older .vhd format, as well as the newer .vhdx. Older .vhd files from Virtual Server 2005, Virtual PC 2004 and Virtual PC 2007 can be copied and used in Hyper-V, but any old virtual machine integration software (equivalents of Hyper-V Integration Services) must be removed from the virtual machine. After the migrated guest OS is configured and started using Hyper-V, the guest OS will detect changes to the (virtual) hardware. Installing "Hyper-V Integration Services" installs five services to improve performance, at the same time adding the new guest video and network card drivers. Limitations: Audio Hyper-V does not virtualize audio hardware. Before Windows 8.1 and Windows Server 2012 R2, it was possible to work around this issue by connecting to the virtual machine with Remote Desktop Connection over a network connection and use its audio redirection feature. Windows 8.1 and Windows Server 2012 R2 add the enhanced session mode which provides redirection without a network connection. Limitations: Optical drives pass-through Optical drives virtualized in the guest VM are read-only. Officially Hyper-V does not support the host/root operating system's optical drives to pass-through in guest VMs. As a result, burning to discs, audio CDs, video CD/DVD-Video playback are not supported; however, a workaround exists using the iSCSI protocol. Setting up an iSCSI target on the host machine with the optical drive can then be talked to by the standard Microsoft iSCSI initiator. Microsoft produces their own iSCSI Target software or alternative third party products can be used. Limitations: VT-x/AMD-V handling Hyper-V uses the VT-x on Intel or AMD-V on AMD x86 virtualization. Since Hyper-V is a native hypervisor, as long as it is installed, third-party software cannot use VT-x or AMD-V. For instance, the Intel HAXM Android device emulator (used by Android Studio or Microsoft Visual Studio) cannot run while Hyper-V is installed. Client operating systems: x64 SKUs of Windows 8, 8.1, 10 Pro, Enterprise, Education, come with a special version Hyper-V called Client Hyper-V. Features added per version: Windows Server 2012 Windows Server 2012 introduced many new features in Hyper-V. Hyper-V Extensible Virtual Switch Network virtualization Multi-tenancy Storage Resource Pools .vhdx disk format supporting virtual hard disks as large as 64 TB with power failure resiliency Virtual Fibre Channel Offloaded data transfer Hyper-V replica Cross-premises connectivity Cloud backup Windows Server 2012 R2 With Windows Server 2012 R2 Microsoft introduced another set of new features. 
Features added per version: Shared virtual hard disk Storage quality of service Generation 2 Virtual Machine Enhanced session mode Automatic virtual machine activation Windows Server 2016 Hyper-V in Windows Server 2016 and Windows 10 1607 adds Nested virtualization (Intel processors only, both the host and guest instances of Hyper-V must be Windows Server 2016 or Windows 10 or later) Discrete Device Assignment (DDA), allowing direct pass-through of compatible PCI Express devices to guest Virtual Machines Windows containers (to achieve isolation at the app level rather than the OS level) Shielded VMs using remote attestation servers Monitoring of host CPU resource utilization by guests and protection (limiting CPU usage by guests) Windows Server 2019 Hyper-V in Windows Server 2019 and Windows 10 1809 adds Shielded Virtual Machines improvements including Linux compatibility Virtual Machine Encrypted Networks vSwitch Receive Segment Coalescing Dynamic Virtual Machine Multi-Queue (d. VMMQ) Persistent Memory support Significant feature and performance improvements to Storage Spaces Direct and Failover Clustering
**Contrapposto** Contrapposto: Contrapposto (Italian pronunciation: [kontrapˈposto]) is an Italian term that means "counterpoise". It is used in the visual arts to describe a human figure standing with most of its weight on one foot, so that its shoulders and arms twist off-axis from the hips and legs in the axial plane. Contrapposto: First appearing in Ancient Greece in the early 5th century BCE, contrapposto is considered a crucial development in the history of Ancient Greek art (and, by extension, Western art), as it marks the first time in Western art that the human body is used to express a psychological disposition. The style was further developed and popularized by sculptors in the Hellenistic and Imperial Roman periods, fell out of use in the Middle Ages, and was later revived during the Renaissance. Michelangelo's statue of David, one of the most iconic sculptures in the world, is a famous example of contrapposto. Definition: Contrapposto was historically an important sculptural development, for its appearance marks the first time in Western art that the human body is used to express a more relaxed psychological disposition. This gives the figure a more dynamic, or alternatively relaxed appearance. In the frontal plane this also results in opposite levels of shoulders and hips, for example: if the right hip is higher than the left; correspondingly the right shoulder will be lower than the left, and vice versa. It can further encompass the tension as a figure changes from resting on a given leg to walking or running upon it (so-called ponderation). The leg that carries the weight of the body is known as the engaged leg, the relaxed leg is known as the free leg. Usually, the engaged leg is straight, or very slightly bent, and the free leg is slightly bent. Contrapposto is less emphasized than the more sinuous S-curve, and creates the illusion of past and future movement. A 2019 eye tracking study, by showing that contrapposto acts as supernormal stimuli and increases perceived attractiveness, has provided evidence and insight as to why, in artistic presentation, goddesses of beauty and love are often depicted in contrapposto pose. This was later supported in a neuroimaging study. The term contrapposto can also be used to refer to multiple figures which are in counter-pose (or opposite pose) to one another. History: Classical The first known statue to use contrapposto is Kritios Boy, c. 480 BCE, so called because it was once attributed to the sculptor Kritios. It is possible, even likely, that earlier bronze statues had used the technique, but if they did, they have not survived and Kenneth Clark called the statue "the first beautiful nude in art". The statue is a Greek marble original and not a Roman copy. History: Prior to the introduction of contrapposto, the statues that dominated ancient Greece were the archaic kouros (male) and the kore (female). Contrapposto has been used since the dawn of classical western sculpture. According to the canon of the Classical Greek sculptor Polykleitos in the 4th century BCE, it is one of the most important characteristics of his figurative works and those of his successors, Lysippos, Skopas, etc. The Polykletian statues (Discophoros ("discus-bearer") and Doryphoros ("spear-bearer"), for example) are idealized athletic young men with the divine sense, and captured in contrapposto. In these works, the pelvis is no longer axial with the vertical statue as in the archaic style of earlier Greek sculpture before Kritios Boy. 
History: Contrapposto can be clearly seen in the Roman copies of the statues of Hermes and Heracles. A famous example is the marble statue of Hermes and the Infant Dionysus in Olympia by Praxiteles. It can also be seen in the Roman copies of Polyclitus's Amazon. Greek art emphasized humanism along with the human mind and the human body's beauty. Greek youths trained and competed in athletic contests in the nude. A great contribution to the contrapposto pose was the concept of a canon of proportions, in which mathematical properties are used to create proportions. Renaissance Classical contrapposto was revived in Renaissance art by the Italian artists Donatello and Leonardo da Vinci, followed by Michelangelo, Raphael and other artists of the High Renaissance. One of the achievements of the Italian Renaissance was the re-discovery of contrapposto. Modern times The technique continues to be widely employed in sculpture. Modern psychological research confirms the attractiveness of the pose.
**Amyraldism** Amyraldism: Amyraldism (sometimes Amyraldianism) is a Calvinist doctrine. It is also known as the School of Saumur, post redemptionism, moderate Calvinism, or hypothetical universalism. It is one of several hypothetical universalist systems.Amyraldism is the belief that God decreed Christ's atonement, prior to his decree of election, for all alike if they believe, but he then elected those whom he will bring to faith in Christ, seeing that none would believe on their own, and thereby preserving the Calvinist doctrine of unconditional election. The efficacy of the atonement remains limited to those who believe. Amyraldism: This doctrine is named after its formulator, Moses Amyraut, and is viewed as a variety of Calvinism in that it maintains the particularity of sovereign grace in the application of the atonement. However, detractors such as B. B. Warfield have termed it "an inconsistent and therefore unstable form of Calvinism". Amyraut additionally proposed an alternative view to covenant theology in which the Mosaic covenant was seen as neither a covenant of grace nor one of works, but rather as a third substance, being a subservient covenant. History: Background Hypothetical universalist teachings may be found in the writings of early Reformed theologians including Heinrich Bullinger, Wolfgang Musculus, Zacharias Ursinus, and Girolamo Zanchi. Several theologians who signed the Canons of Dort were hypothetical universalists.Moses Amyraut, originally a lawyer, but converted to the study of theology by the reading of Calvin's Institutes of the Christian Religion, an able divine and voluminous writer, developed the doctrine of hypothetical or conditional universalism, for which his teacher, John Cameron (1580–1625), a Scot, and for two years headmaster of Saumur Academy, had prepared the way. His object was not to set aside but to moderate Calvinism by ingrafting this doctrine upon the particularism of election, and thereby to fortify it against the objections of Roman Catholics, by whom the French Protestants, or Huguenots, were surrounded and threatened. Being employed by the Reformed Synod in important diplomatic negotiations with the government, he came in frequent contact with bishops, and with Cardinal Richelieu, who esteemed him highly. His system is an approach, not so much to Arminianism, which he decidedly rejected, as to Lutheranism, which likewise teaches a universal atonement and a limited election. History: Amyraut maintained the Calvinistic premises of an eternal foreordination and foreknowledge of God, whereby he caused all things to pass, the good efficiently, the bad permissively. He also admitted the double decree of election and reprobation, but his view on double predestination is modified slightly by his view of double election. He also taught that God foreordained a universal salvation through the universal sacrifice of Christ offered to all alike, on condition of faith, so that on the part of God's will and desire, the grace is universal, but as regards the condition it is particular, or only for those who do not reject it which would thereby make it ineffective. History: The universal redemption scheme precedes the particular election scheme, and not vice versa. He reasons from the benevolence of God towards his creatures; the traditional Reformed presentation of predestination, he thought, improperly reasons from the result and makes facts interpret the decrees. 
Amyraut distinguished between objective grace which is offered to all, and subjective grace in the heart which is given only to the elect. He also makes a distinction between natural ability and moral ability, or the power to believe and the willingness to believe; man possesses the former but not the latter in consequence of inherent depravity. It, therefore, takes an act of God to illuminate the mind, thereby engaging the will towards action. He was disposed, like Huldrych Zwingli, to extend the grace of God beyond the limits of the visible Church, inasmuch as God by his general providence operates upon the heathen, as in the case of Malachi 1:11,14, and may produce in them a sort of unconscious Christianity, a faith without knowledge; while within the Church he operates more fully and clearly through the means of grace. History: Those who never heard of Christ are condemned if they reject the general grace of providence, but the same persons would also reject Christ if he were offered to them. As regards the result, Amyraut agreed with the particularists. His ideology is unavailable, except for those in whom God previously works the condition of faith: for those who are included in the particular decree of election. History: Amyraut's doctrine created a commotion in the Reformed Churches of France, the Dutch Republic, and Switzerland. Jean Daillé (1594–1670), David Blondel (1591–1655), and others considered it innocent and consistent with the decrees of the Synod of Dort, where German Reformed and Anglican delegates professed similar views against the supralapsarianism of Gomarus. But Pierre Du Moulin (Molinæus) (since 1621 professor of the rival theological school of Sedan), Friedrich Spanheim (1600–1649, Professor in Leiden), André Rivet (1572–1651, Professor in Leiden), and the theologians of Geneva opposed it. History: Similar charges were leveled against the Puritan great, Richard Baxter, who dealt frequently with Cyrus and Peter du Moulin. In Geneva, the chief opponent of Amyraut's scheme was Francis Turretin (1623–1687). Amyraut's teaching was not, however, considered to be heretical or outside the Reformed confessions by its opponents.The friends of Amyraut emphasised the love, benevolence, and impartial justice of God as well as the numerous passages in Scripture which teach that God loves 'the whole world', that he will have 'all men to be saved', that Christ died 'not for our sins only, but also for the sins of the whole world', that 'he shut up all in unbelief that he might have mercy upon all'. On the other hand, it was objected that God does not really will and intend what is never accomplished; that he could not purpose an end without providing adequate means; God did not actually offer salvation to all; and that a hypothetical universalism based on an unlikely condition is an unfruitful abstraction. History: The national Synods at Alençon, 1637; at Charenton, 1645; and at Loudun, 1659 (the last synod permitted by the French government), decided against the excommunication of Amyraut but delimited his views in order to avoid further variance with historic Reformed orthodoxy. He gave the assurance that he did not change the doctrine but only the method of instruction. His opponents allowed that the idea of a universal grace by which no one was actually saved unless included in the particular, effective decree of election, was permissible. 
In this way hypothetical universalism was sanctioned as a permissible view, along with the particularism that had characterized historic Reformed orthodoxy, and a schism in the French Church was avoided. The literary controversy continued for several years longer and developed a large amount of learning and ability, until it was brought to an abrupt close by the political oppressions of the Reformed Church in France. History: 17th century England and Scotland John Davenant (1576–1641), like Amyraut a student of John Cameron, was an English delegate at the Synod of Dort and influenced some of the members of the Westminster Assembly. He promoted "hypothetical universalism, a general atonement in the sense of intention as well as sufficiency, a common blessing of the cross, and a conditional salvation. The "root principle of the Davenant School" was the "notion of a universal desire in God for the salvation of all men." In the floor debate on redemption at the Westminster Assembly, Edmund Calamy the Elder of the Davenant School attempted to insert Amyraldism into the Catechism.Richard Baxter held to a form of Amyraldism, although he was less Calvinistic than Amyraut. He "devised an eclectic middle route between Reformed, Arminian, and Roman doctrines of grace: interpreting the kingdom of God in terms of contemporary political ideas, he explained Christ's death as an act of universal redemption (penal and vicarious, but not substitutionary), in virtue of which God has made a new law offering pardon and amnesty to the penitent. Repentance and faith, being obedience to this law, are the believer's personal saving righteousness… the fruit of the seeds which Baxter sowed was neonomian Moderatism in Scotland and moralistic Unitarianism in England."Popularised in England by the Reformed pastor Richard Baxter, Amyraldism also gained strong adherence among the Congregationalists and some Presbyterians in the American colonies, during the 17th and 18th centuries. History: Recent In the United States, Amyraldism can be found among various evangelical groups, such as Southern Baptists, the Evangelical Free Church of America, the dispensationalists in independent Bible Churches and independent Baptist churches. In Australia, many in the Anglican Diocese of Sydney hold to a modified "four point" Calvinism, while in England, one author, Dr. Alan Clifford, pastor of the Norwich Reformed Church, tirelessly promotes Amyraldism in self-published pamphlets such as Amyraut Affirmed. Yet "Five point" Calvinism remains prevalent especially in more conservative groups among the Reformed and Presbyterian churches, Reformed Baptists, among evangelical Anglicans in England and in some non-denominational evangelical churches. Contrary views: Amyraldism has come under fire in recent years by contemporary Calvinist theologians who argue that one simply cannot accept that Christ died for all people in the world if not all are saved. That belief either requires a second payment for sin at the judgment, the adoption of a form of universal reconciliation, or abandonment of the penal substitution theory of the atonement. Contrary views: Reformed theologian, pastor, and author R.C. Sproul suggested there is confusion about what the doctrine of limited atonement actually teaches. 
While he considered it possible for a person to believe four points without believing the fifth, he claimed that a person who really understands the other four points must believe in limited atonement because of what Martin Luther called a resistless logic.
**Microlith (catalytic reactor)** Microlith (catalytic reactor): Microlith is a brand of catalytic reactor invented by engineer William C. Pfefferle. Technology: A catalyst is a substance that speeds a reaction but that itself is left in its original state after the reaction, so that it can assist in the reaction of a large quantity of material over a long period of time. A Microlith reactor is constructed with a very thin metal substrate coated with a variety of materials including catalysts to speed reactions, and adsorbent materials for use in filters. The substrate has short channels (0.001–0.020 in) which resemble screens or meshes. This results in a lower pressure drop than other reactors and allows for high cell density and low thermal mass. Mass and heat transfer are significantly increased, allowing faster reactor response to gas temperatures and improved rates of reactant contact with the surface. By passing an electric current through the metal substrate, the Microlith can be heated rapidly and efficiently. Over 12 Microlith related US patents have been issued. Applications: Fuel reforming Fuel processing Catalytic combustion Regenerable contaminant adsorbers Contaminant oxidizers Automotive catalytic converters Infrared generators for aerial targets
**Japanese mahjong yaku** Japanese mahjong yaku: In Japanese mahjong, yaku (Japanese: 役) is a condition that determines the value of the player's hand. It is essential to know the yaku for game strategy, since a player must have a minimum of one yaku in their hand in order to legally win a hand. Each yaku has a specific han value. Yaku conditions may be combined to produce hands of greater value. The game also features dora, which add han value to a hand but cannot count as yaku. Altogether, a hand's points value increases exponentially with every han a hand contains. Overview: Yaku are somewhat similar to poker hands. They fit certain patterns based on the numbers or types of tiles included, as well as the relative value of the tiles. Unlike poker, however, multiple hand types may be combined to produce hands of greater value. Yaku are divided into three categories: Hands that must be closed (menzen-nomi, 門前のみ). Overview: Hands that lose one han if the hand is open ("Eat and decrease", a literal translation of kuisagari, 喰い下がり) Hands that can be closed or open and have the same han value in either case. Calling for another player's discard to make a meld makes the meld and the hand open. When a winning tile of a closed hand is a discard, the meld including that discard is considered open, while the hand is still regarded as closed. If a hand is closed, the situation is called "menzenchin (門前清)" or "menzen (門前)" in Japanese. Overview: The basic concept of a yaku is that it fits into one of three basic criteria: It contains a pattern of some kind It can consistently be formed during a game, although it does not necessarily need to be common It is based on specific game situations, such as discards or actions taken by other players. Finally, when it comes to points scoring, the total number of han in the hand is counted. When the han value is four or less, fu is also counted. The combination of the han value and fu value corresponds to a points table. List of yaku: The following is a list of all the yaku, their names in English and Japanese, their han values, and any special conditions related to them. They are listed here in groups according to the underlying patterns that define the yaku. Example hands are given, but they are often not the only possible hands with that yaku. All yaku can be divided into seven basic categories, depending on the dominant feature. The features are as follows: patterns based on sequences, patterns based on triplets and/or quads, consistency of the type and numbers of the tiles, lucky circumstances, and special criteria. List of yaku: Special criteria Yaku based on luck These hands are all worth one han. Yaku based on sequences Yaku based on triplets and/or quads When the following hands involve triplets, quads are also acceptable, while if they require quads, triplets do not count. Each yaku is worth two han, regardless of whether the hand is closed or open. Yaku based on terminal or honor tiles These hands involve terminals and/or honors, or lack thereof, such as tan'yao and yakuhai due to their simplicity. Yaku based on suits The following two yaku are related to a single suit. They both lose one han when they are open. Yakuman hands: There is a special set of hands so difficult to attain that they are worth the limit of points just for having them. The limit value, along with the hands themselves, is called yakuman (役満, or yaku-mangan 役満貫).
All yakuman hands override all other han values, while some hands can be combined to form multiple yakuman. Some conditions on the limit hands can double their value, which is then called daburu yakuman (ダブル役満). Yakuman hands: Ordinary yaku can also add up to a yakuman, known as kazoe-yakuman (数え役満), or counted yakuman, when the yaku and dora in a hand total a minimum of 13 han. The thirteen orphans, four closed triplets and big three dragons are considered relatively easy to complete among yakuman hands and are collectively called "the three big families of yakuman" (Japanese: 役満御三家). Some yakuman hands may have different names in some regions. The names used here mostly come from American publications, which are based on Chinese translations. Yakuman on opening hands The following are yakuman hands completed on the first go-around. Ancient or local yaku: The following table details yaku and yakuman hands that are usually not recognized as valid but may appear in house rules.
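The han and fu scoring mentioned above, including the counted-yakuman limit at 13 han, follows an exponential progression that is easy to sketch in code. The snippet below is a minimal illustration under common modern riichi rules; the limit names and thresholds (mangan through yakuman) are assumptions of this example, and actual payouts further depend on dealer status, tsumo versus ron, and local rules, which are ignored here.

```python
# A minimal sketch of the common riichi scoring progression, assuming the
# widely used modern limits (mangan, haneman, baiman, sanbaiman, yakuman).
# Dealer status, tsumo/ron multipliers and local rules are ignored.

def base_points(han: int, fu: int) -> int:
    """Return the base points for a winning hand."""
    if han >= 13:
        return 8000              # yakuman (including kazoe-yakuman at 13+ han)
    if han >= 11:
        return 6000              # sanbaiman
    if han >= 8:
        return 4000              # baiman
    if han >= 6:
        return 3000              # haneman
    base = fu * 2 ** (2 + han)   # exponential growth with each han
    return min(base, 2000)       # capped at mangan

if __name__ == "__main__":
    for han, fu in [(1, 30), (2, 30), (3, 30), (4, 30), (6, 30), (13, 30)]:
        print(f"{han} han {fu} fu -> base {base_points(han, fu)}")
```

The base value computed here is then multiplied and rounded according to the points table the article refers to in order to obtain the actual payment.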
**Fear of needles** Fear of needles: Fear of needles, known in medical literature as needle phobia, is the extreme fear of medical procedures involving injections or hypodermic needles. This can lead to avoidance of medical care and vaccine hesitancy. It is occasionally referred to as aichmophobia, although this term may also refer to a more general fear of sharply pointed objects. Overview and incidence: The condition was officially recognized in 1994 in the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, 4th edition) as a specific phobia of the blood-injection-injury type (BII phobia). Phobic-level responses to injections cause sufferers to avoid inoculations, blood tests, and in the more severe cases, all medical care. Overview and incidence: It is estimated that at least 10% of American adults have a fear of needles, and it is likely that the actual number is larger, as the most severe cases are never documented due to the tendency of the sufferer to avoid all medical treatment. The diagnostic criteria for BII phobias are stricter, with an estimated 3–4% prevalence in the general population, and this also includes blood-related phobias. Prevalence of fear of needles has been increasing, with two studies showing an increase among children from 25% in 1995 to 65% in 2012 (for those born after 1999). Augusta University professor Amy Baxter attributes this increase to an increase in administration of booster shots around the age of 5, which is old enough to remember and young enough to be more likely to result in formation of a phobia. Evolutionary basis: According to Dr. James G. Hamilton, author of the pioneering paper on needle phobia, it is likely that the form of needle phobia that is genetic has some basis in evolution, given that thousands of years ago humans who meticulously avoided stab wounds and other incidences of pierced flesh would have a greater chance of survival. The discussion of the evolutionary basis of needle phobia in Hamilton's review article concerns the vasovagal type of needle phobia, which is a sub-type of blood-injection-injury type phobia. This type of needle phobia is uniquely characterized by a two-phase vasovagal response. First, there is a brief acceleration of heart rate and blood pressure. This is followed by a rapid plunge in both heart rate and blood pressure, sometimes leading to unconsciousness. The loss of consciousness is sometimes accompanied by convulsions and numerous rapid changes in the levels of many different hormones. Other medical journal articles have discussed additional aspects of this possible link between vasovagal syncope and evolutionary fitness in blood-injection-injury phobias. An evolutionary psychology theory that explains the association with vasovagal syncope is that some forms of fainting are non-verbal signals that developed in response to increased inter-group aggression during the Paleolithic. A non-combatant who has fainted signals that they are not a threat. This might explain the association between fainting and stimuli such as bloodletting and injuries. Types: Although needle phobia is defined simply as an extreme fear of medically related shots/injections, it appears in several varieties. Types: Vasovagal Although most specific phobias stem from the individuals themselves, the most common type of needle phobia, affecting 50% of those afflicted, is an inherited vasovagal reflex reaction.
Approximately 80% of people with a fear of needles report that a first-degree relative exhibits the same disorder. Vasovagal reactions may be triggered by the sight, thought, or feeling of needles or needle-like objects. The physiological changes associated with this type of phobia also include feeling faint, sweating, dizziness, nausea, pallor, tinnitus, panic attacks, and initially high blood pressure and heart rate followed by a plunge in both at the moment of injection. The primary symptom of vasovagal fear is vasovagal syncope, or fainting due to a decrease of blood pressure. Many people who suffer from fainting during needle procedures report no conscious fear of the needle procedure itself, but a great fear of the vasovagal syncope reaction. People become more afraid of the side effects of low blood pressure caused by the idea of a needle. A study in the medical journal Circulation concluded that in many patients with this condition (as well as patients with the broader range of blood/injury phobias), an initial episode of vasovagal syncope during a needle procedure may be the primary cause of needle phobia rather than any basic fear of needles. These findings reverse the more commonly held beliefs about the cause-and-effect pattern of needle phobics with vasovagal syncope. Types: Although most phobias are dangerous to some degree, needle phobia is one of the few that can actually kill. In cases of severe phobia, the drop in blood pressure caused by the vasovagal shock reflex may cause death. In Hamilton's 1995 review article on needle phobia, he was able to document 23 deaths as a direct result of vasovagal shock during a needle procedure. The best treatment strategy for this type of needle phobia has historically been desensitization, or the progressive exposure of the patient to gradually more frightening stimuli, allowing them to become desensitized to the stimulus that triggers the phobic response. In recent years, a technique known as "applied tension" has become increasingly accepted as an often effective means of maintaining blood pressure to avoid the unpleasant, and sometimes dangerous, aspects of the vasovagal reaction. Types: Associative Associative fear of needles is the second most common type, affecting 30% of needle phobics. This type is the classic specific phobia in which a traumatic event, such as an extremely painful medical procedure or witnessing a family member or friend undergo such a procedure, causes the patient to associate all procedures involving needles with the original negative experience. This form of fear of needles causes symptoms that are primarily psychological in nature, such as extreme unexplained anxiety, insomnia, preoccupation with the coming procedure, and panic attacks. Effective treatments include cognitive therapy, hypnosis, and/or the administration of anti-anxiety medication. Resistive Resistive fear of needles occurs when the underlying fear involves not simply needles or injections but also being controlled or restrained. It typically stems from a repressive upbringing or poor handling of prior needle procedures (for example, forced physical or emotional restraint). This form of needle phobia affects around 20% of those afflicted. Symptoms include combativeness, high heart rate coupled with extremely high blood pressure, violent resistance, avoidance, and flight. The suggested treatment is psychotherapy; this may include teaching the patient self-injection techniques or finding a trusted health care provider.
Hyperalgesic Hyperalgesic fear of needles is another form that does not have as much to do with fear of the actual needle. Patients with this form have an inherited hypersensitivity to pain, or hyperalgesia. To them, the pain of an injection is unbearably great and many cannot understand how anyone can tolerate such procedures. This form of fear of needles affects approximately 10% of people with needle phobia. The symptoms include extreme explained anxiety, and elevated blood pressure and heart rate at the immediate point of needle penetration or seconds before. The recommended forms of treatment include some form of anesthesia, either topical or general. Vicarious: Whilst witnessing procedures involving needles it is possible for the phobic to suffer the symptoms of a needle phobic attack without actually being injected. Prompted by the sight of the injection the phobic may exhibit the normal symptoms of vasovagal syncope and fainting or collapse is common. While the cause of this is not known, it may be due to the phobic imagining the procedure being performed on themselves. Recent neuroscience research shows that feeling a pin prick sensation and watching someone else's hand get pricked by a pin activate the same part of the brain. Comorbidity and triggers: Fear of needles, especially in its more severe forms, is often comorbid with other phobias and psychological ailments; for example, iatrophobia, or an irrational fear of doctors, is often seen in needle phobic patients. Comorbidity and triggers: A needle phobic patient does not need to physically be in a doctor's office to experience panic attacks or anxiety brought on by needle phobia. There are many triggers in the outside world that can bring on an attack through association. Some of these are blood, injuries, the sight of the needle physically or on a screen, paper pins, syringes, examination rooms, white lab coats, dentists, nurses, the antiseptic smell associated with offices and hospitals, the sight of a person who physically resembles the patient's regular health care provider, or even reading about the fear. Treatment, mitigation, and alternatives: The medical literature suggests a number of treatments that have been proven effective for specific cases of needle phobia, but provides very little guidance to predict which treatment may be effective for any specific case. The following are some of the treatments that have been shown to be effective in some specific cases. Ethyl chloride spray (and other freezing agents). Easily administered, but provides only superficial pain control. Jet injectors. Jet injectors work by introducing substances into the body through a jet of high pressure gas as opposed to by a needle. Though these eliminate the needle, some people report that they cause more pain. Also, they are only helpful in a very limited number of situations involving needles; for example, insulin and inoculations. Iontophoresis. Iontophoresis drives anesthetic through the skin by using an electric current. It provides effective anesthesia, but is generally unavailable to consumers on the commercial market and some regard it as inconvenient to use. Treatment, mitigation, and alternatives: EMLA. EMLA is a topical anesthetic cream that is a eutectic mixture of lidocaine and prilocaine. It is a prescription cream in the United States, and is available without prescription in some other countries. 
Although EMLA does not penetrate as deeply as iontophoresis-driven anesthetics and is therefore not as effective, it is simpler to apply. EMLA penetrates much more deeply than ordinary topical anesthetics, and it works adequately for many individuals. Treatment, mitigation, and alternatives: Ametop. Ametop gel appears to be more effective than EMLA for eliminating pain during venepuncture. Treatment, mitigation, and alternatives: Lidocaine/tetracaine patch. A self-heating anesthetic patch containing a eutectic mixture of lidocaine and tetracaine has been available in several countries, and was specifically approved by government agencies for use in needle procedures. The patch was sold under the trade name Synera in the United States and Rapydan in the European Union. Each patch was packaged in an air-tight pouch. It began to heat up slightly when the patch was removed from the packaging and exposed to the air. The patch required 20 to 30 minutes to achieve full anesthetic effect. The Synera patch was approved by the United States Food and Drug Administration on 23 June 2005. On 11 November 2022, the manufacturer announced that it would be discontinuing the manufacture and sales of the patch worldwide by the end of 2022. Treatment, mitigation, and alternatives: Behavioral therapy. Effectiveness of this varies greatly depending on the person and the severity of the condition. There is some debate as to the effectiveness of behavioral treatments for specific phobias, though some data are available to support the efficacy of approaches such as exposure therapy. Any therapy that endorses relaxation methods may be contraindicated for the treatment of fear of needles, as this approach encourages a drop in blood pressure that only enhances the vasovagal reflex. In response to this, graded exposure approaches can include a coping component relying on applied tension as a way to prevent complications associated with the vasovagal response to blood, injury, or injection type stimuli. Treatment, mitigation, and alternatives: Nitrous oxide (laughing gas). This will provide sedation and reduce anxiety for the patient, along with some mild analgesic effects. Inhalation general anesthesia. This will eliminate all pain and also all memory of any needle procedure. However, it is often regarded as a very extreme solution. It is not covered by insurance in most cases, and most physicians will not order it. It can be risky and expensive and may require a hospital stay. Benzodiazepines, such as diazepam (Valium), lorazepam (Ativan), alprazolam (Xanax), or clonazepam (Klonopin), may help alleviate the anxiety of needle phobics, according to Dr. James Hamilton. These medications have an onset of action within 5 to 15 minutes from ingestion. A relatively large oral dose may be necessary. Tensing the stomach muscles can help avoid fainting. Swearing can reduce perceived pain. Distraction can reduce perceived pain, for example pretending to cough, performing a visual task, watching a video, listening to music, or playing a video game. Certain drugs and vaccines, such as the live attenuated influenza vaccine, can be administered nasally.
**Box step** Box step: Box step is a basic dance step named after the pattern it creates on the floor, which is that of a square or box. It is used in a number of American Style ballroom dances: rumba, waltz, bronze-level foxtrot. While it can be performed individually, it is usually done with a partner. This is the most common dance step in the waltz. In international standard dance competition, there is a similar step called the closed change. In a typical example, the leader begins with the left foot and proceeds as follows. Box step: First half-box: forward-side-together. Second half-box: backwards-side-together. Every step is with full weight transfer. Rhythm varies. For example, it is "1-2-3, 4-5-6" in waltz and "slow quick quick, slow quick quick" in rumba. In other dances (and in variations) the box may start from the left or right foot, either back or forward, or even sideways. For example, in the quadrado figure of samba de Gafieira the leader steps (starting with the left foot) "left-together-back, right-together-forward". In waltz: For the left box, the leader starts with their feet closed. On beat 1 they step forward with their left foot, then they step to the side with their right foot on 2, and close their left foot to their right foot on 3; they step back with their right foot on 4, to the side with their left foot on 5, and close their right foot to their left foot on 6. During the second and fifth steps, the foot is supposed to travel along two sides of the box, rather than along its diagonal. The follower also starts with their feet closed. On beat 1 they step back with their right foot, then they step to the side with their left foot on 2, and close their right foot to their left foot on 3; they step forward with their left foot on 4, to the side with their right foot on 5, and close their left foot to their right foot on 6. The right box consists of the same steps only mirrored, that is, left and right feet are exchanged for both leader and follower. In popular culture: This dance was featured in an episode of Curious George called "School of Dance". George first saw the Renkins doing it, then he taught it to Bill, the Quints, the Man with the Yellow Hat, and at the end, Allie.
**Gross motor skill** Gross motor skill: Gross motor skills are the abilities usually acquired during childhood as part of a child's motor learning. By the time they reach two years of age, almost all children are able to stand up, walk and run, walk up stairs, etc. These skills are built upon, improved and better controlled throughout early childhood, and continue in refinement throughout most of the individual's years of development into adulthood. These gross movements come from large muscle groups and whole body movement. These skills develop in a head-to-toe order. Children will typically learn head control, trunk stability, and then standing up and walking. It is shown that children exposed to outdoor play time activities will develop better gross motor skills. Types of motor skills: Motor skills are movements and actions of the muscles. Typically, they are categorized into two groups: gross motor skills and fine motor skills. Gross motor skills are involved in movement and coordination of the arms, legs, and other large body parts and movements. Gross motor skills can be further divided into two subgroups of locomotor skills and object control skills. Gross locomotor skills would include running, jumping, sliding, and swimming. Object control skills would include throwing, catching and kicking. Fine motor skills are involved in smaller movements that occur in the wrists, hands, fingers, and the feet and toes. They participate in smaller actions such as picking up objects between the thumb and finger, writing carefully, and even blinking. These two motor skills work together to provide coordination. Less developed children focus on their gross movements, while more developed children have more control over their fine movements. Development of posture: Gross motor skills, as well as many other activities, require postural control. Infants need to control their heads to stabilize their gaze and to track moving objects. They also must have strength and balance in their legs to walk. Development of posture: Newborn infants cannot voluntarily control their posture. Within a few weeks, though, they can hold their heads erect, and soon they can lift their heads while prone. By 2 months of age, babies can sit while supported on a lap or an infant seat, but sitting independently is not accomplished until 6 or 7 months of age. Standing also develops gradually across the first year of life. By about 8 months of age, infants usually learn to pull themselves up and hold on to a chair, and they often can stand alone by about 10 to 12 months of age. There is a new device called a "Standing Dani" developed to help special needs children with their posture. Learning to walk: Walking upright requires being able to stand up and balance position from one foot to the other. Although infants usually learn to walk around the time of their first birthday, the neural pathways that control the leg alternation component of walking are in place from a very early age, possibly even at birth or before.[1] This is shown by the fact that when 1- to 2-month-olds are given support with their feet in contact with a motorized treadmill, they show well-coordinated, alternating steps. If it were not for the problem of switching balance from one foot to the other, babies could walk earlier. Tests were performed on crawling and walking babies where slopes were placed in front of the path and the babies had to decide whether or not it was safe.
The tests showed that babies who had just learned how to walk did not know what they were capable of and often went down slopes that were not safe, whereas experienced walkers knew what they could do. Practice plays a big part in teaching a child how to walk. Learning to walk: Vision does not have an effect on muscle growth, but it can slow down the child's process of learning to walk. According to the nonprofit Blind Children Center, "Without special training, fully capable infants who are visually impaired may not learn to crawl or walk at an appropriate age and gross and fine motor skills will not properly develop." When the child is not able to see an object, there is no motivation for the child to try to reach for it; therefore, they do not want to learn independently. Learning to walk is done by modeling and watching others. Children, when put in environments with older children, will observe and try to copy the movements they see. This helps the child learn through trial and error. Babies will imitate others, picking up the skills much faster than they would by making their own errors. Visually impaired children may need physical therapy to help them learn these gross motor skills faster. One hour of therapy each week is not enough, so parents have to make sure they are involved in this process. The parent can help by telling the baby the direction of where the object is and encouraging them to get it. Patience is needed, because every child has their own developmental schedule, and this is even more true for children with special needs. Focusing on the progress of the child is better than comparing the child to other children. (Humphrey) Infancy development: It has been observed by scientists that motor skills generally develop from the center of the body outward and from head to tail. Babies need to practice their skills; this helps them grow and strengthen. They need space and time to explore their environment and use their muscles. "Tummy time" is a good example of this. At first they are only able to lay their belly on the floor, but by around two months they start to gain the muscle to raise their head and chest off the ground. Some are also able to prop themselves on their elbows. They will also start to kick and bend their legs while lying there; this helps to prepare for crawling. By four months they are able to start to control their head and hold it steady while sitting up. Rolling from belly to back also begins. At about five months the baby will start to wiggle their limbs to strengthen crawling muscles. Infants can start to sit up by themselves and put some weight on their legs as they hold onto something for support by six months. As they enter their first year, caregivers need to be more active. The babies will want to get into everything, so the house needs to become 'baby proofed'. Babies are able to start to reach for and play with their toys too. The use of baby walkers or other devices that hold the baby upright is said to delay the process of walking. Research has found that such devices delay the development of core torso strength, which can lead to issues later on. Around ten months they should be able to stand on their own. Throughout their years of life different motor skills are formed. (Oswalt) With regard to gait patterns, studies show that infants at 12 months old exhibit larger mediolateral motion, which may be caused by weak muscle strength and lack of stability.
They also show a synchronized use of hip and shoulder while they are walking, which is different from a mature gait pattern performed by adults. The ankles did not move as much among 12-month-old infants as compared to those of adults performing a mature walk. Infancy development: Development in the second year In the second year of life, toddlers become more motorically skilled and mobile. They are no longer content with being in a playpen and want to move all over the place. Infancy development: Child development experts believe that motor activity during the second year is vital to the child's competent development and that few restrictions, except for safety, should be placed on their motoric adventures. By 13 to 18 months, toddlers can move up and down steps and carry toys. Once they reach the top of the stairs, though, they are not able to get back down. They also begin to move from one position to another more smoothly. (Oswalt) Significant changes in gait patterns are also observed in the second year. Infants in the second year have a discordant use of hip and shoulder while walking, which is closer to an adult walking pattern. They are also able to utilize the range of motion of their ankles, toes, and heels more, which is similar to a mature walk. By 18 to 24 months, they can move more quickly or run for a short distance, along with other motor skills. They also start to walk backwards and in circles and begin to run. They can also not only walk up the stairs on their hands and feet but are now able to hold onto the handrail and walk up. Near the end of their second year, complex gross motor skills begin to develop, including throwing and kicking. Their skills become more natural. Pedaling a tricycle and jumping in place are acquired. At the end they are very mobile and can go from place to place. It is normal for them to get themselves into small situations that could be dangerous, such as walking into the street, because their brain cannot send the information fast enough to their feet. Parents need to keep an eye on their children at all times. (Oswalt) In a majority of the select kinematic and kinetic variables, there are greater differences between two-year-old children and four-year-old children than there are between four-year-old children and six-year-old children. The variables for which there were significant differences tended to be in displacement, velocity, and magnitude of force measurements. Development of children with disabilities: Children with disabilities who are as young as seven months can learn to drive a power wheelchair, which can be of particular benefit to a child whose legs are paralyzed. This chair may decrease the rate of development of the child's gross motor skills, but there are ways to compensate for this. These children usually work with a physical therapist to help with their leg movements. Walkers and other devices are used to help aid this process and avoid obstacles. The negative side to this is that they are limited in their mobility. There is research underway to find a device that encourages children to explore their environment while gaining their gross motor skills. This will also hopefully help them with their exercise. A 2017 Cochrane review found that for children with delays associated with cerebral palsy or Down syndrome, up to the age of six, the use of a treadmill may accelerate the development of independent walking. Childhood: Early childhood is a critical period for the development of fundamental motor skills.
Children in preschool develop depending on their interactions with the surrounding environment. A child in an encouraging environment with constructive feedback will develop fundamental motor skills at a faster rate. Typically, females perform fundamental movement skills better at an earlier age than males. Although many studies prove this to be true, it is predominantly true in walking. Girls typically go through maturity faster than boys do, causing them to also be less active. This allows boys to be deemed more active, due to the fact that they mature much later than girls. However, this does not give a clear answer as to whether or not girls learn to walk before boys. One would think that learning to walk sooner would allow for a higher activity level, though since girls have a noticeably lower activity level than boys, one would assume that this would mean that girls learn to walk after boys. But since they mature earlier, that would involve the walking stage. As they grow older, children become more proficient in their tasks, allowing them to use their highly developed skills for events such as sports where these motor skills are required. Children who do not master fundamental motor skills are less likely to participate in sport games during childhood and adolescence. This is one explanation of why boys tend to be more or less athletic than girls. Children at a younger age might not be able to perform at the level of older children, but their ability to learn and improve motor skills is much higher. At 3 years of age, children enjoy simple movements, such as hopping, jumping, and running back and forth, just for the sheer delight of performing these activities. However, the article "The relationship between fine and gross motor ability, self-perceptions and self-worth in children and adolescents" reported that there was no statistical significance between athletic competence and social competence. The correlation coefficient was .368, which means that there is only a low correlation between the two. A child being able to perform certain gross and fine motor skills does not mean that they will have the ability to demonstrate social skills such as conversation, social awareness, sensitivity, and body language. Their body stability is focused on the child's dynamic body base and is related to their visual perceptions such as height, depth, or width. A study was done to assess motor skill development and the overall rate and level of growth development. This study shows that at the preschool age children develop more goal-directed behaviors. This plays a big role, because their learning focuses around play and physical activity. While assessing gross motor skills in children can be challenging, it is essential to do so in order to ensure that children are prepared to interact with the environment they live in. Different tests are given to these children to measure their skill level. At age 4, children continue to do the same actions as they did at age 3, but further their movement. They are beginning to be able to go down the stairs with one foot on each step. At 5 years of age, they are fully able to go down the stairs one foot at a time, in addition to improvements in their balance and running. Their body stability becomes more mature and their trunk is fixed in their posture. Performances are more fluent and are less influenced by factors such as slope and width.
During middle and late childhood, children's motor development becomes much smoother and more coordinated than it was in early childhood. Childhood: As they age, children become able to have control over their bodies and have an increased attention span. Having a child in a sport can help them with their coordination, as well as some social aspects. Teachers will suggest that their students may need occupational therapists in different situations. Students could get frustrated doing writing exercises if they are having difficulties with their writing skills. It may also affect the teacher because the writing is illegible. Some children may also report their "hands getting tired". There are many occupational therapists out there today to give students the help they need. These therapists were once used only when something was seriously wrong with a child, but now they are used to help children be the best they can be. Childhood: According to the article "The Relationship Between Fundamental Motor Skills and Outside-School Physical Activity of Elementary School Children", the developmental level of overhand throwing and jumping of elementary kids is related to skill-specific physical activity outside of school. In the studies done, boys were seen to have higher scores in the developmental level of overhand throwing and higher scores for the Caltrac accelerometer, rapid-trunk movement, and motor skill related physical activity. Girls were seen to have higher scores in lower-intensity physical activities and physical inactivity. The study showed that the developmental level of the fundamental skills (overhand throwing and jumping) is related to skill-specific physical activity outside of school in elementary children. We can conclude that boys at a younger age develop fundamental motor skills more quickly than girls do. In other studies it has been seen that having a higher motor proficiency leads to kids being more active, and in most cases more athletic. This can lead to some issues in childhood development, such as issues with weight, and can contribute to the public health epidemic of childhood obesity. Adolescence and adulthood: Between the ages of 7 and 12 there is an increase in running speed, and children are able to skip. Jumping is also acquired better, and there is an increase in throwing and kicking. They are able to bat and dribble a ball. (Age) Gross motor skills usually continue improving during adolescence. The peak of physical performance is before 30, between 18 and 26. Even though athletes keep getting better than their predecessors—running faster, jumping higher, and lifting more weight—the age at which they reach their peak performance has remained virtually the same. After age 30, most functions begin to decline. Older adults move more slowly than younger adults. This can be moving from one place to another or continually moving. Exercising regularly and maintaining a healthy lifestyle can slow this process. Aging individuals who are active and biologically healthy perform motor skills at a higher level than their less active, less healthy aging counterparts.
**Toxbot** Toxbot: Toxbot is a computer worm that was primarily active in 2005. On infected computers, it opened up a backdoor to allow command and control over the IRC network, thus creating a botnet that at its peak comprised about 1.5 million computers. The two makers of the botnet were arrested in October 2005 and received jail sentences of 24 and 18 months from a Dutch court.
**Noise-predictive maximum-likelihood detection** Noise-predictive maximum-likelihood detection: Noise-Predictive Maximum-Likelihood (NPML) is a class of digital signal-processing methods suitable for magnetic data storage systems that operate at high linear recording densities. It is used for retrieval of data recorded on magnetic media. Noise-predictive maximum-likelihood detection: Data are read back by the read head, producing a weak and noisy analog signal. NPML aims at minimizing the influence of noise in the detection process. Successfully applied, it allows recording data at higher areal densities. Alternatives include peak detection, partial-response maximum-likelihood (PRML), and extended partial-response maximum-likelihood (EPRML) detection. Although advances in head and media technologies historically have been the driving forces behind the increases in areal recording density, digital signal processing and coding established themselves as cost-efficient techniques for enabling additional increases in areal density while preserving reliability. Accordingly, the deployment of sophisticated detection schemes based on the concept of noise prediction is of paramount importance in the disk drive industry. Principles: The NPML family of sequence-estimation data detectors arises by embedding a noise prediction/whitening process into the branch metric computation of the Viterbi algorithm. The latter is a data detection technique for communication channels that exhibit intersymbol interference (ISI) with finite memory. Principles: Reliable operation of the process is achieved by using hypothesized decisions associated with the branches of the trellis on which the Viterbi algorithm operates, as well as tentative decisions corresponding to the path memory associated with each trellis state. NPML detectors can thus be viewed as reduced-state sequence-estimation detectors offering a range of implementation complexities. The complexity is governed by the number of detector states, which is equal to 2^K, 0 ≤ K ≤ M, with M denoting the maximum number of controlled ISI terms introduced by the combination of a partial-response shaping equalizer and the noise predictor. By judiciously choosing K, practical NPML detectors can be devised that improve performance over PRML and EPRML detectors in terms of error rate and/or linear recording density. In the absence of noise enhancement or noise correlation, the PRML sequence detector performs maximum-likelihood sequence estimation. As the operating point moves to higher linear recording densities, optimality declines with linear partial-response (PR) equalization, which enhances noise and renders it correlated. A close match between the desired target polynomial and the physical channel can minimize these losses. An effective way to achieve near-optimal performance independently of the operating point—in terms of linear recording density—and the noise conditions is via noise prediction. In particular, the power of a stationary noise sequence n(D), where the D operator corresponds to a delay of one bit interval, at the output of a PR equalizer can be minimized by using an infinitely long predictor. A linear predictor with coefficients {p_l}, l = 1, 2, ..., operating on the noise sequence n(D) produces the estimated noise sequence n̂(D). Then, the prediction-error sequence given by e(D) = n(D) − n̂(D) = n(D)(1 − P(D)) is white with minimum power. The optimum predictor P(D) = p_1 D + p_2 D^2 + ...,
Principles: or, equivalently, the optimum noise-whitening filter W(D) = 1 − P(D), is the one that minimizes the prediction-error sequence e(D) in a mean-square sense. An infinitely long predictor filter would lead to a sequence detector structure that requires an unbounded number of states. Therefore, finite-length predictors that render the noise at the input of the sequence detector approximately white are of interest. Principles: Generalized PR shaping polynomials of the form G(D) = F(D) × W(D), where F(D) is a polynomial of order S and the noise-whitening filter W(D) has a finite order of L, give rise to NPML systems when combined with sequence detection. In this case, the effective memory of the system is limited to M = L + S, requiring a 2^(L+S)-state NPML detector if no reduced-state detection is employed. Principles: As an example, if F(D) = 1 − D^2, then this corresponds to the classical PR4 signal shaping. Using a whitening filter W(D), the generalized PR target becomes G(D) = (1 − D^2) × W(D), and the effective ISI memory of the system is limited to M = L + 2 symbols. In this case, the full-state NPML detector performs maximum-likelihood sequence estimation (MLSE) using the 2^(L+2)-state trellis corresponding to G(D) (a small numerical sketch of this construction appears at the end of this article). The NPML detector is efficiently implemented via the Viterbi algorithm, which recursively computes the estimated data sequence. Principles: â(D) = arg min_{a(D)} ||z(D) − a(D)G(D)||^2, where a(D) denotes the binary sequence of recorded data bits and z(D) the signal sequence at the output of the noise-whitening filter W(D). Reduced-state sequence-detection schemes have been studied extensively for application in the magnetic-recording channel. For example, the NPML detectors with generalized PR target polynomials G(D) = F(D) × W(D) can be viewed as a family of reduced-state detectors with embedded feedback. These detectors exist in a form in which the decision-feedback path can be realized by simple table look-up operations, whereby the contents of these tables can be updated as a function of the operating conditions. Analytical and experimental studies have shown that a judicious tradeoff between performance and state complexity leads to practical schemes with considerable performance gains. Thus, reduced-state approaches are promising for increasing linear density. Principles: Depending on the surface roughness and particle size, particulate media might exhibit nonstationary data-dependent transition or medium noise rather than colored stationary medium noise. Improvements in the quality of the readback head as well as the incorporation of low-noise preamplifiers may render the data-dependent medium noise a significant component of the total noise affecting performance. Because medium noise is correlated and data-dependent, information about noise and data patterns in past samples can provide information about noise in other samples. Thus, the concept of noise prediction for stationary Gaussian noise sources can be naturally extended to the case where noise characteristics depend highly on local data patterns. By modeling the data-dependent noise as a finite-order Markov process, the optimum MLSE for channels with ISI has been derived. In particular, when the data-dependent noise is conditionally Gauss–Markov, the branch metrics can be computed from the conditional second-order statistics of the noise process. In other words, the optimum MLSE can be implemented efficiently by using the Viterbi algorithm, in which the branch-metric computation involves data-dependent noise prediction.
Because the predictor coefficients and prediction error both depend on the local data pattern, the resulting structure has been called a data-dependent NPML detector. Reduced-state sequence detection schemes can be applied to data-dependent NPML, reducing implementation complexity. Principles: NPML and its various forms represent the core read-channel and detection technology used in recording systems employing advanced error-correcting codes that lend themselves to soft decoding, such as low-density parity-check (LDPC) codes. For example, if noise-predictive detection is performed in conjunction with a maximum a posteriori (MAP) detection algorithm such as the BCJR algorithm, then NPML and NPML-like detection allow the computation of soft reliability information on individual code symbols, while retaining all the performance advantages associated with noise-predictive techniques. The soft information generated in this manner is used for soft decoding of the error-correcting code. Moreover, the soft information computed by the decoder can be fed back again to the soft detector to improve detection performance. In this way it is possible to iteratively improve the error-rate performance at the decoder output in successive soft detection/decoding rounds. History: Beginning in the 1980s, several digital signal-processing and coding techniques were introduced into disk drives to improve the drive error-rate performance for operation at higher areal densities and to reduce manufacturing and servicing costs. In the early 1990s, partial-response class-4 (PR4) signal shaping in conjunction with maximum-likelihood sequence detection, eventually known as the PRML technique, replaced the peak-detection systems that used run-length-limited (RLL) (d,k)-constrained coding. This development paved the way for future applications of advanced coding and signal-processing techniques in magnetic data storage. History: NPML detection was first described in 1996 and eventually found wide application in HDD read channel design. The "noise predictive" concept was later extended to handle autoregressive (AR) noise processes and autoregressive moving-average (ARMA) stationary noise processes. The concept was also extended to include a variety of non-stationary noise sources, such as head, transition jitter and media noise, and it was applied to various post-processing schemes. Noise prediction became an integral part of the metric computation in a wide variety of iterative detection/decoding schemes. History: The pioneering research work on partial-response maximum-likelihood (PRML) and noise-predictive maximum-likelihood (NPML) detection and its impact on the industry were recognized in 2005 by the European Eduard Rhein Foundation Technology Award. Applications: NPML technology was first introduced into IBM's line of HDD products in the late 1990s. Eventually, noise-predictive detection became a de facto standard and, in its various instantiations, became the core technology of the read-channel module in HDD systems. In 2010, NPML was introduced into IBM's Linear Tape Open (LTO) tape drive products and in 2011 in IBM's enterprise-class tape drives.
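The whitening and target construction described in the Principles section can be made concrete with a small numerical sketch. The example below is illustrative only and not drawn from any particular product: the noise autocorrelation values are invented, an L-tap predictor is obtained by solving the usual normal (Yule–Walker) equations, and the generalized target G(D) = F(D) × W(D) is formed for the PR4 shaping F(D) = 1 − D^2.

```python
# Minimal sketch: finite-length noise prediction and the generalized PR target
# G(D) = F(D) * W(D) used by an NPML detector. The autocorrelation values are
# illustrative only; a real design would estimate them from the equalized noise.
import numpy as np

def predictor_coefficients(r, L):
    """Solve the normal equations R p = r for an L-tap linear predictor,
    where r[k] is the noise autocorrelation at lag k."""
    R = np.array([[r[abs(i - j)] for j in range(L)] for i in range(L)])
    rhs = np.array([r[k + 1] for k in range(L)])
    return np.linalg.solve(R, rhs)

# Hypothetical autocorrelation of the equalized noise at lags 0..L
r = [1.0, 0.6, 0.3, 0.1]
L = 3
p = predictor_coefficients(r, L)

# Whitening filter W(D) = 1 - p1*D - p2*D^2 - ... (coefficients in powers of D)
W = np.concatenate(([1.0], -p))

# PR4 shaping F(D) = 1 - D^2 and generalized target G(D) = F(D) * W(D)
F = np.array([1.0, 0.0, -1.0])
G = np.convolve(F, W)

S, M = len(F) - 1, len(G) - 1          # orders of F(D) and G(D)
print("predictor p:", np.round(p, 3))
print("whitening filter W(D):", np.round(W, 3))
print("generalized target G(D):", np.round(G, 3))
print(f"effective memory M = L + S = {M}, full-state trellis has {2**M} states")
```

Choosing K smaller than M, as discussed above, would trade part of this state count for decision feedback in a reduced-state detector.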
**Reflection lines** Reflection lines: Engineers use reflection lines to judge a surface's quality. Reflection lines reveal surface flaws, particularly discontinuities in the normals, indicating that the surface is not C^2. Reflection lines may be created and examined on physical surfaces or on virtual surfaces with the help of computer graphics. For example, the shiny surface of an automobile body is illuminated with reflection lines by surrounding the car with parallel light sources. Virtually, a surface can be rendered with reflection lines by modulating the surface's point-wise color according to a simple calculation involving the surface normal, the viewing direction and a square-wave environment map. Mathematical definition: Consider a point p on a surface M with (possibly non-unit length) normal n. If an observer views this point from infinity along an incoming direction v, then the reflected view direction r is r = (2/|n|^2)(n·v)n − v. Mathematical definition: For reflection lines we consider repeated infinite, non-dispersive light sources parallel to some line a and therefore perpendicular to a plane P. Define the vector d to be the reflection direction r projected onto the plane P: d = r − (r·a)a, and similarly let v_a be the unit viewing direction projected onto P: v_a = w/|w|, where w = v − (v·a)a. Finally, define a⊥ to be the direction lying in P perpendicular to a and v_a: a⊥ = a × v_a. Then the *reflection line function* θ(p): M → (−π, π] is a scalar function mapping points on the surface to angles between v_a and the projected reflected view direction d: θ = arctan(r·a⊥, r·v_a), where arctan(y, x) is the atan2 function producing a number in the range (−π, π]. Mathematical definition: Finally, to render the reflection lines, positive values θ > 0 are mapped to a light color and non-positive values to a dark color. Highlight lines: Highlight lines are a view-independent alternative to reflection lines. Here the projected normal is directly compared against some arbitrary vector x perpendicular to the light source: θ = arctan(n_a·a⊥, n_a·x), where n_a is the surface normal projected onto the light source plane P: n_a = m/|m|, where m = n − (n·a)a. The relationship between reflection lines and highlight lines is likened to that between specular and diffuse shading.
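As a concrete illustration of the definitions above, the following sketch evaluates the reflection line function at a single surface point and maps its sign to a light or dark stripe value. It is a minimal example with invented input vectors, it assumes the light axis a is unit length, and it is not a full rendering pipeline.

```python
# Sketch: evaluate the reflection-line function theta(p) at one surface point
# and map it to a light/dark stripe value. All input vectors are made up.
import numpy as np

def reflection_line_value(n, v, a):
    """n: surface normal (need not be unit length), v: view direction,
    a: unit vector along the family of parallel line lights."""
    r = 2.0 * np.dot(n, v) / np.dot(n, n) * n - v   # reflected view direction
    w = v - np.dot(v, a) * a                        # view direction projected onto P
    v_a = w / np.linalg.norm(w)
    a_perp = np.cross(a, v_a)                       # in-plane direction perpendicular to a and v_a
    theta = np.arctan2(np.dot(r, a_perp), np.dot(r, v_a))
    return 1.0 if theta > 0 else 0.0                # light vs. dark stripe

if __name__ == "__main__":
    n = np.array([0.1, 0.2, 1.0])    # hypothetical (non-unit) normal
    v = np.array([0.0, 0.0, 1.0])    # viewer straight above the point
    a = np.array([1.0, 0.0, 0.0])    # line lights run along the x-axis
    print(reflection_line_value(n, v, a))
```

Evaluating this value across a mesh and shading each point accordingly produces the striped pattern whose kinks reveal normal discontinuities, as described above.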
**UUCP** UUCP: UUCP (Unix-to-Unix Copy) is a suite of computer programs and protocols allowing remote execution of commands and transfer of files, email and netnews between computers. UUCP: A command named uucp is one of the programs in the suite; it provides a user interface for requesting file copy operations. The UUCP suite also includes uux (user interface for remote command execution), uucico (the communication program that performs the file transfers), uustat (reports statistics on recent activity), uuxqt (execute commands sent from remote machines), and uuname (reports the UUCP name of the local system). Some versions of the suite include uuencode/uudecode (convert 8-bit binary files to 7-bit text format and vice versa). UUCP: Although UUCP was originally developed on Unix in the 1970s and 1980s, and is most closely associated with Unix-like systems, UUCP implementations exist for several non-Unix-like operating systems, including DOS, OS/2, OpenVMS (for VAX hardware only), AmigaOS, classic Mac OS, and even CP/M. History: UUCP was originally written at AT&T Bell Laboratories by Mike Lesk. By 1978 it was in use on 82 UNIX machines inside the Bell system, primarily for software distribution. It was released in 1979 as part of Version 7 Unix. The first UUCP emails from the U.S. arrived in the United Kingdom in 1979, and email between the UK, the Netherlands and Denmark started in 1980, becoming a regular service via EUnet in 1982. The original UUCP was rewritten by AT&T researchers Peter Honeyman, David A. Nowitz, and Brian E. Redman around 1983. The rewrite is referred to as HDB or HoneyDanBer uucp, which was later enhanced, bug fixed, and repackaged as BNU UUCP ("Basic Network Utilities"). Each of these versions was distributed as proprietary software, which inspired Ian Lance Taylor to write a new free software version from scratch in 1991. Taylor UUCP was released under the GNU General Public License. Taylor UUCP addressed security holes which allowed some of the original network worms to remotely execute unexpected shell commands. Taylor UUCP also incorporated features of all previous versions of UUCP, allowing it to communicate with any other version and even use similar config file formats from other versions. History: UUCP was also implemented for non-UNIX operating systems, most notably DOS systems. Packages such as UUSLAVE/GNUUCP (John Gilmore, Garry Paxinos, Tim Pozar), UUPC/extended (Drew Derbyshire of Kendra Electronic Wonderworks) and FSUUCP (Christopher Ambler of IODesign) brought early Internet connectivity to personal computers, expanding the network beyond the interconnected university systems. FSUUCP formed the basis for many bulletin board system (BBS) packages such as Galacticomm's Major BBS and Mustang Software's Wildcat! BBS to connect to the UUCP network and exchange email and Usenet traffic. As an example, UFGATE (John Galvin, Garry Paxinos, Tim Pozar) was a package that provided a gateway between networks running Fidonet and UUCP protocols. History: FSUUCP was the only other implementation of Taylor's enhanced 'i' protocol, a significant improvement over the standard 'g' protocol used by most UUCP implementations. Technology: Before the widespread availability of Internet access, computers were only connected by smaller local area networks within a company or organization. They were also often equipped with modems so they could be used remotely from character-mode terminals via dial-up telephone lines.
UUCP used the computers' modems to dial out to other computers, establishing temporary, point-to-point links between them. Each system in a UUCP network has a list of neighbor systems, with phone numbers, login names and passwords, etc. When work (file transfer or command execution requests) is queued for a neighbor system, the uucico program typically calls that system to process the work. The uucico program can also poll its neighbors periodically to check for work queued on their side; this permits neighbors without dial-out capability to participate. Technology: Over time, dial-up links were replaced by Internet connections, and UUCP added a number of new link layer protocols. These newer connections also reduced the need for UUCP at all, as newer application protocols developed to take advantage of the new networks. Today, UUCP is rarely used over dial-up links, but is occasionally used over TCP/IP. The number of systems involved, as of early 2006, ran between 1500 and 2000 sites across 60 enterprises. UUCP's longevity can be attributed to its low cost, extensive logging, native failover to dialup, and persistent queue management. Technology: Sessions UUCP is normally started by having a user log into the target system and then running the UUCP program. In most cases, this is automated by logging into a known user account used for transfers, whose shell has been set to uucico. Thus, for automated transfers, another machine simply has to open a modem connection to the called machine and log into the known account. Technology: When uucico runs, it will expect to receive commands from another UUCP program on the caller's machine and begin a session. The session has three distinct stages: the initial handshake, the file request(s), and the final handshake. Initial handshake On starting, uucico will respond by sending an identification string, \20Shere=hostname\0, where \20 is the control-P character, and \0 is a trailing null. The caller's UUCP responds with \20Scallername options\0, where options is a string containing zero or more Unix-like option switches. These can include packet and window sizes, the maximum supported file size, debugging options, and others. Technology: Depending on the setup of the two systems, the call may end here. For instance, when the caller responds with their system name, the called system may optionally hang up if it does not recognize the caller, sending the RYou are unknown to me\0 response string and then disconnecting. Technology: File requests If the two systems successfully handshake, the caller will now begin to send a series of file requests. There are four types: S causes a file to be Sent from the caller to the called system (upload). The from and to names are provided, allowing the filename to be changed on the receiver. When the S command is received on the called system, it responds with SY if it succeeded and it is ready to accept the file, or SNx if it failed, where x is a failure reason. If an SY is received by the caller, it begins uploading the file using the protocol selected during the initial handshake (see below). When the transfer is complete, the called system responds with CY if it successfully received the file, or CN5 if it failed. Technology: R is a Request for the called system to send a file to the caller (download). It is otherwise similar to S, using RY and RN to indicate whether the command was accepted and data will be sent or there was a problem, and expecting a CY or CN5 from the caller at the end of the transfer.
X uploads commands to be eXecuted on the called system. This can be used to make that system call another and deliver files to it. The called system responds with XY if it succeeded, or XN if it failed. H, for Hangup, indicates the caller is done. The called system responds with HY if it succeeded, or HN if it failed. Final handshake After sending an H command, the calling system sends a final packet \20OOOOOO\0 (control-P, six ohs, null terminator) and the called system responds with \20OOOOOOO\0 (control-P, seven ohs, null terminator). Some systems will simply hang up on the successful reception of the H command and not bother with the final handshake. Technology: g-protocol Within the suite of protocols in UUCP, the underlying g-protocol is responsible for transferring information in an error-free form. The protocol originated as a general-purpose system for packet delivery, and thus offers a number of features that are not used by the UUCP package as a whole. These include a secondary channel that can send command data interspersed with a file transfer, and the ability to renegotiate the packet and window sizes during transmission. These extra features may not be available in some implementations of the UUCP stack. The packet format consisted of a 6-byte header and then between zero and 4096 bytes in the payload. The packet starts with a single \020 (control-P). This is followed by a single byte, known as "K", containing a value of 1 to 8 indicating a packet size from 32 to 4096 bytes, or a 9 indicating a control packet. Many systems only supported K=2, meaning 64 bytes. The next two bytes were a 16-bit checksum of the payload, not including the header. The next byte is the data type and finally, the last byte is the XOR of the header, allowing it to be checked separately from the payload. The control byte consists of three bit-fields in the format TTXXXYYY. TT is the packet type, 0 for control packets (which also requires K=9 to be valid), 1 for alternate data (not used in UUCP), 2 for data, and 3 indicates a short packet that re-defines the meaning of K. In a data packet, XXX is the packet number for this packet from 0 to 7, and YYY is the last that was received correctly. This provides up to 8 packets in a window. In a control packet, XXX indicates the command and YYY is used for various parameters. For instance, transfers are started by sending a short control packet with TT=0 (control), XXX=7 and YYY the number of packets in a window, then sending another packet with XXX=6 and YYY as the packet length (encoded as it would be in K), and then a third packet that is identical to the first but with XXX=5. g-protocol uses a simple sliding window system to deal with potentially long latencies between endpoints. The protocol allows packet sizes from 64 to 4096 8-bit bytes, and windows that include 1 to 7 packets. In theory, a system using 4k packets and 7-packet windows (4096x7) would offer performance matching or beating the best file-transfer protocols like ZMODEM. In practice, many implementations only supported a single setting of 64x3. As a result, the g-protocol has an undeserved reputation for poor performance. Confusion over the packet and window sizes led to the G-protocol, differing only in that it always used 4096x3.
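The header layout described above can be made concrete with a short sketch. This is an illustrative encoding only: the real g-protocol specifies a particular checksum algorithm over the payload, which is replaced here by a simple placeholder, and details such as byte ordering varied between implementations.

```python
# Sketch of the 6-byte g-protocol header described above: DLE (control-P),
# K byte, 16-bit payload checksum, control byte TTXXXYYY, and an XOR check
# over the header. The checksum below is a stand-in, not the real g-protocol
# checksum algorithm.

DLE = 0x10  # control-P

def k_for_size(size: int) -> int:
    """Map a payload size (32..4096, power of two) to the K byte (1..8)."""
    k = size.bit_length() - 5              # 32 -> 1, 64 -> 2, ..., 4096 -> 8
    if not 1 <= k <= 8 or (1 << (k + 4)) != size:
        raise ValueError("payload size must be a power of two in 32..4096")
    return k

def data_packet_header(payload: bytes, seq: int, ack: int) -> bytes:
    """Build a data-packet header: TT=2 (data), XXX=seq, YYY=ack."""
    k = k_for_size(len(payload))
    checksum = sum(payload) & 0xFFFF                       # placeholder checksum
    control = (2 << 6) | ((seq & 7) << 3) | (ack & 7)      # TTXXXYYY
    header = bytes([DLE, k, checksum & 0xFF, (checksum >> 8) & 0xFF, control])
    xor_check = 0
    for b in header[1:]:                                   # XOR over the header (excluding DLE)
        xor_check ^= b
    return header + bytes([xor_check])

if __name__ == "__main__":
    hdr = data_packet_header(b"\x00" * 64, seq=3, ack=2)
    print(hdr.hex())                                       # 6-byte header for a 64-byte data packet
```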
Taylor UUCP did not support G, but did support any valid requested window or packet size, so remote systems starting G would work fine with Taylor's g, while two Taylor systems could negotiate even faster connections.Telebit modems used protocol spoofing to improve the performance of g-protocol transfers by noticing end-of-packet markers being sent to the remote system and immediately sending an ACK back to the local host, pretending that the remote system had already received the packet and decoded it correctly. This triggered the software stack to send the next packet, so rapidly that the transfer became almost continuous. The data between the two modems was error-corrected using a proprietary protocol based on MNP that ran over Telebit's half-duplex connections much better than g-protocol would normally, because in the common 64x3 case the remote system would be sending a constant stream of ACKs that would overflow the low-speed return channel. Combined with the modem's naturally higher data rates, they greatly improved overall throughput and generally performed about seven times the speed of a 2400 bit/s modem. They were widely used on UUCP hosts as they could quickly pay for themselves in reduced long-distance charges. Technology: Other protocols UUCP implementations also include other transfer protocols for use over certain links. Technology: f-protocol is designed to run over 7-bit error-corrected links. This was originally intended for use on X.25 links, which were popular for a time in the 1980s. It does not packetize data, instead, the entire file is sent as a single long string followed by a whole-file checksum. The similar x-protocol appears to have seen little or no use. d-protocol was similar to x, but intended for use on Datakit networks that connected many of Bell Labs offices.t-protocol originated in the BSD versions of UUCP and like some similar ones, is designed to run over 8-bit error-free TCP/IP links. It has no error correction at all, and the protocol consists simply of breaking up command and file data into 512 or 1024-byte packets to easily fit within typical TCP frames. Technology: e-protocol ("e" for Ethernet) was developed by Clem Cole at MASSCOMP and was widely released by Brian Redman in the later HoneyDanBer versions. It was developed and released before the t-protocol, but the t-protocol was more commonly used because the BSD version of UUCP was the dominant implementation. The e-protocol differs from the t-protocol only in that commands are not packetized and are instead sent as normal strings, while files are padded to the nearest 20 bytes. Mail routing: The uucp and uuxqt capabilities could be used to send email between machines, with suitable mail user interfaces and delivery agent programs. A simple UUCP mail address was formed from the adjacent machine name, an exclamation mark (often pronounced bang), followed by the user name on the adjacent machine. For example, the address barbox!user would refer to user user on adjacent machine barbox. Mail routing: Mail could furthermore be routed through the network, traversing any number of intermediate nodes before arriving at its destination. Initially, this had to be done by specifying the complete path, with a list of intermediate host names separated by bangs. For example, if machine barbox is not connected to the local machine, but it is known that barbox is connected to machine foovax which does communicate with the local machine, the appropriate address to send mail to would be foovax!barbox!user. 
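Because a bang path is just a chain of host names joined by exclamation marks ending in a user name, composing and splitting one is straightforward. The small sketch below uses the article's example hosts (foovax, barbox) purely for illustration; the helper names are invented.

```python
# A small helper mirroring the bang-path addressing described above; the
# host names are the article's examples, not real machines.
def bang_path(hops, user):
    """Join relay hosts and a user name into a UUCP bang path."""
    return "!".join(list(hops) + [user])

def split_bang_path(address):
    *hops, user = address.split("!")
    return hops, user

addr = bang_path(["foovax", "barbox"], "user")
print(addr)                      # foovax!barbox!user
print(split_bang_path(addr))     # (['foovax', 'barbox'], 'user')
```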
Mail routing: User barbox!user would generally publish their UUCP email address in a form such as …!bigsite!foovax!barbox!user. This directs people to route their mail to machine bigsite (presumably a well-known and well-connected machine accessible to everybody) and from there through the machine foovax to the account of user user on barbox. Publishing a full path would be pointless, because it would be different, depending on where the sender was. (e.g. Ann at one site may have to send via path gway!tcol!canty!uoh!bigsite!foovax!barbox!user, whereas from somewhere else, Bill has to send via the path pdp10!router22!bigsite!foovax!barbox!user). Many users would suggest multiple routes from various large well-known sites, providing even better and perhaps faster connection service from the mail sender. Mail routing: Bang path An email address of this form was known as a bang path. Mail routing: Bang paths of eight to ten machines (or hops) were not uncommon in 1981, and late-night dial-up UUCP links could cause week-long transmission times. Bang paths were often selected by both transmission time and reliability, as messages would often get lost. Some hosts went so far as to try to "rewrite" the path, sending mail via "faster" routes—this practice tended to be frowned upon. Mail routing: The "pseudo-domain" ending .uucp was sometimes used to designate a hostname as being reachable by UUCP networking, although this was never formally registered in the domain name system (DNS) as a top-level domain. The uucp community administered itself and did not mesh well with the administration methods and regulations governing the DNS; .uucp works where it needs to; some hosts punt mail out of SMTP queue into uucp queues on gateway machines if a .uucp address is recognized on an incoming SMTP connection.Usenet traffic was originally transmitted over the UUCP protocol using bang paths. These are still in use within Usenet message format Path header lines. They now have only an informational purpose, and are not used for routing, although they can be used to ensure that loops do not occur. Mail routing: In general, like other older e-mail address formats, bang paths have now been superseded by the "@ notation", even by sites still using UUCP. A UUCP-only site can register a DNS domain name, and have the DNS server that handles that domain provide MX records that cause Internet mail to that site to be delivered to a UUCP host on the Internet that can then deliver the mail to the UUCP site. UUCPNET and mapping: UUCPNET was the name for the totality of the network of computers connected through UUCP. This network was very informal, maintained in a spirit of mutual cooperation between systems owned by thousands of private companies, universities, and so on. Often, particularly in the private sector, UUCP links were established without official approval from the companies' upper management. The UUCP network was constantly changing as new systems and dial-up links were added, others were removed, etc. UUCPNET and mapping: The UUCP Mapping Project was a volunteer, largely successful effort to build a map of the connections between machines that were open mail relays and establish a managed namespace. Each system administrator would submit, by e-mail, a list of the systems to which theirs would connect, along with a ranking for each such connection. These submitted map entries were processed by an automatic program that combined them into a single set of files describing all connections in the network. 
These files were then published monthly in a newsgroup dedicated to this purpose. The UUCP map files could then be used by software such as "pathalias" to compute the best route path from one machine to another for mail, and to supply this route automatically. The UUCP maps also listed contact information for the sites, and so gave sites seeking to join UUCPNET an easy way to find prospective neighbors. Connections with the Internet: Many UUCP hosts, particularly those at universities, were also connected to the Internet in its early years, and e-mail gateways between Internet SMTP-based mail and UUCP mail were developed. A user at a system with UUCP connections could thereby exchange mail with Internet users, and the Internet links could be used to bypass large portions of the slow UUCP network. A "UUCP zone" was defined within the Internet domain namespace to facilitate these interfaces. Connections with the Internet: With this infrastructure in place, UUCP's strength was that it permitted a site to gain Internet e-mail and Usenet connectivity with only a dial-up modem link to another cooperating computer. This was at a time when true Internet access required a leased data line providing a connection to an Internet Point of Presence, both of which were expensive and difficult to arrange. By contrast, a link to the UUCP network could usually be established with a few phone calls to the administrators of prospective neighbor systems. Neighbor systems were often close enough to avoid all but the most basic charges for telephone calls. Remote commands: uux is remote command execution over UUCP. The uux command is used to execute a command on a remote system, or to execute a command on the local system using files from remote systems. The command is run by the uucico daemon, which handles remote execution requests as simply another kind of file to batch-send to the remote system whenever a next-hop node is available. The remote system will then execute the requested command and return the result, when the original system is available. Both of these transfers may be indirect, via multi-hop paths, with arbitrary windows of availability. Even when executing a command on an always-available neighbor, uux is not instant. Decline: UUCP usage began to die out with the rise of Internet service providers offering inexpensive SLIP and PPP services. The UUCP Mapping Project was formally shut down in late 2000. The UUCP protocol has now mostly been replaced by the Internet TCP/IP based protocols SMTP for mail and NNTP for Usenet news. In July 2012, Dutch Internet provider XS4ALL closed down its UUCP service, claiming it was "probably one of the last providers in the world that still offered it"; it had only 13 users at that time (however prior to its shut-down it had refused requests from new users for several years). Current uses and legacy: One surviving feature of UUCP is the chat file format, largely inherited by the Expect software package. Current uses and legacy: UUCP was in use over special-purpose high cost links (e.g. marine satellite links) long after its disappearance elsewhere, and still remains in legacy use. In addition to legacy use, in 2021 new and innovative UUCP uses are growing, especially for telecommunications in the HF band, for example, for communities in the Amazon rainforest for email exchange and other uses. 
A patch to Ian Taylor's UUCP was contributed to the UUCP Debian Linux package to adapt it for the HERMES (High-Frequency Emergency and Rural Multimedia Exchange System) project, which provides UUCP HF connectivity. In the mid-2000s, UUCP over TCP/IP (often encrypted, using the SSH protocol) was proposed for use when a computer does not have a fixed IP address but is still willing to run a standard mail transfer agent (MTA) like Sendmail or Postfix. Current uses and legacy: Bang-like paths are still in use within the Usenet network, though not for routing; they are used to record, in the header of a message, the nodes through which that message has passed, rather than to direct where it will go next. "Bang path" is also used as an expression for any explicitly specified routing path between network hosts. That usage is not necessarily limited to UUCP, IP routing, email messaging, or Usenet. Current uses and legacy: The concept of delay-tolerant networking protocols was revisited in the early 2000s. Techniques similar to those used by UUCP can apply to other networks that experience delay or significant disruption.
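Returning to the mapping section above: the route-finding idea behind pathalias can be sketched as a cheapest-path search over the submitted neighbor lists. The toy example below illustrates that idea only; it is not the actual pathalias program, and the map, link costs and most host names are invented (the relay names echo the article's earlier examples).

```python
# Toy sketch of what a pathalias-style route computation does with the map
# data: pick the cheapest chain of neighbors and emit it as a bang path.
# The map, link costs and host names here are made up for illustration.
import heapq

uucp_map = {                      # host -> {neighbor: link cost}
    "localhost": {"gway": 1, "pdp10": 4},
    "gway": {"bigsite": 2},
    "pdp10": {"bigsite": 1},
    "bigsite": {"foovax": 1},
    "foovax": {"barbox": 1},
    "barbox": {},
}

def best_route(start, goal):
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, host, path = heapq.heappop(queue)
        if host == goal:
            return path
        if host in seen:
            continue
        seen.add(host)
        for nxt, c in uucp_map.get(host, {}).items():
            heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return None

route = best_route("localhost", "barbox")
print("!".join(route[1:] + ["user"]))   # gway!bigsite!foovax!barbox!user
```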
**Dyson conjecture** Dyson conjecture: In mathematics, the Dyson conjecture (Freeman Dyson 1962) is a conjecture about the constant term of certain Laurent polynomials, proved independently in 1962 by Wilson and Gunson. Andrews generalized it to the q-Dyson conjecture, proved by Zeilberger and Bressoud and sometimes called the Zeilberger–Bressoud theorem. Macdonald generalized it further to more general root systems with the Macdonald constant term conjecture, proved by Cherednik. Dyson conjecture: The Dyson conjecture states that the Laurent polynomial ∏_{1≤i≠j≤n} (1 − t_i/t_j)^{a_i} has constant term (a_1 + a_2 + ⋯ + a_n)! / (a_1! a_2! ⋯ a_n!). The conjecture was first proved independently by Wilson (1962) and Gunson (1962). Good (1970) later found a short proof, by observing that the Laurent polynomials, and therefore their constant terms, satisfy the recursion relations F(a_1, …, a_n) = ∑_{i=1}^{n} F(a_1, …, a_i − 1, …, a_n). The case n = 3 of Dyson's conjecture follows from the Dixon identity. Sills & Zeilberger (2006) and (Sills 2006) used a computer to find expressions for non-constant coefficients of Dyson's Laurent polynomial. Dyson integral: When all the values a_i are equal to β/2, the constant term in Dyson's conjecture is the value of Dyson's integral (1/(2π)^n) ∫_0^{2π} ⋯ ∫_0^{2π} ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|^β dθ_1 ⋯ dθ_n. Dyson's integral is a special case of Selberg's integral after a change of variable and has value Γ(1 + βn/2) / Γ(1 + β/2)^n, which gives another proof of Dyson's conjecture in this special case. q-Dyson conjecture: Andrews (1975) found a q-analog of Dyson's conjecture, stating that the constant term of ∏_{1≤i<j≤n} (x_i/x_j; q)_{a_i} (q x_j/x_i; q)_{a_j} is (q;q)_{a_1+⋯+a_n} / ((q;q)_{a_1} ⋯ (q;q)_{a_n}). Here (a;q)_n is the q-Pochhammer symbol. q-Dyson conjecture: This conjecture reduces to Dyson's conjecture for q = 1, and was proved by Zeilberger & Bressoud (1985), using a combinatorial approach inspired by previous work of Ira Gessel and Dominique Foata. A shorter proof, using formal Laurent series, was given in 2004 by Ira Gessel and Guoce Xin, and an even shorter proof, using a quantitative form, due to Karasev and Petrov, and independently to Lason, of Noga Alon's Combinatorial Nullstellensatz, was given in 2012 by Gyula Karolyi and Zoltan Lorant Nagy. q-Dyson conjecture: The latter method was extended, in 2013, by Shalosh B. Ekhad and Doron Zeilberger to derive explicit expressions of any specific coefficient, not just the constant term; see http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/qdyson.html for detailed references. Macdonald conjectures: Macdonald (1982) extended the conjecture to arbitrary finite or affine root systems, with Dyson's original conjecture corresponding to the case of the A_{n−1} root system and Andrews's conjecture corresponding to the affine A_{n−1} root system. Macdonald reformulated these conjectures as conjectures about the norms of Macdonald polynomials. Macdonald's conjectures were proved by (Cherednik 1995) using doubly affine Hecke algebras. Macdonald's form of Dyson's conjecture for root systems of type BC is closely related to Selberg's integral.
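The constant-term identity can be spot-checked numerically for small cases. The following sketch, assuming SymPy is available, clears denominators, extracts the constant term of the Laurent polynomial, and compares it with the multinomial coefficient; it is a check for small parameters, not a proof.

```python
# A small numerical check of the Dyson conjecture for small n and exponents;
# a sketch assuming SymPy is available, not part of any formal proof.
from math import factorial
from sympy import symbols, expand, Poly, prod

def dyson_constant_term(a):
    n = len(a)
    t = symbols(f"t1:{n+1}")
    A = sum(a)
    # Laurent polynomial prod_{i != j} (1 - t_i/t_j)^{a_i}
    expr = prod((1 - t[i] / t[j]) ** a[i]
                for i in range(n) for j in range(n) if i != j)
    # Clear denominators: multiply by (t1*...*tn)^A; the constant term of the
    # Laurent polynomial becomes the coefficient of (t1*...*tn)^A.
    cleared = expand(expr * prod(t) ** A)
    return Poly(cleared, *t).coeff_monomial(prod(ti ** A for ti in t))

a = (1, 2, 3)
predicted = factorial(sum(a)) // prod(factorial(x) for x in a)
print(dyson_constant_term(a), predicted)   # both should be 60
```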
**Pseudohypoaldosteronism** Pseudohypoaldosteronism: Pseudohypoaldosteronism (PHA) is a condition that mimics hypoaldosteronism. However, the condition is due to a failure of response to aldosterone, and levels of aldosterone are actually elevated, due to a lack of feedback inhibition. Presentation: PHA2 is clinically characterised by hypertension, hyperkalaemia, metabolic acidosis and normal renal function. Mechanism: PHA2 is also known as familial hyperkalaemic hypertension, or Gordon syndrome. The underlying genetic defect leads to increased sodium chloride reabsorption in the distal tubule in the kidney, leading to volume expansion, hypertension and lowered renin levels. The hyperkalemia found in PHA2 is proposed to be a function of diminished sodium delivery to the cortical collecting tubule (potassium excretion is mediated by the renal outer medullary potassium channel ROMK, in which sodium reabsorption plays a role). Alternatively, WNK4 mutations that result in a gain of function of the Na-Cl co-transporter may inhibit ROMK activity, resulting in hyperkalemia. Unlike in PHA1, in which aldosterone resistance is present, in PHA2 the volume expansion leads to relatively low aldosterone levels. Treatment: Treatment of severe forms of PHA1 requires relatively large amounts of sodium chloride. These conditions also involve hyperkalemia. In contrast, PHA2 (Gordon's syndrome) requires salt restriction and use of thiazide diuretics to block sodium chloride reabsorption and normalise blood pressure and serum potassium. History: This syndrome was first described by Cheek and Perry in 1958. History: Later, pediatric endocrinologist Aaron Hanukoglu reported that there are two independent forms of PHA with different inheritance patterns: a renal form with autosomal dominant inheritance exhibiting salt loss mainly from the kidneys, and a multi-system form with autosomal recessive inheritance exhibiting salt loss from the kidneys, lungs, and sweat and salivary glands. The hereditary lack of responsiveness to aldosterone could be due to at least two possibilities: 1. A mutation in the mineralocorticoid receptor that binds aldosterone, or 2. A mutation in a gene that is regulated by aldosterone. Linkage analysis on patients with the severe form of PHA excluded the possibility of linkage of the disease with the mineralocorticoid receptor gene region. Later, the severe form of PHA was discovered to be due to mutations in the genes SCNN1A, SCNN1B, and SCNN1G, which code for the epithelial sodium channel subunits α, β, and γ, respectively. A stop mutation in the SCNN1A gene has been shown to be associated with female infertility.
**Benzathine** Benzathine: Benzathine is a diamine used as a component in some medications, including benzathine phenoxymethylpenicillin and benzathine benzylpenicillin. It stabilises penicillin and prolongs its residence in the tissues when injected.
**Kac's lemma** Kac's lemma: In ergodic theory, Kac's lemma, demonstrated by mathematician Mark Kac in 1947, is a lemma stating that in a measure-preserving dynamical system the orbit of almost all the points contained in a set A of the space, of measure μ(A), returns to A within an average time inversely proportional to μ(A). The lemma extends the Poincaré recurrence theorem, which shows that those points return to A infinitely many times. Application: In physics, a dynamical system evolving in time may be described in a phase space, that is, by the evolution in time of some variables. If these variables are bounded, that is, each has a minimum and a maximum, then by a theorem due to Liouville a measure can be defined on that space, giving a measure space in which the lemma applies. As a consequence, given a configuration of the system (a point in the phase space), the average return period close to this configuration (in the neighbourhood of the point) is inversely proportional to the size of the volume considered around the configuration. Normalizing the measure of the space to 1, it becomes a probability space and the measure P(A) of a set A represents the probability of finding the system in one of the states represented by the points of that set. In this case the lemma implies that the smaller the probability of being in a certain state (or close to it), the longer the time before the system returns near that state. In formulas, if A is the region close to the starting point and T_R is the return period, its average value is ⟨T_R⟩ = τ/P(A), where τ is a characteristic time of the system in question. Application: Note that since the volume of A, and therefore P(A), depends exponentially on the number n of variables in the system (A = ϵ^n, with ϵ the infinitesimal side, hence less than 1, of the volume in n dimensions), P(A) decreases very rapidly as the number of variables of the system increases, and consequently the return period grows exponentially. In practice, as the number of variables needed to describe the system increases, the return period increases rapidly.
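The statement can be illustrated numerically with a simple ergodic, measure-preserving map. The sketch below uses an irrational rotation of the circle and an interval A with P(A) = 0.1, so the average gap between visits should come out close to 1/P(A) = 10; the specific map and set are illustrative choices, not part of the lemma.

```python
# Numerical illustration of Kac's lemma (a sketch, not a proof): for the
# ergodic irrational rotation x -> x + alpha (mod 1), which preserves
# Lebesgue measure, the mean return time to a set A should be ~ 1/P(A).
import math

alpha = math.sqrt(2) - 1          # irrational rotation number
A = (0.0, 0.1)                    # target set, P(A) = 0.1
x, visits = 0.05, []
for step in range(1, 1_000_000):
    x = (x + alpha) % 1.0
    if A[0] <= x < A[1]:
        visits.append(step)

gaps = [b - a for a, b in zip(visits, visits[1:])]
print(sum(gaps) / len(gaps))      # close to 1 / 0.1 = 10
```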
**IXP1200** IXP1200: The IXP1200 is a network processor fabricated by Intel Corporation. The processor was originally a Digital Equipment Corporation (DEC) project that had been in development since late 1996. When parts of DEC's Digital Semiconductor business was acquired by Intel in 1998 as part of an out-of-court settlement to end lawsuits each company had launched at each other for patent infringement, the processor was transferred to Intel. The DEC design team was retained and the design was completed by them under Intel. Samples of the processor were available for Intel partners since 1999, with general sample availability in late 1999. The processor was introduced in early 2000 at 166 and 200 MHz. A 232 MHz version was introduced later. The processor was later succeeded by the IXP2000, an XScale-based family developed entirely by Intel. IXP1200: The processor was intended to replace the general-purpose embedded microprocessors and specialized application-specific integrated circuit (ASIC) combinations used in network routers. The IXP1200 was designed for mid-range and high-end routers. For high-end models, the processor could be combined with others to increase the capability and performance of the router. IXP1200: The IXP1200 integrates a StrongARM SA-1100-derived core and six microengines, which were RISC microprocessors with an instruction set optimized for network packet workloads. The StrongARM core performed non-real-time functions while the microengines manipulated network packets. The processor also integrates static random access memory (SRAM) and synchronous dynamic random access memory (SDRAM) controllers, a PCI interface and an IX bus interface. IXP1200: The IXP1200 contains 6.5 million transistors and measures 126 mm2. It was fabricated in a 0.28 µm, complementary metal–oxide–semiconductor (CMOS) process with three levels of interconnect. It was packaged in a 432-ball enhanced ball grid array (EBGA). The IXP1200 was fabricated at DEC's former Hudson, Massachusetts plant.
**Radical 196** Radical 196: Radical 196 or radical bird (鳥部) meaning "bird" is one of the 6 Kangxi radicals (out of 214 radicals in total) that are composed of 11 strokes. In the Kangxi Dictionary, there are 750 characters (out of 49,030) to be found under this radical. 鸟 (5 strokes), the simplified form of 鳥, is the 114th indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China, with 鳥 listed as its associated indexing component. The simplified form 鸟 is derived from the cursive script form of 鳥. Literature: Fazzioli, Edoardo (1987). Chinese calligraphy : from pictograph to ideogram : the history of 214 essential Chinese/Japanese characters. calligraphy by Rebecca Hon Ko. New York: Abbeville Press. ISBN 0-89659-774-1. Lunde, Ken (Jan 5, 2009). "Appendix J: Japanese Character Sets" (PDF). CJKV Information Processing: Chinese, Japanese, Korean & Vietnamese Computing (Second ed.). Sebastopol, Calif.: O'Reilly Media. ISBN 978-0-596-51447-1.
**PhTx-2** PhTx-2: PhTx-2 is a toxic fraction of the venom of the Brazilian wandering spider Phoneutria nigriventer. Target: This fraction, which is responsible for most of the venom's effects, acts on voltage-gated ion channels. It is composed of nine different peptides, of which PhTx-2-5 and PhTx-2-6 activate voltage-gated ion channels. PhTx-2 has been shown to be related to the activation and delayed inactivation of neuronal sodium channels, leading to an increase in the concentration of neuronal Ca++ and the release of glutamate, resulting in the release of neurotransmitters such as acetylcholine and catecholamines. Primates are more sensitive to the PhTx-1 and PhTx-2 components than mice, by a factor of about 4 to 5. The LD50 for a 70 kg adult human is 6.3 mg, but the spider carries only 1–2 mg of venom and usually delivers about 0.4 mg.
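A quick arithmetic check on the figures quoted above (purely illustrative) makes the comparison explicit: the stated whole-body LD50 corresponds to roughly 0.09 mg per kg, and a typical 0.4 mg bite is only a small fraction of that total dose.

```python
# Arithmetic on the figures stated above; the variable names are ad hoc.
ld50_total_mg, body_kg, typical_bite_mg = 6.3, 70, 0.4
print(ld50_total_mg / body_kg)          # ~0.09 mg per kg of body mass
print(typical_bite_mg / ld50_total_mg)  # ~0.06 of the stated LD50 per bite
```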
**828 film** 828 film: 828 is a film format for still photography. Kodak introduced it in 1935, only a year after 135 film. 828 film was introduced with the Kodak Bantam, a consumer-level camera. The 828 format uses the same basic film stock as 135 film (standard 35mm film), but the film lacks the sprocket holes of 135. The standard image format is 40 × 28 mm. This provides a 30% larger image compared to 135's standard 24 × 36 mm, yet on the same film stock. Because Kodak targeted 828 at a lower-end consumer market, the film was much shorter, at a standard 8 exposures per roll. 828 film originally had one perforation per frame, much like 126 film. Unlike 135 (a single-spool cartridge film) or 126 (a dual-spool cartridge film), 828 is a roll film format, like 120 film. Like 120, it has a backing paper and frames are registered through a colored window on the back of the camera (except on the original folding Bantams, where images were registered with an index hole). 828 cameras never achieved widespread popularity, and the format had a rather limited run. Kodak's last 828 cameras were the Pony 828 in the US, produced until 1959, and the Bantam Colorsnap 3 in the UK, produced until 1963. Kodak ceased production of 828 format film in 1985. The Traid Fotron, sold in the late 1960s, used 828 format film as well. However, the film was enclosed in a proprietary pop-in cartridge and so the consumer never actually saw the film; instead, they merely returned the entire cartridge to Traid for processing. 828 film: Those wishing to photograph with an 828-format camera have a few options. As of 2005, 828 film is available for purchase on the Internet; this film is probably respooled from bulk unperforated 35mm film. Another option is to use standard 135 film, with sprocket holes, and respool it with used 828 backing paper onto old spools. The effective image size will be reduced with this method as the perforations will intrude on the image area. Finally, as with other obsolete film types, 120 film can be cut (with backing paper) and respooled onto 828 spools.
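The 30% figure quoted above follows directly from the two frame areas; a one-line check:

```python
# Quick check of the frame-area comparison made above (40 x 28 mm for 828
# versus 24 x 36 mm for 135).
area_828 = 40 * 28        # 1120 mm^2
area_135 = 24 * 36        # 864 mm^2
print(round((area_828 / area_135 - 1) * 100))   # ~30 (% larger)
```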
**Corticotropin-like intermediate peptide** Corticotropin-like intermediate peptide: Corticotropin-like intermediate [lobe] peptide (CLIP), also known as adrenocorticotropic hormone fragment 18-39 (ACTH(18-39)), is a naturally occurring, endogenous neuropeptide with a docosapeptide structure and the amino acid sequence Arg-Pro-Val-Lys-Val-Tyr-Pro-Asn-Gly-Ala-Glu-Asp-Glu-Ser-Ala-Glu-Ala-Phe-Pro-Leu-Glu-Phe. CLIP is generated as a proteolytic cleavage product of adrenocorticotropic hormone (ACTH), which in turn is a cleavage product of proopiomelanocortin (POMC). Its physiological role has been investigated in various tissues, specifically in the central nervous system. It has been suggested to function as an insulin secretagogue in the pancreas.
**Patellar plexus** Patellar plexus: The patellar plexus is a nerve plexus within the subcutaneous tissue overlying and surrounding the patella and ligamentum patellae. It is a fine network of communicating nerve fibres. It is formed by the anterior division of lateral femoral cutaneous nerve, terminal branches of the intermediate femoral cutaneous nerve, terminal branches of the medial femoral cutaneous nerve, and the infrapatellar branch of saphenous nerve.
**Spinning count** Spinning count: Spinning count is a measure of fibre fineness and distribution developed by the English. It is defined as the number of hanks of yarn that can be spun from a pound of wool. A hank of wool is 560 yards long (560 yd/lb = 1.129 km/kg). In theory a pound of 62s wool could produce 34720 yards of yarn.As it is now a relatively simple matter to measure the average fibre diameter and distribution, spinning count is being replaced with the specification of average fibre diameter in micrometers and fiber distribution in standard deviations.
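Both numbers quoted above (1.129 km/kg for a single hank and 34,720 yards for a pound of 62s wool) follow directly from the definition; a quick check:

```python
# Worked check of the spinning-count figures above.
HANK_YARDS = 560
YARD_M, POUND_KG = 0.9144, 0.45359237
print(HANK_YARDS * YARD_M / POUND_KG / 1000)  # ~1.129 km of yarn per kg per hank
print(62 * HANK_YARDS)                        # 34720 yards from a pound of 62s wool
```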
**Punk (fireworks)** Punk (fireworks): A punk is a smoldering stick used for lighting firework fuses. It is safer than a match or a lighter because it can be used from a greater distance and does not use an open flame. They are made of bamboo and a brown coating of compressed sawdust. Punks often resemble sticks of incense, and in some countries actual incense sticks are used in a similar fashion. Punks are sold at nearly all firework stands and many stands will include them for free with a purchase.
**CCG Profiles** CCG Profiles: CCG Profiles is software for designing joinery constructions for the window and door industry. History: The first version was released in 1995 under the name Alumin, as software for the design and calculation of aluminium constructions for windows. In 1999 the software was renamed Profiles and was redesigned in order to calculate PVC and timber constructions as well. Reviews and awards: At the 62nd International Fair Plovdiv, 2006, the program CCG Profiles was awarded a gold medal. Reviews and awards: Articles about the program were published in the Bulgarian magazine AMS Aspects and the Serbian magazine Aluminium & PVC magazin. According to unofficial data, CCG Profiles is one of the most popular software products for the window and door industry in Bulgaria, and a large number of companies – manufacturers and suppliers of profile systems (Etem, Blick, Veka, Weiss Profil, Exalco, Profilink, Altest, Profilko, Roplasto) – offer the software to their customers.
**Oxford Ophthalmological Congress** Oxford Ophthalmological Congress: The Oxford Ophthalmological Congress (OOC) is an annual meeting of ophthalmic surgeons at the University of Oxford.Established in 1909, the Congress is the longest running continuous gathering in the United Kingdom of ophthalmic surgeons. Until recently it was also the largest and is now second only to the expanded Congress of the Royal College of Ophthalmologists itself. It brings together some 450 representatives each year. Oxford Ophthalmological Congress: The results of the conference are summarized in the British Journal of Ophthalmology. and also, where useful to the wider profession, in the British Medical Journal. History: In 1902, Robert Walter Doyne was appointed the first Reader in Ophthalmology at the University of Oxford. The post was inaugurated thanks to a benefaction from Mrs. Margaret Ogilvie. Doyne held the chair for 11 years and was also consulting ophthalmic surgeon to the Radcliffe Infirmary in Oxford. He later founded the Oxford Eye Hospital. In 1904 he was the lead representative for Ophthalmology at the annual meeting of the British Medical Association (BMA), which was held at Oxford in the summer. This programme was such a success that he was asked to arrange a similar meeting the following year and this then became a regular event each summer. As a result, the Oxford Ophthalmological Congress was formally established in 1909, and Doyne was appointed its first Master the following year. In 2014 Parul Desai became the first woman to be appointed Master, with the original title being maintained. Programme: The Congress is a 3-day colloquium, held each July in Oxford in England, at which leading practitioners give talks on issues of interest to the profession. The main event of the Congress is the Doyne Memorial Lecture, but there are, in addition, opportunities for a number of quick-fire presentations, allowing newcomers to introduce themselves and their projects to a distinguished gathering of professional colleagues. The Founder's Cup is awarded for the best presentation and the Ian Fraser Cup is the other main award. The evenings are for socialising: an opportunity to catch up with old colleagues from other universities and hospitals, and a chance for the present and the future of the profession to meet each other. Prizes: The Founder's Cup and Ian Fraser Cup are the leading prizes of the Congress and a fair indicator of the leading British-trained ophthalmologists of the future: Founder's Cup 2005: Robert MacLaren: Cup and medal - now Professor 2011: Mandeep Singh: Cup and medal - now Professor 2013: Mandeep Singh: Cup (bis) - (see above) 2014: Samantha de Silva 2015: Alun Barnard 2016: Sofia Theodoropoulou 2017: George Cook 2018: Harry Orlans 2019: Imran Mohammed 2020: Cancelled - COVID-19 2021: Liying Low 2022: Simona Degli EspostiIan Fraser Cup: 2014: Naz Raouff 2015: Paul Flavahan 2016: Matthew Edmunds 2017: Kanmin Xue 2018: Minak Bhalla 2019: Neda Minakaran 2020: Cancelled - COVID-19 2021: Susan Mollan 2022: Hong Kai Lim
**Classic Ethernet** Classic Ethernet: Classic Ethernet is a family of 10 Mbit/s Ethernet standards, which is the first generation of Ethernet standards. In 10BASE-X, the 10 represents its maximum throughput of 10 Mbit/s, BASE indicates its use of baseband transmission, and X indicates the type of medium used. The first standard for Fast Ethernet, was approved in 1995. Fibre-based standards (10BASE-F): 10BASE-F, or sometimes 10BASE-FX, is a generic term for the family of 10 Mbit/s Ethernet standards using fiber optic cable. In 10BASE-F, the 10 represents a maximum throughput of 10 Mbit/s, BASE indicates its use of baseband transmission, and F indicates that it relies on a medium of fiber-optic cable. The technical standard requires two strands of 62.5/125 µm multimode fiber. One strand is used for data transmission while the other is used for reception, making 10BASE-F a full-duplex technology. There are three different variants of 10BASE-F: 10BASE-FL, 10BASE-FB and 10BASE-FP. Of these only 10BASE-FL experienced widespread use. With the introduction of later standards 10 Mbit/s technology has been largely replaced by faster Fast Ethernet, Gigabit Ethernet and 100 Gigabit Ethernet standards. Fibre-based standards (10BASE-F): FOIRL Fiber-optic inter-repeater link (FOIRL) is a specification of Ethernet over optical fiber. It was specially designed as a back-to-back transport between repeater hubs to decrease latency and collision detection time, thus increasing the possible network radius. It was replaced by 10BASE-FL. 10BASE-FL 10BASE-FL is the most commonly used 10BASE-F specification of Ethernet over optical fiber. In 10BASE-FL, FL stands for fiber optic link. It replaces the original fiber-optic inter-repeater link (FOIRL) specification, but retains compatibility with FOIRL-based equipment. When mixed with FOIRL equipment, the maximum segment length is limited to FOIRL's 1000 meters. Fibre-based standards (10BASE-F): 10BASE-FB The 10BASE-FB is a network segment used to bridge Ethernet hubs. Here FB abbreviates FiberBackbone. Due to the synchronous operation of 10BASE-FB, delays normally associated with Ethernet repeaters are reduced, thus allowing segment distances to be extended without compromising the collision detection mechanism. The maximum allowable segment length for 10BASE-FB is 2000 meters. This media system allowed multiple half-duplex Ethernet signal repeaters to be linked in series, exceeding the limit on the total number of repeaters that could be used in a given 10 Mbit/s Ethernet system. 10BASE-FB links were attached to synchronous signaling repeater hubs and used to link the hubs together in a half-duplex repeated backbone system that could span longer distances. Fibre-based standards (10BASE-F): 10BASE-FP In 10BASE-FP, FP denotes fibre passive. This variant calls for a non-powered optical signal coupler capable of linking up to 33 devices, with each segment being up to 500 m in length. This formed a star network centered on the signal coupler. There are no devices known to have implemented this standard.
**Assistant's Revenge** Assistant's Revenge: The Assistant's Revenge is a transposition illusion in which two performers change places. It was created by magician and inventor Robert Harbin. Description: One of the two performers, the restrainee, is placed in a standing position in a large frame and restrained there with various straps, manacles, chains and locks. The second performer, the restrainer, circles the frame, drawing a curtain first across the front, and then around one side and the back. Almost as soon as the restrainer disappears behind the frame, the restrainee appears from the other side of the apparatus, drawing back the curtain as they come. This reveals the restrainer now restrained in the frame; the two seem to have changed places by magic. Description: In practice, the roles of restrainer and restrainee have been interchangeable between magician and assistant. Sometimes it begins with the assistant restraining the magician, with the implication that in this way the assistant is gaining "revenge" for all the other tricks they do where the assistant is put in a box. However, it is also performed with the assistant being restrained at the beginning and emerging at the end, with the magician restrained as "revenge" for the opening part. Method: In Magic's Biggest Secrets Finally Revealed, it is shown that the assistant can come out from the back of the frame and switch places with the magician. Because the switch is hidden by the curtain, it looks like they magically changed places.
**Dihydroxyphenylalanine transaminase** Dihydroxyphenylalanine transaminase: In enzymology, a dihydroxyphenylalanine transaminase (EC 2.6.1.49) is an enzyme that catalyzes the chemical reaction 3,4-dihydroxy-L-phenylalanine + 2-oxoglutarate ⇌ 3,4-dihydroxyphenylpyruvate + L-glutamateThus, the two substrates of this enzyme are 3,4-dihydroxy-L-phenylalanine and 2-oxoglutarate, whereas its two products are 3,4-dihydroxyphenylpyruvate and L-glutamate. Dihydroxyphenylalanine transaminase: This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is 3,4-dihydroxy-L-phenylalanine:2-oxoglutarate aminotransferase. Other names in common use include dopa transaminase, dihydroxyphenylalanine aminotransferase, aspartate-DOPP transaminase (ADT), L-dopa transaminase, dopa aminotransferase, glutamate-DOPP transaminase (GDT), phenylalanine-DOPP transaminase (PDT), DOPA 2-oxoglutarate aminotransferase, and DOPAATS. This enzyme participates in tyrosine metabolism. It employs one cofactor, pyridoxal phosphate.
**Baseball scorekeeping** Baseball scorekeeping: Baseball scorekeeping is the practice of recording the details of a baseball game as it unfolds. Professional baseball leagues hire official scorers to keep an official record of each game (from which a box score can be generated), but many fans keep score as well for their own enjoyment. Scorekeeping is usually done on a printed scorecard and, while official scorers must adhere precisely to one of a few different scorekeeping notations, most fans exercise some amount of creativity and adopt their own symbols and styles. History: Sportswriter Henry Chadwick is generally credited as the inventor of baseball scorekeeping. His basic scorecard and notation have evolved significantly since their advent in the 1870s but they remain the basis for most of what has followed. Abbreviations and grammar: Some symbols and abbreviations are shared by nearly all scorekeeping systems. For example, the position of each player is indicated by a number: 1 Pitcher (P), 2 Catcher (C), 3 First baseman (1B), 4 Second baseman (2B), 5 Third baseman (3B), 6 Shortstop (SS), 7 Left fielder (LF), 8 Center fielder (CF), 9 Right fielder (RF), and 10 Rover or short fielder (used primarily in softball). The designated hitter (DH), if used, is marked using a zero (0). Scorecards: Scorecards vary in appearance but almost all share some basic features, including areas for: recording general game information (date and time, location, teams, etc.); listing the batting lineup (with player positions and uniform numbers); recording the play-by-play action (usually the majority of the scorecard); tallying each player's total at-bats, hits, runs, etc. at the end of the game; and listing the pitchers in the game, including their statistics, such as innings pitched, strikeouts, earned runs, and bases on balls. Usually two scorecards (one for each team) are used to score a game. Traditional scorekeeping: There is no authoritative set of rules for scorekeeping. The traditional method has many variations in its symbols and syntax, but this is a typical example. In the traditional method, each cell in the main area of the scoresheet represents the "lifetime" of an offensive player, from at-bat, to baserunner, to being put out, scoring a run, or being left on base. Traditional scorekeeping: Outs When an out is recorded, the combination of defensive players executing that out is recorded. For example: If a batter hits a ball on the ground to the shortstop, who throws the ball to the first baseman to force the first out, it would be noted on the scoresheet as 6–3, with 6 for the shortstop and 3 for the first baseman. Traditional scorekeeping: If the next batter hits a ball to the center fielder who catches it on the fly for the second out, it would be noted as F8, with F for flyout and 8 for the center fielder. (In some systems, the letter 'F' is reserved for foul outs. A fly out would therefore be scored simply as '8'.) Other systems append a lower-case "ƒ" for foul balls, as in F9ƒ. If the following batter strikes out, it would be noted as K, with the K being the standard notation for a strikeout. If the batter did not swing at the third strike, a "backwards K" is traditionally used. Other forms include "Kc" for a called third strike with no swing, or "Ks" if the batter did swing. A slash should be drawn across the lower right corner to indicate the end of the inning.
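The position numbering above drives the defensive notations used throughout the rest of this article ("6–3", "5–4–3" and so on). A minimal sketch of that mapping, with invented helper names purely for illustration:

```python
# Sketch of the standard fielding-position numbering used in the notations
# above, plus a tiny helper that writes a ground-out sequence such as "6-3".
POSITIONS = {
    1: "pitcher", 2: "catcher", 3: "first baseman", 4: "second baseman",
    5: "third baseman", 6: "shortstop", 7: "left fielder",
    8: "center fielder", 9: "right fielder", 10: "rover (softball)",
}

def ground_out(*fielders: int) -> str:
    """E.g. ground_out(6, 3) -> '6-3' (shortstop to first baseman)."""
    return "-".join(str(f) for f in fielders)

def describe(play: str) -> str:
    return " to ".join(POSITIONS[int(f)] for f in play.split("-"))

print(ground_out(6, 3), "=", describe("6-3"))      # shortstop to first baseman
print(ground_out(5, 4, 3), "=", describe("5-4-3")) # a common double play
```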
Traditional scorekeeping: If a runner is put out while on base, the next basepath is filled-in halfway, then ended with a short stroke perpendicular to the basepath. A notation is then added to indicate how the runner was out, along with the defensive combination that resulted in the out: CS means the runner was caught trying to steal the base ahead. The notation for a runner caught trying to steal second is normally 2–4 or 2–6 for a catcher-to-second-base play. Traditional scorekeeping: PK means the runner was picked off by the pitcher while he was off the base. This almost always occurs at first base, so the notation is usually 1–3. DP or TP means the runner was out as part of a double or triple play. Usually, the full notation is left on the batter's line (the last out of the play); 6–4–3, 4–6–3, and 5–4–3 are common double-play sequences. FC means the out was the result of a fielder's choice to get out the runner on base rather than force out the batter. This can also be indicative of an unsuccessful attempt at a double or triple play as such a move is often the first move to make such a play. Reaching base If a batter reaches first base, either due to a walk, a hit, or an error, the basepath from home to first base is drawn, and the method described in the lower-righthand corner. For example: If a batter gets a base hit, the basepath is drawn and 1B or – (for a single-base hit) is written below. Traditional scorekeeping: If the batter hits a double, however, the basepaths from home to first and first to second are drawn, and 2B or = is written above. This change of position is done to indicate that the runner did not advance on another hit. If the batter hits a triple, the basepaths are drawn from home to first to second to third and 3B or ≡ is written in the upper lefthand corner for the same reason. Traditional scorekeeping: If a batter gets a walk, the basepath is drawn and BB (for Base on Balls) or W (for Walk) is written below. IBB is written for an intentional base on balls. Other indicators may be used if the batter is awarded first base for other reasons (HBP for being hit by a pitch, CI for catcher's interference, etc.). If the batter reaches first base due to fielder's choice (ex. the shortstop decides to force out the runner heading to second instead), the basepath is drawn and FC is written along with the sequence of the defense's handling of the ball, e.g., 6–4. If the batter reaches base because the first baseman dropped the throw from the shortstop, the basepath is drawn and E3 (an error committed by the first baseman) is written below. Traditional scorekeeping: If a batter gets a base hit then in the same play advances due to a fielding error by the second baseman (ex. he misses the throw from the outfield, allowing the ball to get away), these are written as two events. First, the path to first is drawn with a 1B noted as for a single, then the path to second is drawn with an E4 noted above. This correctly describes the scoring—a single plus an error. Traditional scorekeeping: Advancing When a runner advances due to a following batter, it can be noted by the batting position or the uniform number of the batter that advanced the runner. This kind of information is not always included by amateur scorers, and there is a lot of variation in notation. 
For example: If a runner on first is advanced to third base due to action from the 4th batter, number 22, the paths from first to second to third are drawn in and either a 4 or 22 could be written in the upper left hand corner. Whether that action was a base hit or a sacrifice will be noted on the batter's annotation. Traditional scorekeeping: If a runner steals second while the 7th batter, number 32, is up to bat, the path from first to second would be drawn and SB followed by either a 7 or 32 could be written in the upper right hand corner. Note that Defensive Indifference (no attempt to throw out the runner) is denoted differently from a Stolen Base. Traditional scorekeeping: For a batter to be credited with advancing the runner, the base advance must be the result of the batter's action. If a runner advances beyond that due to an error (such as a bad catch) or a fielder's choice (such as a throw to tag out a runner ahead of him), the advance due to the batter's action and the advance due to the other action are noted separately. Traditional scorekeeping: To advance a player home to score a run, a runner must touch all 4 bases and cross all four base paths, therefore the scorer draws a complete diamond and, usually, fills it in. However, some scorers only fill in the diamond on a home run; they might then place a small dot in the center of the diamond to indicate a run scored but not a home run. The player that bats the runner home (or the other event such as an error that allows the runner to reach home) is noted in the lower left hand corner. Traditional scorekeeping: Miscellaneous End of an inning – When the offensive team has made three outs, a slash is drawn diagonally across the lower right corner of the cell of the third out. After each half-inning, the total number of hits and runs can be noted at the bottom of the column. After the game, totals can be added up for each team and each batter. Traditional scorekeeping: Extra innings – There are extra columns on a scoresheet that can be used if a game goes to extra innings, but if a game requires more columns, another scorecard will be needed for each team. Substitutions – When a substitution is made, a vertical line is drawn after the last at-bat for previous player, and the new player's name and number is written in the second line of the Player Information section. A notation of PH or PR should be made for pinch hit and pinch run situations. Batting around – After the ninth batter has batted, the record of the first batter should be noted in the same column. However, if more than nine batters bat in a single inning, the next column will be needed. Draw a diagonal line across the lower left hand corner, to indicate that the original column is being extended. Traditional scorekeeping: Example The scorecard on the right describes the August 8, 2000 game between the Milwaukee Brewers and San Francisco Giants, played at Pacific Bell Park, in San Francisco. The scorecard describes the following events in the top of the 1st inning: 1st Batter, #10 Ronnie Belliard (the Brewers' 2nd baseman) grounded the ball to the Giants' 3rd baseman (5), who fielded the ball and threw it to 1st base (3) for the out. The play is recorded as "5-3." The notation ("3-2") in the lower right corner of the "Belliard:Inning 1 cell" indicates the pitch count at the time Belliard put the ball into play (3 balls and 2 strikes). 
Traditional scorekeeping: 2nd batter, #9 Marquis Grissom (the Brewers' center fielder) grounded out 5-3 (3rd baseman to 1st baseman) on a 2-ball, 2-strike count. Traditional scorekeeping: 3rd batter, #5 Geoff Jenkins (the Brewers' left fielder) grounded the ball to the 1st baseman (3) who took the ball to the base himself for an unassisted put out (3U).One hard and fast rule of baseball scorekeeping is that every out and every time a baserunner advances must be recorded. The scoring can get a little more complicated when a batter who has reached base, is then "moved up" (advances one or more bases) by his own actions or by the actions of a hitter behind him. This is demonstrated in the Giants' first inning: 1st Batter, #7 Marvin Benard (the Giants' center fielder) hit a fly ball that was caught by the right fielder (9) for an out. Other scorekeepers might abbreviate this out using "F9" for fly out to right field. Traditional scorekeeping: 2nd batter, #32 Bill Mueller (the Giants' 3rd baseman) hit a single: he hit the ball into play and made it safely to first base. This is denoted by the single line running from "home" to "1st" next to the diamond in that cell. Commonly, scorekeepers will place some abbreviation, such as "1B-7", to designate a single hit to left field. In addition, many scorekeepers also place a line across the diamond to show the actual path of the baseball on the field. Traditional scorekeeping: 3rd batter, #25 Barry Bonds (the Giants' left fielder, incorrectly noted as a right fielder ("RF") on this scorecard) struck out (K) on a 1-ball, 2-strike count. At some point during Bonds' at-bat, Mueller, the runner on 1st base, stole 2nd base. This advancement was recorded in Mueller's cell by writing the notation "SB" next to the upper-right edge of the diamond. Traditional scorekeeping: 4th batter, #21 Jeff Kent (the Giants' 2nd baseman) hit a fly ball that was caught by the Brewers' right fielder (9) for the third and final out of the inning. Mueller was stranded on 2nd base.Stranded baserunners might be notated as being "LOB" (Left On Base) for that inning, with a number from 1-3 likely at the bottom of the inning column. For example, if two runners are left on base after the 3rd out, the scorekeeper might note "LOB:2", then at the end of the game calculate a total number of LOB for the game. Traditional scorekeeping: A more complicated example of scorekeeping is the record of the bottom of the 5th inning: 1st batter, #6 J. T. Snow (the Giants' 1st baseman and the fifth hitter in the Giants' lineup) advanced to first base on a walk (base-on-balls; BB). 2nd batter, #23 Ellis Burks (the Giants' right fielder) grounded out 5-3 (3rd baseman to 1st baseman). In the process, Snow advanced to second base. 3rd batter, #25 Rich Aurilia (the Giants' shortstop) flied out to the center fielder (8) for the second out of the inning. 4th batter, #29 Bobby Estalella (the Giants' catcher) drew a walk (BB) to advance to first base. Snow remained at 2nd base. Traditional scorekeeping: 5th batter, #48 Russ Ortiz (the Giants' starting pitcher) hit a single (diagonal single line drawn next to the lower-right side of the diamond). Snow advanced to home plate on that single (the diagonal line drawn next to the lower left side of the diamond in Snow's "cell") to score the game's only run. 
Ortiz is given credit for an RBI (run batted in), denoted by the "R" written in the bottom left corner of his cell (incorrectly, since "R" indicates a 'run scored' and would more appropriately have been noted in Snow's cell; "RBI" should have been written in Ortiz's cell). Estalella advanced from 1st to 3rd base on Ortiz's single (the diagonal line drawn next to the upper left side of the diamond in Estalella's "cell"). Traditional scorekeeping: 6th batter Marvin Benard, up for the third time in this game, drew a walk (BB). Ortiz advanced to 2nd base on that walk (indicated by "BB" written on the "1st to 2nd" portion of the diamond in his cell). Traditional scorekeeping: 7th batter Bill Mueller hit a ground ball to the shortstop (6), who then threw the ball to the 2nd baseman (4) to force out Benard at 2nd base (6-4) for the third and final out of the inning. Because the out could also have been made by throwing the ball to 1st base to retire the batter, this is scored as a fielder's choice ("FC").
Here are some examples: In the "before the play" slot: CS2(26): runner caught stealing 2B (catcher to shortstop) 1-2/SB: runner on 1B steals 2BIn "during the play" slot: 53: ground-out to third baseman ("5-3" in the traditional system) E5/TH1: error on the third baseman (on his throw to 1B)In the "after the play" slot: 2-H: runner on 2B advances to home (scores) 1XH(92): runner on 1B thrown out going home (right fielder to catcher) Reisner Scorekeeping: Project Scoresheet addressed a lack of precision in the traditional scorekeeping method, and introduced several new features to the scorecard. But while the Project Scoresheet language continues to be the baseball research community's standard for storing play-by-play game data in computers, the scorecards it yields are difficult to read due to the backtracking required to reconstruct a mid-inning play. Hence, despite its historical importance, the system has never gained favor with casual fans. Reisner Scorekeeping: In 2002 Alex Reisner developed a new scorekeeping method that took the language of Project Scoresheet but redefined the way the event boxes on the scorecard worked, virtually eliminating the backtracking required by both Project Scoresheet and the traditional method. The system also makes it easy to reconstruct any mid-inning situation, a difficult task with the other two systems (for this reason it was originally promoted as "Situational Scorekeeping"). Reisner Scorekeeping: Scorecard A Reisner scorecard looks like a cross between a traditional and Project Scoresheet scorecard. It has a diamond (representing the field, as in the traditional system) and a single line for recording action during and after the play (like Project Scoresheet's second and third lines). The diamond in each event box is used to show which bases are occupied by which players at the start of an at-bat. Stolen bases, pickoffs, and other "before the play" events are also marked on the diamond, so that one can see the "situation" in which an at-bat took place by simply glancing at the scorecard. Computer Generated Scorecards: With the advent of online baseball event application programming interface feeds, it is now possible to have software generate and update a scorecard in real-time. Such scorecards can include a level of detail and precision which would not be practical for a human keeping score manually.
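The Project Scoresheet "after the play" strings quoted above have a regular enough shape that a toy interpreter can translate the two example forms back into words. The sketch below is based only on those examples and is not a full Project Scoresheet parser; the helper names are invented.

```python
# A toy interpreter for the "after the play" runner-advancement strings
# quoted above ("2-H" and "1XH(92)"); it covers only those example forms.
import re

BASES = {"1": "1B", "2": "2B", "3": "3B", "B": "batter", "H": "home"}
FIELDERS = {"1": "P", "2": "C", "3": "1B", "4": "2B", "5": "3B",
            "6": "SS", "7": "LF", "8": "CF", "9": "RF"}

def describe_advance(token: str) -> str:
    m = re.fullmatch(r"([B123])([-X])([123H])(?:\((\d+)\))?", token)
    if not m:
        return f"unrecognized token: {token}"
    start, kind, end, fielders = m.groups()
    verb = "advances to" if kind == "-" else "thrown out at"
    text = f"runner on {BASES[start]} {verb} {BASES[end]}"
    if fielders:
        text += " (" + "-".join(FIELDERS[f] for f in fielders) + ")"
    return text

print(describe_advance("2-H"))       # runner on 2B advances to home
print(describe_advance("1XH(92)"))   # runner on 1B thrown out at home (RF-C)
```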
**Nanotribology** Nanotribology: Nanotribology is the branch of tribology that studies friction, wear, adhesion and lubrication phenomena at the nanoscale, where atomic interactions and quantum effects are not negligible. The aim of this discipline is to characterize and modify surfaces for both scientific and technological purposes. Nanotribology: Nanotribological research has historically involved both direct and indirect methodologies. Microscopy techniques, including the Scanning Tunneling Microscope (STM), the Atomic Force Microscope (AFM) and the Surface Forces Apparatus (SFA), have been used to analyze surfaces with extremely high resolution, while indirect methods such as computational methods and the quartz crystal microbalance (QCM) have also been extensively employed. By changing the topology of surfaces at the nanoscale, friction can be reduced or enhanced more effectively than by macroscopic lubrication and adhesion; in this way, superlubricity and superadhesion can be achieved. In micro- and nano-mechanical devices, problems of friction and wear, which are critical due to the extremely high surface-to-volume ratio, can be solved by covering moving parts with superlubricant coatings. On the other hand, where adhesion is an issue, nanotribological techniques offer a possibility to overcome such difficulties. History: Friction and wear have been technological issues since ancient times. On the one hand, the scientific approach of the last centuries towards the comprehension of the underlying mechanisms was focused on macroscopic aspects of tribology. On the other hand, in nanotribology the systems studied are composed of nanometric structures, where volume forces (such as those related to mass and gravity) can often be considered negligible compared to surface forces. Scientific equipment to study such systems has been developed only in the second half of the 20th century. In 1969 the very first method to study the behavior of a molecularly thin liquid film sandwiched between two smooth surfaces, through the SFA, was developed. From this starting point, in the 1980s researchers would employ other techniques to investigate solid-state surfaces at the atomic scale. History: Direct observation of friction and wear at the nanoscale started with the first Scanning Tunneling Microscope (STM), which can obtain three-dimensional images of surfaces with atomic resolution; this instrument was developed by Gerd Binnig and Heinrich Rohrer in 1981. STM can study only conductive materials, but with the invention of the Atomic Force Microscope (AFM) by Binnig and his colleagues in 1985, non-conductive surfaces can also be observed. Afterwards, AFMs were modified to obtain data on normal and frictional forces: these modified microscopes are called Friction Force Microscopes (FFM) or Lateral Force Microscopes (LFM). The term "nanotribology" was first used in the titles of publications from 1990 and 1991, in the title of a major review paper published in Nature in 1995, and in the title of a major nanotribology handbook in 1995. From the beginning of the 21st century, computer-based atomistic simulation methods have been employed to study the behaviour of single asperities, even those composed of only a few atoms. Thanks to these techniques, the nature of bonds and interactions in materials can be understood with high spatial and temporal resolution.
Surface analysis: Surface forces apparatus The SFA (Surface Forces Apparatus) is an instrument used for measuring physical forces between surfaces, such as adhesion and capillary forces in liquids and vapors, and van der Waals interactions. Since 1969, the year in which the first apparatus of this kind was described, numerous versions of this tool have been developed. Surface analysis: SFA 2000, which has fewer components and is easier to use and clean than previous versions of the apparatus, is one of the most advanced instruments currently used for nanotribological studies of thin films, polymers, nanoparticles and polysaccharides. SFA 2000 has a single cantilever able to generate movements spanning seven orders of magnitude, from mechanically coarse positioning with coils to electrically fine positioning with piezoelectric materials. The extra-fine control gives the user a positional accuracy better than 1 Å. The sample is confined between two molecularly smooth mica surfaces, to which it adheres epitaxially. Normal forces can be measured by a simple relation: Fnormal(D) = k(ΔDapplied − ΔDmeasured), where ΔDapplied is the displacement applied using one of the control methods mentioned before, k is the spring constant and ΔDmeasured is the actual deformation of the sample measured by MBI (multiple beam interferometry). Moreover, if ∂F(D)/∂D > k then there is a mechanical instability and therefore the lower surface will jump to a more stable region of the upper surface. The adhesion force is then measured with the following formula: Fadhesion = k·ΔDjump. Using the DMT model, the interaction energy per unit area can be calculated: Wflat(D) = Fcurved(D)/(2πR), where R is the radius of curvature and Fcurved(D) is the force between cylindrically curved surfaces. Surface analysis: Scanning probe microscopy SPM techniques such as AFM and STM are widely used in nanotribology studies. The Scanning Tunneling Microscope is used mostly for morphological and topological investigation of a clean conductive sample, because it is able to give an image of the surface with atomic resolution. Surface analysis: The Atomic Force Microscope is a powerful tool for studying tribology at a fundamental level. It provides ultra-fine tip-surface contact with highly refined control over motion and atomic-level precision of measurement. The microscope basically consists of a highly flexible cantilever with a sharp tip, which is the part in contact with the sample; the tip's cross-section should ideally be of atomic size but is in practice nanometric (its radius varies from 10 to 100 nm). In nanotribology the AFM is commonly used for measuring normal and friction forces with piconewton resolution. The tip is brought close to the sample's surface; consequently, forces between the last atoms of the tip and the sample deflect the cantilever proportionally to the intensity of these interactions. Normal forces bend the cantilever vertically above or below the equilibrium position, depending on the sign of the force. The normal force can be calculated by means of the following equation: Fnormal = k·ΔV/σ, where k is the spring constant of the cantilever, ΔV is the output of the photodetector (an electric signal directly proportional to the displacement of the cantilever) and σ is the optical-lever sensitivity of the AFM. On the other hand, lateral forces can be measured with the FFM, which is fundamentally very similar to the AFM. The main difference lies in the tip motion, which slides perpendicularly to its axis. These lateral forces, i.e.
friction forces in this case, result in a twisting of the cantilever, which is controlled to ensure that only the tip touches the surface and not other parts of the probe. At every step the twist is measured and related to the frictional force by this formula: Ffrictional = kϕ·ΔV/(2·heff·δ), where ΔV is the output voltage, kϕ is the torsional constant of the cantilever, heff is the height of the tip plus the cantilever thickness and δ is the lateral deflection sensitivity. Since the tip is mounted on a compliant apparatus, the cantilever, the load can be specified and so the measurement is made in load-control mode; but in this way the cantilever suffers snap-in and snap-out instabilities, so in some regions measurements cannot be made stably. These instabilities can be avoided with displacement-controlled techniques, one of which is interfacial force microscopy. The tip can be in contact with the sample during the whole measurement process, which is called contact mode (or static mode), or it can be oscillated, which is called tapping mode (or dynamic mode). Contact mode is commonly applied to hard samples, on which the tip cannot leave any sign of wear, such as scars and debris. For softer materials, tapping mode is used to minimize the effects of friction. In this case the tip is vibrated by a piezo and taps the surface at the resonant frequency of the cantilever, i.e. 70-400 kHz, with an amplitude of 20-100 nm, high enough to prevent the tip from getting stuck to the sample because of the adhesion force. The atomic force microscope can be used as a nanoindenter in order to measure the hardness and Young's modulus of the sample. For this application, the tip is made of diamond and is pressed against the surface for about two seconds; the procedure is then repeated with different loads. The hardness is obtained by dividing the maximum load by the area of the residual imprint of the indenter, which can differ from the indenter cross-section because of sink-in or pile-up phenomena. The Young's modulus can be calculated using the Oliver and Pharr method, which gives a relation between the stiffness of the sample, as a function of the indentation area, and its Young's modulus and Poisson's ratio. Surface analysis: Atomistic simulations Computational methods are particularly useful in nanotribology for studying various phenomena, such as nanoindentation, friction, wear or lubrication. In an atomistic simulation, every single atom's motion and trajectory can be tracked with very high precision, and this information can be related to experimental results in order to interpret them, to confirm a theory, or to access phenomena that are invisible to direct study. Moreover, many experimental difficulties, such as sample preparation and instrument calibration, do not exist in an atomistic simulation. Theoretically, any surface can be created, from a flawless one to the most disordered. As in the other fields where atomistic simulations are used, the main limitations of these techniques lie in the lack of accurate interatomic potentials and in limited computing power. For this reason the simulated time spans are very often short, and the time step is limited to about 1 fs for fundamental simulations, up to 5 fs for coarse-grained models. It has been demonstrated with atomistic simulations that the attractive force between the tip and the sample's surface in an SPM measurement produces a jump-to-contact effect.
This phenomenon has a completely different origin from the snap-in that occurs in load-controlled AFM, because the latter originates from the finite compliance of the cantilever. The origin of the atomic resolution of an AFM was also discovered: it has been shown that covalent bonds form between the tip and the sample which dominate the van der Waals interactions and are responsible for such high resolution. Simulating an AFM scan in contact mode, it has been found that a vacancy or an adatom can be detected only by an atomically sharp tip. In non-contact mode, by contrast, vacancies and adatoms can be distinguished with the so-called frequency modulation technique even with a tip that is not atomically sharp. In conclusion, atomic resolution with an AFM can be achieved only in non-contact mode. Properties: Friction Friction, the force opposing relative motion, is usually idealized by means of empirical laws such as Amontons' first and second laws and Coulomb's law of friction. At the nanoscale, however, such laws may lose their validity. For instance, Amontons' second law states that friction is independent of the apparent area of contact. Surfaces, in general, have asperities that reduce the real area of contact, and therefore minimizing such area can minimize friction. During the scanning process with an AFM or FFM, the tip, sliding on the sample's surface, passes through both low (stable) and high potential energy points, determined, for instance, by atomic positions or, on a larger scale, by surface roughness. Without considering thermal effects, the only force that makes the tip overcome these potential barriers is the spring force given by the support: this causes the stick-slip motion. Properties: At the nanoscale, the friction coefficient depends on several conditions. For example, under light loads it tends to be lower than at the macroscale, while under higher loads it tends to be similar to the macroscopic one. Temperature and relative sliding speed can also affect friction. Properties: Lubricity and superlubricity at the atomic scale Lubrication is the technique used to reduce friction between two surfaces in mutual contact. Generally, lubricants are fluids introduced between these surfaces in order to reduce friction. However, in micro- or nano-devices, lubrication is often required and traditional lubricants become too viscous when confined in layers of molecular thickness. A more effective technique is based on thin films, commonly produced by Langmuir–Blodgett deposition or by self-assembled monolayers. Thin films and self-assembled monolayers are also used to increase adhesion phenomena. Properties: Two thin films made of perfluorinated lubricants (PFPE) with different chemical compositions were found to have opposite behaviors in a humid environment: hydrophobicity increases the adhesive force and decreases the lubrication of films with nonpolar end groups, whereas hydrophilicity has the opposite effects for films with polar end groups. Superlubricity "Superlubricity is a frictionless tribological state sometimes occurring in nanoscale material junctions". At the nanoscale, friction tends to be non-isotropic: if two surfaces sliding against each other have incommensurate surface lattice structures, each atom is subject to a different amount of force from different directions. In this situation the forces can offset each other, resulting in almost zero friction. Properties: The very first proof of this was obtained by measurements with a UHV-STM.
When the lattices were incommensurate, friction was not observed; however, when the surfaces were commensurate, a friction force was present. At the atomic level, these tribological properties are directly connected with superlubricity. An example of this is given by solid lubricants, such as graphite, MoS2 and Ti3SiC2: this can be explained by the low resistance to shear between layers due to the stratified structure of these solids. Even if at the macroscopic scale friction involves multiple microcontacts of different size and orientation, based on these experiments one can speculate that a large fraction of the contacts will be in the superlubric regime. This leads to a great reduction in the average friction force, explaining why such solids have a lubricating effect. Properties: Other experiments carried out with the LFM show that the stick-slip regime is not visible if the applied normal load is negative: the sliding of the tip is smooth and the average friction force seems to be zero. Other mechanisms of superlubricity may include: (a) thermodynamic repulsion due to a layer of free or grafted macromolecules between the bodies, so that the entropy of the intermediate layer decreases at small distances due to stronger confinement; (b) electrical repulsion due to an external electrical voltage; (c) repulsion due to the electrical double layer; (d) repulsion due to thermal fluctuations. Properties: Thermolubricity at the atomic scale With the introduction of AFM and FFM, thermal effects on lubricity at the atomic scale could no longer be considered negligible. Thermal excitation can result in multiple jumps of the tip in the direction of sliding and backward. When the sliding velocity is low, the tip takes a long time to move between low potential energy points, and thermal motion can cause it to make many spontaneous forward and reverse jumps: therefore, the lateral force required to make the tip follow the slow support motion is small, and the friction force becomes very low. Properties: For this situation the term thermolubricity was introduced. Properties: Adhesion Adhesion is the tendency of two surfaces to remain attached to one another. Interest in studying adhesion at the micro- and nanoscale increased with the development of the AFM, which can be used in nanoindentation experiments in order to quantify adhesion forces. According to these studies, hardness was found to be constant with film thickness, and it is given by: H = Pc/Ac, where Ac is the area of the indentation and Pc is the load applied to the indenter. Properties: Stiffness, defined as S = dP/dh, where h is the indentation depth, can be obtained from rc, the radius of the indenter-contact line: S = 2·E′·rc, with 1/E′ = (1 − νi²)/Ei + (1 − νs²)/Es, where E′ is the reduced Young's modulus, Ei and νi are the indenter's Young's modulus and Poisson's ratio, and Es and νs are the same parameters for the sample. However, rc cannot always be determined by direct observation; it could be deduced from the value of hc (the contact depth of the indentation), but this is possible only if there is no sink-in or pile-up (perfect Sneddon surface conditions). If there is sink-in, for example, and the indenter is conical, the situation is described below.
From the geometry of the indentation, we can see that h = hc + he and rc = hc·tanα. From Oliver and Pharr's study, he = ε·h, where ε depends on the geometry of the indenter: ε = 1 − 2/π if it is conical, ε = 1/2 if it is spherical and ε = 1 if it is a flat cylinder. Properties: Oliver and Pharr, therefore, did not consider the adhesive force, but only the elastic force, so they concluded: Fe = (2/π)·E′·tanα·(h − hf)². Considering the adhesive force, P = Fe + Fa. Introducing Wa as the adhesion energy and γa as the work of adhesion, Wa = γa·π·(tanα/cosα)·hc², obtaining Fa = 2·γa·π·(tanα/cosα)·(h − hf). In conclusion: P = (2/π)·E′·tanα·(h − hf)² + 2·γa·π·(tanα/cosα)·(h − hf). The consequence of the additional adhesion term is that during loading the indentation depth is greater when adhesion is not negligible, since adhesion forces contribute to the work of indentation; on the other hand, during unloading, adhesion forces oppose the indentation process. Properties: Adhesion is also related to capillary forces acting between two surfaces in the presence of humidity. Applications of adhesion studies This phenomenon is very important in thin films, because a mismatch between the film and the substrate can cause internal stresses and, consequently, interface debonding. Properties: When a normal load is applied with an indenter, the film deforms plastically until the load reaches a critical value: an interfacial fracture starts to develop. The crack propagates radially until the film buckles. On the other hand, adhesion has also been investigated for its biomimetic applications: several creatures, including insects, spiders, lizards and geckos, have developed a unique climbing ability that researchers are trying to replicate in synthetic materials. Properties: It was shown that a multi-level hierarchical structure produces adhesion enhancement: a synthetic adhesive replicating the organization of gecko feet was created using nanofabrication techniques and self-assembly. Properties: Wear Wear is the removal and deformation of a material caused by mechanical action. At the nanoscale, wear is not uniform. The mechanism of wear generally begins at the surface of the material. The relative motion of two surfaces can cause indentations through the removal and deformation of surface material; continued motion can eventually grow these indentations in both width and depth. At the macroscale, wear is measured by quantifying the volume (or mass) of material lost or by measuring the ratio of wear volume to dissipated energy. At the nanoscale, however, measuring such a volume can be difficult, and therefore it is possible to evaluate wear by analyzing modifications of the surface topology, generally by means of AFM scanning.
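The hardness and stiffness relations above translate directly into a small calculation. The following Python sketch is illustrative only; the numerical values and function names are hypothetical rather than taken from any cited experiment. It computes H = Pc/Ac, the reduced modulus E′ from the contact stiffness S = 2·E′·rc, and the sample modulus Es from 1/E′ = (1 − νi²)/Ei + (1 − νs²)/Es, assuming a diamond indenter.

```python
import math

def reduced_modulus(S, r_c):
    """Reduced Young's modulus E' from the contact stiffness S = 2*E'*r_c."""
    return S / (2.0 * r_c)

def sample_modulus(E_prime, E_i, nu_i, nu_s):
    """Solve 1/E' = (1 - nu_i^2)/E_i + (1 - nu_s^2)/E_s for the sample modulus E_s."""
    inv_Es_term = 1.0 / E_prime - (1.0 - nu_i**2) / E_i
    return (1.0 - nu_s**2) / inv_Es_term

def hardness(P_c, A_c):
    """Hardness H = P_c / A_c (maximum load over residual contact area)."""
    return P_c / A_c

if __name__ == "__main__":
    # Hypothetical nanoindentation values, chosen only for illustration.
    P_c = 1.0e-3              # maximum load, N
    r_c = 2.0e-6              # contact radius, m
    A_c = math.pi * r_c**2    # contact area for a circular contact, m^2
    S = 4.0e5                 # unloading stiffness dP/dh, N/m
    E_i, nu_i = 1141e9, 0.07  # diamond indenter (literature values)
    nu_s = 0.3                # assumed Poisson's ratio of the sample

    E_prime = reduced_modulus(S, r_c)
    print(f"H  = {hardness(P_c, A_c):.3e} Pa")
    print(f"E' = {E_prime:.3e} Pa")
    print(f"Es = {sample_modulus(E_prime, E_i, nu_i, nu_s):.3e} Pa")
```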
**Linear function** Linear function: In mathematics, the term linear function refers to two distinct but related notions: In calculus and related areas, a linear function is a function whose graph is a straight line, that is, a polynomial function of degree zero or one. To distinguish such a linear function from the other concept, the term affine function is often used. In linear algebra, mathematical analysis, and functional analysis, a linear function is a linear map. As a polynomial function: In calculus, analytic geometry and related areas, a linear function is a polynomial of degree one or less, including the zero polynomial (the latter not being considered to have degree zero). When the function is of only one variable, it is of the form f(x) = ax + b, where a and b are constants, often real numbers. The graph of such a function of one variable is a nonvertical line. a is frequently referred to as the slope of the line, and b as the intercept. If a > 0 then the gradient is positive and the graph slopes upwards. If a < 0 then the gradient is negative and the graph slopes downwards. For a function f(x1,…,xk) of any finite number of variables, the general formula is f(x1,…,xk) = b + a1x1 + ⋯ + akxk, and the graph is a hyperplane of dimension k. A constant function is also considered linear in this context, as it is a polynomial of degree zero or is the zero polynomial. Its graph, when there is only one variable, is a horizontal line. In this context, a function that is also a linear map (the other meaning) may be referred to as a homogeneous linear function or a linear form. In the context of linear algebra, the polynomial functions of degree 0 or 1 are the scalar-valued affine maps. As a linear map: In linear algebra, a linear function is a map f between two vector spaces such that f(x + y) = f(x) + f(y) and f(ax) = af(x). Here a denotes a constant belonging to some field K of scalars (for example, the real numbers) and x and y are elements of a vector space, which might be K itself. In other terms, the linear function preserves vector addition and scalar multiplication. Some authors use "linear function" only for linear maps that take values in the scalar field; these are more commonly called linear forms. The "linear functions" of calculus qualify as "linear maps" when (and only when) f(0, ..., 0) = 0, or, equivalently, when the constant b equals zero in the one-degree polynomial above. Geometrically, the graph of the function must pass through the origin.
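To make the distinction between the two notions concrete, the short Python sketch below (illustrative only; the function names are the author's own) numerically checks additivity and homogeneity for f(x) = 2x + 3 and g(x) = 2x, showing that only the map with zero constant term qualifies as a linear map even though both have straight-line graphs.

```python
def is_linear_map(f, samples=((1.0, 2.0), (-3.0, 0.5)), scalars=(2.0, -1.5), tol=1e-9):
    """Numerically check additivity f(x+y)=f(x)+f(y) and homogeneity f(a*x)=a*f(x)."""
    for x, y in samples:
        if abs(f(x + y) - (f(x) + f(y))) > tol:
            return False
        for a in scalars:
            if abs(f(a * x) - a * f(x)) > tol:
                return False
    return True

f = lambda x: 2 * x + 3   # affine: straight-line graph, but not a linear map
g = lambda x: 2 * x       # homogeneous: its graph passes through the origin

print(is_linear_map(f))   # False, because f(0) = 3 != 0
print(is_linear_map(g))   # True
```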
**Pianoteq** Pianoteq: Pianoteq is a software synthesizer that features real-time MIDI control of digital physically modeled pianos and related instruments, including electric piano, harp, harpsichord, fortepiano, and various metallophones. It is usable as a stand-alone program for Microsoft Windows, Mac OS X and Linux (including the ARM architecture), or as a VSTi plug-in for use with digital audio workstations. History and technology: The original version of the program was released in August 2006. The software's physically modeled synthesis creates sound from scratch, using several megabytes of mathematical algorithms (Fourier construction) to generate electric piano and acoustic piano sounds that can be manipulated analogously to those produced by their material counterparts. Pianoteq's modeled sounds are supplemented with sampled pedal noise, key release, and hammer noise. History and technology: Patches for additional instruments are also available. Several of the historical instruments were created as part of the KIViR (Keyboard Instruments Virtual Restoration) project, which aims to create playable digital models of historical keyboard instruments in museums such as the Händel-Haus in Halle. Instruments: Pianoteq models several modern as well as historical pianos, including the Steinway model B and model D, the Antonin Petrof 275 and 284, Bechstein DG, Steingraeber E-272, Grotrian Concert Royal, and Blüthner Model 1. Other instruments include models of the Hohner Pianet models N and T and the Clavinet D6, as well as models of harpsichord, concert harp and Celtic harp, various tine and reed electric pianos, vibraphones, celeste, xylophone and marimba, and various steelpans.
**Double tap** Double tap: A double tap is a shooting technique where two shots are fired in rapid succession at the same target with the same sight picture (as opposed to the controlled pair, whereby a second sighting is acquired for the second shot). Instruction and practice of the double-tap improves accuracy as shooters often do not have the gun fully extended on the first shot meaning the second of a double-tap is usually the better. The term hammer is sometimes used to describe a double tap in which the firearm's sights are not reacquired by the shooter between shots. History: The origin of the double tap technique is credited to William Ewart Fairbairn and Eric A. Sykes, British police chiefs working in Shanghai during the 1930s who developed the technique in order to overcome the limitations of full metal jacketed (FMJ) ammunition. FMJ ammunition is commonly used by militaries for feeding reliability, adherence to the Hague Conventions regarding non-expanding ammunition, and improved penetration. FMJ rounds can sometimes fail to cause sufficient damage (at least when compared to expanding bullets), requiring more hits and better shot placement. In Ian Dear’s book Sabotage and Subversion about British Special Operations Executive (SOE) and United States Office of Strategic Services (OSS) forces, Fairbairn is reported to have instructed SOE personnel in the double tap from 1944 to 1945 at the SOE training school directed by Fairbairn and Sykes near Arisaig in Scotland. The term double tap is now used to describe the more general technique of firing two rounds quickly and accurately to disable an opponent. The tactic is still used by firearms handlers, police tactical teams, military personnel, counter-terrorist combat units, and other special operations forces personnel. History: The Russian assault rifle AN-94 can automatically shoot two bullets in a rapid burst; this feature was intended to improve the single shot hit probability of the rifle. History: Double taps are an integral part of the El Presidente combat pistol shooting drill developed by Jeff Cooper during the 1970s and published in the January/February 1979 issue of American Handgunner. Also developed by Cooper during the 1970s is the Mozambique Drill or Failure Drill, for a situation whereby a double-tap to the torso fails to stop an attacker, adding a third shot to the head. Technique: In the double-tap technique, after the first round is fired, the shooter quickly reacquires the sights for a fast second shot. This skill can be practiced by firing two shots at a time, taking time between the shots to reacquire the sights. With practice, the time between shots becomes briefer and briefer until it seems to an observer as if the shooter is just pulling the trigger twice very quickly. Technique: According to a U.S. Army training manual, "There is a natural arc of the front sight post after the round is fired and the recoil kicks in. The soldier lets the barrel go with this arc and immediately brings the front sight post back on target and takes a second shot. The soldier does not fight the recoil. In combat, soldiers shoot until the enemy goes down. For multiple targets, each target should receive a double tap. After all targets are engaged, soldiers engage the targets again as needed." 
Double tap strike: The term has also been used more recently to refer to the practice of following a strike, e.g., a missile, air strike, artillery shelling or improvised explosive device attack, with a second strike several minutes later, hitting response teams, helpers, and medics rushing to the site. A Florida Law Review article argued that the practice is likely a war crime since it grossly violates the Geneva Conventions of 1949, which prohibit targeting civilians, the wounded, or those no longer able to continue fighting. Double-tap strikes have been used by Saudi Arabia during its military intervention in Yemen, by the United States in Pakistan and Yemen, by Israel in Gaza, by Palestinians in Israel, by the Russian and Syrian governments in the Syrian civil war, and by Russia in the war in Ukraine.
**Superflat** Superflat: Superflat is a postmodern art movement, founded by the artist Takashi Murakami, which is influenced by manga and anime. However, superflat does not have an explicit definition because Takashi Murakami does not want to limit the movement, but rather leave room for it to grow and evolve over time.Superflat is also the name of a 2000 art exhibition, curated by Murakami, that toured West Hollywood, Minneapolis and Seattle. Description: "Superflat" is used by Murakami to refer to various flattened forms in Japanese graphic art, animation, pop culture and fine arts, as well as the "shallow emptiness of Japanese consumer culture." Superflat has been embraced by American artists, who have created a hybrid called "SoFlo Superflat".Murakami defines Superflat in broad terms, so the subject matter is very diverse. Some works explore the consumerism and sexual fetishism that is prevalent in post-war Japanese culture. This often includes lolicon art, which is parodied by works such as those by Henmaru Machino. These works are an exploration of otaku sexuality through grotesque and/or distorted images. Other works are more concerned with a fear of growing up. For example, Yoshitomo Nara's work often features playful graffiti on old Japanese ukiyo-e executed in a childish manner. And some works focus on the structure and underlying desires that comprise otaku and overall post-war Japanese culture. Murakami is influenced by directors such as Hideaki Anno.Superflat is not limited to contemporary art alone. Murakami cites older Japanese pieces as superflat as well, including Katsushika Hokusai's "Thunderstorm Beneath the Summit" (1830–32) as an example of superflat.A subversive look at otakuism is not a defining factor of Kaikai Kiki's galleries; Bome, one of the most important artists involved with the first Superflat exhibition, is a famous otaku figure sculptor and his work based on existing bishoujo anime characters has been showcased in multiple galleries including a solo exhibition in the Kaikai Kiki Gallery. The artist Mr. is a self-described lolicon and views his artwork to be not a cultural commentary but a portrayal of his own personal fantasies. Artists: Superflat artists include Chiho Aoshima, Mahomi Kunikata, Sayuri Michima, Yoshitomo Nara, Aya Takano and Takashi Murakami. In addition, some animators within anime and some manga artists have had their past and present work exhibited in Superflat exhibitions, especially Kōji Morimoto, and the work of Hitoshi Tomizawa, author of Alien 9 and Milk Closet. Origins: There are multiple factors that played a role for Murakami to come up with his Superflat claim. In his Manifesto, he describes “Super flatness” as an original concept of Japanese who have been completely Westernized, that simultaneously links the past with the present and the future.The past, in this case, refers to art made during the Edo period in Japan, where Murakami finds his foremost inspiration in the works of Fine Art painters such as Kano Sansetsu, Ito Jakuchu, Soga Shohaku and Katsushika Hokusai. Murakami explains that his theory was born from a hypothesis created by art historian Nobuo Tsuji in his book The Lineage of Eccentricity.In his book, Tsuji critically analyses works from Edo period painters and explains how the picture controls the speed and course of its observer's gaze, creating an interaction between the surface and the viewer with a zigzag motion. 
This is further elaborated in Takashi Murakami: Lineage of Eccentrics, a book that presents key examples of Murakami's work alongside a selection of Japanese masterpieces arranged according to the concepts laid out by Tsuji himself. It is mentioned that the juxtaposition of foreground forms extending horizontally across broad compositions and two-dimensional surfaces is another feature that Murakami has adapted for his own theory and contemporary subject matter. The particular sensibility of the gaze and the inspiration from old masters are what Murakami continues to incorporate in his own works. An example of this is his painting called 727, a work made with acrylics on three panels. Depicted in the middle is his alter ego, also known as 'Mr. DOB', riding a stylized wave that is a direct reference to Hokusai's famous Great Wave off Kanagawa. The panels on which it was painted resemble the flat and often 'blank' backgrounds characteristic of Nihonga paintings and folding screens, illustrating features of Superflatness. Origins: Another field within the arts that, according to both Murakami and Tsuji, is closely related to the eccentricity of traditional Japanese art and also carries Superflat features is animation. In his manifesto, Murakami takes Yoshinori Kanada as a prime example of an animator whose work contains a compositional dynamic that resembles that of the "eccentric" artists to a startling degree. A connection can be made from modern-day animation back to twelfth- and thirteenth-century Japanese handscrolls, where the narrative is composed across multiple sheets of joined paper, read from right to left, once again providing the observer with a two-dimensional 'flat' space and a composition where the gaze leads the viewer through the story. A different factor that played a role in the emergence of Superflatness was the bursting of the Japanese economic bubble in the 1990s, which led Japan into uncertain territory and a loss of its sense of security. Michael Darling explains that "rabid consumerism and the slavish following of fads, especially in fashion, have further contributed to a culture of surfaces and superficiality, representing still another facet of the Superflat concept" (Darling, 2001). He uses photography and fashion as further examples to illustrate Superflatness and the hype and high consumer demand in Japan.
**Pneumobilia** Pneumobilia: Pneumobilia is the presence of gas in the biliary system. It is typically detected by ultrasound or a radiographic imaging exam, such as CT, or MRI. It is a common finding in patients that have recently undergone biliary surgery or endoscopic biliary procedure. While the presence of air within biliary system is not harmful, this finding may alternatively suggest a pathological process, such as a biliary-enteric anastomosis, an infection of the biliary system, an incompetent sphincter of Oddi, or spontaneous biliary-enteric fistula. Causes: In a healthy individual with normal anatomy, there is no air within the biliary tree. When this finding is present, it may be secondary to: Recent surgical or endoscopic biliary procedure (e.g. ERCP, biliary enteric anastomosis) Incompetent sphincter of Oddi (e.g. passage of large gallstone, scarring related to chronic pancreatitis) Spontaneous biliary enteric fistula (e.g. gallstone ileus) Infection by gas-forming organisms (e.g. emphysematous cholangitis) Congenital abnormalitiesOther rare causes that have been reported include duodenal diverticulum, paraduodenal abscess, operative trauma, and carcinoma of the duodenum, stomach and bile duct.
**Institute of Higher National Diploma in Engineering** Institute of Higher National Diploma in Engineering: Higher National Diploma in Engineering (HNDE) is a public institute in Sri Lanka mainly focused on higher education. Since 1986 the Higher National Diploma in Engineering (HNDE) program has been a pioneer in producing incorporated engineers for local and international industry. The demand in industry for HNDE students and their ability to work with internationally recognized modern equipment reflect the quality and standard of the course. Compared with local universities offering engineering degrees, the institute, as an engineering diploma awarding body, is notable for its curriculum, facilities and the employability of its graduates. The course content ensures that those who complete this program are satisfactorily employed both inside Sri Lanka and abroad. History: In 1985 the NTTTC (National Technical Teachers Training College) and the Technical Education Unit of the Ministry of Education conducted a survey of the workforce. It found that the ratio of chartered engineers to middle-level engineers should be 1:3; however, according to the statistics, the actual ratio of chartered engineers to middle-level engineers to technicians was 1:5:20, which did not fulfil industry requirements. The authorities therefore realized the need for a new engineering course to meet the demand for middle-level engineers in Sri Lankan and international industry. As a result, the UK-based HNDE was introduced to Sri Lanka; it was modified with advanced engineering theory and communication skills to suit modern-day industry, and the course was extended to a duration of three and one-half years, including a six-month in-plant training period, with the approval of Bolton University, England, and funding from the ADB. History: On 30 October 1990, the HNDE was gazetted as a parallel course to other engineering diplomas in Sri Lanka under governmental circular 46/90. A year later, on 11 December 1991, the HNDE course, which was then at the Rathmalana NTTTC, was affiliated to BTEC, England, under registration number 78/981. History: 15 November 1992 was a special day for the HNDE 2nd batch: with the presence of the chief guest, Mr. A.N.S. Kulasinghe, the convocation was held at the BMICH. 17 August 1994 was a turning point for the HNDE: on that day it was transferred from the Ministry of Education and Higher Education to the Ministry of Labour and Vocational Training. Two years later, on 1 May 1996, the HNDE was brought under SLIATE, which was mainly under the Ministry of Education and Higher Education. 12 December 1996 was a significant day for all HNDE students, as they celebrated the 10th anniversary; the distinguished guest, Mr. A.N.S. Kulasinghe, participated in the event to add colour to the occasion. Though the anniversary programme was scheduled for 7 days, it was cut short after 2 days; even so, the students did their best to make the programme a success. At that time students faced many problems because they did not have a building for their hostel, but on 23 May 2001, after great effort and dedication by the students, they were able to obtain two buildings for their hostel. History: The HNDE students also benefited from modern communication technology: the deputy minister Mr.
Dinesh Gunawardana launched a website for the HNDE (WWW.HNDESL.com) on 6 July 2004, making it a memorable day for HNDE students. On the same day, a newly built four-storey building was opened for public use. 30 and 31 October 2004 were a couple of glorious days for the Mattakkuliya HNDE institute: the students conducted an art festival on the theme "Mini Sengavi Soya" in the presence of the deputy minister Mr. Ranjith Siyambalapitiya and Mr. Rathna Sri Wijesingha. History: By 2006, the HNDE had completed 20 years of producing middle-level engineers. To celebrate the 20th anniversary an exhibition was organized in parallel with the INCO exhibition of the IIESL, held on 23, 24 and 25 June 2006 at the BMICH; Minister Mr. Sarath Amunugama was the chief guest on the first day. "Himidiriya", a collection of short stories, was presented to the educationalist Mr. Leelananda Gamachchi. On 7 January 2008 the HNDE students' council erected a signboard at the entrance. History: For the first time, at a parliamentary prorogation debate, all the ministers except Mr. Vishwa Warnapala (who was the Minister of Education and Higher Education at the time) proposed that the HNDE under SLIATE should be approved by the UGC. While work went on as usual, modification of the syllabus was discussed in 2010. Departments: Civil Engineering Electrical & Electronics Engineering Mechanical Engineering Building Services Engineering Quantity Surveying Courses: Higher National Diploma in Civil Engineering Higher National Diploma in Electrical & Electronics Engineering Higher National Diploma in Mechanical Engineering Higher National Diploma in Building Services Engineering Higher National Diploma in Quantity Surveying Institutes: Higher National Diploma In Engineering - Mattakkuliya (Colombo) Higher National Diploma In Engineering - Labuduwa (Galle) Higher National Diploma In Engineering - Jaffna
**Tempest in a teapot** Tempest in a teapot: Tempest in a teapot (American English), or also phrased as storm in a teacup (British English), or tempest in a teacup, is an idiom meaning a small event that has been exaggerated out of proportion. There are also lesser known or earlier variants, such as storm in a cream bowl, tempest in a glass of water, storm in a wash-hand basin, and storm in a glass of water. Etymology: Cicero, in the first century BC, in his De Legibus, used a similar phrase in Latin, possibly the precursor to the modern expressions, Excitabat enim fluctus in simpulo ut dicitur Gratidius, translated: "For Gratidius raised a tempest in a ladle, as the saying is". Then in the early third century AD, Athenaeus, in the Deipnosophistae, has Dorion ridiculing the description of a tempest in the Nautilus of Timotheus by saying that he had seen a more formidable storm in a boiling saucepan. The phrase also appeared in its French form une tempête dans un verre d'eau ('a tempest in a glass of water'), to refer to the popular uprising in the Republic of Geneva near the end of the eighteenth century.One of the earliest occurrences in print of the modern version is in 1815, where Britain's Lord Chancellor Thurlow, sometime during his tenure of 1783–1792, is quoted as referring to a popular uprising on the Isle of Man as a "tempest in a teapot". Also Lord North, Prime Minister of Great Britain, is credited for popularizing this phrase as characterizing the outbreak of American colonists against the tax on tea. This sentiment was then satirized in Carl Guttenberg's 1778 engraving of the Tea-Tax Tempest (shown above right), where Father Time flashes a magic lantern picture of an exploding teapot to America on the left and Britannia on the right, with British and American forces advancing towards the teapot. Just a little later, in 1825, in the Scottish journal Blackwood's Edinburgh Magazine, a critical review of poets Hogg and Campbell also included the phrase "tempest in a teapot". The first recorded instance of the British English version, "storm in teacup", occurs in Catherine Sinclair's Modern Accomplishments in 1838. 
There are several instances though of earlier British use of the similar phrase "storm in a wash-hand basin".In 2008, Fall Out Boy used the phrase "Tempest in a Teacup" as lyrics in their song 'Headfirst Slide Into Cooperstown on a Bad Bet' Other languages: A similar phrase exists in numerous other languages: Persian: از کاه کوه ساختن az kah kouh sakhtan ('a storm in a cup') Arabic: زوبعة في فنجان zawba'a fi finjan ('a storm in a cup') Bengali: চায়ের কাপে ঝড় cha-er cup-e jhor ('storm in a teacup') Bulgarian: Буря в чаша вода burya v chasha voda ('storm in a glass of water') Chinese: 茶杯裡的風波、茶壺裡的風暴 ('winds and waves in a teacup; storm in a teapot') Czech: bouře ve sklenici vody ('a storm in a glass of water') Danish: en storm i et glas vand ('a storm in a glass of water') Dutch: een storm in een glas water ('a storm in a glass of water') Esperanto: granda frakaso en malgranda glaso ('a large storm in a small glass') Estonian: torm veeklaasis ('storm in a glass of water') Filipino: bagyo sa baso ('typhoon in a teacup') Finnish: myrsky vesilasissa ('storm in a glass of water') French: une tempête dans un verre d'eau ('a storm in a glass of water') German: Sturm im Wasserglas ('storm in a glass of water') Hebrew: סערה בכוס תה se'arah bekos teh ('storm in a teacup') Hindi: चाय की प्याली में तूफ़ान ('storm in a teacup') Hungarian: vihar egy pohár vízben ('a storm in a glass of water') Icelandic: stormur í vatnsglasi ('a storm in a glass of water') Italian: una tempesta in un bicchiere d'acqua ('a storm in a glass of water') Japanese: コップの中の嵐 koppu no naka no arashi ('a storm in a glass') Korean: 찻잔속의 태풍 chat jan sokui taepung ('a typhoon in a teacup') Latin: excitare fluctus in simpulo ('to stir up waves in a ladle') Latvian: vētra ūdens glāzē ('storm in a glass of water') Lithuanian: audra stiklinėje ('storm in a glass') Malayalam: ചായക്കോപ്പയിലെ കൊടുങ്കാറ്റ് chaya koppayile kodunkattu ('storm in a tea cup') Norwegian: storm i et vannglass (Bokmål)/storm i eit vassglas (Nynorsk) ('a storm in a glass of water') Polish: burza w szklance wody ('a storm in a glass of water') Portuguese: tempestade em copo d'água/uma tempestade num copo d'água ('storm in a glass of water/a tempest in a glass of water') Romanian: furtună într-un pahar cu apă ('storm in a glass of water') Russian: Буря в стакане воды burya v stakane vody ('storm in a glass of water') Serbian: Бура у чаши воде bura u čaši vode ('storm in a glass of water') Spanish: una tormenta en un vaso de agua ('a storm in a glass of water') Swedish: storm i ett vattenglas ('storm in a glass of water') Turkish: bir kaşık suda fırtına ('storm in a spoon of water') Telugu: tea kappu lo thufaanu ('storm in a tea cup') Tamil: தேநீர் கோப்பையில் புயல் ('storm in a tea cup') Ukrainian: Буря в склянці води buria v sklyantsi vody ('a tempest in a glass of water') Urdu: چائے کی پیالی میں طوفان chaye ki pyali main toofan ('storm in a teacup') Yiddish: אַ שטורעם אין אַ גלאָז וואַסער a shturem in a gloz vaser ('a storm in a glass of water'), or אַ בורע אין אַ לעפֿל וואַסער a bure in a lefl vaser ('a tempest in a spoon of water')
**Hybrid incompatibility** Hybrid incompatibility: Hybrid incompatibility is a phenomenon in plants and animals, wherein offspring produced by the mating of two different species or populations have reduced viability and/or are less able to reproduce. Examples of hybrids include mules and ligers from the animal world, and subspecies of the Asian rice crop Oryza sativa from the plant world. Multiple models have been developed to explain this phenomenon. Recent research suggests that the source of this incompatibility is largely genetic, as combinations of genes and alleles prove lethal to the hybrid organism. Incompatibility is not solely influenced by genetics, however, and can be affected by environmental factors such as temperature. The genetic underpinnings of hybrid incompatibility may provide insight into factors responsible for evolutionary divergence between species. Background: Hybrid incompatibility occurs when the offspring of two closely related species are not viable or suffer from infertility. Charles Darwin posited that hybrid incompatibility is not a product of natural selection, stating that the phenomenon is an outcome of the hybridizing species diverging, rather than something that is directly acted upon by selective pressures. The underlying causes of the incompatibility can be varied: earlier research focused on things like changes in ploidy in plants. More recent research has taken advantage of improved molecular techniques and has focused on the effects of genes and alleles in the hybrid and its parents. Background: Dobzhansky-Muller model The first major breakthrough in the genetic basis of hybrid incompatibility is the Dobzhansky-Muller model, a combination of findings by Theodosius Dobzhansky and Joseph Muller between 1937 and 1942. The model provides an explanation as to why a negative fitness effect like hybrid incompatibility is not selected against. By hypothesizing that the incompatibility arose from alterations at two or more loci, rather than one, the incompatible alleles are in one hybrid individual for the first time rather than throughout the population - thus, hybrids that are infertile can develop while the parent populations remain viable. The negative fitness effects of infertility are not present in the original population. In this way, hybrid infertility contributes in some part to speciation by ensuring that gene flow between diverging species remains limited. Further analysis of the issue has supported this model, although it does not include conspecific genic interactions, a potential factor that more recent research has begun to look in to. Background: Gene identification Decades after the research of Dobzhansky and Muller, the specifics of hybrid incompatibility were explored by Jerry Coyne and H. Allen Orr. Using introgression techniques to analyze the fertility in Drosophila hybrid and non-hybrid offspring, specific genes that contribute to sterility were identified; a study by Chung-I Wu which expanded on Coyne and Orr's work found that the hybrids of two Drosophila species were made sterile by the interaction of around 100 genes. These studies widened the scope of the Dobzhansky-Muller model, who thought it likely that more than two genes would be responsible. The ubiquity of Drosophila as a model organism has allowed many of the sterility genes to be sequenced in the years since Wu's study. 
Modern directions: With modern molecular techniques, researchers have been able to more accurately identify the underlying genetic causes of hybrid incompatibility. This has led to the development of extensions to the Dobzhansky-Muller model. Recent research has also explored the possibility of external influences on sterility. Modern directions: The "snowball effect" An extension of the Dobzhansky-Muller model is the "snowball effect": an accumulation of incompatible loci due to increased species divergence. Since the model posits that sterility is due to negative allelic interactions between the hybridizing species, as species become more diverged it follows that more negative interactions should develop. The snowball effect states that the number of these incompatibilities will increase faster than linearly over the time of divergence (at least quadratically for pairwise incompatibilities), particularly when more than two loci contribute to the incompatibility. This concept has been exhibited in tests with the flowering plant genus Solanum, with the findings supporting the genetic underpinnings of the Dobzhansky-Muller model: "Overall, our results indicate that the accumulation of sterility loci follows a different trajectory from the accumulation of loci for other quantitative species differences, consistent with the unique genetic basis expected to underpin species reproductive isolating barriers. ...In doing so, we uncover direct empirical support for the Dobzhansky-Muller model of hybrid incompatibility, and the snowball prediction in particular." Environmental influences Though the primary causes of hybrid incompatibility appear to be genetic, external factors may play a role as well. Studies focused primarily on model plants have found that the viability of hybrids can depend on environmental influences. Several studies on rice and Arabidopsis species identify temperature as an important factor in hybrid viability; generally, low temperatures seem to cause negative hybrid symptoms to be expressed while high temperatures suppress them, although one rice study found the opposite to be true. There has also been evidence in an Arabidopsis species that in poor environmental conditions (in this case, high temperatures), hybrids do not express negative symptoms and are viable with other populations. When environmental conditions return to normal, however, the negative symptoms are expressed and the hybrids are once again incompatible with other populations. Modern directions: Lynch-Force model Though a multitude of evidence supports the Dobzhansky-Muller model of hybrid sterility and speciation, this does not rule out the possibility that situations other than the inviable combination of otherwise benign genes can lead to hybrid incompatibility. One such situation is incompatibility by way of gene duplication, described by the Lynch and Force model (put forth by Michael Lynch and Allan Force in 2000). When gene duplication occurs, there is a possibility that a redundant gene copy can be rendered non-functional over time by mutations. From Lynch and Force's paper: "The divergent resolution of genomic redundancies, such that one population loses function from one copy while the second population loses function from a second copy at a different chromosomal location, leads to chromosomal repatterning such that gametes produced by hybrid individuals can be completely lacking in functional genes for a duplicate pair." This hypothesis is relatively recent compared to Dobzhansky-Muller, but has support as well.
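The snowball prediction can be illustrated with a toy calculation. The Python sketch below is purely illustrative and is not the analysis used in the Solanum study; the substitution rate, incompatibility probability, and function names are invented for the example. Two lineages accumulate substitutions independently, and each cross-lineage pair of derived alleles is incompatible with a small fixed probability, so the expected number of pairwise Dobzhansky-Muller incompatibilities grows roughly with the square of divergence time.

```python
import random

def expected_incompatibilities(generations, sub_rate=0.05, p_incompat=0.01, trials=200):
    """Toy Monte Carlo: two lineages fix substitutions independently; each
    cross-lineage pair of derived alleles is incompatible with probability p_incompat."""
    totals = []
    for _ in range(trials):
        subs_a = subs_b = 0
        for _ in range(generations):
            subs_a += random.random() < sub_rate   # substitution fixed in lineage A?
            subs_b += random.random() < sub_rate   # substitution fixed in lineage B?
        # Every derived allele in A can clash with every derived allele in B.
        pairs = subs_a * subs_b
        incompat = sum(random.random() < p_incompat for _ in range(pairs))
        totals.append(incompat)
    return sum(totals) / trials

# Doubling the divergence time roughly quadruples the expected count.
for t in (100, 200, 400, 800):
    print(t, round(expected_incompatibilities(t), 2))
```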
Modern directions: Epigenetic influences A possible contributor to hybrid incompatibility that fits the Lynch and Force model better than the Dobzhansky-Muller model is epigenetic inheritance. Epigenetics broadly refers to heritable elements that affect offspring phenotype without altering the offspring's DNA sequence. When a particular allele has been epigenetically modified, it is referred to as an epiallele. A study found that an Arabidopsis gene is not expressed because it is a silent epiallele, and when this epiallele is inherited by hybrids in combination with a mutant gene at the same locus, the hybrid is inviable. This fits the Lynch and Force model because the heritable epiallele, ordinarily not an issue in non-hybrid populations carrying non-epiallele copies of the gene, becomes problematic when it is the only copy of the gene in the hybrid population.
**Fujifilm X-E3** Fujifilm X-E3: The Fujifilm X-E3 is a digital rangefinder-style mirrorless camera announced by Fujifilm on September 7, 2017. It is part of the company's X-series range of cameras. Design: The design of the Fujifilm X-E3 is based on that of its predecessors, the X-E2 and X-E2s, except that the body is about eight millimeters narrower and some details are styled slightly differently. Compared with both the X-E2s and the X-T20, there are some differences in the operating concept, especially on the rear of the body. With an operational weight of 337 grams (without lens), the X-E3 is also slightly lighter than its predecessor and significantly lighter than the X-T20.
**Visuotinė lietuvių enciklopedija** Visuotinė lietuvių enciklopedija: The Visuotinė lietuvių enciklopedija or VLE (transl. Universal Lithuanian Encyclopedia) is a 25-volume universal Lithuanian-language encyclopedia published by the Science and Encyclopaedia Publishing Institute from 2001 to 2014. VLE is the first universal encyclopedia published in independent Lithuania (it replaces the former Lietuviškoji Tarybinė Enciklopedija, which was published in thirteen volumes from 1976 to 1985). The last volume, XXV, was published in July 2014. An additional volume of updates, error corrections, and indexes was published in 2015. The encyclopedia's twenty-five volumes contain nearly 122,000 articles and about 25,000 illustrations. Since June 2017, VLE has been published as an online encyclopedia that is updated to the present day. Description: VLE is an encyclopedia published in Lithuanian, and therefore it focuses on Lithuania, Lithuanians and Lithuanian topics (Lithuanian personalities, organizations, language, culture, national activities). These articles make up about 20–25% of all articles and illustrations (about 6,000 articles are devoted to personalities alone). Description: Separate thematic sections contain articles on Lithuanian studies, that is, about Lithuania and Lithuanians. Much space in the VLE is devoted to topics banned or ignored during the Soviet era: Lithuanian statehood, the history of the Catholic Church, the army, Lithuania's resistance to Nazi and Soviet occupation, the history of the Lithuanian state, and Lithuania Minor. Juozapas Girdzijauskas stated that VLE "will objectively acquaint the society with the world's culture, capture the modern level of science, promote the humanistic values of humanity, and help the Lithuanian people to overcome the Soviet ideology". Editorial board: Coordination and editing of Visuotinė lietuvių enciklopedija was carried out by prominent Lithuanian scholars: prof. habil. dr. Algirdas Ambrazas; prof. habil. dr. Antanas Bandzaitis; dr. Jonas Boruta; prof. habil. dr. Juozapas Girdzijauskas; prof. habil. dr. Bronius Grigelionis; prof. habil. dr. Leonas Kadžiulis; doc. dr. Algirdas Kiselis; prof. habil. dr. Vytautas Kubilius; dr. Elvyra Janina Kunevičienė; prof. habil. dr. Alfonsas Skrinska; prof. habil. dr. Antanas Tyla; akad. prof. habil. dr. Zigmas Zinkevičius. Additionally, editorial-scientific boards of 23–25 scholars were established for the main branches of science. In total, there were 2,957 authors.
**Symmetrical Defense** Symmetrical Defense: The Symmetrical Defense (or Austrian Defense) is a chess opening that begins with the moves: 1. d4 d5 2. c4 c5. First described in print by Alessandro Salvio in 1604, the opening is often called the Austrian Defense because it was studied by Austrian chess players including Hans Haberditz (c. 1901–57), Hans Müller (1896–1971), and GM Ernst Grünfeld. The Symmetrical Defense is an uncommon variation of the Queen's Gambit Declined. Symmetrical Defense: It poses the purest test of Queen's Gambit theory: whether Black can equalize by simply copying White's moves. Most opening theoreticians believe that White should gain the advantage and at best Black is playing for a draw. 3.cxd5: White often replies 3.cxd5, but other moves are playable and may lead to transpositions into more well-known variations such as the Queen's Gambit Accepted and the Tarrasch Defense. After 3.cxd5 it is not advisable for Black to play 3...Qxd5, because either 4.Nf3 cxd4 5.Nc3 Qa5 6.Nxd4 or 5...Qd8 6.Qxd4 Qxd4 7.Nxd4 give White a big lead in development. Instead, Black should play 3...Nf6 intending to recapture on d5 with his knight. White should be able to maintain the advantage with either 4.Nf3 or 4.e4. Possible continuations are 4.Nf3 cxd4 5.Nxd4 Nxd5 6.e4 Nc7 or 4.e4 Nxe4 5.dxc5 Nxc5 6.Nc3 e6.
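To play through the main line quoted above, the short sketch below uses the third-party python-chess library (assumed to be installed and imported as chess) and prints the position after 5...Nxd5; the move list itself is taken from the continuation given above.

```python
# Illustrative sketch: stepping through 1.d4 d5 2.c4 c5 3.cxd5 Nf6 4.Nf3 cxd4
# 5.Nxd4 Nxd5 with the python-chess library (a third-party package assumed to
# be installed). Each SAN string is pushed onto the board in turn.
import chess

board = chess.Board()
moves = ["d4", "d5", "c4", "c5",          # 1.d4 d5 2.c4 c5 (the Symmetrical Defense)
         "cxd5", "Nf6",                   # 3.cxd5 Nf6 (3...Qxd5 is inadvisable)
         "Nf3", "cxd4", "Nxd4", "Nxd5"]   # 4.Nf3 cxd4 5.Nxd4 Nxd5
for san in moves:
    board.push_san(san)

print(board.fen())   # FEN of the position after 5...Nxd5
```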
**Nu jazz** Nu jazz: Nu jazz (also spelt nü jazz or known as jazztronica, or future jazz) is a genre of jazz and electronic music. The music blends jazz elements with other musical styles, such as funk, electronic music, and free improvisation.Nu jazz typically ventures further into the electronic territory than does its close cousin, acid jazz. Nu jazz can be very experimental in nature and can vary widely in sound and concept. The sound departs further from its blues roots than acid jazz does, and instead explores electronic sounds and ethereal jazz sensualities. "The star of Nu jazz is the music itself and not the individual dexterity of the musicians." Sources: "A Flourish of Jazz", Time Magazine article, including mention of the use of electronics in jazz fusion.
**Eighteen Arms of Wushu** Eighteen Arms of Wushu: The Eighteen Arms is a list of the eighteen main weapons of Chinese martial arts. The origin of the list is unclear and there have been disputes as to what the eighteen weapons actually are. However, all lists contain at least one or more of the following weapons: Wuzazu version: The Wuzazu, written by the Ming-dynasty Fujianese scholar Xie Zhaozhe (1567–1624), lists the following: Water Margin version: The Ming novel Water Margin lists the following:
**4F-MDMB-BINACA** 4F-MDMB-BINACA: 4F-MDMB-BINACA (also known as 4F-MDMB-BUTINACA or 4F-ADB) is an indazole-based synthetic cannabinoid from the indazole-3-carboxamide family. It has been used as an active ingredient in synthetic cannabis products and sold as a designer drug since late 2018. 4F-MDMB-BINACA is an agonist of the CB1 receptor (EC50 = 7.39 nM), though it is unclear whether it is selective for this target. In December 2019, the UNODC announced scheduling recommendations placing 4F-MDMB-BINACA into Schedule II throughout the world. Related compounds: The corresponding indole core analogue, 4F-MDMB-BICA (4F-MDMB-BUTICA), has also been widely sold as a designer drug by chemical providers on the internet, first being identified in May 2020. Legal Status: United Kingdom It is illegal to sell, distribute, supply, transport or trade the drug under the Psychoactive Substances Act 2016. Legal Status: United States The DEA has filed a notice of its intent to temporarily place 4F-MDMB-BUTICA (but not 4F-MDMB-BINACA itself) into Schedule I status starting on or after May 4, 2023, for up to two years, during which the DEA could file for permanent scheduling. If the temporary order takes effect on May 4, 2023, and the DEA does not file for permanent placement, the temporary Schedule I order will expire on May 4, 2025. North Dakota placed 4F-MDMB-BINACA into Schedule I on April 27, 2023.
**Vinyl ester resin** Vinyl ester resin: Vinyl ester resin, or often just vinyl ester, is a resin produced by the esterification of an epoxy resin with acrylic or methacrylic acids. The "vinyl" groups refer to these ester substituents, which are prone to polymerize, and thus an inhibitor is usually added. The diester product is then dissolved in a reactive solvent, such as styrene, to approximately 35–45 percent content by weight. Polymerization is initiated by free radicals, which are generated by UV irradiation or peroxides. Vinyl ester resin: This thermoset material can be used as an alternative to polyester and epoxy materials as the thermoset polymer matrix in composite materials, where its characteristics, strengths, and bulk cost are intermediate between polyester and epoxy. Vinyl ester has a lower resin viscosity (approximately 200 cps) than polyester (approximately 500 cps) and epoxy (approximately 900 cps). Uses: In homebuilt airplanes, the Glasair and Glastar kit planes made extensive use of vinyl ester fiberglass-reinforced structures. It is a common resin in the marine industry due to its corrosion resistance and ability to withstand water absorption. Vinyl ester resin is extensively used to manufacture FRP tanks and vessels as per BS 4994. For the laminating process, vinyl ester is usually initiated with methyl ethyl ketone peroxide. Its strength and mechanical properties are greater than those of polyester and less than those of epoxy resin. Renewable precursors to vinyl ester resins have been developed. Vinyl resins are often used in repair materials and laminating because they are waterproof and reliable. Bisphenol A is a precursor in the production of major classes of resins, including vinyl ester resins along with epoxy resins and polycarbonate. This application usually begins with alkylation of BPA with epichlorohydrin.
**Kirchhoff's diffraction formula** Kirchhoff's diffraction formula: Kirchhoff's diffraction formula (also called the Fresnel–Kirchhoff diffraction formula) approximates light intensity and phase in optical diffraction: light fields in the boundary regions of shadows. The approximation can be used to model light propagation in a wide range of configurations, either analytically or using numerical modelling. It gives an expression for the wave disturbance when a monochromatic spherical wave is the incoming wave of a situation under consideration. This formula is derived by applying the Kirchhoff integral theorem, which uses Green's second identity to derive the solution to the homogeneous scalar wave equation, to a spherical wave with some approximations. Kirchhoff's diffraction formula: The Huygens–Fresnel principle can be derived from the Fresnel–Kirchhoff diffraction formula. Derivation of Kirchhoff's diffraction formula: Kirchhoff's integral theorem, sometimes referred to as the Fresnel–Kirchhoff integral theorem, uses Green's second identity to derive the solution of the homogeneous scalar wave equation at an arbitrary spatial position P in terms of the solution of the wave equation and its first-order derivative at all points on an arbitrary closed surface S as the boundary of some volume including P. Derivation of Kirchhoff's diffraction formula: The solution provided by the integral theorem for a monochromatic source is U(P) = (1/4π) ∫S [ U ∂/∂n(e^{iks}/s) − (e^{iks}/s) ∂U/∂n ] dS, where U is the spatial part of the solution of the homogeneous scalar wave equation (i.e., V(r, t) = U(r) e^{−iωt} as the homogeneous scalar wave equation solution), k is the wavenumber, s is the distance from P to an (infinitesimally small) integral surface element, and ∂/∂n denotes differentiation along the integral surface element normal unit vector n (i.e., a normal derivative), i.e., ∂f/∂n = ∇f·n. Note that the surface normal or the direction of n is toward the inside of the enclosed volume in this integral; if the more usual outer-pointing normal is used, the integral will have the opposite sign. And also note that, in the integral theorem shown here, n and P are vector quantities while other terms are scalar quantities. Derivation of Kirchhoff's diffraction formula: For the cases below, the following basic assumptions are made. Derivation of Kirchhoff's diffraction formula: The distance between a point source of waves and an integral area, the distance between the integral area and an observation point P, and the dimension of the opening S are all much greater than the wavelength λ. U and ∂U/∂n = ∇U·n are discontinuous at the boundaries of the aperture, called Kirchhoff's boundary conditions. This may be related to another assumption: that the waves on an aperture (or an open area) are the same as the waves that would be present if there were no obstacle for the waves. Derivation of Kirchhoff's diffraction formula: Point source Consider a monochromatic point source at P0, which illuminates an aperture in a screen. The intensity of the wave emitted by a point source falls off as the inverse square of the distance traveled, so the amplitude falls off as the inverse of the distance. The complex amplitude of the disturbance at a distance r is given by U = a e^{ikr}/r, where a represents the magnitude of the disturbance at the point source. Derivation of Kirchhoff's diffraction formula: The disturbance at a spatial position P can be found by applying Kirchhoff's integral theorem to the closed surface formed by the intersection of a sphere of radius R with the screen.
The integration is performed over the areas A1, A2 and A3, giving U(P) = (1/4π) { ∫A1 + ∫A2 + ∫A3 } [ U ∂/∂n(e^{iks}/s) − (e^{iks}/s) ∂U/∂n ] dS. To solve the equation, it is assumed that the values of U and ∂U/∂n in the aperture area A1 are the same as when the screen is not present, so at the position Q, U = a e^{ikr}/r and ∂U/∂n = (a e^{ikr}/r) [ik − 1/r] cos(n, r), where r is the length of the straight line P0Q, and (n, r) is the angle between a straightly extended version of P0Q and the (inward) normal to the aperture. Note that 0 < (n, r) < π/2, so cos(n, r) is a positive real number on A1. Derivation of Kirchhoff's diffraction formula: At Q, we also have ∂/∂n(e^{iks}/s) = (e^{iks}/s) [ik − 1/s] cos(n, s), where s is the length of the straight line PQ, and (n, s) is the angle between a straightly extended version of PQ and the (inward) normal to the aperture. Note that π/2 < (n, s) < 3π/2, so cos(n, s) is a negative real number on A1. Two further assumptions are made. Derivation of Kirchhoff's diffraction formula: In the above normal derivatives, the terms 1/r and 1/s in both square brackets are assumed to be negligible compared with the wavenumber k = 2π/λ, meaning that r and s are much greater than the wavelength λ. Kirchhoff assumes that the values of U and ∂U/∂n on the opaque areas marked by A2 are zero. This implies that U and ∂U/∂n are discontinuous at the edge of the aperture A1. This is not the case, and this is one of the approximations used in deriving Kirchhoff's diffraction formula. These assumptions are sometimes referred to as Kirchhoff's boundary conditions. The contribution from the hemisphere A3 to the integral is expected to be zero, and it can be justified by one of the following reasons. Derivation of Kirchhoff's diffraction formula: Make the assumption that the source starts to radiate at a particular time, and then make R large enough, so that when the disturbance at P is being considered, no contributions from A3 will have arrived there. Such a wave is no longer monochromatic, since a monochromatic wave must exist at all times, but that assumption is not necessary, and a more formal argument avoiding its use has been derived. Derivation of Kirchhoff's diffraction formula: A wave emanating from the aperture A1 is expected to evolve toward a spherical wave as it propagates (water-wave examples of this can be found in many pictures showing a water wave passing through a relatively narrow opening). So, if R is large enough, then the integral on A3 can be written as an integral over the differential solid angle, where r′ and dΩ are the distance from the center of the aperture A1 to an integral surface element and the differential solid angle in the spherical coordinate system respectively, and this contribution vanishes. As a result, finally, the integral above, which represents the complex amplitude at P, becomes U(P) = −(ia/2λ) ∫A1 (e^{ik(r+s)}/(rs)) [cos(n, r) − cos(n, s)] dS. This is the Kirchhoff or Fresnel–Kirchhoff diffraction formula. Derivation of Kirchhoff's diffraction formula: Equivalence to Huygens–Fresnel principle The Huygens–Fresnel principle can be derived by integrating over a different closed surface (the boundary of some volume having an observation point P). The area A1 above is replaced by a part of a wavefront (emitted from P0) at r0, which is the closest to the aperture, and a portion of a cone with a vertex at P0, which is labeled A4 in the right diagram. If the wavefront is positioned such that the wavefront is very close to the edges of the aperture, then the contribution from A4 can be neglected (assumed here). On this new A1, the inward (toward the volume enclosed by the closed integral surface, so toward the right side in the diagram) normal n to A1 is along the radial direction from P0, i.e., the direction perpendicular to the wavefront.
As a result, the angle (n, r) = 0 and the angle (n, s) is related to the angle χ (the angle as defined in the Huygens–Fresnel principle) as (n, s) = π − χ, so that cos(n, s) = −cos χ. The complex amplitude of the wavefront at r0 is given by U = a e^{ikr0}/r0. So, the diffraction formula becomes U(P) = −(i/λ) (a e^{ikr0}/r0) ∫ (e^{iks}/s) ((1 + cos χ)/2) dS, where the integral is done over the part of the wavefront at r0 which is the closest to the aperture in the diagram. This integral leads to the Huygens–Fresnel principle (with the obliquity factor (1 + cos χ)/2). Derivation of Kirchhoff's diffraction formula: In the derivation of this integral, instead of the geometry depicted in the right diagram, double spheres centered at P0 with the inner sphere radius r0 and an infinite outer sphere radius can be used. In this geometry, the observation point P is located in the volume enclosed by the two spheres, so the Fresnel–Kirchhoff diffraction formula is applied on the two spheres. (The surface normals on these integral surfaces again point toward the enclosed volume, as in the diffraction formula above.) In the formula application, the integral on the outer sphere is zero for a reason similar to the one given above for the vanishing of the integral on the hemisphere. Derivation of Kirchhoff's diffraction formula: Extended source Assume that the aperture is illuminated by an extended source wave. The complex amplitude at the aperture is given by U0(r). Derivation of Kirchhoff's diffraction formula: It is assumed, as before, that the values of U and ∂U/∂n in the area A1 are the same as when the screen is not present, that the values of U and ∂U/∂n in A2 are zero (Kirchhoff's boundary conditions), and that the contribution from A3 to the integral is also zero. It is also assumed that 1/s is negligible compared with k. We then have the most general form of the Kirchhoff diffraction formula. To solve this equation for an extended source, an additional integration would be required to sum the contributions made by the individual points in the source. If, however, we assume that the light from the source at each point in the aperture has a well-defined direction, which is the case if the distance between the source and the aperture is significantly greater than the wavelength, then we can write the disturbance in the aperture in terms of its magnitude a(r), where a(r) is the magnitude of the disturbance at the point r in the aperture; the diffraction formula then follows in the same way. Fraunhofer and Fresnel diffraction equations: In spite of the various approximations that were made in arriving at the formula, it is adequate to describe the majority of problems in instrumental optics. This is mainly because the wavelength of light is much smaller than the dimensions of any obstacles encountered. Analytical solutions are not possible for most configurations, but the Fresnel diffraction equation and Fraunhofer diffraction equation, which are approximations of Kirchhoff's formula for the near field and far field, can be applied to a very wide range of optical systems. Fraunhofer and Fresnel diffraction equations: One of the important assumptions made in arriving at the Kirchhoff diffraction formula is that r and s are significantly greater than λ. Another approximation can be made, which significantly simplifies the equation further: this is that the distances P0Q and QP are much greater than the dimensions of the aperture. This allows one to make two further approximations: cos(n, r) − cos(n, s) is replaced with 2cos β, where β is the angle between P0P and the normal to the aperture.
The factor 1/rs is replaced with 1/r's', where r' and s' are the distances from P0 and P to the origin, which is located in the aperture. The complex amplitude then takes a simplified form. Assume that the aperture lies in the xy plane, and the coordinates of P0, P and Q (a general point in the aperture) are (x0, y0, z0), (x, y, z) and (x', y', 0) respectively. We then have: r² = (x0 − x′)² + (y0 − y′)² + z0², s² = (x − x′)² + (y − y′)² + z², r′² = x0² + y0² + z0², s′² = x² + y² + z². Fraunhofer and Fresnel diffraction equations: We can express r and s in terms of these coordinates and expand them as power series in x′ and y′. The complex amplitude at P can now be expressed in terms of a function f(x', y'), which includes all the terms in the expansions above for s and r apart from the first term in each expression and can be written as a polynomial in x′ and y′ whose coefficients ci are constants. Fraunhofer and Fresnel diffraction equations: Fraunhofer diffraction If all the terms in f(x', y') can be neglected except for the terms in x' and y', we have the Fraunhofer diffraction equation. In terms of the direction cosines of P0Q and PQ, the Fraunhofer diffraction equation is then obtained, where C is a constant. This can also be written in a form involving k0 and k, where k0 and k are the wave vectors of the waves traveling from P0 to the aperture and from the aperture to P respectively, and r' is a point in the aperture. Fraunhofer and Fresnel diffraction equations: If the point source is replaced by an extended source whose complex amplitude at the aperture is given by U0(r' ), then the corresponding Fraunhofer diffraction equation is obtained, where a0(r') is, as before, the magnitude of the disturbance at the aperture. In addition to the approximations made in deriving the Kirchhoff equation, it is assumed that r and s are significantly greater than the size of the aperture, and that second- and higher-order terms in the expression f(x', y') can be neglected. Fresnel diffraction When the quadratic terms cannot be neglected but all higher-order terms can, the equation becomes the Fresnel diffraction equation. The approximations for the Kirchhoff equation are used, and the additional assumptions are: r and s are significantly greater than the size of the aperture, and third- and higher-order terms in the expression f(x', y') can be neglected.
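To make the Fraunhofer limit concrete, here is a small numerical sketch (an illustrative example with assumed slit width and wavelength, not taken from the article): it evaluates the far-field aperture integral for a single slit and compares the result with the familiar analytic sinc² pattern.

```python
# Illustrative numerical sketch of Fraunhofer diffraction from a single slit.
# The slit width and wavelength below are assumed values chosen for the example.
import numpy as np

wavelength = 500e-9                 # 500 nm light (assumed)
a = 50e-6                           # slit width, 50 micrometres (assumed)
k = 2 * np.pi / wavelength

theta = np.linspace(-0.05, 0.05, 801)      # observation angles (radians)
xp = np.linspace(-a / 2, a / 2, 2001)      # aperture coordinate x'
dx = xp[1] - xp[0]

# Far-field amplitude: integral over the aperture of exp(-i k x' sin(theta)).
phase = np.exp(-1j * k * np.outer(np.sin(theta), xp))
U = phase.sum(axis=1) * dx
I = np.abs(U) ** 2
I /= I.max()

# Analytic single-slit pattern: I(theta) = sinc^2(a sin(theta) / wavelength),
# using numpy's normalised sinc convention.
I_analytic = np.sinc(a * np.sin(theta) / wavelength) ** 2

print("max deviation from analytic pattern:", np.max(np.abs(I - I_analytic)))
```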
**Squash and stretch** Squash and stretch: Squash and stretch is the phrase used to describe "by far the most important": 47  of the 12 basic principles of animation, described in the book The Illusion of Life by Frank Thomas and Ollie Johnston. Basis: The principle is based on the observation that only stiff objects remain inert during motion,: 47  while objects that are not stiff, although retaining overall volume, tend to change shape to an extent that depends on the inertia and elasticity of the different parts of the moving object. To illustrate the principle, a half-filled flour sack dropped on the floor, or stretched out by its corners, was shown to retain its overall volume as determined by the object's Poisson's ratio. Basis: Examples of the elasticity of the human body in motion were found in photographs the animators found in newspaper sports pages. Using these poses as reference, the animators were able to start "observing (the motion) in a new way".: 48  Author Walt Stanchfield said, "A simple shape plus squash and stretch are all the anatomy you need for cartoon characters.": 84 Application: A ball that changes shape, compressing ("squash") as it hits the ground and then extending ("stretch") as it bounces up again, was the simplest example used in the preparatory training for Disney animators.: 51 During the 1930s, squash and stretch was exaggerated by Disney animators, becoming ever more extreme, but they had to maintain the overall volume of an object so that it did not appear to change volume as well as shape.: 48
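A minimal sketch of the volume-preservation idea (an assumed 2D example, not taken from the book): keeping the product of the horizontal and vertical scale factors equal to 1 preserves area, the two-dimensional analogue of retaining overall volume, so a squashed ball automatically widens and a stretched ball automatically narrows.

```python
# Illustrative sketch (assumed example): area-preserving squash and stretch in 2D.
# Scaling height by s and width by 1/s keeps the area (the 2D "volume") constant.
def squash_stretch(scale_y: float) -> tuple[float, float]:
    """Return (scale_x, scale_y) such that scale_x * scale_y == 1."""
    return 1.0 / scale_y, scale_y

print(squash_stretch(0.7))   # ball squashing on impact: ~1.43 wide, 0.7 tall
print(squash_stretch(1.3))   # ball stretching in flight: ~0.77 wide, 1.3 tall
```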
**Skip graph** Skip graph: Skip graphs are a kind of distributed data structure based on skip lists. They were invented in 2003 by James Aspnes and Gauri Shah. A nearly identical data structure called SkipNet was independently invented by Nicholas Harvey, Michael Jones, Stefan Saroiu, Marvin Theimer and Alec Wolman, also in 2003. Skip graphs have the full functionality of a balanced tree in a distributed system. Skip graphs are mostly used in searching peer-to-peer networks. As they provide the ability to query by key ordering, they improve over search tools based on the hash table functionality only. In contrast to skip lists and other tree data structures, they are very resilient and can tolerate a large fraction of node failures. In addition, constructing, inserting, searching, and repairing a skip graph that was disturbed by failing nodes can be done by straightforward algorithms. Description: A skip graph is a distributed data structure based on skip lists designed to resemble a balanced search tree. They are one of several methods to implement a distributed hash table, which are used to locate resources stored in different locations across a network, given the name (or key) of the resource. Description: Skip graphs offer several benefits over other distributed hash table schemes such as Chord and Tapestry, including addition and deletion in expected logarithmic time, logarithmic space per resource to store indexing information, no required knowledge of the number of nodes in a set, and support for complex range queries. A major distinction from Chord and Tapestry is that there is no hashing of the search keys of resources, which allows related resources to be near each other in the skip graph; this property makes searches for values within a given range feasible. Another strength of skip graphs is their resilience to node failure in both random and adversarial failure models. Implementation details: As with skip lists, nodes are arranged in increasing order in multiple levels; each node in level i is contained in level i+1 with some probability p (an adjustable parameter). Implementation details: Level 0 consists of one doubly linked list containing all of the nodes in the set. Lists become increasingly sparse at higher levels, until each list is composed of just one node. Where skip graphs differ from skip lists is that each level i ≥ 1 will contain multiple lists; membership of a key x in a list is defined by the membership vector m(x). The membership vector is defined as an infinite random word over a fixed alphabet; each list in the skip graph is identified by a finite word w from the same alphabet, and if that word is a prefix of m(x), then node x is a member of the list. Operations: Skip graphs support the basic operations of search, insert and delete. Skip graphs also support the more complex range search operation. Operations: Search The search algorithm for skip graphs is almost identical to the search algorithm for skip lists but is modified to run in a distributed system. Searches start at the top level and traverse through the structure. At each level, the search traverses the list until the next node contains a greater key. When a greater key is found, the search drops to the next level, continuing until the key is found or it is determined that the key is not contained in the set of nodes. If the key is not contained in the set of nodes, the largest value less than the search key is returned.
Operations: Each node in a list has the following fields: key: the node's value. Operations: neighbor[R/L][level]: an array containing pointers to the node's right and left neighbors in its doubly linked list at each level.
search(searchOp, startNode, searchKey, level)
  if (v.key = searchKey) then
    send(foundOp, v) to startNode
  if (v.key < searchKey) then
    while level ≥ 0
      if (v.neighbor[R][level].key ≤ searchKey) then
        send(searchOp, startNode, searchKey, level) to v.neighbor[R][level]
        break
      else
        level = level - 1
  else
    while level ≥ 0
      if ((v.neighbor[L][level]).key ≥ searchKey) then
        send(searchOp, startNode, searchKey, level) to v.neighbor[L][level]
        break
      else
        level = level - 1
  if (level < 0) then
    send(notFoundOp, v) to startNode
Analysis performed by William Pugh shows that on average, a skip list, and by extension a skip graph, contains log_{1/p}(n) levels for a fixed value of p. Given that at most 1/(1 − p) nodes are searched on average per level, the total expected number of messages sent is O(log_{1/p}(n)/(1 − p)) and the expected time for the search is O(log_{1/p}(n)/(1 − p)). Therefore, for a fixed value of p, the search operation is expected to take O(log n) time using O(log n) messages. Operations: Insert Insertion is done in two phases and requires that a new node u knows some introducing node v; the introducing node may be any other node currently in the skip graph. In the first phase the new node u uses the introducing node v to search for its own key; this search is expected to fail and return the node s with the largest key smaller than u. In the second phase u inserts itself in each level until it is the only element in a list at the top level. Insertion at each level is performed using standard doubly linked list operations; the left neighbor's next pointer is changed to point to the new node and the right neighbor's previous pointer is changed to point to the node. Operations:
insert()
  search for s
  L = 0
  while true
    insert u into level L from s
    scan through level L to find s' which has a membership vector matching the membership vector of u for the first L+1 characters
    if no s' exists
      exit
    else
      s = s'
      L = L + 1
Similar to the search, the insert operation takes expected O(log n) messages and O(log n) time. With a fixed value of p, the search operation in phase 1 is expected to take O(log n) time and messages. In phase 2, at each level L ≥ 0, u communicates with an average of 1/p other nodes to locate s'; this requires O(1/p) time and messages, leading to O(1) time and messages for each step in phase 2. Operations: Delete Nodes may be deleted in parallel at each level in O(1) time and O(log n) messages. When a node wishes to leave the graph it must send messages to its immediate neighbors to rearrange their next and previous pointers.
delete()
  for L = 1 to max level, in parallel
    delete u from level L
  delete u from level 0
The skip graph contains an average of O(log n) levels; at each level u must send 2 messages to complete a delete operation on a doubly linked list. As operations on each level may be done in parallel, the delete operation may be finished using O(1) time and expected O(log n) messages. Fault tolerance: In skip graphs, fault tolerance describes the number of nodes which can be disconnected from the skip graph by failures of other nodes. Two failure models have been examined: random failures and adversarial failures. In the random failure model, any node may fail independently of any other node with some probability.
The adversarial model assumes node failures are planned such that the worst possible failure is achieved at each step; the entire skip graph structure is known, and failures are chosen to maximize node disconnection. A drawback of skip graphs is that there is no repair mechanism; currently the only way to repair a damaged skip graph is to build a new skip graph with the surviving nodes. Fault tolerance: Random failure Skip graphs are highly resistant to random failures. By maintaining information on the state of neighbors and using redundant links to avoid failed neighbors, normal operations can continue even with a large number of node failures. While the number of failed nodes is less than O(n/log n), the skip graph can continue to function normally. Simulations performed by James Aspnes show that a skip graph with 131072 nodes was able to tolerate up to 60% of its nodes failing before surviving nodes were isolated. While other distributed data structures may be able to achieve higher levels of resiliency, they tend to be much more complex. Fault tolerance: Adversarial failure Adversarial failure is difficult to simulate in a large network as it becomes difficult to find worst-case failure patterns. Theoretical analysis shows that the resilience depends on the vertex expansion ratio of the graph, defined as follows. For a set of nodes A in the graph G, the expansion factor is the number of nodes not in A but adjacent to a node in A divided by the number of nodes in A. If skip graphs have a sufficiently large expansion ratio of Ω(1/log n), then at most O(f log n) nodes may be separated, even if up to f failures are specifically targeted.
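As a rough illustration of the level-by-level search described above, here is a minimal single-process sketch (an illustrative simplification with assumed node and level structures, not the distributed message-passing protocol from the paper): it walks right along each level while the neighbour's key does not exceed the target, then drops a level.

```python
# Minimal single-process sketch of the level-by-level search described above.
# This is an illustrative simplification, not the distributed protocol: each
# node stores its right neighbour per level, and the search moves right while
# the neighbour's key does not exceed the target, then drops down a level.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Node:
    key: int
    right: List[Optional["Node"]]   # right neighbour at each level


def search(start: Node, search_key: int, top_level: int) -> Node:
    """Return the node with search_key, or the node with the largest key <= search_key."""
    v = start
    level = top_level
    while level >= 0:
        nxt = v.right[level] if level < len(v.right) else None
        if nxt is not None and nxt.key <= search_key:
            v = nxt            # keep moving right on this level
        else:
            level -= 1         # overshoot: drop down a level
    return v


# Tiny example: keys 1..9 at level 0, with every second key promoted to level 1.
nodes = [Node(k, [None, None]) for k in range(1, 10)]
for a, b in zip(nodes, nodes[1:]):
    a.right[0] = b                      # level 0: full list
level1 = nodes[::2]
for a, b in zip(level1, level1[1:]):
    a.right[1] = b                      # level 1: sparser list

print(search(nodes[0], 6, top_level=1).key)    # -> 6 (exact match)
print(search(nodes[0], 100, top_level=1).key)  # -> 9 (largest key <= 100)
```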
**Psychological distance** Psychological distance: Psychological distance is the degree to which people feel removed from a phenomenon. Distance in this case is not limited to the physical surroundings, rather it could also be abstract. Distance can be defined as the separation between the self and other instances like persons, events, knowledge, or time. Psychological distance was first defined in Trope and Liberman's Construal Level Theory (CLT). However, Trope and Liberman only identified temporal distance as a separator. This has since been revised to include four categories of distance: spatial, social, hypothetical, and informational distances. Further studies have concluded that all four are strongly and systemically correlated with each other.At a basic level, psychological distance in Construal Level Theory notes that distance plays a pivotal role in the relationship between an event and a person. The distance factor will help determine the outcome of whether or not a person places value on a specific topic. The relationship between someone and an event, in regard to psychological distance, is such that the greater the distance between the self and an event, the lower the mental perception of importance is for the person. Following this example, the less important an event is perceived, the less likely one is to act on it. This psychological distance causes behavioral differences, or non-existence of certain behaviors or attitudes all together, that alter one's response to an event by changing the perception of its importance in one's mind. Psychological distance: Psychological distance is fundamentally egocentric, the anchor point is the self, in the present, and the different interactions of the self with an object or event correlate to the different levels of distance. Psychological distance in environmental issues: Oftentimes, psychologically distant things are those that are not present or experienced frequently in everyday life. As noted above, this can be due to a variety of factors. Whether the distance is due to a lack of exposure, a lack of knowledge, a temporal difference, or being physically separated, all four create a distance that in some way limits exposure or frequency. This phenomenon is prevalent in many environmental issues such as climate change and its effects. Data has shown that the earth's average temperature has been steadily increasing over the last few hundred years. This directly correlates with higher levels of carbon dioxide (CO2) in the atmosphere as a result of anthropogenic activities, starting around the industrial revolution (1740), that emit CO2 and other greenhouse gases (GHGs). Countries closer in proximity to the issue tend to place a higher level of importance on an issue as opposed to countries that are farther in proximity. While all regions/countries are affected by environmental issues, certain areas of the world feel these effects significantly more than others. This difference between the effects on certain areas of the world are key to understanding the role of psychological distance in environmental issues. Construal Level Theory concludes there is an inverse relationship between affected parties/exposure and psychological distance. In accordance with this theory, many areas of the world, such as the United States, are historically lacking on the world stage when it comes to environmental policy making in regards to other areas of the world such as Europe as a result of the country's general perceptions on environmental issues. 
Reasons for different levels of psychological distance in certain parts of the world: Several studies have concluded that public concern regarding climate change and environmental issues decreases as one's perceived psychological distance from the issue increases. According to the construal level theory (CLT), psychological distance from an event, issue or object is directly linked to the way that an individual or group of people mentally represents it. More specifically, issues or objects that are perceived as psychologically close are perceived in a “concrete” manner, meaning that a specific representation of those issues is generated. On the other hand, objects or events that are perceived as psychologically distant are perceived in an “abstract” manner, meaning that the cognitive representation of that issue is perceived in a broader sense. This abstract contextualization of climate change as a slow, gradual modification to our current climate conditions makes it difficult to assess and understand the severity of climate change as a personal experience. Climate-related risk awareness has been positively linked with the perception of the severity of the global issue of climate change as well as the risk of negative effects that climate change can pose to an individual. According to Social Representations Theory (SRT), individuals apprehend unfamiliar risks (such as climate change) through symbols and iconic images that are presented in a socio-cultural context. SRT further demonstrates the way in which risk representations of climate change contrast globally and are mainly shaped by the local environment of an individual or group of people. Compared to other issues pertaining to global society, the importance and awareness of climate change is low, which is likely due to the widespread perception that the risk associated with climate change to an individual is distant in space and time. For example, climate change may be seen as affecting areas that are distant, such as other countries or continents (space), or as something that will affect only future generations (time). The phenomenon of psychological distance then decreases the public's ability to address and mitigate the effects of climate change. Reducing psychological distance: Public perception of climate change as a distant issue may threaten climate action. If the public's perception of their relative distance to climate change is driven by a construal level process, then the level at which the public construes climate change is an important determinant of their support for climate action. For example, an abstract construal level will likely lead to climate change being perceived as psychologically distant, which may result in dissension of the problem and unwillingness to tackle the issue. Conversely, a concrete construal is likely to lead to acceptance of climate change by the public through promoting a psychologically close view, which could result in a higher level of willingness to address climate change since the consequences of the issue are more tangible. Making the issue of climate change more localized, more relevant and more urgent will help to reduce people's estrangement from it and help to increase pro-environmental behaviors. Notably, CLT indicates that psychological distance is essential when promoting action.
According to Goal Setting Theory, which was proposed by Locke and Latham in the 1990s, goals that are specific in nature are found to lead to a higher rate of task performance by reducing ambiguity about what stands to be accomplished or attained. Goals are thought to affect the performance of proposed tasks through four mechanisms. These mechanisms include: 1) directing attention towards actions that are relevant to the goal or task, 2) providing motivation to increase efforts, 3) increasing persistence, and 4) promoting the activation of knowledge on the issue at hand to better motivate and strategize to achieve the goal. In terms of psychological distancing, the concept of goal setting theory would suggest that in order to counteract climate change, specific goals and/or policies that clearly state the actions needed to be taken by governments, companies, the public, individuals, etc., would create a more concrete construal for the public, despite their psychological distance. This would, in turn, lead to more successful mitigation of climate change.
**Dutrion** Dutrion: Dutrion is a brand-name chlorine dioxide tablet for use in the cleaning of meats and the treatment of drinking water. The primary sanitizing compound is chlorine dioxide (ClO2).
**Bump steer** Bump steer: Bump steer is the term for the tendency of the wheel of a car to steer itself as it moves through the suspension stroke. Bump steer: Bump steer causes a vehicle to turn itself when one wheel hits a bump or falls down into a hole or rut. Excessive bump steer increases tire wear and makes the vehicle more difficult to handle on rough roads. For example, if the front left wheel rolls over a bump it will compress the suspension on that corner and automatically rotate to the left (toe out), causing the car to turn itself left momentarily without any input from the steering wheel. Another example is that when most vehicles become airborne, their front wheels will noticeably toe in. Bump steer: Rear suspension can be designed a number of ways. Many modern vehicles have rear suspension designs which are the opposite of the front suspension: toe in under bump, and toe out under droop. They can also be designed to have very little or no bump steer at all. Cars with rear live axles, also known as solid axles, do not exhibit true bump steer, but can still cause some steering over one-wheel bumps; see § Difference between bump steer and roll steer. If both wheels on a live axle move upwards by the same amount, they tend not to steer. Bump steer: The linearity of the bump steer curve is important and relies on the relationship of the control arm and tie rod pickup points, and the length of each part. As the suspension goes through bump and droop, each part follows an arc, resulting in a change of effective length. Whichever parts are longest tend to have less change in effective length because their arc radius is longer. This is the determining factor in designed bump steer. Another factor that affects bump steer is bushing compliance and deflection and arm bending. During a turn, if some or all of the bushings deflect, then their pickup points have changed. If any of the arms or tie rods bend, then their effective length will change, resulting in a change of toe. Bump steer and car ride height: Bump steer can become a problem when cars are modified by lowering or lifting, when a spring has become worn or broken causing a lower ride height, or if the vehicle is heavily loaded. When a car is lowered or lifted, the wheels' toe setting will change. Bump steer and car ride height: When a car is lowered or lifted, it will have to be re-aligned to avoid excessive tire wear. This is accomplished through adjustment of the steering tie rod length. After the tie rod lengths are changed, bump steer values will also change. In some cases, the car will have less toe change; this can make the car exhibit less roll understeer and therefore feel more "twitchy" during a turn. Other vehicles after lowering will exhibit an increase in toe change compared to stock; this results in the car feeling very "twitchy" on straight, bumpy roads, and at the same time feeling unwilling to turn, requiring more driver input than normal due to an increase in roll understeer. Bump steer and car ride height: When a vehicle is heavily loaded, it lowers the ride height. Typically cars are loaded by having heavy loads in the trunk, many passengers or towing a trailer, thus mainly affecting the rear wheels. When a car has been heavily loaded (if it does not have a live rear axle), the suspension will compress greatly in order to support the load, resulting in an extreme amount of toe in on the rear wheels.
This causes rapid tire wear, can cause the car to follow cracks in the road, and can cause tire temperatures to rise higher than normal due to an increase in friction. Negative camber will often be greatly increased as well. This results in very heavy inner tire wear on the rear wheels of a car that is heavily loaded or towing. One reason that most trucks have live axle rear suspensions is that they completely avoid toe and camber changes under load. Cars with multi-link rear suspension should have an alignment performed while loaded if they are going to be operated under heavy loads for extended periods. Difference between bump steer and roll steer: During a bump, both wheels rise together. When rolling as the car leans during a curve, the inside suspension extends and the outside suspension compresses. Typically this produces "toe in" on one wheel, and "toe out" on the other, thus producing a steering effect. Difference between bump steer and roll steer: Cars with rear live axles, also known as solid axles, tend not to have true bump steer. Since both wheels are connected to a single, rigid member, they are incapable of having any toe angles under normal conditions. Rear live axle suspensions are therefore designed to exhibit roll understeer. During a curve, the entire axle will turn slightly to face the inside of the curve so that the car does not turn more sharply than anticipated by the driver (roll understeer). It is possible to design a rear live axle suspension that exhibits roll oversteer, but it is highly undesirable for on-road use. Roll oversteer is sometimes incorporated on extreme off-road vehicles because it can allow the rear wheels to help turn the vehicle extremely sharply in tight trail conditions. Difference between bump steer and roll steer: Roll steer is usually measured in degrees of toe per degree of roll, but can also be measured in degrees of toe per metre of wheel travel. Method of adjustment: The linearity of the bump steer curve is important and relies on the relationship of the control arms and tie rod pickup points, and the length of each part. If the wheels are not properly aligned then the length of the tie rod needs to be adjusted. Usually only small adjustments in the range of millimeters or 16ths of an inch are required. Method of adjustment: Bump steer can be adjusted by moving any of the front suspension components' pickup points up, down, in or out. For example, say the inner tie rod mounting point is moved up, either by moving the rack or by modifying the pitman arm mounting point or arm drop. The result is that the tie rod's arc will change. It will then require a change in the tie rod length to be in proper alignment, so the radius of the arc will change as well. If the arc radius of the tie rod is longer than stock on a front-steer setup, then the car will have more toe understeer. If the same scenario were applied to a rear-steer design, then the car would exhibit less toe understeer. This is because the effective length of the tie rod is affected by its static length/arc radius, its pickup points, and the angle of its arc during each phase of suspension travel in relation to the control arms.
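The effective-length argument above can be made concrete with a small geometric sketch (an illustrative simplification with assumed link lengths, treating each link as horizontal at ride height): for a link of length L whose outer end rises by z, the lateral span shrinks from L to sqrt(L² − z²), so longer links lose less length for the same travel, and any mismatch between the tie rod and the control arm shows up as toe change.

```python
# Illustrative sketch (assumed link lengths, links taken as horizontal at ride
# height): how much a suspension link "pulls in" laterally as it swings upward.
import math

def lateral_loss(link_length_mm: float, vertical_travel_mm: float) -> float:
    """Reduction in lateral span when the outer end rises by vertical_travel_mm."""
    return link_length_mm - math.sqrt(link_length_mm ** 2 - vertical_travel_mm ** 2)

for length in (250.0, 350.0, 450.0):                      # hypothetical arm / tie rod lengths
    print(length, round(lateral_loss(length, 50.0), 2))   # 50 mm of bump travel
# A 250 mm link pulls in about 5.1 mm; a 450 mm link only about 2.8 mm. If the
# tie rod and control arm shorten by different amounts, the difference appears
# as toe change, i.e. bump steer.
```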
**Immunogenetics (journal)** Immunogenetics (journal): Immunogenetics is a peer-reviewed scientific journal covering immunogenetics, the branch of medical research that explores the relationship between the immune system and genetics. The journal publishes original research papers, brief communications and reviews on: immunogenetics of cell interaction, immunogenetics of tissue differentiation and development, phylogeny of alloantigens and of immune response, genetic control of immune response and disease susceptibility, and genetics and biochemistry of alloantigens. Immunogenetics was first published yearly, starting in December 1974, and has appeared monthly since May 1981. It has an impact factor of 2.621 according to the Thomson Reuters 2019 Journal Citation Reports. The current editor-in-chief is Ronald E. Bontrop, professor of Theoretical Biology and Bioinformatics at Utrecht University. Immunogenetics (journal): Immunogenetics is a hybrid open-access journal through the Springer Open Choice option.
**Polycystic ovary syndrome** Polycystic ovary syndrome: Polycystic ovary syndrome, or polycystic ovarian syndrome (PCOS), is the most common endocrine disorder in women of reproductive age. The syndrome is named after cysts which form on the ovaries of some people with this condition, though this is not a universal symptom, and not the underlying cause of the disorder. Women with PCOS may experience irregular menstrual periods, heavy periods, excess hair, acne, pelvic pain, difficulty getting pregnant, and patches of thick, darker, velvety skin. The primary characteristics of this syndrome include: hyperandrogenism, anovulation, insulin resistance, and neuroendocrine disruption. A review of international evidence found that the prevalence of PCOS could be as high as 26% among some populations, though ranges between 4% and 18% are reported for general populations. The exact cause of PCOS remains uncertain, and treatment involves management of symptoms using medication. Definition: Two definitions are commonly used. NIH: In 1990 a consensus workshop sponsored by the NIH/NICHD suggested that a person has PCOS if they have all of the following: oligoovulation; signs of androgen excess (clinical or biochemical); and exclusion of other disorders that can result in menstrual irregularity and hyperandrogenism. Rotterdam: In 2003 a consensus workshop sponsored by ESHRE/ASRM in Rotterdam indicated PCOS to be present if any two out of three criteria are met, in the absence of other entities that might cause these findings: oligoovulation and/or anovulation; excess androgen activity; and polycystic ovaries (by gynecologic ultrasound). The Rotterdam definition is wider, including many more women, the most notable ones being women without androgen excess. Critics say that findings obtained from the study of women with androgen excess cannot necessarily be extrapolated to women without androgen excess. Definition: Androgen Excess PCOS Society: In 2006, the Androgen Excess PCOS Society suggested a tightening of the diagnostic criteria to all of the following: excess androgen activity; oligoovulation/anovulation and/or polycystic ovaries; and exclusion of other entities that would cause excess androgen activity. Signs and symptoms: Signs and symptoms of PCOS include irregular or no menstrual periods, heavy periods, excess body and facial hair, acne, pelvic pain, difficulty getting pregnant, patches of thick, darker, velvety skin, ovarian cysts, enlarged ovaries, excess androgen, weight gain and hirsutism. Associated conditions include type 2 diabetes, obesity, obstructive sleep apnea, heart disease, mood disorders, and endometrial cancer. Common signs and symptoms of PCOS include the following: Menstrual disorders: PCOS mostly produces oligomenorrhea (fewer than nine menstrual periods in a year) or amenorrhea (no menstrual periods for three or more consecutive months), but other types of menstrual disorders may also occur. Infertility: This generally results directly from chronic anovulation (lack of ovulation). Signs and symptoms: High levels of masculinizing hormones: Known as hyperandrogenism, the most common signs are acne and hirsutism (male pattern of hair growth, such as on the chin or chest), but it may produce hypermenorrhea (heavy and prolonged menstrual periods), androgenic alopecia (increased hair thinning or diffuse hair loss), or other symptoms. Approximately three-quarters of women with PCOS (by the diagnostic criteria of NIH/NICHD 1990) have evidence of hyperandrogenemia.
Signs and symptoms: Metabolic syndrome: This appears as a tendency towards central obesity and other symptoms associated with insulin resistance, including low energy levels and food cravings. Serum insulin, insulin resistance, and homocysteine levels are higher in women with PCOS. Signs and symptoms: Polycystic ovaries: The ovaries might become enlarged and contain follicles surrounding the eggs. As a result, the ovaries might fail to function regularly. In this disease, the number of follicles growing per ovary each month increases from the average range of 6–8 to double, triple or more. Women with PCOS tend to have central obesity, but studies are conflicting as to whether visceral and subcutaneous abdominal fat is increased, unchanged, or decreased in women with PCOS relative to non-PCOS women with the same body mass index. In any case, androgens, such as testosterone, androstanolone (dihydrotestosterone), and nandrolone decanoate have been found to increase visceral fat deposition in both female animals and women. Although 80% of PCOS presents in women with obesity, 20% of women diagnosed with the disease are non-obese or "lean" women. However, obese women who have PCOS have a higher risk of adverse outcomes, such as hypertension, insulin resistance, metabolic syndrome, and endometrial hyperplasia. Even though most women with PCOS are overweight or obese, it is important to acknowledge that non-overweight women can also be diagnosed with PCOS. Up to 30% of women diagnosed with PCOS maintain a normal weight before and after diagnosis. "Lean" women still face the various symptoms of PCOS with the added challenges of having their symptoms properly addressed and recognized. Lean women often go undiagnosed for years, and usually are diagnosed after struggles to conceive. Lean women are likely to have a missed diagnosis of diabetes and cardiovascular disease. These women also have an increased risk of developing insulin resistance, despite not being overweight. Lean women are often taken less seriously with their diagnosis of PCOS, and also face challenges finding appropriate treatment options. This is because most treatment options are limited to approaches of losing weight and healthy dieting. Signs and symptoms: Hormone levels Testosterone levels are usually elevated in women with PCOS. In a 2020 systematic review and meta-analysis of sexual dysfunction related to PCOS which included 5,366 women with PCOS from 21 studies, testosterone levels were analyzed and were found to be 2.34 nmol/L (67 ng/dL) in women with PCOS and 1.57 nmol/L (45 ng/dL) in women without PCOS. In a 1995 study of 1,741 women with PCOS, mean testosterone levels were 2.6 (1.1–4.8) nmol/L (75 (32–140) ng/dL). In a 1998 study which reviewed many studies and subjected them to meta-analysis, testosterone levels in women with PCOS were 62 to 71 ng/dL (2.2–2.5 nmol/L) and testosterone levels in women without PCOS were about 32 ng/dL (1.1 nmol/L). In a 2010 study of 596 women with PCOS which used liquid chromatography–mass spectrometry (LC–MS) to quantify testosterone, median levels of testosterone were 41 and 47 ng/dL (with 25th–75th percentiles of 34–65 ng/dL and 27–58 ng/dL and ranges of 12–184 ng/dL and 1–205 ng/dL) via two different labs. If testosterone levels are above 100 to 200 ng/dL, per different sources, other possible causes of hyperandrogenism, such as congenital adrenal hyperplasia or an androgen-secreting tumor, may be present and should be excluded.
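The unit conversions quoted above follow from testosterone's molar mass of about 288.4 g/mol, so 1 nmol/L corresponds to roughly 28.84 ng/dL; a small sketch (illustrative only) is:

```python
# Illustrative sketch: converting testosterone concentrations from nmol/L to
# ng/dL using a conversion factor of about 28.84 (molar mass ~288.4 g/mol).
TESTOSTERONE_NG_DL_PER_NMOL_L = 28.84

def nmol_l_to_ng_dl(value_nmol_l: float) -> float:
    return value_nmol_l * TESTOSTERONE_NG_DL_PER_NMOL_L

for nmol in (2.34, 1.57, 2.6):
    print(f"{nmol} nmol/L is about {nmol_l_to_ng_dl(nmol):.0f} ng/dL")
# 2.34 nmol/L -> ~67 ng/dL and 1.57 nmol/L -> ~45 ng/dL, matching the figures above.
```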
Signs and symptoms: Associated conditions Warning signs may include a change in appearance. But there are also manifestations of mental health problems, such as anxiety, depression, and eating disorders. A diagnosis of PCOS suggests an increased risk of the following:
Endometrial hyperplasia and endometrial cancer (cancer of the uterine lining) are possible, due to overaccumulation of uterine lining, and also lack of progesterone, resulting in prolonged stimulation of uterine cells by estrogen. It is not clear whether this risk is directly due to the syndrome or from the associated obesity, hyperinsulinemia, and hyperandrogenism.
Insulin resistance/type 2 diabetes. A review published in 2010 concluded that women with PCOS have an elevated prevalence of insulin resistance and type 2 diabetes, even when controlling for body mass index (BMI). PCOS is also associated with higher risk for diabetes.
High blood pressure, in particular if obese or during pregnancy.
Depression and anxiety.
Dyslipidemia – disorders of lipid metabolism – cholesterol and triglycerides. Women with PCOS show a decreased removal of atherosclerosis-inducing remnants, seemingly independent of insulin resistance/type 2 diabetes.
Cardiovascular disease, with a meta-analysis estimating a 2-fold risk of arterial disease for women with PCOS relative to women without PCOS, independent of BMI.
Strokes.
Weight gain.
Miscarriage.
Sleep apnea, particularly if obesity is present.
Non-alcoholic fatty liver disease, particularly if obesity is present.
Acanthosis nigricans (patches of darkened skin under the arms, in the groin area, on the back of the neck).
Autoimmune thyroiditis.
Iron deficiency.
The risk of ovarian cancer and breast cancer is not significantly increased overall. Cause: PCOS is a heterogeneous disorder of uncertain cause. There is some evidence that it is a genetic disease. Such evidence includes the familial clustering of cases, greater concordance in monozygotic compared with dizygotic twins and heritability of endocrine and metabolic features of PCOS. There is some evidence that exposure to higher than typical levels of androgens and the anti-Müllerian hormone (AMH) in utero increases the risk of developing PCOS in later life. It may be caused by a combination of genetic and environmental factors. Risk factors include obesity, a lack of physical exercise, and a family history of someone with the condition. Diagnosis is based on two of the following three findings: anovulation, high androgen levels, and ovarian cysts. Cysts may be detectable by ultrasound. Other conditions that produce similar symptoms include adrenal hyperplasia, hypothyroidism, and high blood levels of prolactin. Cause: Genetics The genetic component appears to be inherited in an autosomal dominant fashion with high genetic penetrance but variable expressivity in females; this means that each child has a 50% chance of inheriting the predisposing genetic variant(s) from a parent, and, if a daughter receives the variant(s), the daughter will have the disease to some extent. The genetic variant(s) can be inherited from either the father or the mother, and can be passed along to both sons (who may be asymptomatic carriers or may have symptoms such as early baldness and/or excessive hair) and daughters, who will show signs of PCOS. The phenotype appears to manifest itself at least partially via heightened androgen levels secreted by ovarian follicle theca cells from women with the allele. The exact gene affected has not yet been identified.
In rare instances, single-gene mutations can give rise to the phenotype of the syndrome. Current understanding of the pathogenesis of the syndrome suggests, however, that it is a complex multigenic disorder. Due to the scarcity of large-scale screening studies, the prevalence of endometrial abnormalities in PCOS remains unknown, though women with the condition may be at increased risk for endometrial hyperplasia and carcinoma as well as menstrual dysfunction and infertility. Cause: The severity of PCOS symptoms appears to be largely determined by factors such as obesity. PCOS has some aspects of a metabolic disorder, since its symptoms are partly reversible. Even though considered a gynecological problem, PCOS consists of 28 clinical symptoms. Even though the name suggests that the ovaries are central to disease pathology, cysts are a symptom instead of the cause of the disease. Some symptoms of PCOS will persist even if both ovaries are removed; the disease can appear even if cysts are absent. Since its first description by Stein and Leventhal in 1935, the criteria of diagnosis, symptoms, and causative factors have been subject to debate. Gynecologists often see it as a gynecological problem, with the ovaries being the primary organ affected. However, recent insights show a multisystem disorder, with the primary problem lying in hormonal regulation in the hypothalamus, with the involvement of many organs. The term PCOS is used because a wide spectrum of symptoms is possible. It is common to have polycystic ovaries without having PCOS; approximately 20% of European women have polycystic ovaries, but most of those women do not have PCOS. Cause: Environment PCOS may be related to or worsened by exposures during the prenatal period, epigenetic factors, environmental impacts (especially industrial endocrine disruptors, such as bisphenol A and certain drugs) and the increasing rates of obesity. Endocrine disruptors are defined as chemicals that can interfere with the endocrine system by mimicking hormones such as estrogen. According to the NIH (National Institutes of Health), examples of endocrine disruptors can include dioxins and triclosan. Endocrine disruptors can cause adverse health impacts in animals. Additional research is needed to assess the role that endocrine disruptors may play in disrupting reproductive health in women and possibly triggering or exacerbating PCOS and its related symptoms. Pathogenesis: Polycystic ovaries develop when the ovaries are stimulated to produce excessive amounts of androgenic hormones, in particular testosterone, by either one or a combination of the following (almost certainly combined with genetic susceptibility): the release of excessive luteinizing hormone (LH) by the anterior pituitary gland, or through high levels of insulin in the blood (hyperinsulinaemia) in women whose ovaries are sensitive to this stimulus. A majority of women with PCOS have insulin resistance and/or are obese, which is a strong risk factor for insulin resistance, although insulin resistance is a common finding among normal-weight women with PCOS as well. Elevated insulin levels contribute to or cause the abnormalities seen in the hypothalamic–pituitary–ovarian axis that lead to PCOS. Hyperinsulinemia increases GnRH pulse frequency, which in turn results in an increase in the LH/FSH ratio; increased ovarian androgen production; decreased follicular maturation; and decreased SHBG binding.
Furthermore, excessive insulin increases the activity of 17α-hydroxylase, which catalyzes the conversion of progesterone to androstenedione, which is in turn converted to testosterone. The combined effects of hyperinsulinemia contribute to an increased risk of PCOS.Adipose (fat) tissue possesses aromatase, an enzyme that converts androstenedione to estrone and testosterone to estradiol. The excess of adipose tissue in obese women creates the paradox of having both excess androgens (which are responsible for hirsutism and virilization) and excess estrogens (which inhibit FSH via negative feedback).The syndrome acquired its most widely used name due to the common sign on ultrasound examination of multiple (poly) ovarian cysts. These "cysts" are in fact immature ovarian follicles. The follicles have developed from primordial follicles, but this development has stopped ("arrested") at an early stage, due to the disturbed ovarian function. The follicles may be oriented along the ovarian periphery, appearing as a 'string of pearls' on ultrasound examination.PCOS may be associated with chronic inflammation, with several investigators correlating inflammatory mediators with anovulation and other PCOS symptoms. Similarly, there seems to be a relation between PCOS and an increased level of oxidative stress. Diagnosis: Not every person with PCOS has polycystic ovaries (PCO), nor does everyone with ovarian cysts have PCOS; although a pelvic ultrasound is a major diagnostic tool, it is not the only one. The diagnosis is fairly straightforward using the Rotterdam criteria, even when the syndrome is associated with a wide range of symptoms. Differential diagnosis Other causes of irregular or absent menstruation and hirsutism, such as hypothyroidism, congenital adrenal hyperplasia (21-hydroxylase deficiency), Cushing's syndrome, hyperprolactinemia, androgen-secreting neoplasms, and other pituitary or adrenal disorders, should be investigated. Assessment and testing Standard assessment History-taking, specifically for menstrual pattern, obesity, hirsutism and acne. A clinical prediction rule found that these four questions can diagnose PCOS with a sensitivity of 77.1% (95% confidence interval [CI] 62.7%–88.0%) and a specificity of 93.8% (95% CI 82.8%–98.7%). Diagnosis: Gynecologic ultrasonography, specifically looking for small ovarian follicles. These are believed to be the result of disturbed ovarian function with failed ovulation, reflected by the infrequent or absent menstruation that is typical of the condition. In a normal menstrual cycle, one egg is released from a dominant follicle – in essence, a cyst that bursts to release the egg. After ovulation, the follicle remnant is transformed into a progesterone-producing corpus luteum, which shrinks and disappears after approximately 12–14 days. In PCOS, there is a so-called "follicular arrest"; i.e., several follicles develop to a size of 5–7 mm, but not further. No single follicle reaches the preovulatory size (16 mm or more). According to the Rotterdam criteria, which are widely used for diagnosis of PCOS, 12 or more small follicles should be seen in a suspect ovary on ultrasound examination. More recent research suggests that there should be at least 25 follicles in an ovary to designate it as having polycystic ovarian morphology (PCOM) in women aged 18–35 years. The follicles may be oriented in the periphery, giving the appearance of a 'string of pearls'. 
If a high-resolution transvaginal ultrasonography machine is not available, an ovarian volume of at least 10 ml is regarded as an acceptable definition of having polycystic ovarian morphology, rather than follicle count. Diagnosis: Laparoscopic examination may reveal a thickened, smooth, pearl-white outer surface of the ovary. (This would usually be an incidental finding if laparoscopy were performed for some other reason, as it would not be routine to examine the ovaries in this way to confirm a diagnosis of PCOS.) Serum (blood) levels of androgens, including androstenedione and testosterone, may be elevated. Dehydroepiandrosterone sulfate (DHEA-S) levels above 700–800 µg/dL are highly suggestive of adrenal dysfunction because DHEA-S is made exclusively by the adrenal glands. The free testosterone level is thought to be the best measure, with approximately 60 per cent of PCOS patients demonstrating supranormal levels. Some other blood tests are suggestive but not diagnostic. The ratio of LH (luteinizing hormone) to FSH (follicle-stimulating hormone), when measured in international units, is elevated in women with PCOS. Common cut-offs to designate abnormally high LH/FSH ratios are 2:1 or 3:1 as tested on day 3 of the menstrual cycle. The pattern is not very sensitive; a ratio of 2:1 or higher was present in less than 50% of women with PCOS in one study. There are often low levels of sex hormone-binding globulin, in particular among obese or overweight women. Anti-Müllerian hormone (AMH) is increased in PCOS, and may become part of its diagnostic criteria. Diagnosis: Glucose tolerance testing A two-hour oral glucose tolerance test (GTT) in women with risk factors (obesity, family history, history of gestational diabetes) may indicate impaired glucose tolerance (insulin resistance) in 15–33% of women with PCOS. Frank diabetes can be seen in 65–68% of women with this condition. Insulin resistance can be observed in both normal-weight and overweight people, although it is more common in the latter (and in those matching the stricter NIH criteria for diagnosis); 50–80% of people with PCOS may have insulin resistance at some level. Diagnosis: Fasting insulin level or GTT with insulin levels (also called IGTT). Elevated insulin levels have been helpful in predicting response to medication and may indicate that women need higher doses of metformin or the use of a second medication to significantly lower insulin levels. Elevated blood sugar and insulin values do not predict who responds to an insulin-lowering medication, low-glycemic diet, and exercise. Many women with normal levels may benefit from combination therapy. A hypoglycemic response in which the two-hour insulin level is higher and the blood sugar lower than fasting is consistent with insulin resistance. A mathematical derivation known as HOMA-IR, calculated from the fasting values of glucose and insulin concentrations, allows a direct and moderately accurate measure of insulin sensitivity (glucose level in mmol/L × insulin level in µU/mL, divided by 22.5). Management: The primary treatments for PCOS include lifestyle changes and use of medications. Goals of treatment may be considered under four categories: lowering of insulin resistance; restoration of fertility; treatment of hirsutism or acne; and restoration of regular menstruation and prevention of endometrial hyperplasia and endometrial cancer. In each of these areas, there is considerable debate as to the optimal treatment.
One of the major factors underlying the debate is the lack of large-scale clinical trials comparing different treatments. Smaller trials tend to be less reliable and hence may produce conflicting results. General interventions that help to reduce weight or insulin resistance can be beneficial for all these aims, because they address what is believed to be the underlying cause. As PCOS appears to cause significant emotional distress, appropriate support may be useful. Management: Diet Where PCOS is associated with overweight or obesity, successful weight loss is the most effective method of restoring normal ovulation/menstruation. The American Association of Clinical Endocrinologists guidelines recommend a goal of achieving 10–15% weight loss or more, which improves insulin resistance and all hormonal disorders. Still, many women find it very difficult to achieve and sustain significant weight loss. Insulin resistance itself can cause increased food cravings and lower energy levels, which can make it difficult to lose weight on a regular weight-loss diet. A scientific review in 2013 found similar improvements in weight, body composition and pregnancy rate, menstrual regularity, ovulation, hyperandrogenism, insulin resistance, lipids, and quality of life to occur with weight loss, independent of diet composition. Still, a low GI diet, in which a significant portion of total carbohydrates is obtained from fruit, vegetables, and whole-grain sources, has resulted in greater menstrual regularity than a macronutrient-matched healthy diet.Vitamin D deficiency may play some role in the development of the metabolic syndrome, and treatment of any such deficiency is indicated. However, a systematic review of 2015 found no evidence that vitamin D supplementation reduced or mitigated metabolic and hormonal dysregulations in PCOS. As of 2012, interventions using dietary supplements to correct metabolic deficiencies in people with PCOS had been tested in small, uncontrolled and nonrandomized clinical trials; the resulting data are insufficient to recommend their use. Management: Medications Medications for PCOS include oral contraceptives and metformin. The oral contraceptives increase sex hormone binding globulin production, which increases binding of free testosterone. This reduces the symptoms of hirsutism caused by high testosterone and regulates return to normal menstrual periods. Metformin is a medication commonly used in type 2 diabetes mellitus to reduce insulin resistance, and is used off label (in the UK, US, AU and EU) to treat insulin resistance seen in PCOS. In many cases, metformin also supports ovarian function and return to normal ovulation. Spironolactone can be used for its antiandrogenic effects, and the topical cream eflornithine can be used to reduce facial hair. A newer insulin resistance medication class, the thiazolidinediones (glitazones), have shown equivalent efficacy to metformin, but metformin has a more favorable side effect profile. The United Kingdom's National Institute for Health and Clinical Excellence recommended in 2004 that women with PCOS and a body mass index above 25 be given metformin when other therapy has failed to produce results. Metformin may not be effective in every type of PCOS, and therefore there is some disagreement about whether it should be used as a general first line therapy. In addition to this, metformin is associated with several unpleasant side effects: including abdominal pain, metallic taste in the mouth, diarrhoea and vomiting. 
The use of statins in the management of underlying metabolic syndrome remains unclear. It can be difficult to become pregnant with PCOS because it causes irregular ovulation. Medications to induce fertility when trying to conceive include the ovulation inducer clomiphene or pulsatile leuprorelin. Evidence from randomised controlled trials suggests that, in terms of live birth, metformin may be better than placebo, and metformin plus clomiphene may be better than clomiphene alone, but that in both cases women may be more likely to experience gastrointestinal side effects with metformin. Metformin is thought to be safe to use during pregnancy (pregnancy category B in the US). A review in 2014 concluded that the use of metformin does not increase the risk of major birth defects in women treated with metformin during the first trimester. Liraglutide may reduce weight and waist circumference in people with PCOS more than other medications. Management: Infertility Not all women with PCOS have difficulty becoming pregnant, but some may have difficulty getting pregnant because their bodies do not produce the hormones necessary for regular ovulation. PCOS might also increase the risk of miscarriage or premature delivery. However, with medical care and a healthy lifestyle, it is possible to have a normal pregnancy. For those who do have difficulty conceiving, anovulation or infrequent ovulation is a common cause, and PCOS is the main cause of anovulatory infertility. Other factors include changed levels of gonadotropins, hyperandrogenemia, and hyperinsulinemia. Like women without PCOS, women with PCOS who are ovulating may be infertile due to other causes, such as tubal blockages due to a history of sexually transmitted diseases. For overweight anovulatory women with PCOS, weight loss and diet adjustments, especially to reduce the intake of simple carbohydrates, are associated with resumption of natural ovulation. Digital health interventions have been shown to be particularly effective in providing combined therapy to manage PCOS through both lifestyle changes and medication. For women who remain anovulatory after weight loss, and for anovulatory lean women, ovulation induction using the medications letrozole or clomiphene citrate is the principal treatment used to promote ovulation. Previously, the anti-diabetes medication metformin was the recommended treatment for anovulation, but it appears less effective than letrozole or clomiphene. For women not responsive to letrozole or clomiphene and diet and lifestyle modification, there are options available including assisted reproductive technology procedures such as controlled ovarian hyperstimulation with follicle-stimulating hormone (FSH) injections followed by in vitro fertilisation (IVF). Though surgery is not commonly performed, the polycystic ovaries can be treated with a laparoscopic procedure called "ovarian drilling" (puncture of 4–10 small follicles with electrocautery, laser, or biopsy needles), which often results in either resumption of spontaneous ovulations or ovulations after adjuvant treatment with clomiphene or FSH. (Ovarian wedge resection is no longer used as much due to complications such as adhesions and the presence of frequently effective medications.) There are, however, concerns about the long-term effects of ovarian drilling on ovarian function.
Management: Mental Health Although women with PCOS are far more likely to have depression than women without, the evidence for anti-depressant use in women with PCOS remains inconclusive. However, the pathophysiology of depression and mental stress in PCOS is linked to various changes, including physiological changes such as heightened activity of pro-inflammatory markers and the immune system during stress. PCOS is associated with other mental health-related conditions besides depression, such as anxiety, bipolar disorder, and obsessive–compulsive disorder. Management: Hirsutism and acne When appropriate (e.g., in women of child-bearing age who require contraception), a standard contraceptive pill is frequently effective in reducing hirsutism. Progestogens such as norgestrel and levonorgestrel should be avoided due to their androgenic effects. Metformin combined with an oral contraceptive may be more effective than either metformin or the oral contraceptive on its own. Other medications with anti-androgen effects include flutamide and spironolactone, which can give some improvement in hirsutism. Metformin can reduce hirsutism, perhaps by reducing insulin resistance, and is often used if there are other features such as insulin resistance, diabetes, or obesity that should also benefit from metformin. Eflornithine (Vaniqa) is a medication that is applied to the skin in cream form, and acts directly on the hair follicles to inhibit hair growth. It is usually applied to the face. 5-alpha reductase inhibitors (such as finasteride and dutasteride) may also be used; they work by blocking the conversion of testosterone to dihydrotestosterone (the latter of which is responsible for most hair growth alterations and androgenic acne). Management: Although these agents have shown significant efficacy in clinical trials (for oral contraceptives, in 60–100% of individuals), the reduction in hair growth may not be enough to eliminate the social embarrassment of hirsutism, or the inconvenience of plucking or shaving. Individuals vary in their response to different therapies. It is usually worth trying other medications if one does not work, but medications do not work well for all individuals. Management: Menstrual irregularity If fertility is not the primary aim, then menstruation can usually be regulated with a contraceptive pill. The purpose of regulating menstruation, in essence, is for the woman's convenience, and perhaps her sense of well-being; there is no medical requirement for regular periods, as long as they occur sufficiently often. If a regular menstrual cycle is not desired, then therapy for an irregular cycle is not necessarily required. Most experts say that, if a menstrual bleed occurs at least every three months, then the endometrium (womb lining) is being shed sufficiently often to prevent an increased risk of endometrial abnormalities or cancer. If menstruation occurs less often or not at all, some form of progestogen replacement is recommended. Management: Alternative medicine A 2017 review concluded that while both myo-inositol and D-chiro-inositol may regulate menstrual cycles and improve ovulation, there is a lack of evidence regarding effects on the probability of pregnancy. Reviews from 2012 and 2017 found that myo-inositol supplementation appears to be effective in improving several of the hormonal disturbances of PCOS. Myo-inositol reduces the amount of gonadotropins and the length of controlled ovarian hyperstimulation in women undergoing in vitro fertilization.
A 2011 review found not enough evidence to conclude any beneficial effect from D-chiro-inositol. There is insufficient evidence to support the use of acupuncture; current studies are inconclusive, and there is a need for additional randomized controlled trials. Treatment: As of 2023, PCOS has no cure. Treatment may involve lifestyle changes such as weight loss and exercise. Birth control pills may help with improving the regularity of periods, excess hair growth, and acne. Metformin and anti-androgens may also help. Other typical acne treatments and hair removal techniques may be used. Efforts to improve fertility include weight loss, metformin, and ovulation induction using clomiphene or letrozole. In vitro fertilization is used by some in whom other measures are not effective. Epidemiology: PCOS is the most common endocrine disorder among women between the ages of 18 and 44. It affects approximately 2% to 20% of this age group depending on how it is defined. When someone is infertile due to lack of ovulation, PCOS is the most common cause, and this can guide the diagnosis. The earliest known description of what is now recognized as PCOS dates from 1721 in Italy. Epidemiology: The prevalence of PCOS depends on the choice of diagnostic criteria. The World Health Organization estimates that it affects 116 million women worldwide as of 2010 (3.4% of women). Another estimate indicates that 7% of women of reproductive age are affected. Another study using the Rotterdam criteria found that about 18% of women had PCOS, and that 70% of them were previously undiagnosed. Prevalence also varies across countries due to a lack of large-scale scientific studies; India, for example, has a purported rate of 1 in 5 women having PCOS. There are few studies that have investigated the racial differences in cardiometabolic factors in women with PCOS. There is also limited data on the racial differences in the risk of metabolic syndrome and cardiovascular disease in adolescents and young adults with PCOS. The first study to comprehensively examine racial differences discovered notable racial differences in risk factors for cardiovascular disease. African American women were found to be significantly more obese, with a significantly higher prevalence of metabolic syndrome, compared to white adult women with PCOS. Further research into racial differences among women with PCOS is important to ensure that every woman affected by PCOS has the resources available for management. Ultrasonographic findings of polycystic ovaries are found in 8–25% of women unaffected by the syndrome. 14% of women on oral contraceptives are found to have polycystic ovaries. Ovarian cysts are also a common side effect of levonorgestrel-releasing intrauterine devices (IUDs). History: The condition was first described in 1935 by American gynecologists Irving F. Stein, Sr. and Michael L. Leventhal, from whom its original name of Stein–Leventhal syndrome is taken. Stein and Leventhal first described PCOS as an endocrine disorder in the United States, and since then, it has become recognized as one of the most common causes of oligo-ovulatory infertility among women. The earliest published description of a person with what is now recognized as PCOS was in 1721 in Italy. Cyst-related changes to the ovaries were described in 1844.
Etymology: Other names for this syndrome include polycystic ovarian syndrome, polycystic ovary disease, functional ovarian hyperandrogenism, ovarian hyperthecosis, sclerocystic ovary syndrome, and Stein–Leventhal syndrome. The last, eponymous option is the original name; it is now used, if at all, only for the subset of women with all the symptoms of amenorrhea with infertility, hirsutism, and enlarged polycystic ovaries. Most common names for this disease derive from a typical finding on medical images, called a polycystic ovary. A polycystic ovary has an abnormally large number of developing eggs visible near its surface, looking like many small cysts. Society and culture: In 2005, 4 million cases of PCOS were reported in the US, costing $4.36 billion in healthcare costs. In 2016, 0.1% of the National Institutes of Health's research budget of $32.3 billion for that year was spent on PCOS research. Among those aged between 14 and 44, PCOS is conservatively estimated to cost $4.37 billion per year. Compared with women in the general population, women with PCOS experience higher rates of depression and anxiety. International guidelines and Indian guidelines suggest psychosocial factors should be considered in women with PCOS, as well as screenings for depression and anxiety. Globally, this aspect has been increasingly focused on because it reflects the true impact of PCOS on the lives of patients. Research shows that PCOS adversely impacts a patient's quality of life. Society and culture: Public figures A number of celebrities and public figures have spoken about their experiences with PCOS, including Victoria Beckham, Maci Bookout, Frankie Bridge, Harnaam Kaur, Jaime King, Chrisette Michele, Lea Michele, Keke Palmer, Sasha Pieterse, Daisy Ridley, Romee Strijd, and Lee Tilghman.
**Light Crusader** Light Crusader: Light Crusader is an action-adventure game developed by Treasure and published by Sega for their Sega Genesis console in 1995. The game was included in the Sega Genesis Classics collections on Steam and other platforms in 2011. It was also included on the Sega Genesis Mini in North America and the Sega Mega Drive Mini in PAL regions. It is similar in gameplay to Landstalker, blending role-playing video game, action-adventure and platform video game elements in much the same way. Gameplay: The game is played from an isometric viewpoint. Players can move freely, jump, and push objects. They can execute simple sword slashes, use four magic elements in different combinations, and use items for various effects. Gameplay is a mix of action, puzzle solving, and platforming for the most part, with the usual role-playing staples like towns, shops, equipment, and spellcasting. The player controls Sir David as he travels through an assortment of dungeons, battling creatures such as "slime", solving puzzles to advance and saving those who were kidnapped. An auto-map feature keeps the focus on action and single-room puzzles, rather than mazes or labyrinths. Plot: Sir David is invited to visit Green Row after a recent journey. He had not been there for a long time and was eager to return. However, the king informs David that townspeople have been disappearing. The king asks him to search for the missing people. After finding a hidden stairway in the graveyard, he discovers a large dungeon of many floors underneath the town. Plot: In the dungeon, as he begins to find the missing people, he gradually learns the story of an evil wizard named Ragno Roke, who was angered by the queen's rejection of his marriage proposal. As revenge, Lord Roke has planned to use the kidnapped townspeople as a sacrifice to reawaken the evil demon Ramiah, sealed long ago in the dungeon. As David descends, he passes through a town of goblins, and a guild of wizards who have been operating in the dungeon. Plot: At the end of the game, David confronts both Roke and Ramiah. At this point the townspeople have been rescued, but Roke tells David that his own life would be sufficient to revive Ramiah and sacrifices himself, bringing Ramiah to life for a final battle with David. After a victory, David leaves on horseback. Development and release: Light Crusader was developed by Japanese studio Treasure as part of a partnership with Sega to develop products for the latter's Genesis console. This four-game deal also included Dynamite Headdy, Alien Soldier, and Yu Yu Hakusho Makyō Tōitsusen. Light Crusader was programmed by Kazuhiko Ishida with support from Keiji Fujitake and Treasure president Masato Maekawa. The game's graphics and art were provided by Hiroshi Iuchi, Makoto Ogino, Kaname Shindo, and Koichi Kimura. Katsuhiko Suzuki was the sound director, while Aki Hata and Satoshi Murata composed the music and sound effects respectively. Development and release: The project was announced in the spring of 1994 under the working title Relayer. Iuchi revealed that in its earliest stages, Light Crusader was planned as an action version of the classic RPG series Wizardry. The staff sought to improve the operability and enjoyment of pseudo-3D graphics afforded by the isometric viewpoint, but this presented challenges. Ishida said that it was difficult to program multiple joints in 3D, while Iuchi claimed that the three-quarters perspective interfered with the performance of the Genesis.
Development was delayed when the team started over from scratch at one point. Iuchi estimated that the final build of the game was only 30% complete by the end of 1994. Light Crusader was Treasure's last title to appear on the Genesis console. Throughout 1995, Sega published the game in Japan, North America, Europe, and Australia, while Samsung published it in South Korea. In the following decades, Light Crusader has been made available as both a stand-alone downloadable title and as part of several Genesis compilations. The game was released on the Wii Virtual Console in 2007; as part of the Sega Genesis Classics collection for Steam and home platforms beginning in 2011; on the North America Sega Genesis Mini and PAL region Sega Mega Drive Mini consoles in 2019; and finally on the Nintendo Switch Online + Expansion Pack in 2022. Reception: Mean Machines Sega praised the graphics and unique mixture of gameplay elements. They criticized the game for often being too easy and dull, and compared it unfavorably to Beyond Oasis (referred to by its European title, The Story of Thor) for longevity, but nonetheless gave it a very positive assessment, calling it "A superlative arcade adventure with great playability." The four reviewers of Electronic Gaming Monthly praised the graphics, but all but one of them gave the game an overall negative assessment, saying that the perspective severely hinders visibility, the combat is clunky, the lack of story makes the game less involving and creates difficulty figuring out where to go next, and there is too much of an emphasis on puzzles. Next Generation said that the game design reflected Treasure's experience with action games, but that the non-action elements such as the puzzles and storyline are overly shallow, and the isometric perspective creates control difficulties. They concluded, "Light Crusader is still one of the more exciting and graphically pleasing Genesis titles that has come out recently, but this is by no means a RPG." GamePro commented that the graphics and music are impressive in parts, but that the game is less challenging and complex than most RPGs, and that the player character maneuvers poorly, "with nowhere near the range or fluidity of movement of Ali in Beyond Oasis." However, they concluded, "In the end, Light Crusader gets a passing grade because of some cool bosses and interesting puzzle challenges." Hobby Consolas commended the pseudo-3D isometric visuals, gameplay, presentation and sound, stating that "Light Crusader fills an important void in the Mega Drive's role-playing game's library; the one that goes from pure role to adventure and nothing else."
**Mechanical–electrical analogies** Mechanical–electrical analogies: Mechanical–electrical analogies are the representation of mechanical systems as electrical networks. At first, such analogies were used in reverse to help explain electrical phenomena in familiar mechanical terms. James Clerk Maxwell introduced analogies of this sort in the 19th century. However, as electrical network analysis matured, it was found that certain mechanical problems could more easily be solved through an electrical analogy. Theoretical developments in the electrical domain that were particularly useful were the representation of an electrical network as an abstract topological diagram (the circuit diagram) using the lumped element model and the ability of network analysis to synthesise a network to meet a prescribed frequency function. Mechanical–electrical analogies: This approach is especially useful in the design of mechanical filters—these use mechanical devices to implement an electrical function. However, the technique can be used to solve purely mechanical problems, and can also be extended into other, unrelated, energy domains. Nowadays, analysis by analogy is a standard design tool wherever more than one energy domain is involved. It has the major advantage that the entire system can be represented in a unified, coherent way. Electrical analogies are particularly used by transducer designers (transducers by their nature cross energy domains) and in control systems, whose sensors and actuators will typically be domain-crossing transducers. A given system being represented by an electrical analogy may conceivably have no electrical parts at all. For this reason domain-neutral terminology is preferred when developing network diagrams for control systems. Mechanical–electrical analogies: Mechanical–electrical analogies are developed by finding relationships between variables in one domain that have a mathematical form identical to variables in the other domain. There is no one, unique way of doing this; numerous analogies are theoretically possible, but there are two analogies that are widely used: the impedance analogy and the mobility analogy. The impedance analogy makes force and voltage analogous, while the mobility analogy makes force and current analogous. By itself, that is not enough to fully define the analogy; a second variable must be chosen. A common choice is to make pairs of power conjugate variables analogous. These are variables which, when multiplied together, have units of power. In the impedance analogy, for instance, this results in force and velocity being analogous to voltage and current respectively. Mechanical–electrical analogies: Variations of these analogies are used for rotating mechanical systems, such as in electric motors. In the impedance analogy, instead of force, torque is made analogous to voltage. It is perfectly possible that both versions of the analogy are needed in, say, a system that includes rotating and reciprocating parts, in which case a force-torque analogy is required within the mechanical domain and a force-torque-voltage analogy to the electrical domain. Another variation is required for acoustical systems; here pressure and voltage are made analogous (impedance analogy). In the impedance analogy, the ratio of the power conjugate variables is always a quantity analogous to electrical impedance. For instance, force/velocity is mechanical impedance.
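As a concrete illustration of the impedance analogy (a standard textbook correspondence rather than an example taken from a particular source), the equation of motion of a mass-spring-damper driven by a force has the same mathematical form as Kirchhoff's voltage law for a series RLC circuit driven by a voltage source, with force playing the role of voltage and velocity the role of current. Here m, c and k are the mass, damping coefficient and spring stiffness, and L, R and C the inductance, resistance and capacitance (symbols assumed for the sketch, not taken from the text):

```latex
% Impedance analogy: mechanical element equation vs. series RLC mesh equation
F(t) = m\frac{du}{dt} + c\,u + k\!\int u\,dt
\qquad\longleftrightarrow\qquad
V(t) = L\frac{di}{dt} + R\,i + \frac{1}{C}\!\int i\,dt
```

Term by term, mass corresponds to inductance, the damping coefficient to resistance, and compliance (the reciprocal of the stiffness k) to capacitance, so the mechanical impedance F/u of this combination, c + j(ωm − k/ω), has the same form as the electrical impedance R + j(ωL − 1/(ωC)).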
The mobility analogy does not preserve this analogy between impedances across domains, but it does have another advantage over the impedance analogy. In the mobility analogy the topology of networks is preserved, a mechanical network diagram has the same topology as its analogous electrical network diagram. Applications: Mechanical–electrical analogies are used to represent the function of a mechanical system as an equivalent electrical system by drawing analogies between mechanical and electrical parameters. A mechanical system by itself can be so represented, but analogies are of greatest use in electromechanical systems where there is a connection between mechanical and electrical parts. Analogies are especially useful in analysing mechanical filters. These are filters constructed of mechanical parts but designed to work in an electrical circuit through transducers. Circuit theory is well developed in the electrical domain in general and in particular there is a wealth of filter theory available. Mechanical systems can make use of this electrical theory in mechanical designs through a mechanical–electrical analogy.Mechanical–electrical analogies are useful in general where the system includes transducers between different energy domains. Another area of application is the mechanical parts of acoustic systems such as the pickup and tonearm of record players. This was of some importance in early phonographs where the audio is transmitted from the pickup needle to the horn through various mechanical components entirely without electrical amplification. Early phonographs suffered badly from unwanted resonances in the mechanical parts. It was found that these could be eliminated by treating the mechanical parts as components of a low-pass filter which has the effect of flattening out the passband.Electrical analogies of mechanical systems can be used just as a teaching aid, to help understand the behaviour of the mechanical system. In former times, up to about the early 20th century, it was more likely that the reverse analogy would be used; mechanical analogies were formed of the then little understood electrical phenomena. Forming an analogy: Electrical systems are commonly described by means of a circuit diagram. These are network diagrams that describe the topology of the electrical system using a specialised graph notation. The circuit diagram does not try to represent the true physical dimensions of the electrical components or their actual spatial relationship to each other. This is possible because the electrical components are represented as ideal lumped elements, that is, the element is treated as if it is occupying a single point (lumped at that point). Non-ideal components can be accommodated in this model by using more than one element to represent the component. For instance, a coil intended for use as an inductor has resistance as well as inductance. This can be represented on the circuit diagram as a resistor in series with an inductor. Thus, the first step in forming an analogy of a mechanical system is to describe it as a mechanical network in a similar way, that is, as a topological graph of ideal elements. Alternative, more abstract, representations to the circuit diagram are possible, for instance the bond graph. Forming an analogy: In an electrical network diagram, limited to linear systems, there are three passive elements: resistance, inductance, and capacitance; and two active elements: the voltage generator, and the current generator. 
The mechanical analogs of these elements can be used to construct a mechanical network diagram. What the mechanical analogs of these elements are depends on what variables are chosen to be the fundamental variables. There is a wide choice of variables that can be used, but most commonly used are a power conjugate pair of variables (described below) and the pair of Hamiltonian variables derived from these. There is a limit to the applicability of this lumped element model. The model works well if the components are small enough that the time taken for a wave to cross them is insignificant, or equivalently, if there is no significant phase difference in the wave on either side of the component. What counts as significant depends on how accurate the model is required to be, but a common rule of thumb is to require components to be smaller than one sixteenth of a wavelength. Since wavelength decreases with frequency, this puts an upper limit on the frequency that can be covered in this kind of design. This limit is much lower in the mechanical domain than the equivalent limit in the electrical domain. This is because the much higher propagation speeds in the electrical domain lead to longer wavelengths (mechanical vibrations in steel propagate at about 6,000 m/s, electromagnetic waves in common cable types propagate at about 2 × 10⁸ m/s). For instance, traditional mechanical filters are only made up to around 600 kHz (although MEMS devices can operate at much higher frequencies due to their very small size). In the electrical domain, on the other hand, the transition from the lumped element model to the distributed element model occurs in the hundreds of megahertz region. In some cases it is possible to continue using a topological network diagram even when components needing a distributed element analysis are present. In the electrical domain, a transmission line, a basic distributed element component, can be included in the model with the introduction of the additional element of electrical length. The transmission line is a special case because it is invariant along its length and hence the full geometry need not be modelled. Another way of dealing with distributed elements is to use a finite element analysis whereby the distributed element is approximated by a large number of small lumped elements. Just such an approach was used in one paper to model the cochlea of the human ear. Another condition required of electrical systems for the application of the lumped element model is that no significant fields exist outside the component, since these can couple to other unrelated components. However, these effects can often be modelled by introducing some virtual lumped elements called strays or parasitics. An analog of this in mechanical systems is vibration in one component being coupled to an unrelated component. Forming an analogy: Power conjugate variables The power conjugate variables are a pair of variables whose product is power. In the electrical domain the power conjugate variables chosen are invariably voltage (v) and current (i). Thus, the power conjugate variables in the mechanical domain are their analogs. However, this is not enough to make the choice of mechanical fundamental variables unique. The usual choice for a translational mechanical system is force (F) and velocity (u), but it is not the only choice.
A different pair may be more appropriate for a system with a different geometry, such as a rotational system. Even after the mechanical fundamental variables have been chosen, there is still not a unique set of analogs. There are two ways that the two pairs of power conjugate variables can be associated with each other in the analogy. For instance, the associations F with v and u with i can be made. However, the alternative associations u with v and F with i are also possible. This leads to two classes of analogies, the impedance analogies and the mobility analogies. These analogies are the dual of each other. The same mechanical network has analogs in two different electrical networks. These two electrical networks are the dual circuits of each other. Forming an analogy: Hamiltonian variables The Hamiltonian variables, also called the energy variables, are those variables r = (q, p) which are conjugate according to Hamilton's equations, ∂H/∂p = dq/dt and ∂H/∂q = −dp/dt. Further, the time derivatives of the Hamiltonian variables are the power conjugate variables. Forming an analogy: The Hamiltonian variables in the electrical domain are charge (q) and flux linkage (λ) because ∂H/∂q = −dλ/dt = −v (Faraday's law of induction) and ∂H/∂λ = dq/dt = i. In the translational mechanical domain the Hamiltonian variables are distance displacement (x) and momentum (p) because ∂H/∂x = −dp/dt = −F (Newton's second law of motion) and ∂H/∂p = dx/dt = u. There is a corresponding relationship for other analogies and sets of variables. The Hamiltonian variables are also called the energy variables. The integral of a power conjugate variable with respect to a Hamiltonian variable is a measure of energy. For instance, ∫F dx and ∫u dp are both expressions of energy. They can also be called generalised momentum and generalised displacement after their analogs in the mechanical domain. Some authors discourage this terminology because it is not domain neutral. Likewise, the use of the terms I-type and V-type (after current and voltage) is also discouraged. Classes of analogy: There are two principal classes of analogy in use. The impedance analogy (also called the Maxwell analogy) preserves the analogy between mechanical, acoustical and electrical impedance but does not preserve the topology of networks. The mechanical network is arranged differently to its analogous electrical network. The mobility analogy (also called the Firestone analogy) preserves network topologies at the expense of losing the analogy between impedances across energy domains. There is also the through and across analogy, also called the Trent analogy. The through and across analogy between the electrical and mechanical domain is the same as in the mobility analogy. However, the analogy between the electrical and acoustical domains is like the impedance analogy. Analogies between the mechanical and acoustical domain in the through and across analogy have a dual relationship with both the impedance analogy and mobility analogy. Different fundamental variables are chosen for mechanical translation and rotational systems, leading to two variants for each of the analogies. For instance, linear distance is the displacement variable in a translational system, but this is not so appropriate for rotating systems where angle is used instead. Acoustical analogies have also been included in the descriptions as a third variant. While acoustical energy is ultimately mechanical in nature, it is treated in the literature as an instance of a different energy domain, the fluid domain, and has different fundamental variables.
Analogies between all three domains – electrical, mechanical and acoustical – are required to fully represent electromechanical audio systems. Classes of analogy: Impedance analogies Impedance analogies, also called the Maxwell analogy, classify the two variables making up the power conjugate pair as an effort variable and a flow variable. The effort variable in an energy domain is the variable analogous to force in the mechanical domain. The flow variable in an energy domain is the variable analogous to velocity in the mechanical domain. Power conjugate variables in the analog domain are chosen to bear some resemblance to force and velocity. In the electrical domain, the effort variable is voltage and the flow variable is electrical current. The ratio of voltage to current is electrical resistance (Ohm's law). The ratio of the effort variable to the flow variable in other domains is also described as resistance. Oscillating voltages and currents give rise to the concept of electrical impedance when there is a phase difference between them. Impedance can be thought of as an extension to the concept of resistance. Resistance is associated with energy dissipation. Impedance encompasses energy storage as well as energy dissipation. Classes of analogy: The impedance analogy gives rise to the concept of impedance in other energy domains (but measured in different units). The translational impedance analogy describes mechanical systems moving in a single linear dimension and gives rise to the idea of mechanical impedance. The unit of mechanical impedance is the mechanical ohm; in SI units this is N-s/m, or kg/s. The rotational impedance analogy describes rotating mechanical systems and gives rise to the idea of rotational impedance. The unit of rotational impedance in the SI system is N-m-s/rad. The acoustical impedance analogy gives rise to the idea of acoustic impedance. The unit of acoustic impedance is the acoustic ohm; in SI units this is N-s/m⁵. Classes of analogy: Mobility analogies Mobility analogies, also called the Firestone analogy, are the electrical duals of impedance analogies. That is, the effort variable in the mechanical domain is analogous to current (the flow variable) in the electrical domain, and the flow variable in the mechanical domain is analogous to voltage (the effort variable) in the electrical domain. The electrical network representing the mechanical system is the dual network of that in the impedance analogy. The mobility analogy is characterised by admittance in the same way that the impedance analogy is characterised by impedance. Admittance is the algebraic inverse of impedance. In the mechanical domain, mechanical admittance is more usually called mobility. Classes of analogy: Through and across analogies Through and across analogies, also called the Trent analogy, classify the two variables making up the power conjugate pair as an across variable and a through variable. The across variable is a variable that appears across the two terminals of an element. The across variable is measured relative to the element terminals. The through variable is a variable that passes through, or acts through, an element; that is, it has the same value at both terminals of the element. The benefit of the through and across analogy is that when the through Hamiltonian variable is chosen to be a conserved quantity, Kirchhoff's node rule can be used, and the model will have the same topology as the real system.
Classes of analogy: Thus, in the electrical domain the across variable is voltage and the through variable is current. In the mechanical domain the analogous variables are velocity and force, as in the mobility analogy. In the acoustic system, pressure is an across variable because pressure is measured relative to the two terminals of an element, not as an absolute pressure. It is thus not analogous to force which is a through variable, even though pressure is in units of force per area. Forces act through an element; a rod with a force applied to the top will transmit the same force to an element connected to its bottom. Thus, in the through and across analogy the mechanical domain is analogous to the electrical domain like the mobility analogy, but the acoustical domain is analogous to the electrical domain like the impedance analogy. Classes of analogy: Other energy domains The electrical analogy can be extended to many other energy domains. In the field of sensors and actuators, and for control systems using them, it is a common method of analysis to develop an electrical analogy of the entire system. Since sensors can be sensing a variable in any energy domain, and likewise outputs from the system can be in any energy domain, analogies for all energy domains are required. The following table gives a summary of the most common power conjugate variables used to form analogies. Classes of analogy: It is perhaps more common in the thermal domain to choose temperature and thermal power as the fundamental variables because, unlike entropy, they can be measured directly. The concept of thermal resistance is based on this analogy. However, these are not power conjugate variables and are not fully compatible with the other variables in the table. An integrated electrical analogy across multiple domains that includes this thermal analogy will not correctly model energy flows.Similarly, the commonly seen analogy using mmf and magnetic flux as the fundamental variables, which gives rise to the concept of magnetic reluctance, does not correctly model energy flow. The variable pair mmf and magnetic flux is not a power conjugate pair. This reluctance model is sometimes called the reluctance-resistance model since it makes these two quantities analogous. The analogy shown in the table, which does use a power conjugate pair, is sometimes called the gyrator–capacitor model. Transducers: A transducer is a device that takes energy from one domain as input and converts it to another energy domain as output. They are often reversible, but are rarely used in that way. Transducers have many uses and there are many kinds, in electromechanical systems they can be used as actuators and sensors. In audio electronics they provide the conversion between the electrical and acoustical domains. The transducer provides the link between the mechanical and electrical domains and thus a network representation is required for it in order to develop a unified electrical analogy. To do this the concept of port from the electrical domain is extended into other domains.Transducers have (at least) two ports, one port in the mechanical domain and one in the electrical domain, and are analogous to electrical two-port networks. This is to be compared to the elements discussed so far which are all one-ports. Two-port networks can be represented as a 2×2 matrix, or equivalently, as a network of two dependent generators and two impedances or admittances. 
There are six canonical forms of these representations: impedance parameters, chain parameters, hybrid parameters and their inverses. Any of them can be used. However, the representation of a passive transducer converting between analogous variables (for instance an effort variable to another effort variable in the impedance analogy) can be simplified by replacing the dependent generators with a transformer.On the other hand, a transducer converting non-analogous power conjugate variables cannot be represented by a transformer. The two-port element in the electrical domain that does this is called a gyrator. This device converts voltages to currents and currents to voltages. By analogy, a transducer that converts non-analogous variables between energy domains is also called a gyrator. For instance, electromagnetic transducers convert current to force and velocity to voltage. In the impedance analogy such a transducer is a gyrator. Whether a transducer is a gyrator or a transformer is analogy related; the same electromagnetic transducer in the mobility analogy is a transformer because it is converting between analogous variables. History: James Clerk Maxwell developed very detailed mechanical analogies of electrical phenomena. He was the first to associate force with voltage (1873) and consequently is usually credited with founding the impedance analogy. This was the earliest mechanical–electrical analogy. However, the term impedance was not coined until 1886, long after Maxwell's death, by Oliver Heaviside. The idea of complex impedance was introduced by Arthur E. Kennelly in 1893, and the concept of impedance was not extended into the mechanical domain until 1920 by Kennelly and Arthur Gordon Webster.Maxwell's purpose in constructing this analogy was not to represent mechanical systems in terms of electrical networks. Rather, it was to explain electrical phenomena in more familiar mechanical terms. When George Ashley Campbell first demonstrated the use of loading coils to improve telephone lines in 1899, he calculated the distance needed between coils by analogy with the work of Charles Godfrey on mechanical lines loaded with periodic weights. As electrical phenomena became better understood the reverse of this analogy, using electrical analogies to explain mechanical systems, started to become more common. Indeed, the lumped element abstract topology of electrical analysis has much to offer problems in the mechanical domain, and other energy domains for that matter. By 1900 the electrical analogy of the mechanical domain was becoming commonplace. From about 1920 the electrical analogy became a standard analysis tool. Vannevar Bush was a pioneer of this kind of modelling in his development of analogue computers, and a coherent presentation of this method was presented in a 1925 paper by Clifford A. Nickle.The application of electrical network analysis, most especially the newly developed field of filter theory, to mechanical and acoustic systems led to huge improvements in performance. According to Warren P. Mason the efficiency of ship electric foghorns grew from less than one per cent to 50 per cent. The bandwidth of mechanical phonographs grew from three to five octaves when the mechanical parts of the sound transmission were designed as if they were the elements of an electric filter (see also Mechanical filter § Sound reproduction). 
Remarkably, the conversion efficiency was improved at the same time (the usual situation with amplifying systems is that gain can be traded for bandwidth such that the gain-bandwidth product remains constant). In 1933 Floyd A. Firestone proposed a new analogy, the mobility analogy, in which force is analogous to current instead of voltage. Firestone introduced the concept of across and through variables in this paper and presented a structure for extending the analogy into other energy domains. A variation of the force-current analogy was proposed by Horace M. Trent in 1955 and it is this version that is generally meant by the through and across analogy. Trent used a linear graph method of representing networks which has resulted in the force-current analogy historically being associated with linear graphs. The force-voltage analogy is historically used with bond graph representations, introduced in 1960 by Henry Paynter; however, it is possible to use either analogy with either representation if desired.
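The one-sixteenth-of-a-wavelength rule of thumb and the propagation speeds quoted earlier can be turned into a quick estimate of the highest frequency at which a component of a given size can still be treated as a lumped element. The sketch below simply evaluates f = v / (16 d); the 10 mm component size is an illustrative assumption, not a figure from the text.

```python
def max_lumped_frequency(propagation_speed_m_s: float, component_size_m: float) -> float:
    """Highest frequency (Hz) at which a component of the given size stays smaller
    than one sixteenth of a wavelength (wavelength = speed / frequency)."""
    return propagation_speed_m_s / (16.0 * component_size_m)

# Mechanical domain: vibrations in steel at about 6,000 m/s, 10 mm component (assumed size)
print(max_lumped_frequency(6_000, 0.01))   # 37500.0 Hz, i.e. tens of kilohertz

# Electrical domain: waves in common cable at about 2e8 m/s, same 10 mm component
print(max_lumped_frequency(2e8, 0.01))     # 1.25e9 Hz, i.e. over a gigahertz
```

The gap of several orders of magnitude between the two results reflects the ratio of the propagation speeds, which is why the lumped-element limit is so much lower in the mechanical domain (hundreds of kilohertz at most for traditional mechanical filters) than in the electrical domain (hundreds of megahertz).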
**Society of Toxicology** Society of Toxicology: The Society of Toxicology (SOT) is a learned society (professional association) based in the United States that supports scientific inquiry in the field of toxicology. Goals: The SOT is committed to creating a safer and healthier world by advancing the science of toxicology. The Society promotes the acquisition and utilization of knowledge in toxicology, aids in the protection of public health, and facilitates disciplines. SOT's definition of toxicology is 'the study of the adverse effects of chemical, physical or biological agents on living organisms and the ecosystem, including the prevention and amelioration of such adverse effects.' The society organizes an annual meeting (usually in the early spring) and several smaller colloquia via its special interest sections and groups. It publishes the journal Toxicological Sciences, as well as public position papers and guidelines on conflicts of interest in toxicology. Membership: Full membership of the society is restricted to people with significant published work and/or professional experience in toxicology, and members are bound by a Code of Ethics. There are also several categories of associate and student membership for people who do not fulfill the professional requirements for full membership. The Society has more than 8,000 members from 70 different countries. Leadership: SOT is run by a team of full-time board members, called councilors, who manage the affairs of the Society. The councilors are elected by ballot by the full SOT members.
**Daisy (software)** Daisy (software): Daisy is a Java/XML open-source content management system based on the Apache Cocoon content management framework. Today, Daisy is in use at major corporations for intranet knowledge bases, product and/or project documentation, and management of content-rich websites. Content management system: The content is stored in so-called Daisy documents. These documents are managed by the Daisy Repository Server. Documents consist of parts. Parts can be anything from required blocks of text to specified fields with restricted content. By creating specific document types, different types of information can be handled differently. Simple documents just hold text and hyperlinks. By including a query in a document, it is easy to create documents that aggregate other documents. Content management system: Each document can have multiple variants. A variant can be a version or a translated document (language variant). Variants can be used to mark specific versions, e.g., all documents referring to version XYZ of the software described. Editing of Daisy documents is supported with a WYSIWYG Wiki-like editing environment. Site navigation trees can be made more dynamic using queries generating navigation hierarchies. Daisy is hierarchy-free and has a clear separation between repository server and front-end application. This allows for easy extension of the functionality. Other features are: revision control; a centralized ACL system; Jakarta Lucene based full-text indexing; book publishing, which allows for the generation of nicely formatted books with table of contents, section numbering, cross-referencing, footnotes and index; and faceted browsing. The Daisy 2.0 version added JBoss jBPM-based workflow. Requirements: The packaged versions of Daisy 2.2 include everything required to run Daisy, except for: a Java Virtual Machine (JVM), Java 1.5 or higher; and a MySQL database, version 4.1.7 or higher (5 also fine). Daisy can work with other databases, such as PostgreSQL, but only MySQL is supported. Daisy is thoroughly tested on Linux, Mac OS X and Windows NT/2000/XP, but should also run on other Unix operating systems such as Solaris. Additionally, Daisy displays properly in most major browsers: Internet Explorer and Mozilla/Firefox, with fallback to a text area on other browsers. Outerthought: Outerthought is an Open Source Java & XML company. Outerthought supports Daisy and provides support to its community of users and contributors. Documentation: The documentation for Daisy runs on Daisy itself, and can be viewed online as HTML or downloaded in PDF as a "Daisy book".
**Southeast Asian ovalocytosis** Southeast Asian ovalocytosis: Southeast Asian ovalocytosis is a blood disorder that is similar to, but distinct from, hereditary elliptocytosis. It is common in some communities in Malaysia and Papua New Guinea, as it confers some resistance to cerebral falciparum malaria. Pathophysiology: Southeast Asian ovalocytosis (SAO) is a hereditary haemolytic anaemia in which the red blood cells are oval-shaped. The primary defect in SAO differs significantly from other forms of elliptocytosis in that it is a defect in the gene coding for a protein that is not directly involved in the cytoskeleton scaffolding of the cell. Rather, the defect lies in a protein known as the band 3 protein, which lies in the cell membrane itself. The band 3 protein normally binds to another membrane-bound protein called ankyrin, but in SAO this bond is stronger than normal. Other abnormalities include tighter tethering of the band 3 protein to the cell membrane, increased tyrosine phosphorylation of the band 3 protein, reduced sulfate anion transport through the cell membrane, and more rapid ATP consumption. These (and probably other) consequences of the SAO mutations lead to the following erythrocyte abnormalities: a greater robustness of cells to a variety of external forces, including reduced cellular sensitivity to osmotic pressures, reduced fragility related to temperature change, greater general rigidity of the cell membrane, and loss of sensitivity to substances that cause spiculation of cells; reduced anion exchange; partial intracellular depletion of ATP; and a reduction in expression of multiple antigens. These changes are thought to give rise to the scientifically and clinically interesting phenomenon that those with SAO exhibit: a marked in vivo resistance to infection by the causative pathogen of malaria, Plasmodium falciparum. Unlike those with the Leach phenotype of common hereditary elliptocytosis (see above), there is a clinically significant reduction in both disease severity and prevalence of malaria in those with SAO. Because of this, the 35% incidence rate of SAO along the north coast of Madang Province in Papua New Guinea, where malaria is endemic, is a good example of natural selection. The reasons behind the resistance to malaria become clear when one considers the way in which Plasmodium falciparum invades its host. This parasite is an obligate intracellular parasite, which must enter the cells of the host it is invading. The band 3 proteins aggregate on the cell membrane at the site of entry, forming a circular orifice that the parasite squeezes through. These band 3 proteins act as receptors for the parasite. Normally a process much like endocytosis occurs, and the parasite is able to isolate itself from the intracellular proteins that are toxic to it while still being inside an erythrocyte (see figure 2). The increased rigidity of the erythrocyte membrane in SAO is thought to reduce the capacity of the band 3 proteins to cluster together, thereby making it more difficult for the malaria parasite to properly attach to and enter the cell. The reduced free ATP within the cell has been postulated as a further mechanism by which SAO creates a hostile environment for Plasmodium falciparum. Diagnosis: Diagnosis is based on the presence of ovalocytes on a peripheral blood smear in the absence of haemolysis, and the condition should be differentiated from other forms of hereditary elliptocytosis and from hereditary spherocytosis.
Genetic assays such as PCR amplification may be used to confirm mutation of the SLC4A1 gene. Treatment: Homozygous SAO appears to be largely incompatible with life, although there have been reports of individuals surviving to adolescence with prompt intervention. Patients with heterozygous SAO are largely asymptomatic or may present with only a compensated haemolytic anaemia; hence treatment is generally not necessary. Patients with severe haemolytic anaemia may require splenectomy.
**Bomb pulse** Bomb pulse: The bomb pulse is the sudden increase of carbon-14 (14C) in the Earth's atmosphere due to the hundreds of aboveground nuclear bomb tests that started in 1945 and intensified after 1950 until 1963, when the Limited Test Ban Treaty was signed by the United States, the Soviet Union and the United Kingdom. These hundreds of blasts were followed by a doubling of the relative concentration of 14C in the atmosphere. We speak of “relative concentration” because measurements of 14C levels by mass spectrometers are most accurately made by comparison to another carbon isotope, often the common isotope 12C. Isotope abundance ratios are not only more easily measured; they are also what 14C carbon daters want, since it is the fraction of carbon in a sample that is 14C, not the absolute concentration, that is of interest in dating measurements. The figure shows how the fraction of carbon in the atmosphere that is 14C, of order only a part per trillion, has changed over the past several decades following the bomb tests. Because 12C concentration has increased by about 30% over the past fifty years, the fact that “pMC”, measuring the isotope ratio, has returned (almost) to its 1955 value means that the 14C concentration in the atmosphere remains some 30% higher than it once was. Carbon-14, the radioisotope of carbon, is naturally produced in trace amounts in the atmosphere and can be detected in all living organisms. Carbon of all types is continually used to form the molecules of the cells of organisms. The doubling of the concentration of 14C in the atmosphere is reflected in the tissues and cells of all organisms that lived around the period of nuclear testing. This property has many applications in the fields of biology and forensics. Background: The radioisotope carbon-14 is constantly formed from nitrogen-14 (14N) in the higher atmosphere by incoming cosmic rays which generate neutrons. These neutrons collide with 14N to produce 14C, which then combines with oxygen to form 14CO2. This radioactive CO2 spreads through the lower atmosphere and the oceans, where it is absorbed by the plants and the animals that eat the plants. The radioisotope 14C thus becomes part of the biosphere, so that all living organisms contain a certain amount of 14C. Nuclear testing caused a rapid increase in atmospheric 14C (see figure), since the explosion of an atomic bomb also creates neutrons which collide again with 14N and produce 14C. Since the ban on nuclear testing in 1963, the atmospheric 14C relative concentration has been slowly decreasing at a pace of about 4% annually. This continuous decrease permits scientists to determine, among other things, the age of deceased people and allows them to study cell activity in tissues. By measuring the amount of 14C in a population of cells and comparing that to the amount of 14C in the atmosphere during or after the bomb pulse, scientists can estimate when the cells were created and how often they've turned over since then. Difference with classical radiocarbon dating: Radiocarbon dating has been used since 1946 to determine the age of organic material as old as 50,000 years. As the organism dies, the exchange of 14C with the environment ceases and the incorporated 14C decays. Given the steady decay of radioisotopes (the half-life of 14C is about 5,730 years), the relative amount of 14C left in the dead organism can be used to calculate how long ago it died. Bomb pulse dating should be considered a special form of carbon dating.
As discussed above and in the Radiolab episode Elements (section 'Carbon'), in bomb pulse dating the slow absorption of atmospheric 14C by the biosphere can be considered a chronometer. Starting from the pulse around the year 1963 (see figure), the atmospheric radiocarbon relative abundance decreased by about 4% a year. So in bomb pulse dating it is the relative amount of 14C in the atmosphere that is decreasing, and not the amount of 14C in a dead organism, as is the case in classical radiocarbon dating. This decrease in atmospheric 14C can be measured in cells and tissues and has permitted scientists to determine the age of individual cells and of deceased people. These applications are very similar to the experiments conducted with pulse-chase analysis, in which cellular processes are examined over time by exposing the cells to a labeled compound (pulse) and then to the same compound in an unlabeled form (chase). Radioactivity is a commonly used label in these experiments. An important difference between pulse-chase analysis and bomb-pulse dating is the absence of the chase in the latter. Difference with classical radiocarbon dating: Around the year 2030 the bomb pulse will die out. Organisms born after this will not bear detectable bomb pulse traces, and their cells cannot be dated in this way. Radioactive pulses cannot ethically be administered to people just to study the turnover of their cells, so the bomb pulse results may be considered a useful side effect of nuclear testing. Applications: The fact that cells and tissues reflect the doubling of 14C in the atmosphere during and after nuclear testing has been of great use for several biological studies, for forensics and even for the determination of the year in which a certain wine was produced. Applications: Biology Biological studies carried out by Kirsty Spalding demonstrated that neuronal cells are essentially static and do not regenerate during life. She also showed that the number of fat cells is set during childhood and adolescence. By considering the amount of 14C present in DNA, she could establish that 10% of fat cells are renewed annually. The radiocarbon bomb pulse has been used to validate otolith annuli (ages scored from otolith sections) across several fish species including the freshwater drum, lake sturgeon, pallid sturgeon, bigmouth buffalo, arctic salmonids, Pristipomoides filamentosus and several reef fishes, among numerous other validated freshwater and marine species. The precision for bomb radiocarbon age validation is typically within +/- 2 years because the rise period (1956-1960) is so steep. The bomb pulse has also been used to estimate (not validate) the age of Greenland sharks by measuring the incorporation of 14C in the eye lens during development. After having determined the age and measured the length of sharks born around the bomb pulse, it was possible to create a mathematical model in which length and age of the sharks were correlated, in order to deduce the age of the larger sharks. The study showed that the Greenland shark, with an age of 392 +/- 120 years, is the oldest known vertebrate. Applications: Forensics At the moment of death, carbon uptake ends. Because the bomb pulse 14C reflected in tissues with a rapid turnover diminishes at a rate of about 4% per year, it has been possible to establish the time of death of two women in a court case by examining such tissues.
Another important application has been the identification of victims of the 2004 Southeast Asian tsunami by examining their teeth. Applications: Carbon Transport Modeling The perturbation in atmospheric 14C from the bomb testing was an opportunity to validate atmospheric transport models, and to study the movement of carbon between the atmosphere and oceanic or terrestrial sinks. Applications: Other Atmospheric bomb 14C has also been used to validate tree ring ages and to date recent trees that have no annual growth rings.
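The arithmetic behind this chronometer is simple enough to sketch. The toy model below is a simplification: real bomb-pulse work calibrates against measured atmospheric 14C records rather than a fixed decay rate. It assumes the roughly 4% annual decline from the 1963 peak described above and inverts it to estimate when a tissue sample formed; all constants and names are illustrative.

```python
# Toy bomb-pulse dating sketch (illustrative only): assume the excess 14C
# above the pre-bomb baseline declines by ~4% per year after the 1963 peak,
# then invert that curve to estimate a sample's formation year.
import math

PEAK_YEAR = 1963.0
PEAK_EXCESS = 1.0        # excess 14C at the peak, in relative units
ANNUAL_DECLINE = 0.04    # ~4% per year, as described above

def atmospheric_excess(year: float) -> float:
    """Modelled excess 14C in the atmosphere for a year after the peak."""
    return PEAK_EXCESS * (1.0 - ANNUAL_DECLINE) ** (year - PEAK_YEAR)

def estimate_formation_year(measured_excess: float) -> float:
    """Invert the decline curve to estimate when a tissue sample formed."""
    return PEAK_YEAR + math.log(measured_excess / PEAK_EXCESS) / math.log(1.0 - ANNUAL_DECLINE)

if __name__ == "__main__":
    # A sample whose 14C excess is 30% of the 1963 peak would, under this
    # toy model, have formed around 1992-1993.
    print(round(estimate_formation_year(0.30), 1))
    print(round(atmospheric_excess(1993.0), 2))   # roughly 0.29, for comparison
```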
**DSSAM Model** DSSAM Model: The DSSAM Model (Dynamic Stream Simulation and Assessment Model) is a computer simulation developed for the Truckee River to analyze water quality impacts from land use and wastewater management decisions in the Truckee River Basin. This area includes the cities of Reno and Sparks, Nevada as well as the Lake Tahoe Basin. The model is historically and alternatively called the Earth Metrics Truckee River Model. Since original development in 1984-1986 under contract to the U.S. Environmental Protection Agency (EPA), the model has been refined and successive versions have been dubbed DSSAM II and DSSAM III. This hydrology transport model is based upon a pollutant loading metric called Total maximum daily load (TMDL). The success of this flagship model contributed to the Agency's broadened commitment to the use of the underlying TMDL protocol in its national policy for management of most river systems in the United States. The Truckee River has a length of over 115 miles (185 km) and drains an area of approximately 3120 square miles, not counting the extent of its Lake Tahoe sub-basin. The DSSAM model establishes numerous stations along the entire river extent as well as a considerable number of monitoring points inside the Great Basin's Pyramid Lake, the receiving waters of this closed hydrological system. Although the region is sparsely populated, it is important because Lake Tahoe is visited by 20 million persons per annum and Truckee River water quality affects at least two endangered species: the Cui-ui sucker fish and the Lahontan cutthroat trout. Development history: Impetus to derive a quantitative prediction model arose from a trend of historically decreasing river flow rates coupled with jurisdictional and tribal conflicts over water rights as well as concern for river biota. When expansion of the Reno-Sparks Wastewater Treatment Plant was proposed, the EPA decided to fund a large scale research effort to create simulation software and a parallel program to collect field data in the Truckee River and Pyramid Lake. For river stations, water quality measurements were made in the benthic zone as well as the photic zone; in the case of Pyramid Lake, boats were used to collect grab samples at varying depths and locations. Earth Metrics conducted the software development for the first generation computer model and collected field data on water quality and flow rates in the Truckee River. After model calibration, runs were made to evaluate impacts of alternative land use controls and discharge parameters for treated effluent. Development history: The DSSAM Model is constructed to allow dynamic decay of most pollutants; for example, total nitrogen and phosphorus are allowed to be consumed by benthic algae in each time step, and the algal communities are given a separate population dynamic in each river reach (e.g. metabolic rate based upon river temperature). Sources throughout the watershed include non-point agricultural and urban stormwater as well as a multiplicity of point source discharges of treated municipal wastewater effluent. Development history: Subsequent to the first generation of DSSAM model development, calibration and application, later refinements were made. These augmentations to model functionality focussed on increased flexibility in modeling the diel cycle and also allowed the analysis of particulate nitrogen and phosphorus to be included. In developing DSSAM III, several changes in the model's operation and scope were made.
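As a schematic illustration of the per-reach, per-time-step bookkeeping described above (this is not DSSAM itself, and every coefficient and name below is hypothetical), a single river reach can be stepped forward in time with a first-order pollutant decay term plus a temperature-dependent algal uptake term:

```python
# Hypothetical single-reach, single-nutrient time-step sketch, loosely in the
# spirit of the dynamic decay / algal consumption described in the text.
from dataclasses import dataclass

@dataclass
class Reach:
    nitrogen: float      # mg/L total nitrogen in the reach
    algae: float         # relative benthic algal biomass
    temperature: float   # deg C, drives the algal metabolic rate

def step(reach: Reach, dt_days: float = 1.0) -> Reach:
    """Advance one time step: first-order decay plus algal uptake and growth."""
    decay_rate = 0.10                                   # 1/day, first-order loss (illustrative)
    uptake_per_algae = 0.02                             # mg/L removed per unit biomass per day
    growth_rate = 0.05 * (reach.temperature / 20.0)     # warmer water, faster growth

    uptake = min(reach.nitrogen, uptake_per_algae * reach.algae * dt_days)
    nitrogen = reach.nitrogen * (1.0 - decay_rate * dt_days) - uptake
    algae = reach.algae * (1.0 + growth_rate * dt_days)
    return Reach(max(nitrogen, 0.0), algae, reach.temperature)

if __name__ == "__main__":
    r = Reach(nitrogen=2.0, algae=1.0, temperature=15.0)
    for day in range(5):
        r = step(r)
        print(f"day {day + 1}: N = {r.nitrogen:.3f} mg/L, algae = {r.algae:.3f}")
```

The real model of course tracks many reaches, many constituents and a full diel cycle, but the structure of each update is of this general form.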
Applications: Numerous different uses of the model have been made, including (a) analysis of public policies for urban stormwater runoff, (b) research into agricultural methods for surface runoff minimization, (c) innovative solutions for non-point source control and (d) engineering aspects of treated wastewater discharge. Regarding stormwater runoff in Washoe County, the specific elements within a new xeriscape ordinance were analyzed for efficacy using the model. For the varied agricultural uses in the watershed, the model was run to understand the principal sources of adverse impact, and management practices were developed to reduce in-river pollution. Use of the model has specifically been conducted to analyze survival of two endangered species found in the Truckee River and Pyramid Lake: the Cui-ui sucker fish (endangered 1967) and the Lahontan cutthroat trout (threatened 1970). When the model is used for surface runoff reaching a stream, this pollutant input can be viewed as a line source (i.e., a continuous linear source of pollution entering the waterway).
**Steelman language requirements** Steelman language requirements: The Steelman language requirements were a set of requirements which a high-level general-purpose programming language should meet, created by the United States Department of Defense in The Department of Defense Common High Order Language program in 1978. The predecessors of this document were called, in order, "Strawman", "Woodenman", "Tinman" and "Ironman". The requirements focused on the needs of embedded computer applications, and emphasised reliability, maintainability, and efficiency. Notably, they included exception handling facilities, run-time checking, and parallel computing. It was concluded that no existing language met these criteria to a sufficient extent, so a contest was called to create a language that would be closer to fulfilling them. The design that won this contest became the Ada programming language. The resulting language followed the Steelman requirements closely, though not exactly. The Ada 95 revision of the language went beyond the Steelman requirements, targeting general-purpose systems in addition to embedded ones, and adding features supporting object-oriented programming.
**Tetracenomycin C** Tetracenomycin C: Tetracenomycin C is an antitumor anthracycline-like antibiotic produced by Streptomyces glaucescens GLA.0. Tetracenomycin C: The pale-yellow antibiotic is active against some gram-positive bacteria, especially against streptomycetes. Gram-negative bacteria and fungi are not inhibited. Considering the differences in biological activity and the functional groups of the molecule, tetracenomycin C is not a member of the tetracycline or anthracyclinone group of antibiotics. Tetracenomycin C is notable for its broad activity against actinomycetes. As in other anthracycline antibiotics, the framework is synthesized by a polyketide synthase and subsequently modified by other enzymes. Structure and properties: The structure of tetracenomycin C was established by chemical and spectroscopic methods. The three hydroxy groups, at C-4, C-4a, and C-12a, are cis to each other. The two at C-4a and C-12a are involved in intramolecular hydrogen bonding to the carbonyl oxygen atoms at C-5 and C-1, respectively. The carboxymethyl group at C-9 is almost perpendicular to the planar rings C and D. The crystal packing is stabilized by intermolecular hydrogen bonds with participation of methanol molecules. Biosynthesis: As in other anthracycline antibiotics, the framework is synthesized by a polyketide synthase and subsequently modified by other enzymes. Early studies of tetracenomycin C biosynthesis utilized mutants that were blocked in its production to describe many of the pathway's intermediates. Complementation of the mutations allowed the cloning of a large gene cluster that included all of the genes required for production, as well as resistance genes. Transformation of the cluster into heterologous streptomycete hosts like Streptomyces lividans resulted in the overproduction of several intermediates of the pathway. Sequence analysis of the polyketide synthase genes showed that they included two β-ketoacyl synthases (tcmK and tcmL), an acyl carrier protein (tcmM), and several cyclases. Streptomyces glaucescens protects itself from the deleterious effects of tetracenomycin C by the action of the tcmA and tcmR gene products. TcmA has several transmembrane loops and is believed to act as a tetracenomycin C exporter. Its expression is controlled by the TcmR repressor. TcmR binds to operator sites in the tcmA promoter. When tetracenomycin C is present, it binds to TcmR, releasing it from the DNA and initiating tcmA expression.
**Campigliaite** Campigliaite: Campigliaite is a copper and manganese sulfate mineral with a chemical formula of Cu4Mn(SO4)2(OH)6·4H2O. It has a chemical formula and crystal structure similar to those of niedermayrite, with the Cd(II) cation replaced by Mn(II). The formation of campigliaite is related to the oxidation of sulfide minerals to form sulfate solutions, with ilvaite associated with the presence of manganese. Campigliaite is a rare secondary mineral formed when metallic sulfide skarn deposits are oxidized. While there are several related associations, there is no abundant source for this mineral due to its rare process of formation. Based on its crystallographic data and chemical formula, campigliaite is placed in the devillite group and considered the manganese analogue of devillite. Campigliaite also belongs to the copper oxysalt minerals, in the subgroup characterized by M=M-T sheets. The infinite sheet structures that campigliaite has are characterized by strongly bonded polyhedral sheets, which are linked in the third dimension by weaker hydrogen bonds. History: S. Menchetti and C. Sabelli discovered campigliaite in one of the side tunnels of the Temperino Mine in Campiglia Marittima, Tuscany, Italy. The new mineral was named after its type locality and was approved by the Commission on New Minerals and Mineral Names of the International Mineralogical Association. Structure: When examined with a Weissenberg camera and a single-crystal diffractometer, campigliaite was found to be twinned, with (100) as its twinning plane. Campigliaite is monoclinic with a point group of 2 and a space group of C2. The infinite sheet structures that campigliaite has are characterized by strongly bonded sheets of polyhedra which are linked in the third dimension by the weaker hydrogen bonds. Physical properties: Campigliaite crystals are transparent with a vitreous luster. Crystals of campigliaite and gypsum are often mixed, but small amounts of pure crystals appear to grow on gypsum. Campigliaite crystals are flattened on {100} and elongated along [010]. These crystals are light or pale blue in color, but under transmitted light the crystal color changes to pale greenish blue. The measured density was 3.0 g cm−3 by the heavy-liquid method, but this value is uncertain due to the size of the crystals. The calculated density was 3.063 g cm−3, based on the empirical formula normalized to 7 H2O. Geologic occurrence: The formation of campigliaite is related to the oxidation of sulfide minerals to form sulfate solutions, with ilvaite associated with the presence of manganese. Campigliaite is closely associated with gypsum because of the geologic processes that form campigliaite. In addition to the type locality, campigliaite has also been reported from the Biccia Mine in Alessandria Province, Piedmont, Italy, and from the Deer Trail Mine of the Tushar Mountains of Piute County, Utah, US.
**Toll-like receptor 7** Toll-like receptor 7: Toll-like receptor 7, also known as TLR7, is a protein that in humans is encoded by the TLR7 gene. Orthologs are found in mammals and birds. It is a member of the toll-like receptor (TLR) family and detects single-stranded RNA. Function: The TLR family plays an important role in pathogen recognition and activation of innate immunity. TLRs are highly conserved from Drosophila to humans and share structural and functional similarities. They recognize pathogen-associated molecular patterns (PAMPs) that are expressed on infectious agents, and mediate the production of cytokines necessary for the development of effective immunity. The various TLRs exhibit different patterns of expression. This gene is predominantly expressed in lung, placenta, and spleen, and lies in close proximity to another family member, TLR8, on the human X chromosome. TLR7 recognizes single-stranded RNA in endosomes, which is a common feature of viral genomes that are internalised by macrophages and dendritic cells. TLR7 recognizes single-stranded RNA of viruses such as HIV and HCV. TLR7 can recognize GU-rich single-stranded RNA. However, the presence of GU-rich sequences in the single-stranded RNA is not sufficient to stimulate TLR7. Clinical significance: TLR7 has been shown to play a significant role in the pathogenesis of autoimmune disorders such as lupus as well as in the regulation of antiviral immunity. Although the pathway has not yet been fully elucidated, an unbiased genome-scale screen with short hairpin RNA (shRNA) demonstrated that the receptor TREML4 acts as an essential positive regulator of TLR7 signaling. In TREML4 -/- mice, macrophages are hyporesponsive to TLR7 agonists and fail to produce type I interferons, due to impaired phosphorylation of the transcription factor STAT1 by the mitogen-activated protein kinase p38 and decreased recruitment of the adaptor MYD88 to TLR7. TREML4 deficiency reduced the production of inflammatory cytokines and autoantibodies in MRL/lpr mice, suggesting that TLR7 is a vital component of antiviral immunity and a predisposing factor in the pathogenesis of rheumatic diseases such as systemic lupus erythematosus (SLE). A TLR7 agonist, imiquimod (Aldara), has been approved for topical use in treating warts caused by papillomavirus and for actinic keratosis. Due to their ability to induce robust production of anti-cancer cytokines such as interleukin-12, TLR7 agonists have been investigated for cancer immunotherapy. Recent examples include TMX-202 delivery via liposomal formulation, as well as the delivery of resiquimod via nanoparticles formed from beta-cyclodextrin. In July 2020, it was discovered that loss-of-function variants in TLR7 cause young male patients to become seriously ill after being infected by SARS-CoV-2. This suggests that TLR7 plays a key role in triggering the immune response in patients with COVID-19. For more details on the biological mechanism and pathway, see "Type I Interferon Induction and Signaling During SARS-CoV-2 Infection" on WikiPathways. In contrast, gain-of-function variation in TLR7 was shown in 2022 to cause systemic lupus erythematosus and neuromyelitis optica in humans.
**Chemical plant** Chemical plant: A chemical plant is an industrial process plant that manufactures (or otherwise processes) chemicals, usually on a large scale. The general objective of a chemical plant is to create new material wealth via the chemical or biological transformation and/or separation of materials. Chemical plants use specialized equipment, units, and technology in the manufacturing process. Other kinds of plants, such as polymer, pharmaceutical, food, and some beverage production facilities, power plants, oil refineries or other refineries, natural gas processing and biochemical plants, water and wastewater treatment, and pollution control equipment use many technologies that have similarities to chemical plant technology such as fluid systems and chemical reactor systems. Some would consider an oil refinery or a pharmaceutical or polymer manufacturer to be effectively a chemical plant. Chemical plant: Petrochemical plants (plants using chemicals from petroleum as a raw material or feedstock) are usually located adjacent to an oil refinery to minimize transportation costs for the feedstocks produced by the refinery. Speciality chemical and fine chemical plants are usually much smaller and not as sensitive to location. Tools have been developed for converting a base project cost from one geographic location to another. Chemical processes: Chemical plants use chemical processes, which are detailed industrial-scale methods, to transform feedstock chemicals into products. The same chemical process can be used at more than one chemical plant, with possibly differently scaled capacities at each plant. Also, a chemical plant at a site may be constructed to utilize more than one chemical process, for instance to produce multiple products. Chemical processes: A chemical plant commonly has large vessels or sections called units or lines that are interconnected by piping or other material-moving equipment which can carry streams of material. Such material streams can include fluids (gas or liquid carried in piping) or sometimes solids or mixtures such as slurries. An overall chemical process is commonly made up of steps called unit operations which occur in the individual units. A raw material going into a chemical process or plant as input to be converted into a product is commonly called a feedstock, or simply feed. In addition to feedstocks for the plant as a whole, an input stream of material to be processed in a particular unit can similarly be considered feed for that unit. Output streams from the plant as a whole are final products, and sometimes output streams from individual units may be considered intermediate products for their units. However, final products from one plant may be intermediate chemicals used as feedstock in another plant for further processing. For example, some products from an oil refinery may be used as feedstock in petrochemical plants, which may in turn produce feedstocks for pharmaceutical plants. Chemical processes: Either the feedstock(s), the product(s), or both may be individual compounds or mixtures. It is often not worthwhile separating the components in these mixtures completely; specific levels of purity depend on product requirements and process economics. Operations: Chemical processes may be run in continuous or batch operation. Chemical processes: Batch operation: In batch operation, production occurs in time-sequential steps in discrete batches.
A batch of feedstock(s) is fed (or charged) into a process or unit, then the chemical process takes place, then the product(s) and any other outputs are removed. Such batch production may be repeated again and again with new batches of feedstock. Batch operation is commonly used in smaller scale plants such as pharmaceutical or specialty chemicals production, for purposes of improved traceability as well as flexibility. Chemical processes: Continuous plants are usually used to manufacture commodity or petrochemicals while batch plants are more common in speciality and fine chemical production as well as pharmaceutical active ingredient (API) manufacture. Chemical processes: Continuous operation: In continuous operation, all steps are ongoing continuously in time. During usual continuous operation, the feeding and product removal are ongoing streams of moving material, which, together with the process itself, all take place simultaneously and continuously. Chemical plants or units in continuous operation are usually in a steady state or approximate steady state. Steady state means that quantities related to the process do not change as time passes during operation. Such constant quantities include stream flow rates, heating or cooling rates, temperatures, pressures, and chemical compositions at any given point (location). Continuous operation is more efficient in many large scale operations like petroleum refineries. It is possible for some units to operate continuously and others be in batch operation in a chemical plant; for example, see Continuous distillation and Batch distillation. The amount of primary feedstock or product per unit of time which a plant or unit can process is referred to as the capacity of that plant or unit. For example, the capacity of an oil refinery may be given in terms of barrels of crude oil refined per day; alternatively, chemical plant capacity may be given in tons of product produced per day. In actual daily operation, a plant (or unit) will operate at a percentage of its full capacity. Engineers typically assume 90% operating time for plants which work primarily with fluids, and 80% uptime for plants which primarily work with solids. Units and fluid systems: Specific unit operations are conducted in specific kinds of units. Although some units may operate at ambient temperature or pressure, many units operate at higher or lower temperatures or pressures. Vessels in chemical plants are often cylindrical with rounded ends, a shape which can be suited to hold either high pressure or vacuum. Chemical reactions can convert certain kinds of compounds into other compounds in chemical reactors. Chemical reactors may be packed beds and may have solid heterogeneous catalysts which stay in the reactors as fluids move through, or may simply be stirred vessels in which reactions occur. Since the surface of solid heterogeneous catalysts may sometimes become "poisoned" from deposits such as coke, regeneration of catalysts may be necessary. Fluidized beds may also be used in some cases to ensure good mixing. There can also be units (or subunits) for mixing (including dissolving), separation, heating, cooling, or some combination of these. For example, chemical reactors often have stirring for mixing and heating or cooling to maintain temperature. When designing plants on a large scale, heat produced or absorbed by chemical reactions must be considered. Some plants may have units with organism cultures for biochemical processes such as fermentation or enzyme production.
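Before moving on to the individual kinds of equipment, the capacity and uptime figures quoted above can be turned into a back-of-the-envelope annual throughput estimate. The sketch below is only illustrative; the 90% and 80% operating-time fractions are the rules of thumb mentioned in the text, and the example plant is hypothetical.

```python
# Minimal capacity/uptime arithmetic (illustrative only): nameplate capacity
# is a per-day figure, scaled by an assumed operating-time fraction.
def annual_throughput(capacity_per_day: float, handles_solids: bool = False) -> float:
    """Estimate yearly production from nameplate capacity and assumed uptime."""
    uptime = 0.80 if handles_solids else 0.90   # rules of thumb from the text
    return capacity_per_day * 365.0 * uptime

if __name__ == "__main__":
    # e.g. a hypothetical refinery unit rated at 100,000 barrels of crude per day
    print(f"{annual_throughput(100_000):,.0f} barrels/year")   # about 32.9 million
```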
Units and fluid systems: Separation processes include filtration, settling (sedimentation), extraction or leaching, distillation, recrystallization or precipitation (followed by filtration or settling), reverse osmosis, drying, and adsorption. Heat exchangers are often used for heating or cooling, including boiling or condensation, often in conjunction with other units such as distillation towers. There may also be storage tanks for storing feedstock, intermediate or final products, or waste. Storage tanks commonly have level indicators to show how full they are. There may be structures holding or supporting sometimes massive units and their associated equipment. There are often stairs, ladders, or other steps for personnel to reach points in the units for sampling, inspection, or maintenance. An area of a plant or facility with numerous storage tanks is sometimes called a tank farm, especially at an oil depot. Units and fluid systems: Fluid systems for carrying liquids and gases include piping and tubing of various diameter sizes, various types of valves for controlling or stopping flow, pumps for moving or pressurizing liquid, and compressors for pressurizing or moving gases. Vessels, piping, tubing, and sometimes other equipment at high or very low temperature are commonly covered with insulation for personnel safety and to maintain temperature inside. Fluid systems and units commonly have instrumentation such as temperature and pressure sensors and flow measuring devices at select locations in a plant. Online analyzers for chemical or physical property analysis have become more common. Solvents can sometimes be used to dissolve reactants or materials such as solids for extraction or leaching, to provide a suitable medium for certain chemical reactions to run, or so they can otherwise be treated as fluids. Chemical plant design: Today, the fundamental aspects of designing chemical plants are done by chemical engineers. Historically, this was not always the case and many chemical plants were constructed in a haphazard way before the discipline of chemical engineering became established. Chemical engineering was first established as a profession in the United Kingdom when the first chemical engineering course was given at the University of Manchester in 1887 by George E. Davis in the form of twelve lectures covering various aspects of industrial chemical practice. As a consequence George E. Davis is regarded as the world's first chemical engineer. Today chemical engineering is a profession and those professional chemical engineers with experience can gain "Chartered" engineer status through the Institution of Chemical Engineers. Chemical plant design: In plant design, typically less than 1 percent of ideas for new designs ever become commercialized. During this solution process, typically, cost studies are used as an initial screening to eliminate unprofitable designs. If a process appears profitable, then other factors are considered, such as safety, environmental constraints, controllability, etc. The general goal in plant design, is to construct or synthesize “optimum designs” in the neighborhood of the desired constraints.Many times chemists research chemical reactions or other chemical principles in a laboratory, commonly on a small scale in a "batch-type" experiment. Chemistry information obtained is then used by chemical engineers, along with expertise of their own, to convert to a chemical process and scale up the batch size or capacity. 
Commonly, a small chemical plant called a pilot plant is built to provide design and operating information before construction of a large plant. From data and operating experience obtained from the pilot plant, a scaled-up plant can be designed for higher or full capacity. After the fundamental aspects of a plant design are determined, mechanical or electrical engineers may become involved with mechanical or electrical details, respectively. Structural engineers may become involved in the plant design to ensure the structures can support the weight of the units, piping, and other equipment. Chemical plant design: The units, streams, and fluid systems of chemical plants or processes can be represented by block flow diagrams which are very simplified diagrams, or process flow diagrams which are somewhat more detailed. The streams and other piping are shown as lines with arrowheads showing the usual direction of material flow. In block diagrams, units are often simply shown as blocks. Process flow diagrams may use more detailed symbols and show pumps, compressors, and major valves. Likely values or ranges of material flow rates for the various streams are determined based on desired plant capacity using material balance calculations. Energy balances are also done based on heats of reaction, heat capacities, expected temperatures and pressures at various points to calculate amounts of heating and cooling needed in various places and to size heat exchangers. Chemical plant design can be shown in fuller detail in a piping and instrumentation diagram (P&ID) which shows all piping, tubing, valves, and instrumentation, typically with special symbols. Showing a full plant is often complicated in a P&ID, so often only individual units or specific fluid systems are shown in a single P&ID. Chemical plant design: In the plant design, the units are sized for the maximum capacity each may have to handle. Similarly, sizes for pipes, pumps, compressors, and associated equipment are chosen for the flow capacity they have to handle. Utility systems such as electric power and water supply should also be included in the plant design. Additional piping lines for non-routine or alternate operating procedures, such as plant or unit startups and shutdowns, may have to be included. Fluid systems design commonly includes isolation valves around various units or parts of a plant so that a section of a plant could be isolated in case of a problem such as a leak in a unit. If pneumatically or hydraulically actuated valves are used, a system of pressurizing lines to the actuators is needed. Any points where process samples may have to be taken should have sampling lines, valves, and access to them included in the detailed design. If necessary, provisions should be made for reducing high pressure or temperature of a sampling stream, such as a pressure reducing valve or sample cooler. Chemical plant design: Units and fluid systems in the plant including all vessels, piping, tubing, valves, pumps, compressors, and other equipment must be rated or designed to be able to withstand the entire range of pressures, temperatures, and other conditions which they could possibly encounter, including any appropriate safety factors. All such units and equipment should also be checked for materials compatibility to ensure they can withstand long-term exposure to the chemicals they will come in contact with.
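As a minimal, self-contained sketch of the steady-state material balance calculation mentioned above (stream names, flow rates and compositions are illustrative, not drawn from any real design), consider a single mixing unit: at steady state, the mass of each component entering must equal the mass leaving.

```python
# Steady-state material balance for a simple mixing unit (illustrative only):
# total mass in = total mass out, and the same holds per component.
def mix_streams(streams):
    """streams: list of (flow_rate, {component: mass_fraction}) tuples.
    Returns the combined outlet flow rate and outlet composition."""
    total_flow = sum(flow for flow, _ in streams)
    component_flows = {}
    for flow, composition in streams:
        for component, fraction in composition.items():
            component_flows[component] = component_flows.get(component, 0.0) + flow * fraction
    # convert component mass flows back to outlet mass fractions
    return total_flow, {c: m / total_flow for c, m in component_flows.items()}

if __name__ == "__main__":
    feed_a = (100.0, {"water": 0.90, "salt": 0.10})   # kg/h, hypothetical stream
    feed_b = (50.0, {"water": 0.60, "salt": 0.40})    # kg/h, hypothetical stream
    flow, comp = mix_streams([feed_a, feed_b])
    print(flow, comp)   # 150.0 kg/h, salt fraction 0.20
```

The same bookkeeping, extended with reaction and energy terms, is what sizes the streams and heat exchangers on a process flow diagram.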
Any closed system in a plant which has a means of pressurizing possibly beyond the rating of its equipment, such as heating, exothermic reactions, or certain pumps or compressors, should have an appropriately sized pressure relief valve included to prevent overpressurization for safety. Frequently all of these parameters (temperatures, pressures, flow, etc.) are exhaustively analyzed in combination through a Hazop or fault tree analysis, to ensure that the plant has no known risk of serious hazard. Chemical plant design: Within any constraints the plant is subject to, design parameters are optimized for good economic performance while ensuring safety and welfare of personnel and the surrounding community. For flexibility, a plant may be designed to operate in a range around some optimal design parameters in case feedstock or economic conditions change and re-optimization is desirable. In more modern times, computer simulations or other computer calculations have been used to help in chemical plant design or optimization. Plant operation: Process control In process control, information gathered automatically from various sensors or other devices in the plant is used to control various equipment for running the plant, thereby controlling operation of the plant. Instruments receiving such information signals and sending out control signals to perform this function automatically are process controllers. Previously, pneumatic controls were sometimes used. Electrical controls are now common. A plant often has a control room with displays of parameters such as key temperatures, pressures, fluid flow rates and levels, operating positions of key valves, pumps and other equipment, etc. In addition, operators in the control room can control various aspects of the plant operation, often including overriding automatic control. Process control with a computer represents more modern technology. Based on possible changing feedstock composition, changing products requirements or economics, or other changes in constraints, operating conditions may be re-optimized to maximize profit. Plant operation: Workers As in any industrial setting, there are a variety of workers working throughout a chemical plant facility, often organized into departments, sections, or other work groups. Such workers typically include engineers, plant operators, and maintenance technicians. Other personnel at the site could include chemists, management/administration and office workers. Types of engineers involved in operations or maintenance may include chemical process engineers, mechanical engineers for maintaining mechanical equipment, and electrical/computer engineers for electrical or computer equipment. Plant operation: Transport Large quantities of fluid feedstock or product may enter or leave a plant by pipeline, railroad tank car, or tanker truck. For example, petroleum commonly comes to a refinery by pipeline. Pipelines can also carry petrochemical feedstock from a refinery to a nearby petrochemical plant. Natural gas is a product which comes all the way from a natural gas processing plant to final consumers by pipeline or tubing. Large quantities of liquid feedstock are typically pumped into process units. Smaller quantities of feedstock or product may be shipped to or from a plant in drums. Use of drums about 55 gallons in capacity is common for packaging industrial quantities of chemicals. Smaller batches of feedstock may be added from drums or other containers to process units by workers. 
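Returning to the control loops described in the Process control subsection above, the following toy simulation shows the basic idea of a process controller: a measured value is compared with a setpoint, and the error drives a control signal. The plant model, gain and numbers are purely illustrative, and a real controller would typically also include integral and derivative action.

```python
# Toy feedback control loop (illustrative only): a proportional controller
# sets a heating valve opening from the error between setpoint and measurement.
def simulate(setpoint=350.0, steps=40):
    temperature = 300.0   # measured process value (K)
    gain = 0.05           # proportional gain (valve fraction per K of error)
    for _ in range(steps):
        error = setpoint - temperature
        valve = min(max(gain * error, 0.0), 1.0)        # control signal, clamped to 0..1
        # crude process response: heating raises temperature, losses pull it back
        temperature += 5.0 * valve - 0.02 * (temperature - 300.0)
    return temperature

if __name__ == "__main__":
    # Settles a little below the 350 K setpoint; the residual offset is
    # characteristic of purely proportional control.
    print(round(simulate(), 1))
```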
Plant operation: Maintenance In addition to feeding and operating the plant, and packaging or preparing the product for shipping, plant workers are needed for taking samples for routine and troubleshooting analysis and for performing routine and non-routine maintenance. Routine maintenance can include periodic inspections and replacement of worn catalyst, analyzer reagents, various sensors, or mechanical parts. Non-routine maintenance can include investigating problems and then fixing them, such as leaks, failure to meet feed or product specifications, mechanical failures of valves, pumps, compressors, sensors, etc. Plant operation: Statutory and regulatory compliance When working with chemicals, safety is a concern in order to avoid problems such as chemical accidents. In the United States, the law requires that employers provide workers working with chemicals with access to a Material Safety Data Sheet (MSDS) for every kind of chemical they work with. An MSDS for a certain chemical is prepared and provided by the supplier to whoever buys the chemical. Other laws covering chemical safety, hazardous waste, and pollution must be observed, including statutes such as the Resource Conservation and Recovery Act (RCRA) and the Toxic Substances Control Act (TSCA), and regulations such as the Chemical Facility Anti-Terrorism Standards in the United States. Hazmat (hazardous materials) teams are trained to deal with chemical leaks or spills. Process Hazard Analysis (PHA) is used to assess potential hazards in chemical plants. In 1998, the U.S. Chemical Safety and Hazard Investigation Board became operational. Plant facilities: The actual production or process part of a plant may be indoors, outdoors, or a combination of the two. It may be a traditional stick-built plant or a modular skid. Large modular skids are especially impressive feats of engineering. A modular skid is built including all of the modular equipment needed to do the same job a traditional stick-built plant may perform. However, the modular skid is built within a structural steel frame, allowing it to be shipped to the onsite location without needing to be rebuilt onsite. A modular skid build results in a higher functioning end product, as fewer hands are required in the onsite setup of the modular skid process unit, resulting in minimized risk for mishaps. The actual production section of a facility usually has the appearance of a rather industrial environment. Hard hats and work shoes are commonly worn. Floors and stairs are often made of metal grating, and there is practically no decoration. There may also be pollution control or waste treatment facilities or equipment. Sometimes existing plants may be expanded or modified based on changing economics, feedstock, or product needs. As in other production facilities, there may be shipping and receiving, and storage facilities. In addition, there are usually certain other facilities, typically indoors, to support production at the site. Plant facilities: Although some simple sample analysis may be done by operations technicians in the plant area, a chemical plant typically has a laboratory where chemists analyze samples taken from the plant. Such analysis can include chemical analysis or determination of physical properties. Sample analysis can include routine quality control on feedstock coming into the plant and on intermediate and final products, to ensure quality specifications are met.
Non-routine samples may also be taken and analyzed to investigate plant process problems. A larger chemical company often has a research laboratory for developing and testing products and processes, where there may be pilot plants, but such a laboratory may be located at a site separate from the production plants. Plant facilities: A plant may also have a workshop or maintenance facility for repairs or for keeping maintenance equipment. There is also typically some office space for engineers, management or administration, and perhaps for receiving visitors. The decor there is commonly more typical of an office environment. Clustering of commodity chemical plants: Chemical plants, particularly those used for commodity chemical and petrochemical manufacture, are located in relatively few manufacturing locations around the world, largely due to infrastructural needs. This is less important for speciality or fine chemical batch plants. Not all commodity/petrochemicals are produced in any one location, but groups of related materials often are, to induce industrial symbiosis as well as material, energy and utility efficiency and other economies of scale. These manufacturing locations often have business clusters of units called chemical plants that share utilities and large scale infrastructure such as power stations, port facilities, road and rail terminals. In the United Kingdom, for example, there are four main locations for commodity chemical manufacture: near the River Mersey in Northwest England, on the Humber on the East coast of Yorkshire, in Grangemouth near the Firth of Forth in Scotland and on Teesside as part of the Northeast of England Process Industry Cluster (NEPIC). Approximately 50% of the UK's petrochemicals, which are also commodity chemicals, are produced by the industry cluster companies on Teesside at the mouth of the River Tees on three large chemical parks at Wilton, Billingham and Seal Sands. Corrosion and use of new materials: Corrosion in chemical process plants is a major issue that consumes billions of dollars yearly. Electrochemical corrosion of metals is pronounced in chemical process plants due to the presence of acid fumes and other electrolytic interactions. Recently, FRP (fibre-reinforced plastic) has been used as a material of construction. The British standard specification BS4994 is widely used for design and construction of the vessels, tanks, etc.
**Schur algebra** Schur algebra: In mathematics, Schur algebras, named after Issai Schur, are certain finite-dimensional algebras closely associated with Schur–Weyl duality between general linear and symmetric groups. They are used to relate the representation theories of those two groups. Their use was promoted by the influential monograph of J. A. Green first published in 1980. The name "Schur algebra" is due to Green. In the modular case (over infinite fields of positive characteristic) Schur algebras were used by Gordon James and Karin Erdmann to show that the (still open) problems of computing decomposition numbers for general linear groups and symmetric groups are actually equivalent. Schur algebras were used by Friedlander and Suslin to prove finite generation of cohomology of finite group schemes. Construction: The Schur algebra $S_k(n,r)$ can be defined for any commutative ring $k$ and integers $n, r \geq 0$. Consider the algebra $k[x_{ij}]$ of polynomials (with coefficients in $k$) in $n^2$ commuting variables $x_{ij}$, $1 \leq i, j \leq n$. Denote by $A_k(n,r)$ the homogeneous polynomials of degree $r$. Elements of $A_k(n,r)$ are $k$-linear combinations of monomials formed by multiplying together $r$ of the generators $x_{ij}$ (allowing repetition). Thus $k[x_{ij}] = \bigoplus_{r \geq 0} A_k(n,r)$. Construction: Now, $k[x_{ij}]$ has a natural coalgebra structure with comultiplication $\Delta$ and counit $\varepsilon$, the algebra homomorphisms given on generators by $\Delta(x_{ij}) = \sum_{l} x_{il} \otimes x_{lj}$ and $\varepsilon(x_{ij}) = \delta_{ij}$ (Kronecker's delta). Since comultiplication is an algebra homomorphism, $k[x_{ij}]$ is a bialgebra. One easily checks that $A_k(n,r)$ is a subcoalgebra of the bialgebra $k[x_{ij}]$, for every $r \geq 0$. Construction: Definition. The Schur algebra (in degree $r$) is the algebra $S_k(n,r) = \mathrm{Hom}_k(A_k(n,r), k)$. That is, $S_k(n,r)$ is the linear dual of $A_k(n,r)$. It is a general fact that the linear dual of a coalgebra $A$ is an algebra in a natural way, where the multiplication in the algebra is induced by dualizing the comultiplication in the coalgebra. To see this, let $\Delta(a) = \sum a_i \otimes b_i$ and, given linear functionals $f$, $g$ on $A$, define their product to be the linear functional given by $a \mapsto \sum f(a_i) g(b_i)$. Construction: The identity element for this multiplication of functionals is the counit in $A$. Main properties: One of the most basic properties expresses $S_k(n,r)$ as a centralizer algebra. Let $V = k^n$ be the space of rank $n$ column vectors over $k$, and form the tensor power $V^{\otimes r}$ ($r$ factors). Then the symmetric group $S_r$ on $r$ letters acts naturally on the tensor space by place permutation, and one has an isomorphism $S_k(n,r) \cong \mathrm{End}_{S_r}(V^{\otimes r})$. In other words, $S_k(n,r)$ may be viewed as the algebra of endomorphisms of tensor space commuting with the action of the symmetric group. $S_k(n,r)$ is free over $k$ of rank given by the binomial coefficient $\binom{n^2+r-1}{r}$. Various bases of $S_k(n,r)$ are known, many of which are indexed by pairs of semistandard Young tableaux of shape $\lambda$, as $\lambda$ varies over the set of partitions of $r$ into no more than $n$ parts. In case $k$ is an infinite field, $S_k(n,r)$ may also be identified with the enveloping algebra (in the sense of H. Weyl) for the action of the general linear group $\mathrm{GL}_n(k)$ acting on $V^{\otimes r}$ (via the diagonal action on tensors, induced from the natural action of $\mathrm{GL}_n(k)$ on $V = k^n$ given by matrix multiplication). Schur algebras are "defined over the integers". This means that they satisfy the following change of scalars property: $S_k(n,r) \cong S_{\mathbb{Z}}(n,r) \otimes_{\mathbb{Z}} k$ for any commutative ring $k$. Schur algebras provide natural examples of quasihereditary algebras (as defined by Cline, Parshall, and Scott), and thus have nice homological properties.
In particular, Schur algebras have finite global dimension. Generalizations: Generalized Schur algebras (associated to any reductive algebraic group) were introduced by Donkin in the 1980s. These are also quasihereditary. Around the same time, Dipper and James introduced the quantized Schur algebras (or q-Schur algebras for short), which are a type of q-deformation of the classical Schur algebras described above, in which the symmetric group is replaced by the corresponding Hecke algebra and the general linear group by an appropriate quantum group. There are also generalized q-Schur algebras, which are obtained by generalizing the work of Dipper and James in the same way that Donkin generalized the classical Schur algebras. There are further generalizations, such as the affine q-Schur algebras related to affine Kac–Moody Lie algebras and other generalizations, such as the cyclotomic q-Schur algebras related to Ariki-Koike algebras (which are q-deformations of certain complex reflection groups).The study of these various classes of generalizations forms an active area of contemporary research.
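Returning to the rank formula above, the rank $\binom{n^2+r-1}{r}$ of $S_k(n,r)$ is simply the number of degree-$r$ monomials in the $n^2$ generators $x_{ij}$ (the dimension of $A_k(n,r)$), and that count is easy to verify computationally. The following sketch, purely illustrative, checks the formula for a few small cases by direct enumeration:

```python
# Check that the number of degree-r monomials in n*n commuting variables
# equals the binomial coefficient C(n^2 + r - 1, r), i.e. the rank of S_k(n, r).
from itertools import combinations_with_replacement
from math import comb

def monomial_count(n: int, r: int) -> int:
    """Count degree-r monomials in the n*n commuting variables x_ij."""
    return sum(1 for _ in combinations_with_replacement(range(n * n), r))

if __name__ == "__main__":
    for n, r in [(2, 2), (2, 3), (3, 2), (3, 3)]:
        expected = comb(n * n + r - 1, r)
        assert monomial_count(n, r) == expected
        print(f"n={n}, r={r}: rank {expected}")
```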
**Cap Gemini SDM** Cap Gemini SDM: Cap Gemini SDM, or SDM2 (System Development Methodology) is a software development method developed by the software company Pandata in the Netherlands in 1970. The method is a waterfall model divided in seven phases that have a clear start and end. Each phase delivers subproducts, called milestones. It was used extensively in the Netherlands for ICT projects in the 1980s and 1990s. Pandata was purchased by the Capgemini group in the 1980s, and the last version of SDM to be published in English was SDM2 (6th edition) in 1991 by Cap Gemini Publishing BV. The method was regularly taught and distributed among Capgemini consultants and customers, until the waterfall method slowly went out of fashion in the wake of more iterative extreme programming methods such as Rapid application development, Rational Unified Process and Agile software development. The Cap Gemini SDM Methodology: In the early to mid-1970s, the various generic work steps of system development methodologies were replaced with work steps based on various structured analysis or structured design techniques. SDM, SDM2, SDM/70, and Spectrum evolved into system development methodologies that were based on the works of Steven Ward, Tom Demarco, Larry Constantine, Ken Orr, Ed Yourdon, Michael A. Jackson and others, as well as data modeling techniques developed by Thomas Bachmann and Peter Chen. SDM is a top-down model. Starting from the system as a whole, its description becomes more detailed as the design progresses. The method was marketed as a proprietary method that all company developers were required to use to ensure quality in customer projects. This method shows several similarities with the proprietary methods of CAP Gemini's most important competitors in 1990. A similar waterfall method that was later used against the company itself in court proceedings in 2002 was CMG:Commander. History: SDM was developed in 1970 by a company known as PANDATA, now part of Cap Gemini, which itself was created as a joint venture by three Dutch companies: AKZO, Nationale Nederlanden and Posterijen, Telegrafie en Telefonie (Nederland). The company was founded in order to develop the method and create training materials to propagate the method. It was successful, but was revised in 1987 to standardize and separate the method theory from the more technical aspects used to implement the method. Those aspects were bundled into the process modelling tool called "Software Development Workbench", that was later sold in 2000 to BWise, another Dutch company. This revised version of the method without the tool is commonly known as SDM2. History: Main difference between SDM and SDM2 SDM2 was a revised version of SDM that attempted to solve a basic problem that occurred often in SDM projects; the delivered system failed to meet the customer requirements. Though any number of specific reasons for this could arise, the basic waterfall method used in SDM was a recipe for this problem due to the relatively large amount of time spent by development teams between the Definition Study and the Implementation phases. It was during the design phases that the project often became out of sync with customer requirements. History: During the SDM functional design phase called BD (Basic Design), design aspects were documented (out of phase) in detail for the later technical design DD (Detailed Design). 
This created a gray zone of responsibility between the two phases; the functional crew responsible for the data flows and process flows in the BD were making decisions that the technical crew later needed to code, although their technical knowledge was not detailed enough to make those decisions. This led to problems in collaboration between project teams during both the BD and DD phases. Because of the waterfall method of Go/No-Go decisions at the end of each phase, the technical crew would have to make a formal change request in order to make corrections in the detailed sections of the Basic Design. Such changes were often confusing for the customer, because they originated from the project team rather than directly from the customer requirements, even after a change freeze was put in place. Usually the customer was only allowed to produce requirements up to and including the functional design in the BD phase. After that, the customer had to wait patiently until acceptance testing in the Implementation phase. History: In SDM2, the term "Basic Design" was replaced by the term "Global Design" to indicate that this document was continuously updated and subject to change during both the BD and DD phases. Thus the "Global Design" is both global and detailed at the end of the project. In the global design, the principles of functionality and construction, as well as their relations, are documented. This is how the idea of iterative development got started; a functional design is by nature influenced by the technology platform chosen for implementation, and some basic design decisions will need to be revisited when early assumptions later prove to be wrong or costly to implement. This became the forerunner of the Rapid Application Development method, which caused these two phases to become cyclical and work in tandem. History: SDM2 only partially solved the problem of meeting customer requirements; modern software development methods go several steps further by insisting, for example, on incremental deliveries, or on the customer appointing key users of the delivered system to play a role in the project from start to finish. The SDM method: SDM is a method based on phases. Before every phase, an agreement needs to be reached detailing the activities for that phase; these agreements are recorded in documents known as milestone documents. Several uses for these documents exist: Traceability — By applying deadlines to milestone documents, clients can keep track of whether a project is on schedule. Consolidation — Once a milestone document is approved, it gains a certain status; the client cannot change any of the specifications later during development. The SDM method: If necessary, the project can be aborted; this mostly happens near the start of development. Phases: The method uses seven phases which are executed successively, like the waterfall model. The phases are: Information planning (problem definition and initial plan); Definition study (requirements analysis and revised plan); Basic Design (high-level technical design and revised plan); Detailed Design (building the system, and revised plan); Realization (testing and acceptance, and revised plan); Implementation (installation, data conversion, and cut-over to production); and Operation and Support (delivery to the ICT support department). Upon completion of a phase, it is decided whether or not to go on to the next phase; the terms 'Go' and 'No-Go' are used for this.
The next phase will not start until a 'Go' is given; if there is a 'No-Go', the project either stays in the current phase to be improved or is canceled completely (a small illustrative sketch of this phase gating appears at the end of this article). Phases: Information planning: In this phase, the problems that have to be solved by the project are defined. The current and desired situations are analysed, and goals for the project are decided upon. In this phase, it is important to consider the needs of all parties, such as future users and their management. Often, their expectations clash, causing problems later during development or during use of the system. Phases: Definition study: In this phase, a more in-depth study of the project is made. The organization is analysed to determine its needs and to determine the impact of the system on the organization. The requirements for the system are discussed and decided upon. The feasibility of the project is determined. Aspects that can be considered to determine feasibility are: Advisable — Are the resources (both time and knowledge) available to complete the project? Phases: Significance — Does the current system need to be replaced? Technique — Can the available equipment handle the requirements the system places on it? Economics — Are the costs of developing the system lower than the profit made from using it? Organization — Will the organization be able to use the new system? Legal — Does the new system conflict with existing laws? Basic Design: In this phase, the design for the product is made. After the definition study has determined what the system needs to do, the design determines how this will be done. This often results in two documents: the functional design, or user interface design, explaining what each part of the system does, and the high-level technical design, explaining how each part of the system is going to work. This phase combines the functional and technical design and only gives a broad design for the whole system. Often, the architecture of the system is described here. Phases: SDM2 split this step into two parts, one for the BD phase and one for the DD phase, in order to create a Global Design document. Phases: Detailed Design: In this phase, the design for the product is described technically, in the jargon needed for software developers (and later, for the team responsible for support of the system in the O&S phase). After the basic design has been signed off, the technical detailed design determines how this will be developed with software. This often results in a library of source documentation: the functional design per function and the technical design per function, explaining how each part of the system is going to work and how the parts relate to each other. In SDM2, this phase elaborates on the Global Design by creating more detailed designs, or further refining existing detailed designs, to the point where they can be used to build the system itself. Phases: Realization: In this phase, the design is converted to a working system. The actual way this is done will depend on the system used. Where in older systems programmers often had to write all of the code, newer systems allow the programmers to convert the design into code directly, leaving less work to be done and a smaller chance for errors. At the same time, the system becomes more reliant on the design: if the design has been properly tested, the proper code will be generated, but if the design is not fully correct, the code will be incorrect without a programmer to look for such problems.
Phases: Implementation: The implementation, or testing, phase consists of two steps: a system test and an acceptance test. During the system test the development team, or a separate testing team, tests the system. Most of this will be focused on the technical aspects: does the system work as it should, or are there still bugs present? Bugs that are found in this phase will be fixed. At the end of this phase, the program should work properly. Phases: During the acceptance test, the end users will test the system. They will not test every possible scenario, but they will check whether the program does what they want and expect it to do and whether it works in an easy way. Bugs that are found in this phase will be reported to the development team so that they can be fixed. Phases: During this phase, the final version of the system is implemented by the organization: the hardware is set up, the software is installed, end-user documentation is created, end users are trained to use the program, and existing data is entered into the system. Operation and Support: Once the system has been implemented, it is used within the organization. During its lifetime, it needs to be kept running and possibly enhanced.
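The phase gating described above can be illustrated with a small sketch. SDM is a paper method rather than software, so the data structure and function names below are purely illustrative assumptions; only the phase names and the Go/No-Go rule come from the description in this article.

```python
from dataclasses import dataclass

# The seven SDM phases, in waterfall order (names taken from the article above).
PHASES = [
    "Information planning",
    "Definition study",
    "Basic Design",
    "Detailed Design",
    "Realization",
    "Implementation",
    "Operation and Support",
]

@dataclass
class Milestone:
    """Milestone document delivered at the end of a phase (illustrative only)."""
    phase: str
    approved: bool  # once approved, the specifications in it are frozen

def run_project(decide_go):
    """Walk the phases in order and stop at the first 'No-Go'.

    `decide_go` stands in for the review held at the end of each phase; it
    receives the phase name and returns True for 'Go', False for 'No-Go'.
    """
    delivered = []
    for phase in PHASES:
        delivered.append(Milestone(phase=phase, approved=True))
        if not decide_go(phase):
            print(f"No-Go after '{phase}': stay in this phase or cancel the project.")
            break
    return delivered

# Hypothetical example: a review board that stops the project after Basic Design.
docs = run_project(lambda phase: phase != "Basic Design")
print([m.phase for m in docs])
```

Running the example delivers milestone documents for the first three phases and then halts, which is exactly the behaviour the Go/No-Go rule prescribes: no later phase is entered once a 'No-Go' has been given.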
**COSMOS cohort study** COSMOS cohort study: COSMOS is a cohort study of mobile phone use and health. The study will investigate the possible health effects of long-term use of mobile phones and other wireless technologies. It is an international study being conducted in six European countries – the United Kingdom, Denmark, Sweden, Finland, the Netherlands and France. In the UK, the research is being conducted by Imperial College London. Survey design and data collection: The UK study will follow the health of approximately 90,000–100,000 volunteer mobile phone users (aged 18 or over) for 20–30 years. The international cohort will follow the health of approximately 250,000 European mobile phone users. During a study pre-test in 2009 and the study's main launch in 2010, prospective participants were randomly selected by their mobile phone network operator to be invited to participate. In order to maintain confidentiality, the COSMOS team had no access to the personal details of those invited to participate in the study until they had given their consent. Participants are asked to fill in an online questionnaire about their health, lifestyle and use of technology. Participants are also asked to give permission for the study to access their NHS and mobile phone records. Participants are contacted approximately once a year in order to update their details or to request additional information. The long-term intention of the study is to follow participants' health status for at least 20 years. Development: A pre-test took place in May 2009; 4,500 mobile phone users were invited to take part in the study. The main launch of the study took place on 22 April 2010; 2.4 million UK mobile phone users were invited to participate. As of August 2010, 67,987 people were taking part in the UK arm of the study. From February 2012 the UK study changed its eligibility criteria so that an invitation was no longer required to participate. Anyone aged 18 or over who is a UK resident and uses a mobile phone can take part in the study. To take part, visit the study website at www.ukcosmos.org. Initial study results will be published in 5 years. Data Protection: The Information Commissioner has advised that this research study complies fully with the requirements of the Data Protection Act 1998. All individually identifiable data will be dealt with in the strictest confidence. The results of the study will be published following independent review, but no individually identifying data will ever be published. Access to data is limited to the academic research team, who are required to sign strict non-disclosure agreements. Background: Mobile phones have been in widespread use for a relatively short period of time. There are currently more than six billion users of mobile phones worldwide, and in the UK there are over 70 million mobile phone devices in use. It is important for current users, and for future generations, to find out whether there are any possible long-term health effects from this new and widespread technology. Background: Many reviews have concluded that there is no convincing evidence to date that mobile phones are harmful to health. However, the widespread use of mobile phones is a relatively recent phenomenon and it is possible that adverse health effects could emerge after years of prolonged use. Evidence to date suggests that short term (less than ten years) exposure to mobile phone emissions is not associated with an increase in brain and nervous system cancers.
However, regarding longer-term use, the evidence base necessary to allow firm judgments to be made has not yet been accumulated. There are still significant uncertainties that can only be resolved by monitoring the health of a large cohort of phone users over a long period of time. Background: A major report on mobile phones and health was published by the UK Independent Expert Group on Mobile Phones in 2000, known as the 'Stewart Report'. This report was updated by a further review of mobile phones and health undertaken by the Advisory Group on Non-Ionising Radiation (AGNIR) and published by the National Radiological Protection Board (NRPB) in 2005. Most recently, the independent Mobile Telecommunications and Health Research programme (MTHR), established in 2001 following the 'Stewart Report', published a report describing research undertaken as part of its programme. None of the research supported by the MTHR programme and published so far demonstrates that biological or adverse health effects are produced by radiofrequency exposure from mobile phones. The report also summarizes the current evidence base regarding mobile phones and health and identifies priorities for future research. The COSMOS study aims to carry out long-term health monitoring of a large group of people to identify whether there are any health issues linked to long-term mobile phone use. Through this health monitoring, current uncertainties about possible long-term health effects associated with this new technology can be resolved. This research has been endorsed as a priority by agencies worldwide, including the Department of Health (United Kingdom), the UK Health Protection Agency (HPA), the UK Advisory Group on Non-Ionising Radiation (AGNIR), the European Union's Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR), and the World Health Organization (WHO). Background: Following the publication of the Stewart Report by the Independent Expert Group on Mobile Phones in 2000, an independent research programme, called the Mobile Telecommunications and Health Research Programme (MTHR), was set up in the UK to look into the possible health impact of mobile telecommunications. The most recent report from the MTHR concluded that: ...short term (less than ten years) exposure to mobile phone emissions is not associated with an increase in brain and nervous system cancers. However, there are still significant uncertainties that can only be resolved by monitoring the health of a large cohort of phone users over a long period of time. Background: The Committee is convinced that the best way to address these uncertainties is to carry out a large cohort study of mobile phone users, an approach that has also been rated as a high priority by the World Health Organization. Funding: The UK COSMOS study, previously funded by the MTHR (an independent programme of research into mobile phones and health that was jointly supported by the Department of Health and industry), is now jointly funded by industry and government under the Research Initiative on Health and Mobile Telecommunications (RIHMT), and is managed through the Department of Health's Policy Research Programme. Media interest: The main launch was covered in the news.
**Industrial finishing** Industrial finishing: Industrial finishing is any kind of secondary process applied to a metal, plastic, or wood product used in a common market such as automotive, OEM, telecommunications or point-of-purchase. The most common commodities in the industrial finishing market are plastic parts. These can be injection molded, thermoformed, extruded or vacuum formed. Most parts are painted but can also be pad printed or silkscreened. Industrial finishing: One finishing process is vacuum metalising.
**Gene knock-in** Gene knock-in: In molecular cloning and biology, a gene knock-in (abbreviation: KI) refers to a genetic engineering method that involves the one-for-one substitution of DNA sequence information in a genetic locus or the insertion of sequence information not found within the locus. Typically, this is done in mice, since the technology for this process is more refined and there is a high degree of shared sequence similarity between mice and humans. The difference between knock-in technology and traditional transgenic techniques is that a knock-in involves a gene inserted into a specific locus, and is thus a "targeted" insertion. It is the opposite of gene knockout. Gene knock-in: A common use of knock-in technology is for the creation of disease models. It is a technique by which scientific investigators may study the function of the regulatory machinery (e.g. promoters) that governs the expression of the natural gene being replaced. This is accomplished by observing the new phenotype of the organism in question. BACs and YACs (bacterial and yeast artificial chromosomes) are used in this case so that large fragments can be transferred. Technique: Gene knock-in originated as a slight modification of the original knockout technique developed by Martin Evans, Oliver Smithies, and Mario Capecchi. Traditionally, knock-in techniques have relied on homologous recombination to drive targeted gene replacement, although other methods using a transposon-mediated system to insert the target gene have been developed. The use of loxP flanking sites, which are excised upon expression of Cre recombinase, in gene vectors is an example of this. Embryonic stem cells with the modification of interest are then implanted into a viable blastocyst, which will grow into a mature chimeric mouse with some cells having the original blastocyst cell genetic information and other cells having the modifications introduced to the embryonic stem cells. Subsequent offspring of the chimeric mouse will then have the gene knock-in. Gene knock-in has allowed, for the first time, hypothesis-driven studies on gene modifications and resultant phenotypes. Mutations in the human p53 gene, for example, can be induced by exposure to benzo(a)pyrene (BaP), and the mutated copy of the p53 gene can be inserted into mouse genomes. Lung tumors observed in the knock-in mice offer support for the hypothesis of BaP's carcinogenicity. More recent developments in knock-in technique have allowed pigs to have a gene for green fluorescent protein inserted with a CRISPR/Cas9 system, which allows for much more accurate and successful gene insertions. The speed of CRISPR/Cas9-mediated gene knock-in also allows biallelic modifications to some genes to be generated, and the phenotype in mice to be observed, in a single generation, an unprecedented timeframe. Versus gene knockout: Knock-in technology is different from knockout technology in that knockout technology aims either to delete part of the DNA sequence or to insert irrelevant DNA sequence information to disrupt the expression of a specific genetic locus. Gene knock-in technology, on the other hand, alters the genetic locus of interest via a one-for-one substitution of DNA sequence information or by the addition of sequence information that is not found in said genetic locus.
A gene knock-in can therefore be seen as a gain-of-function mutation and a gene knockout as a loss-of-function mutation, but a gene knock-in may also involve the substitution of a functional gene locus for a mutant phenotype that results in some loss of function. Potential applications: Because of the success of gene knock-in methods thus far, many clinical applications can be envisioned. Knock-in of sections of the human immunoglobulin gene into mice has already been shown to allow them to produce humanized antibodies that are therapeutically useful. It should be possible to modify stem cells in humans to restore targeted gene function in certain tissues, for example possibly correcting the mutant gamma-chain gene of the IL-2 receptor in hematopoietic stem cells to restore lymphocyte development in people with X-linked severe combined immunodeficiency. Limitations: While gene knock-in technology has proven to be a powerful technique for the generation of models of human disease and for insight into proteins in vivo, numerous limitations still exist. Many of these are shared with the limitations of knockout technology. First, combinations of knock-in genes lead to growing complexity in the interactions that inserted genes and their products have with other sections of the genome, and can therefore lead to more side effects and difficult-to-explain phenotypes. Also, only a few loci, such as the ROSA26 locus, have been characterized well enough that they can be used for conditional gene knock-ins, making combinations of reporter genes and transgenes in the same locus problematic. The biggest disadvantage of using gene knock-in for human disease model generation is that mouse physiology is not identical to that of humans, and human orthologs of proteins expressed in mice will often not wholly reflect the role of a gene in human pathology. This can be seen in mice produced with the ΔF508 cystic fibrosis mutation in the CFTR gene, which accounts for more than 70% of the mutations in this gene in the human population and leads to cystic fibrosis. While ΔF508 CF mice do exhibit the processing defects characteristic of the human mutation, they do not display the pulmonary pathophysiological changes seen in humans and carry virtually no lung phenotype. Such problems could be ameliorated by the use of a variety of animal models, and pig models (pig lungs share many biochemical and physiological similarities with human lungs) have been generated in an attempt to better explain the activity of the ΔF508 mutation.
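To make the idea of a targeted insertion concrete, the short sketch below assembles a homology-directed-repair (HDR) donor sequence, the kind of template used when a knock-in is driven by homologous recombination or by CRISPR/Cas9. All sequences, arm lengths and function names here are invented for illustration and are not taken from any protocol cited in this article.

```python
# Illustrative sketch: assembling a knock-in donor template for homology-directed
# repair (HDR). Real designs use genome-specific homology arms (typically hundreds
# of bases) and validated insert cassettes; the sequences below are made up.

def build_hdr_donor(left_arm: str, insert: str, right_arm: str,
                    min_arm_len: int = 30) -> str:
    """Concatenate left homology arm + insert + right homology arm.

    The homology arms match the genomic sequence flanking the target site so that
    the cell's repair machinery copies the insert into the locus.
    """
    for name, arm in (("left", left_arm), ("right", right_arm)):
        if len(arm) < min_arm_len:
            raise ValueError(f"{name} homology arm shorter than {min_arm_len} nt")
    return left_arm + insert + right_arm

# Hypothetical example: knock a short epitope tag into a made-up locus.
LEFT_ARM = "ATGGCGTTAGGCTTACCGATCGATCGGATCCAGTT"   # fictional homology arm
RIGHT_ARM = "GGCTAACCGTTAGCATCGATCGGATTACAGGCTTA"  # fictional homology arm
TAG = "GACTACAAAGACGATGACGACAAG"  # one possible coding sequence for the FLAG tag (DYKDDDDK)

donor = build_hdr_donor(LEFT_ARM, TAG, RIGHT_ARM, min_arm_len=30)
print(len(donor), donor[:40] + "...")
```

In an actual experiment the arms would be copied from the genomic sequence flanking the cut or integration site, and the donor would be delivered alongside the targeting vector or the Cas9/guide-RNA components; the sketch only shows the structural idea of an insert flanked by homologous sequence.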