**Triamcinolone benetonide**
Triamcinolone benetonide (brand names Alcorten, Benecorten, Tibicorten; also known as triamcinolone acetonide 21-(benzoyl-β-aminoisobutyrate) or TBI) is a synthetic glucocorticoid corticosteroid.
**Symplectic vector field**
In physics and mathematics, a symplectic vector field is one whose flow preserves a symplectic form. That is, if $(M, \omega)$ is a symplectic manifold with smooth manifold $M$ and symplectic form $\omega$, then a vector field $X \in \mathfrak{X}(M)$ in the Lie algebra $\mathfrak{X}(M)$ is symplectic if its flow preserves the symplectic structure. In other words, the Lie derivative of the symplectic form along the vector field must vanish: $\mathcal{L}_X \omega = 0$.

An alternative definition is that a vector field is symplectic if its interior product with the symplectic form is closed. (The interior product gives a map from vector fields to 1-forms, which is an isomorphism due to the nondegeneracy of a symplectic 2-form.) The equivalence of the definitions follows from the closedness of the symplectic form and Cartan's magic formula for the Lie derivative in terms of the exterior derivative. If the interior product of a vector field with the symplectic form is an exact form (and in particular, a closed form), then it is called a Hamiltonian vector field. If the first de Rham cohomology group $H^1(M)$ of the manifold is trivial, all closed forms are exact, so all symplectic vector fields are Hamiltonian. That is, the obstruction to a symplectic vector field being Hamiltonian lives in $H^1(M)$. In particular, symplectic vector fields on simply connected manifolds are Hamiltonian.
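As a short verification of the equivalence just stated: since the symplectic form is closed ($d\omega = 0$), Cartan's magic formula collapses to a single term,

$$\mathcal{L}_X \omega = \iota_X(d\omega) + d(\iota_X \omega) = d(\iota_X \omega),$$

so $\mathcal{L}_X \omega = 0$ holds exactly when the 1-form $\iota_X \omega$ is closed.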
The Lie bracket of two symplectic vector fields is Hamiltonian, and thus the collection of symplectic vector fields and the collection of Hamiltonian vector fields both form Lie algebras.
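A short computation makes the first claim explicit. Using the standard identity $\iota_{[X,Y]} = \mathcal{L}_X \iota_Y - \iota_Y \mathcal{L}_X$ together with $\mathcal{L}_X \omega = 0$ and $d(\iota_Y \omega) = 0$ for symplectic $X$ and $Y$,

$$\iota_{[X,Y]} \omega = \mathcal{L}_X(\iota_Y \omega) - \iota_Y(\mathcal{L}_X \omega) = d(\iota_X \iota_Y \omega) + \iota_X d(\iota_Y \omega) = d\big(\omega(Y, X)\big),$$

so $\iota_{[X,Y]} \omega$ is exact and $[X, Y]$ is Hamiltonian, with Hamiltonian function $\omega(Y, X)$.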
**Modiolus (cochlea)**
The modiolus is a cone-shaped central axis in the cochlea. The modiolus consists of spongy bone, and the cochlea turns approximately 2.75 times around this central axis in humans. The cochlear nerve, as well as the spiral ganglion, is situated inside it. The cochlear nerve conducts impulses from the receptors located within the cochlea.
[Figure: the osseous labyrinth; the modiolus is not labeled but lies at the axis of the spiral of the cochlea.]
**Rhododendrol**
Rhododendrol (RD), also called 4-[(3R)-3-hydroxybutyl]phenol (systematic name), is an organic compound with the formula C10H14O2. It is a naturally occurring ingredient present in many plants, such as the Rhododendron. The phenolic compound was first developed in 2010 as a tyrosinase inhibitor for skin-lightening cosmetics. In 2013, after rhododendrol reportedly caused skin depigmentation in consumers using RD-containing skin-brightening cosmetics, the cosmetics were withdrawn from the market. The resulting skin condition is called RD-induced leukoderma. Rhododendrol exerts melanocyte cytotoxicity via a tyrosinase-dependent mechanism. It has been shown to impair the normal proliferation of melanocytes through reactive oxygen species-dependent activation of GADD45. It is now well established that rhododendrol is a potent tyrosinase inhibitor.
Structure and synthesis:
Structure:
Rhododendrol occurs as the glucoside rhododendrin in leaves of the Rhododendron (Ericaceae), and it naturally occurs as a phenolic compound in plants such as Acer nikoense, Betula platyphylla, and the Chinese red birch Betula alba. The compound can be obtained by alkylation of phenol (C6H5OH). The molecule has a para-substituted structure and one chiral center, and it carries no net charge.
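As an illustration, the stated formula and single stereocenter can be checked programmatically. The sketch below is a minimal example assuming RDKit is available; the SMILES string is our own transcription of the systematic name 4-[(3R)-3-hydroxybutyl]phenol, not a value given in the text.

```python
# Minimal sketch (assumes RDKit is installed): check the molecular formula
# C10H14O2 and count the chiral centers of rhododendrol from a SMILES string.
# The SMILES is a hand transcription of 4-[(3R)-3-hydroxybutyl]phenol; the
# stereo tag used here is an assumption for illustration.
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

smiles = "C[C@@H](O)CCc1ccc(O)cc1"  # 3-hydroxybutyl chain on a para-phenol
mol = Chem.MolFromSmiles(smiles)

print(CalcMolFormula(mol))  # expected: C10H14O2
centers = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
print(len(centers))         # expected: 1 (the single chiral center)
```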
Biosynthesis:
There are several ways to synthesise rhododendrol. First, the synthesis can be achieved in six steps from benzaldehyde; the key reactions in this method are aldol condensation and trichloroacetimidate glycosylation. The compound can also be prepared by reducing raspberry ketone (4-(4-hydroxyphenyl)-2-butanone) with Raney nickel in EtOH. In addition, rhododendrol can be synthesised from p-coumaric acid; this pathway involves reduction of the aliphatic double bond present in p-coumaric acid.
Mechanisms of action:
The mechanism of action of rhododendrol has been investigated in multiple studies, which revealed that RD competes with tyrosine for hydroxylation by tyrosinase and interferes with melanin synthesis. First, RD is oxidised by tyrosinase to toxic metabolites such as RD-cyclic catechol. These reactive metabolites damage the melanocytes. There is still uncertainty, however, as to how the metabolites cause melanocyte damage.
An earlier report attributed the melanocyte toxicity of rhododendrol to the production of cytotoxic reactive oxygen species (ROS). However, another study detected no ROS in rhododendrol-treated melanocytes, but rather a tyrosinase-dependent accumulation of endoplasmic reticulum stress and activation of the apoptotic pathway. Even though there is still no full agreement on the exact mechanism of action, a tyrosinase-dependent pathway along these lines is the leading suggestion for RD-induced leukoderma.
In some individuals, a T-cell response is observed. The melanocyte cell lysates may sensitise T-cells, and the resulting cytotoxic T-lymphocytes (specific to Melan-A, a melanocytic differentiation marker) may enhance the RD-induced leukoderma or evoke vitiligo-like lesions on skin where the product was never applied.
Metabolism:
Rhododendrol is metabolised via tyrosinase-catalysed oxidation, so the enzyme tyrosinase is necessary for its oxidation. Tyrosinase normally plays an essential role in the production of melanin, a process called melanogenesis. Oxidation of rhododendrol by tyrosinase yields several kinds of phenols and catechols, which in turn form ortho-quinones (o-quinones). The presence of o-quinones can lead to cytotoxicity via the production of reactive oxygen species (ROS) or by binding to enzymes or DNA. When rhododendrol is metabolised via tyrosinase-catalysed oxidation, RD-quinone is formed, which gives rise to secondary quinones. As described under mechanisms of action, these quinones can be cytotoxic to melanocytes through ROS production or through binding to DNA and enzymes.
Adverse effects:
Because the use of rhododendrol has been prohibited since 2013, knowledge about its side effects is limited. As stated above, the main known adverse effect of rhododendrol is melanocyte toxicity. Melanocytes are melanin-producing cells, primarily responsible for skin colour. Melanocyte toxicity induces apoptosis of the cell, causing the melanocytes to die. This is due to an increased expression of caspase-3 and caspase-8. Caspase proteins are crucial mediators of apoptosis, with caspase-3 and caspase-8 being death proteases. Since melanocytes are responsible for skin colour, apoptosis of these cells causes the colour of the skin to vanish. The resulting condition is called leukoderma. Leukoderma, also known as vitiligo, is a skin disease characterised by patches of the skin losing their pigment. Rhododendrol-induced depigmentation can be either long-term or short-term. In most cases, repigmentation and cessation of further depigmentation occur after discontinuing exposure to the substance. However, some patients develop vitiligo vulgaris through the spread of depigmentation into non-exposed areas; this only occurs after severe chemical damage. In addition, rhododendrol not only drives melanocytes into apoptosis but also inhibits melanogenesis, meaning that it both kills melanocytes and prevents the development of new melanocytes.
Toxicity:
Various studies have shown that there is more than one mechanism by which rhododendrol can have a toxic effect. This toxic effect occurs in the melanocytes and gives rise to skin depigmentation.
ROS:
Rhododendrol can have a toxic effect via the production of reactive oxygen species (ROS), which impairs the further development of melanocytes in the skin. The impairment is caused by upregulation of the GADD45 gene. A study by Kim et al. showed that ROS production, and the resulting increase in GADD45, occurs even at low concentrations of rhododendrol. At the time rhododendrol was used in cosmetic products, it was present at concentrations of about 2%. The study by Kim et al. suggests that the production of reactive oxygen species at low concentrations may have contributed to the development of leukoderma in users of these cosmetic products.
Reactive metabolites:
The study by Ito et al. showed that rhododendrol exerts its toxic effect in melanocytes via tyrosinase-dependent mechanisms. Tyrosinase breaks rhododendrol down into the reactive metabolites RD-quinone and RD-cyclic quinone. These reactive metabolites can bind to proteins containing a thiol group or can form radicals. The radicals are toxic to melanocytes because they cause auto-oxidation of the cells. Auto-oxidation, in turn, subjects cells to oxidative stress, which impairs the natural growth and function of the melanocytes.
Rhododendrol and raspberry ketone impair the regular proliferation of melanocytes through reactive oxygen species-dependent activation of GADD45.
Effects on animals:
The effect of rhododendrol (4-(4-hydroxyphenyl)-2-butanol) has been measured in mice as well as in guinea pigs. These studies were performed to elucidate the aetiology of RD-induced leukoderma. Their data revealed that the amount of RD applied to the skin is highly relevant, since high doses of RD are required to cause cytotoxicity; this finding is contrary to the results of the study by Kim et al., which was performed in humans. Furthermore, the animal studies highlighted the importance of the ER-stress response: it is suggested that the activity of the ER-stress response may determine whether melanocytes survive or die. The study by Abe et al. also revealed that the autophagy pathway may be involved in resistance to the cytotoxicity of RD. Since the biochemical and histological characteristics of the mice used in these studies (hairless hk14-SCF Tg mice) closely resemble those of human skin, these newly generated mice could be used as experimental animal models to investigate chemical vitiligo further.
**Land bridge**
In biogeography, a land bridge is an isthmus or wider land connection between otherwise separate areas, over which animals and plants are able to cross and colonize new lands. A land bridge can be created by marine regression, in which sea levels fall, exposing shallow, previously submerged sections of continental shelf; or when new land is created by plate tectonics; or occasionally when the sea floor rises due to post-glacial rebound after an ice age.
Prominent examples:
- Adam's Bridge (also known as Rama Setu), connecting India and Sri Lanka
- The Bassian Plain, which linked Australia and Tasmania
- The Bering Land Bridge (also known as Beringia), which intermittently connected Alaska (Northern America) with Siberia (North Asia) as sea levels rose and fell under the effect of ice ages
- The land bridges of Japan, several land bridges which connected Japan to Russia and Korea at various times in history
- The De Geer Land Bridge, a route that connected Fennoscandia to northern Greenland
- Doggerland, a former landmass in the southern North Sea which connected the island of Great Britain to continental Europe during the last ice age
- The Isthmus of Panama, whose appearance three million years ago allowed the Great American Biotic Interchange between North America and South America
- The Thule Land Bridge, a since-disappeared land bridge between the British Isles and Greenland
- The Sinai Peninsula, linking Africa and Eurasia
- The Torres Strait land bridge, part of Sahul, between modern-day West Papua and Cape York
Land bridge theory:
In the 19th century, scientists including Joseph Dalton Hooker noted puzzling geological, botanical, and zoological similarities between widely separated areas. To solve these problems, they proposed land bridges between appropriate land masses. In geology, the concept was first proposed by Jules Marcou in Lettres sur les roches du Jura et leur distribution géographique dans les deux hémisphères ("Letters on the rocks of the Jura [Mountains] and their geographic distribution in the two hemispheres"), 1857–1860.

The hypothetical land bridges included:
- Archatlantis, from the West Indies to North Africa
- Archhelenis, from Brazil to South Africa
- Archiboreis, in the North Atlantic
- Archigalenis, from Central America through Hawaii to Northeast Asia
- Archinotis, from South America to Antarctica
- Lemuria, in the Indian Ocean

The theory of continental drift provided an alternate explanation that did not require land bridges. However, the continental drift theory was not widely accepted until the development of plate tectonics in the early 1960s, which more completely explained the motion of continents over geological time.
**Movable Type**
Movable Type is a weblog publishing system developed by the company Six Apart. It was publicly announced on September 3, 2001; version 1.0 was released on October 8, 2001. The current version is 7.0.

Movable Type is proprietary software. From June 2007 to July 2013, Six Apart ran the Movable Type Open Source Project, which offered a version of Movable Type under the GPL.
Features:
Movable Type's features include the ability to host multiple weblogs and standalone content pages, and to manage files, user roles, templates, tags, categories, and trackback links. The application supports static page generation (in which files for each page are regenerated whenever the content of the site changes), dynamic page generation (in which pages are composited from the underlying data as the browser requests them), or a combination of the two techniques. Movable Type optionally supports Lightweight Directory Access Protocol (LDAP) for user and group management and automatic blog provisioning.
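The static/dynamic distinction can be made concrete with a small sketch. This is a conceptual illustration in Python, not Movable Type's actual implementation (which is in Perl); every name in it is invented for the example.

```python
# Conceptual sketch of the two publishing modes described above.
# Not Movable Type code (MT is written in Perl); all names are illustrative.
import os

ENTRIES = {"hello-world": "<p>First post.</p>"}  # stand-in content store

def render(slug: str) -> str:
    """Composite a page from the underlying data (shared by both modes)."""
    return f"<html><body>{ENTRIES[slug]}</body></html>"

def publish_static(output_dir: str = "site") -> None:
    """Static mode: rebuild the files whenever site content changes."""
    os.makedirs(output_dir, exist_ok=True)
    for slug in ENTRIES:
        with open(os.path.join(output_dir, f"{slug}.html"), "w") as f:
            f.write(render(slug))

def handle_request(path: str) -> str:
    """Dynamic mode: composite the page at request time instead."""
    return render(path.strip("/"))

publish_static()                       # static: pages written to disk
print(handle_request("/hello-world"))  # dynamic: rendered on demand
```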
Movable Type is written in Perl, and supports storage of the weblog's content and associated data within MySQL natively. PostgreSQL and SQLite support was available prior to version 5, and can still be used via plug-ins. Movable Type Enterprise also supports the Oracle database and Microsoft SQL Server.
History:
Version 1.0 was released in October 2001, and Movable Type 2.6 was released on February 13, 2003. The TrackBack feature was introduced in version 2.2 and has since been adopted by a number of other blog systems. With the release of version 3.0 in 2004, there were marked changes in Movable Type's licensing, most notably placing greater restrictions on its use without paying a licensing fee. This sparked criticism from some users of the software, with some moving to the then-new open-source blogging tool WordPress. With the release of Movable Type 3.2, the ability to create an unlimited number of weblogs at all licensing levels was restored. In Movable Type 3.3, the product once again became completely free for personal users. Six Apart released a beta version of Movable Type 4 on June 5, 2007 and re-launched movabletype.org as a community site for developing an open-source version, which was released under the GNU General Public License on December 12, 2007. Movable Type 4's Enterprise version provides advanced features such as LDAP management, integration with enterprise databases such as Oracle and MySQL, user roles, blog cloning, and automated blog provisioning. It is also available as part of Intel's SuiteTwo professional software offering of Web 2.0 tools.
History:
Movable Type 5 was released in Open Source and Pro versions in January 2010, with several bug-fix and security updates appearing later in the year. Movable Type Enterprise remains based on Movable Type 4.
Movable Type 6 was released in 2013; with this release the Open Source licensing option was once again terminated. Movable Type is now available in closed-source "Professional" and "Enterprise" versions.

Melody was a fork of the open-source Movable Type distribution, announced in June 2009. Its development was guided by a non-profit group consisting of current and former Six Apart employees, as well as other consultants and volunteers, but development appeared to cease in the middle of 2011. At various times, Six Apart also maintained three other weblog publishing systems: TypePad, Vox, and LiveJournal. While Movable Type is a system which needs to be installed on a user's own web server, TypePad, Vox, and LiveJournal were all hosted weblog services. LiveJournal, purchased in 2005, was sold in 2007. Shortly before being acquired by web advertising firm VideoEgg to form SAY Media in September 2010, Six Apart announced that it would be shutting down the Vox service at the end of that month, leaving TypePad and Movable Type as the company's only blogging platforms. In January 2011, SAY Media announced that Infocom, a Japanese IT company, had acquired Six Apart Japan and that, as part of the transaction, Infocom would assume responsibility for Movable Type.
**Destructible environment**
In video games, the term destructible environment, or deformable terrain, refers to an environment within a game which can be wholly or partially destroyed by the player. It may refer to any part of the environment, including terrain, buildings and other man-made structures. A game may feature destructible environments to demonstrate its graphical prowess, underscore the potency of the player character's abilities, or require the player to leverage them to solve problems or discover new paths and secrets.

Early examples include the Taito shooter games Gun Fight (1975) and Space Invaders (1978), where the players could take cover behind destructible objects. An early example of a fully destructible environment can be found in Namco's 1982 game Dig Dug, in which the whole of each level is destructible, though enemies can usually only follow the player through a combination of pre-made tracks and paths made by the player. A similar game released that same year was Mr. Do! by Universal. In most games that feature destructible terrain, it is more common for only part of the environment to be destructible, to prevent players from cutting their way directly to the goal.
An early example of a shooter game that featured fully destructible environments was Kagirinaki Tatakai, an early run-and-gun shooter developed by Hiroshi Ishikawa for the Sharp X1 computer and released by Enix in 1983. The Worms series, starting in 1995, also features terrain which can be completely obliterated.
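A common way to implement fully destructible 2D terrain of the kind the Worms series popularized is to store the terrain as a grid (or bitmap) of solid cells and clear every cell within a blast radius. The sketch below is an illustrative reconstruction of that idea, not code from any of the games mentioned.

```python
# Illustrative sketch of fully destructible 2D terrain (Worms-style):
# terrain is a boolean grid; an explosion clears all cells within a radius.
W, H = 40, 12
terrain = [[y > H // 2 for _ in range(W)] for y in range(H)]  # lower half solid

def explode(cx: int, cy: int, r: int) -> None:
    """Remove every solid cell within distance r of (cx, cy)."""
    for y in range(max(0, cy - r), min(H, cy + r + 1)):
        for x in range(max(0, cx - r), min(W, cx + r + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                terrain[y][x] = False

def show() -> None:
    print("\n".join("".join("#" if c else "." for c in row) for row in terrain))

explode(20, 8, 4)  # carve a crater into the terrain
show()
```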
The earliest first-person shooter example may be Ghen War, released in 1995 for the Sega Saturn, which featured a 3D terrain map generator allowing fully destructible environments. However, the trend of making more and more items and environmental features destructible hearkens all the way back to the explosive barrels in Doom (1993). Games like Blood II: The Chosen (1998) also featured large numbers of destructible objects; in that game, a room filled with objects could be reduced to an empty room containing only debris.
Newer iterations of this feature can be observed in games such as Dragon Ball Z: Budokai Tenkaichi and Dragon Ball: Xenoverse, where the fighters' dashes and super moves can destroy large rock formations and buildings, as well as in Spring, Crysis (CryEngine 2), Mercenaries 2: World in Flames, Battlefield: Bad Company (Frostbite 1.0), Battlefield: Bad Company 2 (Frostbite 1.5), Battlefield 1943 (Frostbite 1.5), Black, and Red Faction: Guerrilla (Geo-Mod). Later implementations make destruction a core facet of gameplay, as in Battlefield 3 (Frostbite 2), Diablo 3, Battlefield 4 and Battlefield 1 (Frostbite 3), Tom Clancy's Rainbow Six Siege (AnvilNext 2.0), and BattleBit Remastered (Unity).
**Endocrine disruptor**
Endocrine disruptors, sometimes also referred to as hormonally active agents, endocrine disrupting chemicals, or endocrine disrupting compounds, are chemicals that can interfere with endocrine (or hormonal) systems. These disruptions can cause cancerous tumors, birth defects, and other developmental disorders. Found in many household and industrial products, endocrine disruptors "interfere with the synthesis, secretion, transport, binding, action, or elimination of natural hormones in the body that are responsible for development, behavior, fertility, and maintenance of homeostasis (normal cell metabolism)."

Any system in the body controlled by hormones can be derailed by hormone disruptors. Specifically, endocrine disruptors may be associated with the development of learning disabilities, severe attention deficit disorder, and cognitive and brain development problems.

There has been controversy over endocrine disruptors, with some groups calling for swift action by regulators to remove them from the market, and regulators and other scientists calling for further study. Some endocrine disruptors have been identified and removed from the market (for example, a drug called diethylstilbestrol), but it is uncertain whether some endocrine disruptors on the market actually harm humans and wildlife at the doses to which wildlife and humans are exposed. Additionally, a key scientific paper, published in 1996 in the journal Science, which helped launch the movement of those opposed to endocrine disruptors, was retracted and its author found to have committed scientific misconduct. Studies in cells and laboratory animals have shown that EDCs can cause adverse biological effects in animals, and low-level exposures may also cause similar effects in human beings.
EDCs in the environment may also be related to reproductive and infertility problems in wildlife, and bans and restrictions on their use have been associated with a reduction in health problems and the recovery of some wildlife populations.
History:
The term endocrine disruptor was coined in 1991 at the Wingspread Conference Center in Wisconsin. One of the early papers on the phenomenon was by Theo Colborn in 1993. In this paper, she stated that environmental chemicals disrupt the development of the endocrine system, and that effects of exposure during development are often permanent.
Although endocrine disruption has been disputed by some, work sessions from 1992 to 1999 generated consensus statements from scientists regarding the hazard from endocrine disruptors, particularly in wildlife and also in humans.

The Endocrine Society released a scientific statement outlining mechanisms and effects of endocrine disruptors on "male and female reproduction, breast development and cancer, prostate cancer, neuroendocrinology, thyroid, metabolism and obesity, and cardiovascular endocrinology," and showing how experimental and epidemiological studies converge with human clinical observations "to implicate endocrine disruptive chemicals (EDCs) as a significant concern to public health." The statement noted that it is difficult to show that endocrine disruptors cause human diseases, and it recommended that the precautionary principle should be followed. A concurrent statement expresses policy concerns.

Endocrine disrupting compounds encompass a variety of chemical classes, including drugs, pesticides, compounds used in the plastics industry and in consumer products, industrial by-products and pollutants, and even some naturally produced botanical chemicals. Some are pervasive and widely dispersed in the environment and may bioaccumulate. Some are persistent organic pollutants (POPs) that can be transported long distances across national boundaries; they have been found in virtually all regions of the world and may even concentrate near the North Pole, due to weather patterns and cold conditions. Others are rapidly degraded in the environment or human body, or may be present for only short periods of time. Health effects attributed to endocrine disrupting compounds include a range of reproductive problems (reduced fertility, male and female reproductive tract abnormalities, skewed male/female sex ratios, fetal loss, and menstrual problems); changes in hormone levels; early puberty; brain and behavior problems; impaired immune functions; and various cancers.

One example of the consequences of the exposure of developing animals, including humans, to hormonally active agents is the case of the drug diethylstilbestrol (DES), a nonsteroidal estrogen and not an environmental pollutant. Prior to its ban in the early 1970s, doctors prescribed DES to as many as five million pregnant women to block spontaneous abortion, an off-label use of this medication prior to 1947. It was discovered after the children went through puberty that DES affected the development of the reproductive system and caused vaginal cancer. The relevance of the DES saga to the risks of exposure to endocrine disruptors is questionable, as the doses involved are much higher in these individuals than in those due to environmental exposures.

Aquatic life subjected to endocrine disruptors in an urban effluent has experienced decreased levels of serotonin and increased feminization.

In 2013 the WHO and the United Nations Environment Programme released a study, the most comprehensive report on EDCs to date, calling for more research to fully understand the associations between EDCs and the risks to the health of human and animal life. The team pointed to wide gaps in knowledge and called for more research to obtain a fuller picture of the health and environmental impacts of endocrine disruptors.
To improve global knowledge, the team recommended:
- Testing: known EDCs are only the "tip of the iceberg", and more comprehensive testing methods are required to identify other possible endocrine disruptors, their sources, and routes of exposure.
- Research: more scientific evidence is needed on the effects of mixtures of EDCs (mainly from industrial by-products) to which humans and wildlife are increasingly exposed.
- Reporting: many sources of EDCs are not known because of insufficient reporting and information on chemicals in products, materials and goods.
- Collaboration: more data sharing between scientists and between countries can fill gaps in data, primarily in developing countries and emerging economies.
Endocrine system:
Endocrine systems are found in most varieties of animals. The endocrine system consists of glands that secrete hormones, and receptors that detect and react to those hormones. Hormones travel throughout the body and act as chemical messengers, interfacing with cells that contain matching receptors in or on their surfaces; the hormone binds with the receptor much like a key fits into a lock. The endocrine system regulates adjustments through slower internal processes, using hormones as messengers. It secretes hormones in response to environmental stimuli and to orchestrate developmental and reproductive changes. The adjustments brought on by the endocrine system are biochemical, changing the cell's internal and external chemistry to bring about a long-term change in the body. These systems work together to maintain the proper functioning of the body through its entire life cycle. Sex steroids such as estrogens and androgens, as well as thyroid hormones, are subject to feedback regulation, which tends to limit the sensitivity of these glands.

Hormones work at very small doses (part-per-billion ranges). Endocrine disruption can thereby also occur from low-dose exposure to exogenous hormones or hormonally active chemicals such as bisphenol A; these chemicals can bind to receptors for other hormonally mediated processes. Furthermore, since endogenous hormones are already present in the body in biologically active concentrations, additional exposure to relatively small amounts of exogenous hormonally active substances can disrupt the proper functioning of the body's endocrine system. Thus, an endocrine disruptor can elicit adverse effects at much lower doses than a toxicant acting through a different mechanism.
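To make the feedback point concrete, here is a toy simulation. It is an illustration only: the kinetics and every constant are invented, not taken from any cited study. It models an endogenous hormone under negative feedback plus a receptor-binding mimic that the feedback loop cannot distinguish from the real hormone.

```python
# Toy model of hormonal negative feedback (all constants invented for
# illustration): synthesis is suppressed as receptor signal rises, and a
# receptor-binding mimic (an "endocrine disruptor") adds to that signal,
# so even a small mimic dose shifts the steady state.

def steady_state_hormone(mimic: float, steps: int = 20000, dt: float = 0.01) -> float:
    h = 1.0                                  # endogenous hormone level
    for _ in range(steps):
        signal = h + mimic                   # receptor cannot tell them apart
        synthesis = 1.0 / (1.0 + signal**4)  # negative feedback on synthesis
        h += (synthesis - 0.5 * h) * dt      # first-order clearance
    return h

for mimic in (0.0, 0.1, 0.5):
    print(f"mimic={mimic:<4} endogenous hormone at steady state: "
          f"{steady_state_hormone(mimic):.3f}")
```

With no mimic the model settles at a baseline level; adding even a small amount of mimic measurably suppresses the endogenous hormone, which is the qualitative point the paragraph makes.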
The timing of exposure is also critical. The most critical stages of development occur in utero, where the fertilized egg divides, rapidly developing every structure of a fully formed baby, including much of the wiring in the brain. Interfering with hormonal communication in utero can have profound effects both on structure and on brain development. Depending on the stage of reproductive development, interference with hormonal signaling can result in irreversible effects not seen in adults exposed to the same dose for the same length of time. Experiments with animals have identified critical developmental time points in utero and in the days after birth when exposure to chemicals that interfere with or mimic hormones has adverse effects that persist into adulthood. Disruption of thyroid function early in development may be the cause of abnormal sexual development in both males and females, early motor development impairment, and learning disabilities.

There are studies of cell cultures, laboratory animals, wildlife, and accidentally exposed humans that show that environmental chemicals cause a wide range of reproductive, developmental, growth, and behavior effects, and so while "endocrine disruption in humans by pollutant chemicals remains largely undemonstrated, the underlying science is sound and the potential for such effects is real." While compounds that produce estrogenic, androgenic, antiandrogenic, and antithyroid actions have been studied, less is known about interactions with other hormones.
The interrelationships between exposures to chemicals and health effects are rather complex. It is hard to definitively link a particular chemical with a specific health effect, and exposed adults may not show any ill effects. But fetuses and embryos, whose growth and development are highly controlled by the endocrine system, are more vulnerable to exposure and may develop overt or subtle lifelong health or reproductive abnormalities. Prebirth exposure, in some cases, can lead to permanent alterations and adult diseases.

Some in the scientific community are concerned that exposure to endocrine disruptors in the womb or early in life may be associated with neurodevelopmental disorders including reduced IQ, ADHD, and autism. Certain cancers and uterine abnormalities in women are associated with exposure to diethylstilbestrol (DES) in the womb, due to DES used as a medical treatment.
In another case, phthalates in pregnant women's urine were linked to subtle but specific genital changes in their male infants: a shorter, more female-like anogenital distance, with associated incomplete descent of the testes and a smaller scrotum and penis. The science behind this study has been questioned by phthalate industry consultants. As of June 2008, there were only five studies of anogenital distance in humans, and one researcher has stated, "Whether AGD measures in humans relate to clinically important outcomes, however, remains to be determined, as does its utility as a measure of androgen action in epidemiological studies."
Effects on intrinsic hormones:
While there are chemical differences between endocrine disruptors and endogenous hormones that may explain why endocrine disruptors affect only some (not all) responses to hormones, toxicology research shows that some endocrine disruptors target the specific hormonal pathways by which one hormone regulates the production or degradation of intrinsic hormones. As endocrine disruptors have the potential to mimic or antagonize natural hormones, these chemicals can exert their effects through interaction with nuclear receptors, the aryl hydrocarbon receptor, or membrane-bound receptors.
U-shaped dose-response curve:
Most toxicants, including endocrine disruptors, have been claimed to follow a U-shaped dose-response curve. This means that very low and very high levels have more effects than mid-level exposure to a toxicant.
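For illustration only, a non-monotonic response of this kind can be mimicked by combining two opposing saturating effects. The functional form and constants below are invented for the sketch, not taken from the cited literature.

```python
# Illustrative non-monotonic dose-response: one component peaks at low dose
# and declines, while a second component rises at high dose, producing a
# larger response at the extremes than at mid-range doses. All constants
# here are invented for illustration.
def response(dose: float) -> float:
    low_dose_effect = 0.2 * dose / (0.01 + dose**2)   # peaks near dose = 0.1
    high_dose_effect = dose**2 / (100.0 + dose**2)    # saturates at high dose
    return low_dose_effect + high_dose_effect

for d in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"dose={d:<6} response={response(d):.3f}")
# Output shape: ~1.0 at dose 0.1, dipping to ~0.2 near dose 1, then climbing
# back toward ~1.0 at high doses (a non-monotonic curve).
```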
Endocrine disrupting effects have been noted in animals exposed to environmentally relevant levels of some chemicals. For example, a common flame retardant, BDE-47, affects the reproductive system and thyroid gland of female rats in doses similar to which humans are exposed.
Low concentrations of endocrine disruptors can also have synergistic effects in amphibians, but it is not clear that this is an effect mediated through the endocrine system. Critics have argued that there is evidence suggesting that the amounts of chemicals in the environment are too low to cause a significant effect. A consensus statement by the Learning and Developmental Disabilities Initiative argued that "The very low-dose effects of endocrine disruptors cannot be predicted from high-dose studies, which contradicts the standard 'dose makes the poison' rule of toxicology. Nontraditional dose-response curves are referred to as non-monotonic dose response curves."

The dosage objection could also be overcome if low concentrations of different endocrine disruptors are synergistic. A paper claiming such synergy was published in Science in June 1996 and was one reason for the passage of the Food Quality Protection Act of 1996. The results could not be confirmed with the same or alternative methodologies, and the paper was retracted, with its author, Arnold, found to have committed scientific misconduct by the United States Office of Research Integrity. It has been claimed that tamoxifen and some phthalates have fundamentally different (and harmful) effects on the body at low doses than at high doses.
Routes of exposure:
Food is a major mechanism by which people are exposed to pollutants. Diet is thought to account for up to 90% of a person's PCB and DDT body burden. In a study of 32 different common food products from three grocery stores in Dallas, fish and other animal products were found to be contaminated with PBDE. Since these compounds are fat-soluble, it is likely they are accumulating from the environment in the fatty tissue of animals eaten by humans. Some suspect fish consumption is a major source of many environmental contaminants. Indeed, both wild and farmed salmon from all over the world have been shown to contain a variety of man-made organic compounds.

With the increase in household products containing pollutants and the decrease in the quality of building ventilation, indoor air has become a significant source of pollutant exposure. Residents living in houses with wood floors treated in the 1960s with PCB-based wood finish have a much higher body burden than the general population. A study of indoor house dust and dryer lint of 16 homes found high levels of all 22 different PBDE congeners tested for in all samples. Recent studies suggest that contaminated house dust, not food, may be the major source of PBDE in our bodies. One study estimated that ingestion of house dust accounts for up to 82% of humans' PBDE body burden.

It has been shown that contaminated house dust is a primary source of lead in young children's bodies. It may be that babies and toddlers ingest more contaminated house dust than the adults they live with, and therefore have much higher levels of pollutants in their systems.
Consumer goods are another potential source of exposure to endocrine disruptors. An analysis of the composition of 42 household cleaning and personal care products versus 43 "chemical-free" products has been performed. The products contained 55 different chemical compounds: 50 were found in the 42 conventional samples representing 170 product types, while 41 were detected in the 43 "chemical-free" samples representing 39 product types. Parabens, a class of chemicals that has been associated with reproductive-tract issues, were detected in seven of the "chemical-free" products, including three sunscreens that did not list parabens on the label. Vinyl products such as shower curtains were found to contain more than 10% by weight of the compound DEHP, which, when present in dust, has been associated with asthma and wheezing in children. The risk of exposure to EDCs increases as products, both conventional and "chemical-free", are used in combination: "If a consumer used the alternative surface cleaner, tub and tile cleaner, laundry detergent, bar soap, shampoo and conditioner, facial cleanser and lotion, and toothpaste [he or she] would potentially be exposed to at least 19 compounds: 2 parabens, 3 phthalates, MEA, DEA, 5 alkylphenols, and 7 fragrances."

An analysis of endocrine-disrupting chemicals in Old Order Mennonite women in mid-pregnancy determined that they have much lower levels in their systems than the general population. Mennonites eat mostly fresh, unprocessed foods, farm without pesticides, and use few or no cosmetics or personal care products. One woman who had reported using hairspray and perfume had high levels of monoethyl phthalate, while the other women all had levels below detection. Three women who reported being in a car or truck within 48 hours of providing a urine sample had higher levels of diethylhexyl phthalate, which is found in polyvinyl chloride used in car interiors.

Additives introduced into plastics during manufacturing may leach into the environment after the plastic item is discarded: additives in microplastics in the ocean leach into ocean water, and additives in plastics in landfills may escape and leach into the soil and then into groundwater.
Types:
All people are exposed to chemicals with estrogenic effects in their everyday life, because endocrine disrupting chemicals are found in low doses in thousands of products. Chemicals commonly detected in people include DDT, polychlorinated biphenyls (PCBs), bisphenol A (BPA), polybrominated diphenyl ethers (PBDEs), and a variety of phthalates. In fact, almost all plastic products, including those advertised as "BPA-free", have been found to leach endocrine-disrupting chemicals. A 2011 study found that some "BPA-free" products released more endocrine-active chemicals than the BPA-containing products. Other endocrine disruptors are phytoestrogens (plant hormones).
Xenoestrogens:
Xenoestrogens are a type of xenohormone that imitates estrogen. Synthetic xenoestrogens include widely used industrial compounds, such as PCBs, BPA and phthalates, which have estrogenic effects on a living organism.
Alkylphenols:
Alkylphenols are xenoestrogens. The European Union has implemented sales and use restrictions on certain applications in which nonylphenols are used, because of their alleged "toxicity, persistence, and the liability to bioaccumulate", but the United States Environmental Protection Agency (EPA) has taken a slower approach to make sure that action is based on "sound science". The long-chain alkylphenols are used extensively as precursors to detergents, as additives for fuels and lubricants, in polymers, and as components in phenolic resins. These compounds are also used as building-block chemicals in making fragrances, thermoplastic elastomers, antioxidants, oil field chemicals and fire retardant materials. Through their downstream use in making alkylphenolic resins, alkylphenols are also found in tires, adhesives, coatings, carbonless copy paper and high-performance rubber products. They have been used in industry for over 40 years.
Certain alkylphenols are degradation products from nonionic detergents. Nonylphenol is considered to be a low-level endocrine disruptor owing to its tendency to mimic estrogen.
Bisphenol A (BPA):
Bisphenol A is commonly found in plastic bottles, plastic food containers, dental materials, and the linings of metal food and infant formula cans. Another exposure route is receipt paper commonly used at grocery stores and restaurants, because today the paper is commonly coated with a BPA-containing clay for printing purposes. BPA is a known endocrine disruptor, and numerous studies have found that laboratory animals exposed to low levels of it have elevated rates of diabetes, mammary and prostate cancers, decreased sperm count, reproductive problems, early puberty, obesity, and neurological problems.

Regarding the reproductive problems faced by women exposed to BPA: studies in the US of healthy women without any fertility problems found that urinary BPA was unrelated to time to pregnancy, despite a shorter luteal phase (the second part of the menstrual cycle) being reported. Additional studies conducted in fertility centers report that BPA exposure is correlated with lower ovarian reserve. Many of these women undergo IVF to help with the poor ovarian stimulation response, and seemingly all of them have elevated urinary BPA levels. Median conjugated BPA concentrations were higher in women who had a miscarriage than in those who had a live birth. These studies show that BPA can affect ovarian function and the pivotal early part of conception. One study did show racial/ethnic differences, as Asian women were found to have an increased oocyte maturity rate, though all of the women in that particular study had significantly lower concentrations of BPA.

Early developmental stages appear to be the period of greatest sensitivity to BPA's effects, and some studies have linked prenatal exposure to later physical and neurological difficulties. Regulatory bodies have determined safety levels for humans, but those safety levels are currently being questioned or are under review as a result of new scientific studies. A 2011 cross-sectional study that investigated the number of chemicals pregnant women are exposed to in the U.S. found BPA in 96% of women.
In 2010 a World Health Organization expert panel recommended no new regulations limiting or banning the use of bisphenol A, stating that "initiation of public health measures would be premature." In August 2008, the U.S. FDA issued a draft reassessment, reconfirming its initial opinion that, based on scientific evidence, BPA is safe. However, in October 2008, the FDA's advisory Science Board concluded that the Agency's assessment was "flawed" and had not proven the chemical to be safe for formula-fed infants. In January 2010, the FDA issued a report indicating that, due to findings of recent studies that used novel approaches to test for subtle effects, both the National Toxicology Program at the National Institutes of Health and the FDA have some level of concern regarding the possible effects of BPA on the brain and behavior of fetuses, infants and younger children. In 2012 the FDA banned the use of BPA in baby bottles; however, the Environmental Working Group called the ban "purely cosmetic", saying in a statement, "If the agency truly wants to prevent people from being exposed to this toxic chemical associated with a variety of serious and chronic conditions it should ban its use in cans of infant formula, food and beverages." The Natural Resources Defense Council called the move inadequate, saying the FDA needs to ban BPA from all food packaging. An FDA spokesman said the agency's action was not based on safety concerns and that "the agency continues to support the safety of BPA for use in products that hold food." A program initiated by NIEHS, NTP, and the U.S. Food and Drug Administration (named CLARITY-BPA) found no effect of chronic exposure to BPA on rats, and the FDA considers currently authorized uses of BPA to be safe for consumers. The Environmental Protection Agency set a reference dose for BPA at 50 μg/kg/day for mammals, although exposure to doses lower than the reference dose has been shown to affect both male and female reproductive systems.
Bisphenol S (BPS) and bisphenol F (BPF):
Bisphenol S and bisphenol F are analogs of bisphenol A. They are commonly found in thermal receipts, plastics, and household dust. Traces of BPS have also been found in personal care products. BPS is now increasingly used because of the ban on BPA, serving in its place in "BPA-free" items. However, BPS and BPF have been shown to be as much of an endocrine disruptor as BPA.
DDT:
Dichlorodiphenyltrichloroethane (DDT) was first used as a pesticide against Colorado potato beetles on crops beginning in 1936. An increase in the incidence of malaria, epidemic typhus, dysentery, and typhoid fever led to its use against the mosquitoes, lice, and houseflies that carried these diseases. Before World War II, pyrethrum, an extract of a flower from Japan, had been used to control these insects and the diseases they can spread. During World War II, Japan stopped exporting pyrethrum, forcing the search for an alternative. Fearing an epidemic outbreak of typhus, every British and American soldier was issued DDT and used it to routinely dust beds, tents, and barracks all over the world.
DDT was approved for general, non-military use after the war ended. It came to be used worldwide to increase monoculture crop yields that were threatened by pest infestation, and to reduce the spread of malaria, which had a high mortality rate in many parts of the world. Its use for agricultural purposes has since been prohibited by the national legislation of most countries, while its use as a control against malaria vectors is still permitted, as specifically stated by the Stockholm Convention on Persistent Organic Pollutants.

As early as 1946, the harmful effects of DDT on birds, beneficial insects, fish, and marine invertebrates were seen in the environment. The most infamous example of these effects was seen in the eggshells of large predatory birds, which did not develop to be thick enough to support the adult bird sitting on them. Further studies found DDT in high concentrations in carnivores all over the world, the result of biomagnification through the food chain. Twenty years after its widespread use, DDT was found trapped in ice samples taken from Antarctic snow, suggesting wind and water are another means of environmental transport. Recent studies show the historical record of DDT deposition on remote glaciers in the Himalayas.

More than sixty years ago, when biologists began to study the effects of DDT on laboratory animals, it was discovered that DDT interfered with reproductive development. Recent studies suggest DDT may inhibit the proper development of female reproductive organs, which adversely affects reproduction into maturity. Additional studies suggest that a marked decrease in fertility in adult males may be due to DDT exposure. Most recently, it has been suggested that exposure to DDT in utero can increase a child's risk of childhood obesity. DDT is still used as an anti-malarial insecticide in Africa and parts of Southeast Asia in limited quantities.
Polychlorinated biphenyls:
Polychlorinated biphenyls (PCBs) are a class of chlorinated compounds used as industrial coolants and lubricants. PCBs are created by heating benzene, a byproduct of gasoline refining, with chlorine. They were first manufactured commercially by the Swann Chemical Company in 1927. In 1933, the health effects of direct PCB exposure were seen in those who worked with the chemicals at the manufacturing facility in Alabama. In 1935, Monsanto acquired the company, taking over US production and licensing PCB manufacturing technology internationally.
General Electric was one of the largest US companies to incorporate PCBs into manufactured equipment. Between 1952 and 1977, the New York GE plant dumped more than 500,000 pounds of PCB waste into the Hudson River. PCBs were first discovered in the environment far from their industrial use by scientists in Sweden studying DDT.

The effects of acute exposure to PCBs were well known within the companies that used Monsanto's PCB formulations, which saw the effects on workers who came into contact with them regularly. Direct skin contact results in a severe acne-like condition called chloracne. Exposure increases the risk of skin cancer, liver cancer, and brain cancer. Monsanto tried for years to downplay the health problems related to PCB exposure in order to continue sales.

The detrimental health effects of PCB exposure to humans became undeniable when two separate incidents of contaminated cooking oil poisoned thousands of residents in Japan (Yushō disease, 1968) and Taiwan (Yu-cheng disease, 1979), leading to a worldwide ban on PCB use in 1977. Recent studies show that the endocrine interference of certain PCB congeners is toxic to the liver and thyroid, increases childhood obesity in children exposed prenatally, and may increase the risk of developing diabetes.

PCBs in the environment may also be related to reproductive and infertility problems in wildlife. In Alaska, it is thought that they may contribute to reproductive defects, infertility and antler malformation in some deer populations. Declines in the populations of otters and sea lions may also be partially due to their exposure to PCBs, the insecticide DDT, and other persistent organic pollutants. Bans and restrictions on the use of EDCs have been associated with a reduction in health problems and the recovery of some wildlife populations.
Polybrominated diphenyl ethers:
Polybrominated diphenyl ethers (PBDEs) are a class of compounds found in flame retardants used in plastic cases of televisions and computers, electronics, carpets, lighting, bedding, clothing, car components, foam cushions and other textiles. They are of potential health concern because PBDEs are structurally very similar to polychlorinated biphenyls (PCBs) and have similar neurotoxic effects. Research has correlated halogenated hydrocarbons, such as PCBs, with neurotoxicity; PBDEs are similar in chemical structure to PCBs, and it has been suggested that PBDEs act by the same mechanism.

In the 1930s and 1940s, the plastics industry developed technologies to create a variety of plastics with broad applications. Once World War II began, the US military used these new plastic materials to improve weapons, protect equipment, and replace heavy components in aircraft and vehicles. After WWII, manufacturers saw the potential plastics could have in many industries, and plastics were incorporated into new consumer product designs. Plastics began to replace wood and metal in existing products as well, and today plastics are the most widely used manufacturing materials.

By the 1960s, all homes were wired with electricity and had numerous electrical appliances. Cotton had been the dominant textile used to produce home furnishings, but now home furnishings were composed of mostly synthetic materials. More than 500 billion cigarettes were consumed each year in the 1960s, compared to fewer than 3 billion per year at the beginning of the twentieth century. Combined with high-density living, the potential for home fires was higher in the 1960s than it had ever been in the US. By the late 1970s, approximately 6000 people in the US died each year in home fires.

In 1972, in response to this situation, the National Commission on Fire Prevention and Control was created to study the fire problem in the US. In 1973 it published its findings in America Burning, a 192-page report that made recommendations to increase fire prevention. Most of the recommendations dealt with fire prevention education and improved building engineering, such as the installation of fire sprinklers and smoke detectors. The Commission expected that with the recommendations, a 5% reduction in fire losses could be expected each year, halving the annual losses within 14 years.
Historically, as far back as Roman times, treatments with alum and borax were used to reduce the flammability of fabric and wood. Because plastic is non-absorbent once formed, flame retardant chemicals are instead added to it during the polymerization reaction. Organic compounds based on halogens like bromine and chlorine are used as flame retardant additives in plastics, and in fabric-based textiles as well. The widespread use of brominated flame retardants may be due to the push from Great Lakes Chemical Corporation (GLCC) to profit from its huge investment in bromine. In 1992, the world market consumed approximately 150,000 tonnes of bromine-based flame retardants, and GLCC produced 30% of the world supply.

PBDEs have the potential to disrupt thyroid hormone balance and contribute to a variety of neurological and developmental deficits, including low intelligence and learning disabilities. Many of the most common PBDEs were banned in the European Union in 2006. Studies with rodents have suggested that even brief exposure to PBDEs can cause developmental and behavior problems in juvenile rodents, and that exposure interferes with proper thyroid hormone regulation.
Phthalates:
Phthalates are found in some soft toys, flooring, medical equipment, cosmetics and air fresheners. They are of potential health concern because they are known to disrupt the endocrine system of animals, and some research has implicated them in the rise of birth defects of the male reproductive system.

Although an expert panel has concluded that there is "insufficient evidence" that they can harm the reproductive system of infants, California, Washington state, and Europe have banned them from toys. One phthalate, bis(2-ethylhexyl) phthalate (DEHP), used in medical tubing, catheters and blood bags, may harm sexual development in male infants. In 2002, the Food and Drug Administration released a public report which cautioned against exposing male babies to DEHP. Although there are no direct human studies, the FDA report states: "Exposure to DEHP has produced a range of adverse effects in laboratory animals, but of greatest concern are effects on the development of the male reproductive system and production of normal sperm in young animals. In view of the available animal data, precautions should be taken to limit the exposure of the developing male to DEHP". Similarly, phthalates may play a causal role in disrupting masculine neurological development when exposure occurs prenatally. Dibutyl phthalate (DBP) has also disrupted insulin and glucagon signaling in animal models.
Perfluorooctanoic acid:
PFOA is a stable chemical that has been used for its grease, fire, and water resistance in products such as non-stick pan coatings, furniture, firefighter equipment, industrial products, and other common household items. There is evidence to suggest that PFOA is an endocrine disruptor affecting male and female reproductive systems. PFOA delivered to pregnant rats produced male offspring with decreased levels of 3-β- and 17-β-hydroxysteroid dehydrogenase, enzymes involved in the production of sperm. Adult women have exhibited low progesterone and androstenedione production when exposed to PFOA, leading to menstrual and reproductive health issues.

PFOA exerts hormonal effects, including alteration of thyroid hormone levels. Blood serum levels of PFOA were associated with an increased time to pregnancy, or "infertility", in a 2009 study. PFOA exposure is associated with decreased semen quality. PFOA also appeared to act as an endocrine disruptor by a potential mechanism on breast maturation in young girls; a C8 Science Panel status report noted an association between exposure in girls and a later onset of puberty.
Other suspected endocrine disruptors:
Some other examples of putative EDCs are polychlorinated dibenzo-dioxins (PCDDs) and -furans (PCDFs), polycyclic aromatic hydrocarbons (PAHs), phenol derivatives and a number of pesticides (most prominent being organochlorine insecticides like endosulfan, kepone (chlordecone) and DDT and its derivatives, the herbicide atrazine, and the fungicide vinclozolin), the contraceptive 17-alpha-ethinylestradiol, as well as naturally occurring phytoestrogens such as genistein and mycoestrogens such as zearalenone.
Types:
Molting in crustaceans is an endocrine-controlled process. In the marine penaeid shrimp Litopenaeus vannamei, exposure to endosulfan resulted in increased susceptibility to acute toxicity and increased mortalities in the postmolt stage of the shrimp. Many sunscreens contain oxybenzone, a chemical blocker that provides broad-spectrum UV coverage yet is subject to considerable controversy due to its potential estrogenic effect in humans. Tributyltin (TBT) is an organotin compound. For 40 years TBT was used as a biocide in anti-fouling paint, commonly known as bottom paint. TBT has been shown to impact invertebrate and vertebrate development, disrupting the endocrine system and resulting in masculinization, lower survival rates, and many health problems in mammals.
Temporal trends of body burden:
Since being banned, the average human body burdens of DDT and PCBs have been declining. Following their ban in 1972, the PCB body burden in 2009 was one-hundredth of what it was in the early 1980s. On the other hand, monitoring programs of European breast milk samples have shown that PBDE levels are increasing. An analysis of PBDE content in breast milk samples from Europe, Canada, and the US shows that levels are 40 times higher for North American women than for Swedish women, and that levels in North America are doubling every two to six years. It has also been suggested that the long-term slow decline in average body temperature observed since the beginning of the industrial revolution may result from disrupted thyroid hormone signalling.
Animal models:
Because endocrine disruptors affect complex metabolic, reproductive, and neuroendocrine systems, they cannot be adequately modeled in in vitro cell-based assays. Consequently, animal models are important for assessing the risk of endocrine disrupting chemicals.
Animal models:
Mice There are multiple lines of genetically engineered mice used for laboratory studies; in this case the lines can be used as population-based genetic foundations. For instance, a multi-parent population may be either Collaborative Cross (CC) or Diversity Outbred (DO). While both derive from the same eight founder strains, these populations have distinct differences. The eight founder strains combine wild-derived strains (with high genetic diversity) and historically significant biomedical research strains. Each genetically distinct line is important in EDC response, and indeed in almost all biological processes and traits. The CC population consists of 83 inbred mouse strains derived over many laboratory generations from the eight founder strains. These inbred mice have recombinant genomes developed to ensure every strain is equally related; this eliminates population structure, which can otherwise produce false positives in quantitative trait locus (QTL) mapping.
Animal models:
DO mice carry the same alleles as the CC population, but there are two major differences: 1) every DO individual is genetically unique, allowing hundreds of individuals to be used in a single mapping study and making DO mice an extremely useful tool for determining genetic relationships; 2) the catch is that DO individuals cannot be reproduced.
Animal models:
Transgenic These rodents, mainly mice, have been bred by inserting genes from another organism to create transgenic lines (thousands of lines exist). The most recent tool used for this is CRISPR/Cas9, which allows the process to be done more efficiently. Genes may be manipulated in particular cell populations if done under the correct conditions. For endocrine disrupting chemical (EDC) research, these rodents have become an important tool, to the point where humanized mouse models can be produced. Additionally, scientists use gene-knockout lines of mice to study how particular mechanisms work when impacted by EDCs. Transgenic rodents are an important tool for studies of the mechanisms affected by EDCs, but they take a long time to produce and are expensive. Additionally, the genes targeted for knockout are not always successfully hit, resulting in incomplete knockout of a gene or off-target expression.
Animal models:
Social models Gene-by-environment experiments with these relatively new rodent models may be able to reveal whether there are mechanisms by which EDCs could contribute to the social decline seen in autism spectrum disorder (ASD) and other behavioral disorders. Prairie and pine voles are socially monogamous, making them a better model for human social behaviors and development in relation to EDCs. Additionally, the prairie vole genome has been sequenced, making such experiments feasible. These voles can be compared with montane and meadow voles, which are socially promiscuous and solitary, when examining how different species vary in development and social brain structure. Both monogamous and promiscuous vole species have been used in these types of experiments, and future studies can expand on this topic. More complex models, with systems as close as possible to those of humans, are being investigated. Common rodent models, such as the standard ASD mouse, are helpful but do not fully capture human social behaviors. These rodents will always be just models, and this is important to keep in mind.
Animal models:
Zebrafish The endocrine systems of mammals and fish are similar; because of this, zebrafish (Danio rerio) are a popular laboratory choice. Zebrafish work well as a model organism partly because researchers can study them from the embryo stage, as the embryo is nearly transparent. Additionally, zebrafish have DNA sex markers, which allow biologists to assign sex to individual fish. This is particularly important when studying endocrine disruptors, since disruptors can affect how the sex organs develop and function: if sperm is later found in the ovaries during testing, it can be attributed to the chemical rather than to a genetic abnormality, because the sex was determined by the researcher. Besides being readily available and easy to study across their life stages, zebrafish have genes highly similar to those of humans—70% of human genes have a zebrafish counterpart, and 84% of human disease genes have a zebrafish counterpart. Perhaps most importantly, the vast majority of endocrine disruptors end up in waterways, so it is important to know how these disruptors affect fish, which arguably have intrinsic value and just happen to be model organisms as well.
Animal models:
Zebrafish embryos are transparent, and the fish are relatively small (larvae are only a few millimeters long). This allows scientists to view the larvae in vivo, without killing them, to study how their organs develop—in particular, neurodevelopment and the transport of presumed endocrine disrupting chemicals (EDCs)—and thus how development is impacted by particular chemicals. As a model, zebrafish offer simple modes of endocrine disruption, along with physiological, sensory, anatomical and signal-transduction mechanisms homologous to those of mammals. Another helpful tool available to scientists is the recorded zebrafish genome, along with multiple transgenic lines available for breeding. Compared genome-wide, zebrafish and mammals show prominent similarities, with about 80% of human genes expressed in the fish. Additionally, zebrafish are fairly inexpensive to breed and house in a lab, partly due to their shorter life span and the ability to house more of them compared with mammalian models.
Directions of research:
Research on endocrine disruptors is challenged by five complexities requiring special trial designs and sophisticated study protocols: The dissociation of space means that, although disruptors may act by a common pathway via hormone receptors, their impact may also be mediated by effects at the levels of transport proteins, deiodinases, degradation of hormones or modified setpoints of feedback loops (i.e. allostatic load).
Directions of research:
The dissociation of time may ensue from the fact that unwanted effects may be triggered in a small time window in the embryonal or fetal period, but consequences may ensue decades later or even in the generation of grandchildren.
The dissociation of substance results from additive, multiplicative or more complex interactions of disruptors in combination that yield fundamentally different effects from that of the respective substances alone.
The dissociation of dose implies that dose-effect relationships tend to be nonlinear and sometimes even U-shaped, so that low or medium doses may have stronger effects than high doses (a minimal numerical illustration follows this list).
The dissociation of sex reflects the fact that effects may be different depending on whether embryos or fetuses are female or male.
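To make the nonmonotonicity concrete, here is a toy curve; the function is a generic illustration of a nonmonotonic dose-effect relationship, not a model of any specific disruptor:

```python
import math

def toy_dose_response(dose: float) -> float:
    """A generic nonmonotonic (inverted-U) dose-effect curve: the effect rises
    at low doses and falls again at high doses. Illustrative only."""
    return dose * math.exp(-dose)

for d in (0.1, 1.0, 10.0):
    print(f"dose={d:>5}: effect={toy_dose_response(d):.3f}")
# dose=0.1 -> 0.090; dose=1.0 -> 0.368 (the peak); dose=10.0 -> 0.000
```

A conventional monotonic risk model fitted to the high-dose tail of such a curve would miss the stronger mid-dose effect entirely, which is the practical point of this complexity.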
Legal approach:
United States The multitude of possible endocrine disruptors are technically regulated in the United States by many laws, including: the Toxic Substances Control Act, the Food Quality Protection Act, the Food, Drug and Cosmetic Act, the Clean Water Act, the Safe Drinking Water Act, and the Clean Air Act.
The Congress of the United States has improved the evaluation and regulation process of drugs and other chemicals. The Food Quality Protection Act of 1996 and the Safe Drinking Water Act of 1996 simultaneously provided the first legislative direction requiring the EPA to address endocrine disruption through establishment of a program for screening and testing of chemical substances.
Legal approach:
In 1998, the EPA announced the Endocrine Disruptor Screening Program, establishing a framework for priority setting, screening and testing of more than 85,000 chemicals in commerce. While the Food Quality Protection Act only required the EPA to screen pesticides for the potential to produce effects similar to estrogens in humans, it also gave the EPA the authority to screen other types of chemicals and endocrine effects. Based on recommendations from an advisory panel, the agency expanded the screening program to include male hormones, the thyroid system, and effects on fish and other wildlife. The basic concept behind the program is that prioritization will be based on existing information about chemical uses, production volume, structure-activity relationships and toxicity. Screening is done using in vitro test systems (by examining, for instance, whether an agent interacts with the estrogen receptor or the androgen receptor) and in animal models, such as the development of tadpoles and uterine growth in prepubertal rodents. Full-scale testing will examine effects not only in mammals (rats) but also in a number of other species (frogs, fish, birds and invertebrates). Since the theory involves the effects of these substances on a functioning system, animal testing is essential for scientific validity, but it has been opposed by animal rights groups. Similarly, proof that these effects occur in humans would require human testing, and such testing also has opposition.
Legal approach:
After failing to meet several deadlines to begin testing, the EPA finally announced that they were ready to begin the process of testing dozens of chemical entities that are suspected endocrine disruptors early in 2007, eleven years after the program was announced. When the final structure of the tests was announced there was objection to their design. Critics have charged that the entire process has been compromised by chemical company interference. In 2005, the EPA appointed a panel of experts to conduct an open peer-review of the program and its orientation. Their results found that "the long-term goals and science questions in the EDC program are appropriate", however this study was conducted over a year before the EPA announced the final structure of the screening program. The EPA is still finding it difficult to execute a credible and efficient endocrine testing program. As of 2016, the EPA had estrogen screening results for 1,800 chemicals.
Legal approach:
Europe In 2013, a number of pesticides containing endocrine disrupting chemicals were in draft EU criteria to be banned. On 2 May, US TTIP negotiators insisted the EU drop the criteria, stating that a risk-based approach should be taken on regulation. Later the same day Catherine Day wrote to Karl Falkenberg asking for the criteria to be removed. The European Commission had been due to set criteria by December 2013 identifying endocrine disrupting chemicals (EDCs) in thousands of products—including disinfectants, pesticides and toiletries—that have been linked to cancers, birth defects and developmental disorders in children. However, the body delayed the process, prompting Sweden to state in May 2014 that it would sue the commission, blaming chemical industry lobbying for the disruption. "This delay is due to the European chemical lobby, which put pressure again on different commissioners. Hormone disrupters are becoming a huge problem. In some places in Sweden we see double-sexed fish. We have scientific reports on how this affects fertility of young boys and girls, and other serious effects," Swedish Environment Minister Lena Ek told the AFP, noting that Denmark had also demanded action. In November 2014, the Copenhagen-based Nordic Council of Ministers released its own independent report that estimated the impact of environmental EDCs on male reproductive health, and the resulting cost to public health systems. It concluded that EDCs likely cost health systems across the EU anywhere from 59 million to 1.18 billion euros a year, noting that even this represented only "a fraction of the endocrine related diseases". In 2020, the EU published its Chemicals Strategy for Sustainability, which is concerned with a green transition of the chemical industry away from xenohormones and other hazardous chemicals.
Environmental and human body cleanup:
There is evidence that once a pollutant is no longer in use, or once its use is heavily restricted, the human body burden of that pollutant declines. Through the efforts of several large-scale monitoring programs, the most prevalent pollutants in the human population are fairly well known. The first step in reducing the body burden of these pollutants is eliminating or phasing out their production.
Environmental and human body cleanup:
The second step toward lowering human body burden is awareness of, and potentially labeling of, foods that are likely to contain high amounts of pollutants. This strategy has worked in the past—pregnant and nursing women are cautioned against eating seafood that is known to accumulate high levels of mercury. The most challenging aspect of this problem is discovering how to eliminate these compounds from the environment and where to focus remediation efforts. Even pollutants no longer in production persist in the environment and bio-accumulate in the food chain. An understanding of how these chemicals, once in the environment, move through ecosystems is essential to designing ways to isolate and remove them. Global efforts have been made to label the most common POPs routinely found in the environment through the usage of chemicals like insecticides. The twelve main POPs have been evaluated and grouped together so as to streamline the information for the general population. Such facilitation has allowed nations around the world to work effectively on the testing and reduction of the usage of these chemicals. By reducing the presence of such chemicals in the environment, nations can reduce the leaching of POPs into food sources that contaminate the animals commercially fed to the U.S. population. Many persistent organic compounds, PCB, DDT and PBDE included, accumulate in river and marine sediments. Several processes are currently being used by the EPA to clean up heavily polluted areas, as outlined in their Green Remediation program. One of the most interesting approaches is the utilization of naturally occurring microbes that degrade PCB congeners to remediate contaminated areas. There are many success stories of cleanup efforts at large, heavily contaminated Superfund sites. A 10-acre (40,000 m2) landfill in Austin, Texas contaminated with illegally dumped VOCs was restored in a year to a wetland and educational park. A US uranium enrichment site that was contaminated with uranium and PCBs was cleaned up with high-tech equipment used to find the pollutants within the soil. The soil and water at a polluted wetlands site were cleaned of VOCs, PCBs and lead, native plants were installed as biological filters, and a community program was implemented to ensure ongoing monitoring of pollutant concentrations in the area. These case studies are encouraging due to the short amount of time needed to remediate the sites and the high level of success achieved.
Environmental and human body cleanup:
Studies suggest that bisphenol A, certain PCBs, and phthalate compounds are preferentially eliminated from the human body through sweat. Beyond this, recent scientific advances have been made to increase the rate of elimination of pollutants from the human body. For example, BPA removal techniques have been proposed that use enzymes such as laccase and peroxidase to degrade BPA into less harmful compounds. Another technique for BPA removal is the use of highly reactive radicals for degradation.
Economic effects:
Human exposure may cause some health effects, such as lower IQ and adult obesity. These effects may lead to lost productivity, disability, or premature death in some people. One source estimated that, within the European Union, this might have about twice the economic impact of the effects caused by mercury and lead contamination. The socio-economic burden of endocrine disrupting chemical (EDC)-associated health effects for the European Union was estimated, based on currently available literature and considering the uncertainties with respect to causality with EDCs and corresponding health-related costs, to be in the range of €46 billion to €288 billion per year. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Eudiometer**
Eudiometer:
A eudiometer is a laboratory device that measures the change in volume of a gas mixture following a physical or chemical change.
Description:
Depending on the reaction being measured, the device can take a variety of forms. In general, it is similar to a graduated cylinder, and is most commonly found in two sizes: 50 mL and 100 mL. It is closed at the top end with the bottom end immersed in water or mercury. The liquid traps a sample of gas in the cylinder, and the graduation allows the volume of the gas to be measured.
Description:
For some reactions, two platinum wires (chosen for their non-reactivity) are placed in the sealed end so an electric spark can be created between them. The electric spark can initiate a reaction in the gas mixture and the graduation on the cylinder can be read to determine the change in volume resulting from the reaction. The use of the device is quite similar to the original barometer, except that the gas inside displaces some of the liquid that is used.
History:
In 1772, Joseph Priestley began experimenting with different "airs" using his own redesigned pneumatic trough, in which mercury instead of water would trap gases that were usually soluble in water. From these experiments Priestley is credited with discovering many new gases such as oxygen, hydrogen chloride, and ammonia. He also discovered a way to find the purity or "goodness" of air using the "nitrous air test". The eudiometer functions on the greater solubility of NO2 in water over NO, and the oxidation of NO into NO2 by atmospheric oxygen: 2 NO + O2 → 2 NO2. A quantity of air is combined with NO over water, and the more soluble compound NO2 dissolves, leaving the remaining air somewhat contracted in volume. The richer the air was in oxygen, the greater was the contraction.

Marsilio Landriani was studying pneumatic chemistry with Pietro Moscati when they attempted to quantify Priestley's nitrous air test for air quality. Landriani used a pneumatic trough in the form of a tall, graduated cylinder over water. As it measured the salubrity of air, he called it a eudiometer. An associate of Moscati's, Felice Fontana, also designed a eudiometer on the same principles and quantified the salubrity of the air.

The eudiometer with the nitrous air test was the way Jan Ingenhousz verified that the bubbles given off under water by plant leaves exposed to sunlight were oxygen bubbles. His description of photosynthesis was published in 1779, and in 1785 he wrote about eudiometers in Journal de Physique (v. 26, p. 339). According to a biographer, Ingenhousz indicated that "many instruments were called eudiometers although strictly speaking they didn't deserve the name ... misunderstandings could exist when not everybody was using the same instruments."

An electrified version of the eudiometer was developed by Count Alessandro Volta (1745–1827), an Italian physicist well known for his contributions to the electric battery and electricity. Aside from its laboratory function, the eudiometer is also known for its part in the "Volta pistol". Volta invented this instrument in 1777 for the purpose of testing the "goodness" of air, analyzing the flammability of gases, or demonstrating the chemical effects of electricity. Volta's pistol had a long glass tube that was closed at the top, like a eudiometer. Two electrodes were fed through the tube and produced a spark gap inside it. Volta's initial use of this instrument concerned the study of swamp gases in particular. Volta's pistol was filled with oxygen and another gas, and the homogeneous mixture was sealed with a cork. A spark could be introduced into the gas chamber by the electrodes, possibly catalyzing a reaction by static electricity, using Volta's electrophorus. If the gases were flammable, they would explode and increase the pressure within the gas chamber; this pressure would eventually become too great and cause the cork to become airborne. Volta's pistol was made with either glass or brass; however, due to the electricity, the glass was vulnerable to exploding. Volta's extensive studies on measuring and creating high levels of electric current led to the electrical unit, the volt, being named after him.

In 1785 Henry Cavendish used a eudiometer to determine the fraction of oxygen in the Earth's atmosphere.
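The arithmetic behind the nitrous air test is simple enough to sketch; the function below is an idealized illustration assuming NO is supplied in excess and that all NO2 produced dissolves, so three volumes of gas are removed per volume of oxygen:

```python
def nitrous_air_contraction(v_air: float, o2_fraction: float) -> float:
    """Fractional volume contraction in an idealized nitrous air test.

    Assumes excess NO over water: 2 NO + O2 -> 2 NO2, with all NO2 dissolving.
    Per volume of O2, one volume of O2 and two volumes of NO leave the gas
    phase, so three volumes are removed in total.
    """
    v_o2 = v_air * o2_fraction
    return 3.0 * v_o2 / v_air  # contraction relative to the air sample alone

# "Good" air (about 21% oxygen) contracts far more than stale air:
print(nitrous_air_contraction(100.0, 0.21))  # ~0.63
print(nitrous_air_contraction(100.0, 0.10))  # ~0.30
```

This linear dependence of contraction on oxygen content is what let the test serve as a crude oxygen meter.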
Etymology:
The name "eudiometer" comes from the Greek εὔδιος eúdios meaning clear or mild, which is the combination of the prefix eu- meaning "good", and -dios meaning "heavenly" or "of Zeus" (the god of the sky and atmosphere), with the suffix -meter meaning "measure". Because the eudiometer was originally used to measure the amount of oxygen in the air, which was thought to be greater in "nice" weather, the root eudio- appropriately describes the apparatus.
Usage:
Applications of a eudiometer include the analysis of gases and the determination of volume differences in chemical reactions. The eudiometer is filled with water, inverted so that its open end faces the ground (while the open end is held closed so that no water escapes), and then submerged in a basin of water. A chemical reaction is then carried out to generate gas. One reactant typically sits at the bottom of the eudiometer (having flowed downward when the eudiometer was inverted) and the other reactant is suspended near the rim of the eudiometer, typically by means of a platinum or copper wire (chosen for low reactivity). The gas created by the chemical reaction rises into the eudiometer, so that the experimenter may accurately read the volume of gas produced at any given time; normally the volume is read once the reaction is complete. This procedure is followed in many experiments, including one in which the ideal gas law constant R is determined experimentally (see the sketch below).
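As a sketch of the arithmetic involved, the following assumes a classic classroom setup: magnesium reacting with excess hydrochloric acid to give hydrogen collected over water. Every number, including the water vapor pressure, is an illustrative assumption rather than a value from the text.

```python
# Estimating the gas constant R from a eudiometer reading: Mg + 2 HCl -> MgCl2 + H2.
# All numbers below are illustrative assumptions for a typical classroom run.

M_MG = 24.305            # g/mol, molar mass of magnesium
mass_mg = 0.040          # g of magnesium ribbon (assumed)
n_h2 = mass_mg / M_MG    # mol of H2 produced (1:1 with Mg)

v_gas = 0.0412           # L, read from the eudiometer graduations (assumed)
t_kelvin = 295.0         # K, room temperature (assumed)

p_total = 101.3          # kPa, barometric pressure (assumed)
p_water = 2.6            # kPa, vapor pressure of water near 22 C (assumed)
p_h2 = p_total - p_water # Dalton's law: the trapped gas is H2 plus water vapor

r = (p_h2 * v_gas) / (n_h2 * t_kelvin)    # kPa*L/(mol*K)
print(f"R = {r:.2f} kPa*L/(mol*K)")        # ideal value: 8.314
```

The water vapor correction is the step most often forgotten; without it, the computed R comes out a few percent high.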
Usage:
The eudiometer is similar in structure to the meteorological barometer. Like the barometer, a eudiometer uses a liquid column to confine the gas in the tube, converting the gas into a visible, measurable volume. A correct measurement of the pressure when performing these experiments is crucial for the calculations involving the PV = nRT equation, because pressure affects the density of the gas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Badass (guitar bridges)**
Badass (guitar bridges):
Badass was first trademarked by Leo Quan, a manufacturer of bridges for guitars and basses. Badass bridges (used on the Martin EB18 electric bass and a replacement bridge on the Fender Precision Bass) feature individually adjustable saddles, which allows for "extremely accurate intonation adjustments." The Badass came on the market in the 1970s, and was made by entrepreneur and guitar repairman Glen Quan, of Marin County music store Bananas At Large and Leo Malliaris of Oakland's Leo's Music (hence Leo Quan). The first Badass bridges were built from diecast zinc and were considered somewhat rough; later models were made from a high-density zinc alloy and more finely milled. Badass is currently owned and distributed by Allparts Music, a subsidiary of Morse Group. In late 2022 and early 2023 Allparts Music relaunched the entire Badass bridge line including the Badass II bass bridge, Badass III bass bridge, Badass V bass bridge, Badass Wraparound Guitar Bridge, and Badass Fine Tuner Guitar Tailpiece.
Notable users and models:
Frank Bello (Anthrax) on his Fender signature bass.
Mike Dirnt (Green Day), Badass II on a Fender Precision Bass made in the Fender Custom Shop, the source of the signature Mike Dirnt Precision Bass.
Steve Harris (Iron Maiden), Badass II bridge on his Fender Precision Bass.
Kirk Hammett (Metallica, formerly Exodus), on his 1974 Gibson Flying V.
Geddy Lee (Rush); the 1998 "Geddy Lee Limited Edition" Fender Jazz Bass is equipped with a Badass II, also his Jetglo Rickenbacker 4001 with a Badass I.
Marcus Miller, on a Fender Jazz Bass.
Jimmy Page (Led Zeppelin), on his Danelectro.
Malcolm Young (AC/DC); the Gretsch "Malcolm Young Series" was equipped with a Badass bridge, based on Young's customized early 1960s Gretsch Jet Firebird.
Jerry Only (Misfits); on his "devastator" bass.
Juan Alderete (The Mars Volta, Racer X), on his Fender Precision Bass.
David Ellefson (Megadeth); David's Jackson signature bass model has a Badass II bridge.
Sting (The Police) on his Spector NS-2 bass.
Nate Mendel (Foo Fighters) uses a Badass II on his signature Fender Precision Bass model.
Dan Andriano (Alkaline Trio, The Damned Things) uses Badass II bridges on many of his Fender Precision and Jazz Basses.
Flea (Red Hot Chili Peppers) used a Badass II on his Modulus Basses.
C.J. Ramone on his Fender Dee Dee Ramone Precision Basses.
Les Claypool (Primus) on his Carl Thompson's Maple Piccolo Bass.
Ben Shepherd (Soundgarden) on his primary Fender Precision and Jazz Basses. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ContentTools**
ContentTools:
ContentTools is an open-source WYSIWYG editor for HTML content written in JavaScript/CoffeeScript by Anthony Blackshaw of Getme Limited. The ContentTools editor allows text content, images, embedded videos, tables and other page content to be edited, resized, or moved via drag and drop directly within the page. The ContentTools editor features a floating context-sensitive toolbar which can be repositioned by the user and which offers functionality and ease of use similar to a word processing application. Because the ContentTools editor is written in JavaScript/CoffeeScript, it can easily be integrated with any HTML document, customised, extended or integrated into a full web-based content management system. The ContentTools editor has already been implemented into the content management systems of a number of websites built by Getme Limited, as well as used within the third-party systems Yii and Page Builder Sandwich.
The ContentTools collection:
ContentTools is part of a collection of JavaScript libraries (ContentTools, ContentEdit, ContentSelect, FSM, HTMLParser) which were developed to aid in the creation of HTML WYSIWYG editors.
Browser compatibility:
The ContentTools editor is compatible with all major web browsers and operating systems, including Google Chrome, Internet Explorer, Mozilla Firefox, and Safari.
Reception:
Response to the release of ContentTools from the web development community has been positive. Raymond Camden of the Telerik Developer Network wrote in his review that, despite his previous dislike of rich text editors, he was "pretty impressed with ContentTools" and described the general ease of use as "really well done". Jake Rocheleau of Web Design Ledger said "the functionality is stellar and would be superb if combined with user authentication". ContentTools was featured in Tutorialzine's 15 Interesting JavaScript and CSS Libraries for June 2016, where Danny Markov described ContentTools as "a powerful JavaScript library that can transform any HTML page into a WYSIWYG editor", adding "ContentTools reveals countless possibilities for building wondrous interactive apps and services." ContentTools was also featured on Sitepoint's 10 Best jQuery and HTML5 WYSIWYG Plugins, JavaScripting.com's selection of the best JavaScript libraries, frameworks, and plugins, and Unheap's repository of JavaScript plugins. ContentTools has been recommended by WebdesignerDepot, Codrops, WebAppers, Speckyboy and BestDevList. ContentTools was included in the 50+ Best CSS and JavaScript Libraries in 2016 by creatskills.com.
Implementations:
The ContentTools editor has been implemented into the following content management systems: Yii 2.0, via the yii2-content-tools implementation, and Page Builder Sandwich, a commercial page builder plugin for use with WordPress websites. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Internet Encyclopedia of Philosophy**
Internet Encyclopedia of Philosophy:
The Internet Encyclopedia of Philosophy (IEP) is a scholarly online encyclopedia, dealing with philosophy, philosophical topics, and philosophers. The IEP combines open access publication with peer reviewed publication of original papers. Contribution is generally by invitation, and contributors are recognized and leading international specialists within their field.
History:
The IEP was founded by philosopher James Fieser in 1995, operating through a non-profit organization with the aim of providing accessible and scholarly information on philosophy. The current general editors are philosophers James Fieser and Bradley Dowden, with the staff also including numerous area editors as well as volunteers. The entire website was redesigned in 2009, moving from static HTML pages to the open-source content management system WordPress.
Organization:
The intended audience for the IEP is philosophy students and faculty who are not specialists within the field, and thus articles are written in an accessible style. Articles consist of a brief survey or overview, followed by the body of the article, and an annotated bibliography. Articles are searchable either by an alphabetical index or through a Google-powered search mechanism.
Usage:
Similarweb analytics suggest that the IEP website is accessed worldwide between two and three million times per month. Some 75% of this usage is through internet searches, 18% is through direct access, and 5% through referral, with the referring websites including other reference websites and university library guides.
Recognition:
The IEP is included by the American Library Association in its listing of Best Free Reference Sites; listed as an online philosophy resource by the Federation of Australasian Philosophy in Schools Associations; listed by EpistemeLinks as one of the "outstanding resources" in philosophy on the internet; and listed as a reliable resource in many university philosophy guides. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Zoë (robot)**
Zoë (robot):
Zoë is a solar-powered autonomous robot with sensors able to detect microorganisms and map the distribution of life in the Atacama Desert of northern Chile, duplicating tasks that could be used in future exploration of Mars. Zoë is equipped with tools and sensors to search for direct evidence of life beneath the ground surface. Zoë significantly aids the research needed to study Mars because it acts as a mobile observer and analyzer of any signs of life in a given location. It collects primary data that will help in understanding the conditions present on Mars. The project also serves to verify the reliability of autonomous, mobile scientific robots.
Specifications:
Zoë is a four-wheeled 220 kg rover with dimensions of approximately 2.9m in length, 2.9m in width, and a height of 2m.
It achieves a top speed of 1 m/s with a turning radius of 2.5 m. Zoë carries a 3.5 m² solar panel with a maximum power generation of 1600 W, which allows it to move, run its computers and sensors, and charge the batteries.
Tools and Sensors:
Zoë carries a one-meter-long drill on its back. The drill collects soil samples from up to 80 cm below the surface, which are then analyzed by other tools on board. Zoë carries the Mars Microbeam Raman Spectrometer, which is able to identify specific types of minerals in the subsurface. Other instruments include the Bio IV Fluorescence Imager, which indicates the amount of natural fluorescence in soil and rocks and detects chlorophyll-based life; a sample carousel, used to store samples and deliver drill shavings to other on-board instruments; a color panoramic imager on a tilt unit, which captures high-resolution photographs at up to 1 mm/pixel resolution; a visible/near-infrared spectrometer, which records spectra at visible and near-infrared wavelengths; and environmental sensors able to measure weather conditions such as temperature, humidity, wetness, and wind.
Functionality:
Solar powered; carries sensors; avoids rocks and slopes; drives and steers.
Life in the Atacama Desert:
Zoë was placed in the driest desert on Earth, the Atacama Desert, which is also considered the most lifeless. Research on this habitat did not truly begin until interest rose within NASA's astrobiology program. With many aspects similar to Mars, the Atacama Desert has proven to be a suitable stand-in for Mars. With this realization, scientists and researchers are dedicating their time to examining every part of the desert in hopes of finding microbial life.
Expeditions:
In the fall of 2004, a team of scientists accompanied Zoë on a two-month expedition in the Atacama Desert so she wouldn't "drive off a cliff". Zoë was placed in the Atacama Desert in 2005 for its first field expedition. During its time there, it was able to map living organisms. In 2013, Zoë returned to the desert with a meter-long drill to investigate existing microbial life beneath the desert's surface.
Team:
Carnegie Mellon Field Robotics Center; NASA Ames Research Center; NASA Jet Propulsion Laboratory; The University of Tennessee Department of Geology; Universidad Catolica del Norte; Honeybee Robotics Spacecraft Mechanisms Corporation; USGS; Washington University in St. Louis; other: Raul Arias.
Sources:
Life in the Atacama; Space.com; NASA; SETI Institute; Carnegie Mellon University | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multi-seat valve**
Multi-seat valve:
Multi-seat valves, or multi-port valves, are used to shut off or control complex liquid flows in sterile areas in the beverage and food industry as well as in the fine chemical and pharmaceutical industry. Multi-seat valves often replace combinations of several single-seated valves in aseptic processing.
Use and function:
Especially in the production of pharmaceutical active substances, but also for numerous aseptic processes, installations with a high degree of individualization are required, as the high diversity of variants can hardly be covered by standard components. As a result, valve manufacturers need high expertise in construction, as well as close interaction between planner and manufacturer at the planning stage of (aseptic) installations, to ensure, for example, the draining capability of the installation along with utmost compactness. This requirement arises from the product-related high diversity of biotechnological processes and from particularities in the processing and consolidation of ingredients for these highly sensitive products.
Use and function:
Similar to single-seated valves, multi-seat valves consist of a valve body; however, they are equipped with several diaphragms and actuators, as well as one control and feedback unit per actuator.
Use and function:
With the exclusive use of gate valves, i.e. diaphragm valves that block the flow in a pipeline using a single diaphragm, the variation options would be limited to different nominal connection sizes. Adding and combining T- or Y-shaped single-seated valves allows quite complicated process flows and distributions. Nonetheless, multi-seat valves offer specific advantages for aseptic processes, leading to virtually endless flexibility in system design.
Use and function:
Since multi-seat valves are mostly made from a forged block, which enables an extremely compact design, they are sometimes referred to as block valves. The compactness of a multifunctional multi-seat valve, and hence of the installation, can be strongly influenced by the size of the available actuators. Significant advantages can be achieved by using actuators whose diameters are no larger than the diaphragms' diameters.
Use and function:
Distinguished manufacturers of multi-seat valves include GEMÜ, Bürkert, Saunders and SISTO.
Cleanability:
Due to the compact layout of the multi-seat valve, its dimensions are smaller, so the (sub)system requires less space. Furthermore, the surface in contact with the medium inside the valve can be significantly reduced compared with a conventional circular construction, which is advantageous in terms of cleaning and draining capability. Multi-seat valves also contain fewer areas of poor rinsing or product accumulation.
Cleanability:
When rinsing out valve combinations, the "3D rule" must be taken into account: the length of a branch up to the valve seat should not exceed three times the diameter of the branch (see the sketch below). Using a multi-seat construction, "dead branches" can be avoided, so microbial contamination of the stagnant medium is reduced considerably.
If the valve is installed correctly at the defined inclined position and the inner metal surface is of high quality, modern inner geometry allows a complete self-draining of the diaphragm valve.
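The 3D rule reduces to a one-line check during piping design; the following is a minimal sketch with hypothetical names and units of my own choosing, not from any valve-industry standard library:

```python
def satisfies_3d_rule(branch_length_mm: float, branch_diameter_mm: float) -> bool:
    """True if a dead-leg branch meets the 3D rule: length <= 3 x diameter."""
    return branch_length_mm <= 3.0 * branch_diameter_mm

# A 120 mm branch on a 50 mm line is acceptable; a 200 mm branch is not.
print(satisfies_3d_rule(120.0, 50.0))  # True
print(satisfies_3d_rule(200.0, 50.0))  # False
```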
Actuator types:
On account of the mostly complex constructions of multi-seat valves, pneumatic actuators are used in most cases. Pneumatic actuators can be differentiated as piston and diaphragm actuators depending on the design.
The advantages of the piston actuator lie in a compact design that is especially suitable for multi-seat constructions; lightweight multi-seat valves with fewer "dead spaces" can thus be designed. Since the piston actuator has a smooth surface without areas that are difficult to clean, its outside is easier to clean than that of the diaphragm actuator.
The advantage of the diaphragm actuator, however, is a more cost-effective assembly. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gamma-Glutamylmethylamide**
Gamma-Glutamylmethylamide:
γ-Glutamylmethylamide (gamma-Glutamylmethylamide, abbrev. GMA, synonyms N-methyl-L-glutamine, metheanine) is an amino acid analog of the proteinogenic amino acids L-glutamic acid and L-glutamine, found primarily in plant and fungal species; simply speaking, it is L-glutamine methylated on the amide nitrogen. It is an identified important biosynthetic intermediate allowing bacteria (e.g., methanotrophs) to use methylated amines as a carbon and nitrogen source for growth, and so is of significant biotechnological interest. Like its close relative theanine, it is a pharmacologically active constituent of green tea, with preliminary evidence for at least comparable activity to theanine as a hypotensive. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Castrol**
Castrol:
Castrol Limited is a British oil company that markets industrial and automotive lubricants, offering a wide range of oils, greases and similar products for most lubrication applications. The company was originally named CC Wakefield; "Castrol" was originally just the brand name for CC Wakefield's motor oils, but the company eventually changed its name to Castrol when the product name became better known than the original company name. Since 2000, Castrol Limited has been a subsidiary of BP, which acquired the company for $4.73 billion.
History:
The "Wakefield Oil Company" was founded by Charles Wakefield in 1899. Wakefield had previously left a job at Vacuum Oil to start a new business in London, selling lubricants for trains and heavy machinery. The company launched its first lubricant in 1906. The new business was established in Cheapside in London to commercialise lubricants for trains and other heavy machinery. Eight former Vacuum Oil employees joined Wakefield in his new company.
History:
In the early 20th century, Wakefield Co. developed lubricants especially suited to automobiles and aeroplanes. The brand "Castrol" originated after researchers added measured amounts of castor oil (a vegetable oil derived from castor beans) to their lubricant formulations.
History:
By 1960, the name of the motor oil had eclipsed the company's name itself, so "CC Wakefield & Company" became "Castrol Limited". In 1966, Castrol was acquired by Burmah Oil, which was renamed "Burmah-Castrol". Burmah-Castrol was purchased by London-based multinational BP (then "BP Amoco plc") in 2000. At the time of purchase, Burmah-Castrol had a turnover of nearly £3 billion with operating profits of £284 million. The company also had 18,000 employees worldwide, with operations in 55 countries. Respectively, BP Amoco had 80,400 employees worldwide and revenues of more than £63 billion.
History:
While Burmah's operations folded into the group, Castrol has remained as a subsidiary of BP.
On 28 February 2023, Castrol unveiled a new logo, its first in 17 years, aimed at better reflecting its positioning in the market and the opportunities it sees in meeting the changing needs of customers.
Sponsorship:
Motorsport The brand has been involved in Formula One for many years, supplying a number of teams, including McLaren (1979–1980 and 2017), Williams (1997–2005), Team Lotus (1992–1993), Brabham (1983–1984), Jaguar (2000–2004), Renault/Alpine (2017–present) and Walter Wolf Racing. Castrol has sponsored the Ford World Rally Team and M-Sport in the World Rally Championship since 2003, and the Chip Ganassi Racing Ford GT factory team from 2016 to 2019. It has also sponsored Volkswagen Motorsport activities in the Dakar Rally and later the World Rally Championship since 2005. Audi Sport's activities in rallying and touring car racing have been sponsored by Castrol, as has its Le Mans Prototype program since 2011. BMW Motorsport was sponsored by Castrol from 1999 to 2014.
Sponsorship:
Toyota Motorsport had Castrol sponsorship in the World Rally Championship from 1993 to 1999, and Hyundai Motorsport did so from 2000 to 2002. Also, the Honda factory team in the World Touring Car Championship had Castrol sponsorship from 2012 to 2020. In the All-Japan Grand Touring Championship, the 1997 championship-winning TOM'S Toyota Supra (famous from the Gran Turismo series by Polyphony Digital) and later the Mugen Honda NSX had Castrol sponsorship.
Sponsorship:
In North America, Castrol has been an active sponsor of NHRA drag racing. Castrol sponsored John Force Racing under the GTX brand from 1987 until the end of the 2014 season. Also, the All American Racers had Castrol sponsorship in the CART World Series from 1996 to 1999. In 2014, Castrol sponsored former Indy 500-winning IndyCar team Bryan Herta Autosport, with English rookie Jack Hawksworth behind the wheel.
Sponsorship:
Castrol is the name sponsor of Castrol Raceway, a multi-track oval, drag, and motocross racing facility in Edmonton, Alberta, Canada. Castrol is the sponsor of D. J. Kennington in the NASCAR Canadian Tire Series and NASCAR Cup Series. In Australia, Castrol has a long history with the Supercars category; between 1993 and 2005, Castrol was the title sponsor of Perkins Engineering. It also sponsored Longhurst Racing between 1995 and 1999, Ford Performance Racing between 2007 and 2009, and Paul Morris Motorsport in 2010. In conjunction with a multi-year series sponsorship between 2014 and 2016, several race events acquired Castrol naming rights, including the Castrol Edge Townsville 500 and the Castrol Gold Coast 600. Castrol was the title sponsor of Team Bray, owned by Australian drag racing legend Victor Bray, for 17 years.
Sponsorship:
Castrol was the main sponsor of the Castrol International Rally in Canberra for 11 years between 1976 and 1986. The same was true for an International Rally held in South Africa, ending annually in neighbouring Swaziland.
It was the most prestigious event on the South African rally calendar at the time, until Castrol ended its sponsorship of this event. Later only some competitors' cars were carrying the bright green and red colours of Castrol sponsorship in national rally events, notably the S.A. Toyota dealer team.
In 2019, Castrol extended its sponsorship activities by re-forming a partnership with Jaguar, this time supporting the team in Formula E; since the 2020 season it has also sponsored NASCAR Cup Series giants Roush Fenway Racing (Ford).
Castrol also briefly made an appearance in 1993 with Nissan in the British Touring Car Championship, where Keith O'Dor managed to win the 9th round of the season at Silverstone with his teammate Win Percy taking 2nd.
Sponsorship:
American football Castrol advertising has been a part of telecasts of the National Football League for years. In 2011, Castrol's Edge brand became the official motor oil sponsor for the league, along with Minnesota Vikings running back Adrian Peterson endorsing the product; it has since been renewed until the 2017 season. The endorsement deal with Peterson was terminated on 16 September 2014, due to ongoing child abuse allegations.
Sponsorship:
Cricket The Castrol Cricket Index for a team is a dynamic indicator of the overall performance of the cricket team. It is calculated by taking into consideration the batting momentum, the bowling efficiency, the performance of the teams in the quick-start overs and the extreme-performance overs, and many other factors. Castrol Cricket also ranks cricketers based on their overall performance. India-centric initiatives have also been undertaken, such as Castrol World Cup ka Hero, created during the 2011 Cricket World Cup.
Sponsorship:
Rugby Union In 2011, Castrol signed a four-year sponsorship deal for the Australian national rugby union team and as the naming rights sponsor of The Rugby Championship.
Football From 1995 until 1997, Castrol were also the shirt sponsors of English Football League side Swindon Town.
Advertising:
Castrol products are still marketed under the red, white and green colour scheme that dates from the launch of Castrol motor oil in 1909. Advertisements for Castrol oil historically featured the slogan "Castrol – liquid engineering". This was more recently refreshed and reintroduced as "It's more than just oil. It's liquid engineering." For many years, the opening notes of the second Nachtmusik movement of Mahler's Seventh Symphony were used as the signature theme of Castrol TV commercials. Wakefield vehicles advertised the company and Castrol on their sides; models of them were made by Dinky Toys, and in later times became sought-after collectors' items. One example from 1934 to 1935, in very good to excellent condition, was estimated to fetch £1,000–£1,500 at auction in 2016. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**High-water mark (computer security)**
High-water mark (computer security):
In the fields of physical security and information security, the high-water mark for access control was introduced by Clark Weissmann in 1969. It pre-dates the Bell–LaPadula security model, whose first volume appeared in 1972.
Under high-water mark, any object less than the user's security level can be opened, but the object is relabeled to reflect the highest security level currently open, hence the name.
High-water mark (computer security):
The practical effect of the high-water mark was a gradual movement of all objects towards the highest security level in the system. If user A is writing a CONFIDENTIAL document, and checks the unclassified dictionary, the dictionary becomes CONFIDENTIAL. Then, when user B is writing a SECRET report and checks the spelling of a word, the dictionary becomes SECRET. Finally, if user C is assigned to assemble the daily intelligence briefing at the TOP SECRET level, reference to the dictionary makes the dictionary TOP SECRET, too.
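As a toy illustration of that escalation, the sketch below models labels as ordered integers and floats an object's label up to the level of any session that opens it; the class and function names are mine, not from Weissmann's paper.

```python
# Toy model of high-water mark relabeling. Levels and names are illustrative.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}
NAMES = {v: k for k, v in LEVELS.items()}

class HWMObject:
    def __init__(self, name, level="UNCLASSIFIED"):
        self.name, self.level = name, LEVELS[level]

    def open_at(self, session_level: str):
        """Open the object in a session; float its label up to the session level."""
        s = LEVELS[session_level]
        if self.level > s:
            raise PermissionError(f"{self.name} is above {session_level}")
        self.level = max(self.level, s)   # the high-water mark: labels only rise

dictionary = HWMObject("dictionary")
for session in ("CONFIDENTIAL", "SECRET", "TOP SECRET"):  # users A, B, C
    dictionary.open_at(session)
    print(session, "->", NAMES[dictionary.level])
# The dictionary drifts irreversibly upward: CONFIDENTIAL, SECRET, TOP SECRET.
```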
Low-water mark:
Low-water mark is an extension to the Biba Model. In the Biba model, no-write-up and no-read-down rules are enforced; these rules are exactly opposite to those of the Bell–LaPadula model. In the low-water mark model, reading down is permitted, but after the read the subject's label is degraded to the object's label. It is classified among floating-label security models. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**AUSM**
AUSM:
The advection upstream splitting method (AUSM) was developed as a numerical inviscid flux function for solving a general system of conservation equations. It is based on the upwind concept and was motivated by the goal of providing an alternative to other upwind methods, such as the Godunov method, the flux difference splitting methods of Roe and of Solomon and Osher, and the flux vector splitting methods of Van Leer and of Steger and Warming. The AUSM first recognizes that the inviscid flux consists of two physically distinct parts, i.e., convective and pressure fluxes. The former is associated with the flow (advection) speed, the latter with the acoustic speed; respectively, they may be classified as the linear and the nonlinear fields. Currently, the convective and pressure fluxes are formulated using the eigenvalues of the flux Jacobian matrices. The method was originally proposed by Liou and Steffen for typical compressible aerodynamic flows, and later substantially improved to yield a more accurate and robust version. To extend its capabilities, it has been further developed for all speed regimes and for multiphase flow. Its variants have also been proposed.
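To make the convective/pressure splitting concrete, here is a minimal 1D sketch in the spirit of the original Liou–Steffen formulation; the particular split polynomials are one common choice, and all function and variable names are my own, so treat it as an illustrative sketch rather than the definitive scheme:

```python
import numpy as np

def ausm_flux(UL, UR, gamma=1.4):
    """1D AUSM-style inviscid flux for Euler states U = (rho, rho*u, rho*E).

    Convective part: upwinded by an interface Mach number built from split
    Mach polynomials. Pressure part: blended with split pressure weights.
    A sketch of one common variant, not a production solver.
    """
    def primitives(U):
        rho, m, E = U
        u = m / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u * u)
        a = np.sqrt(gamma * p / rho)       # speed of sound
        H = (E + p) / rho                  # total enthalpy
        return rho, u, p, a, H

    rhoL, uL, pL, aL, HL = primitives(UL)
    rhoR, uR, pR, aR, HR = primitives(UR)
    ML, MR = uL / aL, uR / aR

    def M_plus(M):   # split Mach number, positive part
        return 0.25 * (M + 1.0) ** 2 if abs(M) <= 1.0 else max(M, 0.0)

    def M_minus(M):  # split Mach number, negative part
        return -0.25 * (M - 1.0) ** 2 if abs(M) <= 1.0 else min(M, 0.0)

    def P_plus(M):   # split pressure weight, positive part
        return 0.25 * (M + 1.0) ** 2 * (2.0 - M) if abs(M) <= 1.0 else float(M > 0.0)

    def P_minus(M):  # split pressure weight, negative part
        return 0.25 * (M - 1.0) ** 2 * (2.0 + M) if abs(M) <= 1.0 else float(M < 0.0)

    M_half = M_plus(ML) + M_minus(MR)            # interface (advection) Mach number
    p_half = P_plus(ML) * pL + P_minus(MR) * pR  # interface (acoustic) pressure

    # Convective flux: upwind the vector Phi = (rho*a, rho*a*u, rho*a*H).
    PhiL = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL])
    PhiR = np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])
    Fc = M_half * (PhiL if M_half >= 0.0 else PhiR)

    return Fc + np.array([0.0, p_half, 0.0])     # pressure flux enters momentum only
```

For a uniformly supersonic left-to-right state this reduces to the exact left Euler flux (rho*u, rho*u² + p, rho*u*H), which is a quick sanity check of the upwinding.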
Features:
The Advection Upstream Splitting Method has many features. The main ones are: accurate capturing of shock and contact discontinuities; entropy-satisfying solutions; positivity-preserving solutions; algorithmic simplicity (not requiring the explicit eigen-structure of the flux Jacobian matrices) and straightforward extension to additional conservation laws; freedom from the “carbuncle” phenomenon; and uniform accuracy and convergence rate for all Mach numbers. Since the method does not specifically require eigenvectors, it is especially attractive for systems whose eigen-structure is not known explicitly, as in the case of the two-fluid equations for multiphase flow.
Applications:
The AUSM has been employed to solve a wide range of problems, low-Mach to hypersonic aerodynamics, large eddy simulation and aero-acoustics, direct numerical simulation, multiphase flow, galactic relativistic flow etc. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**A System of Logic**
A System of Logic:
A System of Logic, Ratiocinative and Inductive is an 1843 book by English philosopher John Stuart Mill.
Overview:
In this work, he formulated the five principles of inductive reasoning that are known as Mill's Methods. This work is important in the philosophy of science, and more generally, insofar as it outlines the empirical principles Mill would use to justify his moral and political philosophies.
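Because the text only names Mill's Methods without stating them, a compact computational gloss of two of the five may help; the encoding below (instances as sets of circumstances paired with an outcome) is my own didactic simplification, not Mill's formulation:

```python
# Two of Mill's Methods as naive set operations over observed instances.
# Each instance is (set of circumstances present, whether the effect occurred).

def method_of_agreement(instances):
    """Circumstances common to all instances in which the effect occurred."""
    positives = [circs for circs, effect in instances if effect]
    return set.intersection(*map(set, positives)) if positives else set()

def method_of_difference(with_effect, without_effect):
    """Circumstances present when the effect occurs but absent when it does not."""
    return set(with_effect) - set(without_effect)

meals = [
    ({"oysters", "wine", "bread"}, True),
    ({"oysters", "beer", "salad"}, True),
    ({"wine", "bread", "salad"}, False),
]
print(method_of_agreement(meals))                             # {'oysters'}
print(method_of_difference({"oysters", "bread"}, {"bread"}))  # {'oysters'}
```

Both methods are eliminative: they narrow the candidate causes rather than prove one, which is precisely the inductive character Mill was trying to systematize.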
Overview:
An article in "Philosophy of Recent Times" has described this book as an "attempt to expound a psychological system of logic within empiricist principles.” This work was important to the history of science, being a strong influence on scientists such as Dirac. A System of Logic also had an impression on Gottlob Frege, who rebuked many of Mill's ideas about the philosophy of mathematics in his work The Foundations of Arithmetic.Mill revised the original work several times over the course of thirty years in response to critiques and commentary by Whewell, Bain, and others.
Editions:
Mill, John Stuart, A System of Logic, University Press of the Pacific, Honolulu, 2002, ISBN 1-4102-0252-6
Sources:
Philosophy of Recent Times, ed. J. B. Hartmann (New York: McGraw-Hill, 1967), I, 14. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Straight-line program**
Straight-line program:
In mathematics, more specifically in computational algebra, a straight-line program (SLP) for a finite group G = ⟨S⟩ is a finite sequence L of elements of G such that every element of L either belongs to S, is the inverse of a preceding element, or is the product of two preceding elements. An SLP L is said to compute a group element g ∈ G if g ∈ L, where g is encoded by a word in S and its inverses.
Straight-line program:
Intuitively, an SLP computing some g ∈ G is an efficient way of storing g as a group word over S; observe that if g is constructed in i steps, the word length of g may be exponential in i, but the length of the corresponding SLP is linear in i. This has important applications in computational group theory, by using SLPs to efficiently encode group elements as words over a given generating set.
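The exponential gap between word length and SLP length can be seen with repeated squaring; the sketch below, using a representation and helper names of my own invention, builds an SLP of length k + 1 computing g^(2^k), an element whose word length over {g} is 2^k:

```python
# An SLP of length k+1 computing g**(2**k) by repeated squaring.
# As a word over {g}, the same element needs 2**k letters.

def squaring_slp(k):
    """Return SLP instructions: ('gen', 0) loads g; ('mul', j, j) squares line j."""
    program = [("gen", 0)]                        # line 0: g itself
    program += [("mul", i, i) for i in range(k)]  # line i+1: (line i) squared
    return program

def evaluate(program, generators, multiply):
    lines = []
    for instr in program:
        if instr[0] == "gen":
            lines.append(generators[instr[1]])
        else:  # ("mul", j, k): product of two earlier lines
            lines.append(multiply(lines[instr[1]], lines[instr[2]]))
    return lines[-1]

# Example in the multiplicative group of integers mod 1000003:
g = 2
result = evaluate(squaring_slp(10), [g], lambda a, b: (a * b) % 1000003)
print(result == pow(2, 2**10, 1000003))  # True: 11 SLP lines vs a 1024-letter word
```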
Straight-line program:
Straight-line programs were introduced by Babai and Szemerédi in 1984 as a tool for studying the computational complexity of certain matrix group properties. Babai and Szemerédi prove that every element of a finite group G has an SLP of length O(log² |G|) in every generating set.
Straight-line program:
An efficient solution to the constructive membership problem is crucial to many group-theoretic algorithms. It can be stated in terms of SLPs as follows. Given a finite group G = ⟨S⟩ and g ∈ G, find a straight-line program computing g over S. The constructive membership problem is often studied in the setting of black box groups. The elements are encoded by bit strings of a fixed length. Three oracles are provided for the group-theoretic functions of multiplication, inversion, and checking for equality with the identity. A black box algorithm is one which uses only these oracles. Hence, straight-line programs for black box groups are black box algorithms.
Straight-line program:
Explicit straight-line programs are given for a wealth of finite simple groups in the online ATLAS of Finite Groups.
Definition:
Informal definition Let G be a finite group and let S be a subset of G. A sequence L = (g1,…,gm) of elements of G is a straight-line program over S if each gi can be obtained by one of the following three rules: gi ∈ S; gi = gj ⋅ gk for some j, k < i; or gi = gj−1 for some j < i. The straight-line cost c(g|S) of an element g ∈ G is the length of a shortest straight-line program over S computing g. The cost is infinite if g is not in the subgroup generated by S.
Definition:
A straight-line program is similar to a derivation in predicate logic. The elements of S correspond to axioms and the group operations correspond to the rules of inference.
Definition:
Formal definition Let G be a finite group and let S be a subset of G. A straight-line program of length m over S computing some g ∈ G is a sequence of expressions (w1,…,wm) such that for each i: wi is a symbol for some element of S; or wi = (wj, −1) for some j < i; or wi = (wj, wk) for some j, k < i; and such that wm takes the value g when evaluated in G in the obvious manner.
Definition:
The original definition, as first given by Babai and Szemerédi, requires that G = ⟨S⟩. The definition presented above is a common generalisation of this.
From a computational perspective, the formal definition of a straight-line program has some advantages. Firstly, a sequence of abstract expressions requires less memory than terms over the generating set. Secondly, it allows straight-line programs to be constructed in one representation of G and evaluated in another. This is an important feature of some algorithms.
Examples:
The dihedral group D12 is the group of symmetries of a hexagon. It can be generated by a 60-degree rotation ρ and one reflection λ. For example, the sequence (ρ, λ, ρ·ρ, ρ²·ρ, λ·ρ³)—in which every entry after the first two is a product of two preceding entries—is a straight-line program for λρ³; the sketch below makes it concrete. Similarly, in S6, the group of permutations on six letters, one can take α=(1 2 3 4 5 6) and β=(1 2) as generators and write down a straight-line program computing (1 2 3)(4 5 6).
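A self-contained sketch of the D12 example, with the hexagon's vertices labelled 0–5 and permutations written as tuples (the encoding is an illustrative choice):

```python
def compose(p, q):                      # (p ∘ q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(len(p)))

rho = (1, 2, 3, 4, 5, 0)                # 60-degree rotation of the hexagon
lam = (0, 5, 4, 3, 2, 1)                # a reflection fixing vertex 0

# Straight-line program: after the generators, every entry is a
# product of two earlier entries, as in the informal definition.
L = [rho, lam]
L.append(compose(L[0], L[0]))           # rho^2
L.append(compose(L[2], L[0]))           # rho^3
L.append(compose(L[1], L[3]))           # lam * rho^3, the target element
print(L[-1])
```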
Applications:
Short descriptions of finite groups. Straight-line programs can be used to study compression of finite groups via first-order logic. They provide a tool to construct "short" sentences describing G (i.e. much shorter than |G|). In more detail, SLPs are used to prove that every finite simple group has a first-order description of length O(log|G|), and every finite group G has a first-order description of length O(log³|G|). Straight-line programs computing generating sets for maximal subgroups of finite simple groups. The online ATLAS of Finite Group Representations provides abstract straight-line programs for computing generating sets of maximal subgroups for many finite simple groups.
Applications:
Example: The group Sz(32), belonging to the infinite family of Suzuki groups, has rank 2, with generators a and b, where a has order 2, b has order 4, ab has order 5, ab² has order 25 and abab²ab³ has order 25. A straight-line program computing a generating set for a maximal subgroup E32·E32⋊C31 can be found in the online ATLAS of Finite Group Representations.
Reachability theorem:
The reachability theorem states that, given a finite group G generated by S, each g ∈ G has cost at most (1 + lg|G|)². This can be understood as a bound on how hard it is to generate a group element from the generators; for example, for G = S6 (so |G| = 720) one has lg|G| = 9, so every element has a straight-line program over S of length at most (1 + 9)² = 100.
Here the function lg(x) is an integer-valued version of the logarithm function: for k ≥ 1, lg(k) = max{r : 2^r ≤ k}.
Reachability theorem:
The idea of the proof is to construct a set Z = {z1,…,zs} that will work as a new generating set (s will be defined during the process). It is usually larger than S, but any element of G can be expressed as a word of length at most 2|Z| over Z. The set Z is constructed by inductively defining an increasing sequence of sets K(i).
Reachability theorem:
Let K(i) = {z1^α1 · z2^α2 · … · zi^αi : αj ∈ {0,1}}, where zi is the group element added to Z at the i-th step. Let c(i) denote the length of a shortest straight-line program that contains Z(i) = {z1,…,zi}. Let K(0) = {1G} and c(0) = 0. We define the set Z recursively: if K(i)⁻¹K(i) = G, set s = i and stop.
Reachability theorem:
Otherwise, choose some zi+1 ∈ G\K(i)⁻¹K(i) (which is non-empty) that minimises the "cost increase" c(i+1) − c(i). By this process, Z is defined in a way so that any g ∈ G can be written as an element of K(s)⁻¹K(s), effectively making it easy to generate from Z.
Two claims complete the argument. The first ensures that the process terminates within lg(|G|) many steps: while the process continues, zi+1 ∉ K(i)⁻¹K(i), so K(i+1) = K(i) ∪ K(i)·zi+1 is a disjoint union and |K(i+1)| = 2|K(i)|; hence |K(s)| = 2^s ≤ |G| and s ≤ lg|G|. The second claim is used to show that the cost of every group element is within the required bound.
It takes at most 2i steps to generate an element g1 ∈ K(i)⁻¹K(i), since each such element is a product of at most 2i factors drawn from Z(i) and their inverses. There is no point in generating the element of maximum length, since it is the identity; hence 2i − 1 steps suffice. To generate an element g1·g2 ∈ G\K(i)⁻¹K(i), 2i steps are sufficient.
We now finish the theorem. Since K(s)⁻¹K(s) = G, any g ∈ G can be written in the form k1⁻¹·k2 with k1, k2 ∈ K(s). By the second claim, we need at most s² − s steps to generate Z(s) = Z, and no more than 2s − 1 steps to generate g from Z(s).
Therefore c(g|S) ≤ s² + s − 1 ≤ lg²|G| + lg|G| − 1 ≤ (1 + lg|G|)². | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Optical rogue waves**
Optical rogue waves:
Optical rogue waves are rare pulses of light analogous to rogue or freak ocean waves. The term optical rogue waves was coined to describe rare pulses of broadband light arising during the process of supercontinuum generation—a noise-sensitive nonlinear process in which extremely broadband radiation is generated from a narrowband input waveform—in nonlinear optical fiber. In this context, optical rogue waves are characterized by an anomalous surplus in energy at particular wavelengths (e.g., those shifted to the red of the input waveform) or an unexpected peak power. These anomalous events have been shown to follow heavy-tailed statistics, also known as L-shaped statistics, fat-tailed statistics, or extreme-value statistics. These probability distributions are characterized by long tails: large outliers occur rarely, yet much more frequently than expected from Gaussian statistics and intuition. Such distributions also describe the probabilities of freak ocean waves and various phenomena in both the man-made and natural worlds. Despite their infrequency, rare events wield significant influence in many systems. Aside from the statistical similarities, light waves traveling in optical fibers are known to obey mathematics similar to that governing water waves traveling in the open ocean (the nonlinear Schrödinger equation), supporting the analogy between oceanic rogue waves and their optical counterparts. More generally, research has exposed a number of different analogies between extreme events in optics and hydrodynamic systems. A key practical difference is that most optical experiments can be done with a table-top apparatus, offer a high degree of experimental control, and allow data to be acquired extremely rapidly. Consequently, optical rogue waves are attractive for experimental and theoretical research and have become a highly studied phenomenon. The particulars of the analogy between extreme waves in optics and hydrodynamics may vary depending on the context, but the existence of rare events and extreme statistics in wave-related phenomena are common ground.
History:
Optical rogue waves were initially reported in 2007 based on experiments investigating the stochastic properties of supercontinuum generation from a train of nearly-identical picosecond input pulses. In the experiments, radiation from a mode-locked laser (megahertz pulse train) was injected into a nonlinear optical fiber and characteristics of the output radiation were measured at the single-shot level for thousands of pulses (events). These measurements revealed that the attributes of individual pulses can be markedly different from those of the ensemble average. Consequently, these attributes are normally averaged out or hidden in time-averaged observations. The initial observations occurred at the University of California, Los Angeles as part of DARPA-funded research aiming to harness supercontinuum for time-stretch A/D conversion and other applications in which stable white light sources are required (e.g., real-time spectroscopy). The study of optical rogue waves ultimately showed that stimulated supercontinuum generation (as described further below) provides a means of stabilizing such broadband sources.

Pulse-resolved spectral information was obtained by extracting wavelengths far from that of the input pulse using a longpass filter and detecting the filtered light with a photodiode and a real-time digital oscilloscope. The radiation can also be spectrally resolved with the time-stretch dispersive Fourier transform (TS-DFT), which produces a wavelength-to-time mapping such that the temporal traces collected for each event correspond to the actual spectral profile over the filtered bandwidth. The TS-DFT has subsequently been used to stretch the complete (unfiltered) output spectra of such broadband pulses, thereby allowing measurement of full pulse-resolved spectra at the megahertz repetition rate of the source (see below).

Pulse-resolved measurements showed that a fraction of the pulses had much more redshifted energy content than the majority of events. In other words, the energy passed by the filter was much larger for a small fraction of the events, and the fraction of events with anomalous energy content in this spectral band could be increased by raising the power of the input pulses. Histograms of this energy content showed heavy-tailed properties. In some scenarios, the vast majority of events had a negligible amount of energy within the filter bandwidth (i.e., below the measurement noise floor), while a small number of events had energies at least 30–40 times the average value, making them very clearly visible.
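The wavelength-to-time mapping of the TS-DFT can be summarized by the standard dispersive-Fourier-transform relation (a textbook approximation, quoted here for orientation, assuming propagation dominated by second-order dispersion with accumulated group-delay dispersion β₂L and carrier frequency ω₀):

$$ t(\omega) \approx t_0 + \beta_2 L \,(\omega - \omega_0), $$

so each spectral component arrives at a distinct, frequency-proportional delay, and a fast photodiode trace of the stretched pulse reproduces its single-shot spectrum once β₂L is large enough that this mapping dominates the pulse's own temporal structure.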
History:
The analogy between these extreme optical events and hydrodynamic rogue waves was initially developed by noting a number of parallels, including the role of solitons, heavy-tailed statistics, dispersion, modulation instability, and frequency downshifting effects. Additionally, forms of the nonlinear Schrödinger equation are used to model both optical pulse propagation in nonlinear fiber and deep water waves, including hydrodynamic rogue waves. Simulations were then conducted with the nonlinear Schrödinger equation in an effort to model the optical findings. For each trial or event, the initial conditions consisted of an input pulse and a minute amount of broadband input noise. The initial conditions (i.e., pulse power and noise level) were chosen so that the spectral broadening was relatively limited in the typical events. Collecting the results from the trials, very similar filtered energy statistics were observed compared with those seen experimentally. The simulations showed that rare events had experienced significantly more spectral broadening than the others because a soliton had been ejected in the former class of events, but not in the vast majority of events. By applying a correlation analysis between the redshifted output energy and the input noise, it was observed that a particular component of the input noise was elevated each time a surplus in the redshifted noise was generated. The critical noise component has specific frequency and timing relative to the pulse envelope—a noise component that efficiently seeds modulation instability and can, therefore, accelerate the onset of soliton fission.
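A minimal split-step Fourier sketch of this kind of Monte-Carlo experiment, assuming NumPy; all parameter values (fiber constants, pulse, noise level, filter edge, shot count) are illustrative assumptions rather than the published ones. Each "shot" propagates the same pulse plus fresh noise, and the energy in one spectral wing is collected; a heavy tail shows up as a 99th percentile far above the median.

```python
import numpy as np

n, dt = 2**12, 5e-15                          # grid points, time step (s)
t = (np.arange(n) - n / 2) * dt
w = 2 * np.pi * np.fft.fftfreq(n, dt)         # angular-frequency offsets (rad/s)
beta2, gamma = -1e-26, 0.01                   # anomalous GVD (s^2/m), nonlinearity (1/(W*m))
dz, steps = 0.5, 200                          # step size (m), number of steps (100 m total)

def propagate(a):
    lin = np.exp(0.5j * beta2 * w**2 * dz)    # dispersive step in the frequency domain
    for _ in range(steps):
        a = np.fft.ifft(lin * np.fft.fft(a))
        a = a * np.exp(1j * gamma * np.abs(a)**2 * dz)   # Kerr nonlinearity
    return a

rng = np.random.default_rng(0)
energies = []
for _ in range(250):                          # many shots, fresh noise each time
    pulse = np.sqrt(50.0) / np.cosh(t / 1e-12)           # ~1 ps, 50 W peak sech pulse
    noise = 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    spec = np.fft.fft(propagate(pulse + noise))
    energies.append(np.sum(np.abs(spec[w < -2e13])**2))  # energy in one spectral wing
print(np.percentile(energies, [50, 99]))      # heavy tail: 99th percentile >> median
```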
Principles:
Supercontinuum generation with long pulses Supercontinuum generation is a nonlinear process in which intense input light, usually pulsed, is broadened into a wideband spectrum. The broadening process can involve different pathways depending on the experimental conditions, yielding varying output properties. Especially large broadening factors can be realized by launching narrowband pump radiation (long pulses or continuous-wave radiation) into a nonlinear fiber at or near its zero-dispersion wavelength or in the anomalous dispersion regime. Such dispersive characteristics support modulation instability, which amplifies input noise and forms Stokes and anti-Stokes sidebands around the pump wavelength. This amplification process, manifested in the time domain as a growing modulation on the envelope of the input pulse, then leads to the generation of high-order solitons, which break apart into fundamental solitons and coupled dispersive radiation. This process, known as soliton fission, occurs in supercontinuum generation pumped by both short and long pulses, but with ultrashort pulses, noise amplification is not a prerequisite for it to occur. These solitonic and dispersive fission products are redshifted and blueshifted, respectively, with respect to the pump wavelength. With further propagation, the solitons continue to shift to the red through the Raman self-frequency shift, an inelastic scattering process.
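For orientation, the modulation-instability gain of the scalar nonlinear Schrödinger equation is a standard textbook result (quoted here as background, not a value from the experiments described): with anomalous dispersion (β₂ < 0), sideband detuning Ω from the pump, nonlinear coefficient γ, and pump peak power P₀,

$$ g(\Omega) = |\beta_2 \Omega| \sqrt{\Omega_c^2 - \Omega^2}, \qquad \Omega_c^2 = \frac{4\gamma P_0}{|\beta_2|}, \qquad |\Omega| < \Omega_c, $$

with maximum gain 2γP₀ at Ω = ±Ω_c/√2. Noise components near these detunings are the ones amplified most strongly into the Stokes and anti-Stokes sidebands.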
Principles:
Fluctuations Supercontinuum generation can be sensitive to noise. Especially with narrowband input radiation and large broadening factors, much of the spectral broadening is initiated by input noise, causing the spectral and temporal properties of the radiation to inherit substantial variability from shot-to-shot and to be highly sensitive to the initial conditions. These shot-to-shot variations normally go unnoticed in conventional measurements as they average over a very large number of pulses. Based on such time-averaged measurements, the spectral profile of the supercontinuum generally appears smooth and relatively featureless, whereas, the spectrum of a single pulse may be highly structured in comparison. Other effects such as dispersion management and polarization changes can also influence stability and bandwidth.
Principles:
Both the pump power and input noise level are influential in the supercontinuum generation process, determining, e.g., the broadening factor and the onset of soliton fission. Below the threshold for soliton fission, the soliton number generated from an average output pulse is below one, and well above threshold, it can be quite large. In the case of large pump power, soliton fission has often been compared to the onset of boiling in a superheated liquid in that the transition begins rather suddenly and explosively. In short, supercontinuum generation amplifies the input noise, transferring its properties to macroscopic characteristics of the broadened pulse train. Many of the commercially available supercontinuum sources are pumped by long pulses and, therefore, tend to have relatively significant pulse-to-pulse spectral fluctuations.
Principles:
Input noise, or any other stimulus, matching the timing of the sensitive portion of the pump envelope and the frequency shift of modulation instability gain experiences the largest amplification. The interplay between nonlinearity and dispersion creates a particular portion on the pump envelope where the modulation instability gain is large enough and the walk-off between the pump and growing modulation is not too rapid. The frequency of this sensitive window is generally substantially shifted from the input wavelength of the pump, especially if the pump is near the zero-dispersion wavelength of the fiber. Experimentally, the dominant source of such noise is typically amplified spontaneous emission (ASE) from the laser itself or amplifiers used to increase the optical power. Once the growing modulation becomes large enough, soliton fission begins suddenly, liberating one or more redshifted solitons, which travel much slower than the remnants of the original envelope and continue shifting to the red due to Raman scattering. A properly positioned detection filter can be used to catch anomalous occurrences, such as a rare soliton that has been liberated due to a small surplus in the key input noise component.
Principles:
Non-Gaussian statistics Non-Gaussian statistics arise due to the nonlinear mapping of random initial conditions into output states. For example, modulation instability amplifies input noise, which ultimately leads to soliton formation. Also, in systems displaying heavy-tailed statistical properties, random input conditions often enter through a seemingly insignificant, nontrivial, or otherwise-hidden variable. Such is generally the case for optical rogue waves; for example, they can begin from a specific out-of-band noise component, which is usually very weak and unnoticed. Yet, in the output states, these minor input variations can be magnified into large potential swings in key observables. The latter may, therefore, exhibit substantial fluctuations for no readily apparent reason. Thus, the appearance of extreme statistics is often striking not only because of their counterintuitive probability assignments, but also because they frequently signify a nontrivial or unexpected sensitivity to initial conditions. It is important to recognize that rogue waves in both optics and hydrodynamics are classical phenomena and, therefore, intrinsically deterministic. However, determinism does not necessarily indicate that it is straightforward or practical to make useful predictions. Optical rogue waves and their statistical properties can be investigated in numerical simulations with the generalized nonlinear Schrödinger equation, a classical propagation equation that is also used to model supercontinuum generation and, more generally, pulse propagation in optical fiber. In such simulations, a source of input noise is needed to produce the stochastic output variations. Frequently, input phase noise with a power of one photon per mode is employed, corresponding to shot noise. Yet, noise levels beyond the one-photon-per-mode level are generally more experimentally realistic and often needed.

Measurements of redshifted energy serve as a means of detecting the presence of rare solitons. Additionally, peak intensity and redshifted energy are well correlated variables in supercontinuum generation with low soliton number; thus, redshifted energy serves as an indicator of peak intensity in this regime. This may be understood by recognizing that for sufficiently small soliton number only rare events contain a well-formed soliton. Such a soliton has short duration and high peak intensity, and Raman scattering ensures that it is also redshifted relative to the majority of the input radiation. Even if more than one soliton occurs in a single event, the most intense one generally has the most redshifted energy in this scenario. The solitons generally have little opportunity to interact with other intense features. As previously noted, the situation at higher pump power is different in that soliton fission occurs explosively; soliton structures appear in number at essentially the same point of the fiber and relatively early in the propagation, allowing collisions to occur. Such collisions are accompanied by an energy exchange facilitated by third-order dispersion and Raman effects, causing some solitons to absorb energy from others, thereby creating the potential for anomalous spectral redshifts. In this situation, the anomalous occurrences are not necessarily tied to the largest peak intensities. In summary, rare solitons may be generated at low pump power or input noise levels, and these events can be identified by their redshifted energy.
At higher power, many solitons are generated and simulations suggest that their collisions can also yield extremes in redshifted energy, although in this case, the redshifted energy and peak intensity may not be as strongly correlated. Oceanic rogue waves are also thought to arise from both seeding of modulation instability and collisions between solitons, as in the optical scenario.

Just above the soliton-fission threshold, where one or more solitons are liberated in a typical event, rare narrowband events are detected as deficiencies in redshifted energy. In this regime of operation, the pulse-resolved redshifted energy follows left-skewed heavy-tailed statistics. These rare narrowband events are not generally correlated with reductions in components of the input noise. Instead, a rare frustration of spectral broadening occurs because noise components can seed multiple presolitonic features; thus, the seeds can effectively compete for gain within the pump envelope, and therefore, the growth is suppressed. Under various operating conditions (pump power level, filter wavelength, etc.), a wide variety of statistical distributions are observed.
Principles:
Other conditions Supercontinuum sources driven by ultrashort pump pulses (on the order of tens of femtoseconds in duration or less) are generally much more stable than those pumped by longer pulses. Even though such supercontinuum sources may make use of anomalous or zero dispersion, the propagation lengths are usually short enough that noise-seeded modulation instability has a less significant impact. The broadband nature of the input radiation means that octave-spanning supercontinua can be achieved with relatively modest broadening factors. Even so, the noise dynamics of such sources can still be nontrivial, though they are generally stable and can be suitable for precision time-resolved measurements and frequency metrology. Nevertheless, soliton timing jitter in supercontinuum generation with 100 fs pulses has also been traced to input noise amplification by modulation instability, and L-shaped statistics in filtered energy have been observed in supercontinuum sources driven by such pulses. Extreme statistics have also been observed with pumping in the normal dispersion regime, wherein modulation instability occurs due to the contribution of higher-order dispersion.
Principles:
Turbulence and breathers Wave turbulence or convective instability induced by third-order dispersion and/or Raman scattering has also been invoked to describe the formation of optical rogue waves. Third-order dispersion and Raman scattering play a central role in the generation of large redshifts, and turbulence treats the statistical properties of weakly-coupled waves with randomized relative phases. Another theoretical description focused on analytical methodology has examined periodic nonlinear waves known as breathers. These structures provide a means of investigating modulation instability and are solitonic in nature. The Peregrine soliton, a specific breather solution, has attracted attention as a possible type of rogue wave that may have significance in optics and hydrodynamics, and this solution has been observed experimentally in both contexts. The stochastic nature of rogue waves in optics and hydrodynamics is one of their defining features, yet it remains an open question for these solutions as well as for other postulated analytic forms.
Principles:
Extreme events in beam filamentation Extreme phenomena have been observed in single-shot studies of the temporal dynamics of optical beam filamentation in air and the two-dimensional transverse profiles of beams forming multiple filaments in a nonlinear xenon cell. In the former studies, spectral analysis of self-guided optical filaments, which were generated with pulses close to the critical power for filamentation in air, showed that the shot-to-shot statistics become heavy-tailed at the short wavelength and long wavelength edges of the spectrum. Termed optical rogue wave statistics, this behavior was studied in simulations, which supported an explanation based on pump noise transfer by self-phase modulation. In the latter experimental study, filaments of extreme intensity described as optical rogue waves were observed to emerge due to mergers between filament strings when multiple filaments are generated. In contrast, the statistical properties were found to be approximately Gaussian for low filament numbers. It was noted that extreme spatio-temporal events are found only in certain nonlinear media even though other media have larger nonlinear responses, and the experimental findings suggested that laser-induced thermodynamic fluctuations within the nonlinear medium are the origin of the extreme events observed in multifilamentation. Numerical predictions of extreme occurrences in multiple beam filamentation have also been performed, with some differences in conditions and interpretation.
Stimulated supercontinuum generation:
Supercontinuum generation is generally unstable when pumped by long pulses. The occurrence of optical rogue waves is an extreme manifestation of this instability and arises due to a sensitivity to a particular component of input noise. This sensitivity can be exploited to stabilize and increase the generation efficiency of the spectral broadening process by actively seeding the instability with a controlled signal instead of allowing it to begin from noise. The seeding can be accomplished with an extraordinarily weak, tailored optical seed pulse, which stabilizes supercontinuum radiation by actively controlling or stimulating modulation instability. Whereas noise-induced (i.e., spontaneously generated) supercontinuum radiation usually has significant intensity noise and little to no pulse-to-pulse coherence, controlled stimulation results in a supercontinuum pulse train with greatly improved phase and amplitude stability. Additionally, the stimulus can also be used to actuate the broadband output, i.e., to switch the supercontinuum on and off by applying or blocking the seed. The seed may be derived from the pump pulse by broadening a portion of it slightly and then carving out a stable portion of the broadened tail. The relative delay between the pump and seed pulses is then adjusted accordingly, and the two pulses are combined in the nonlinear fiber. Alternatively, the extremely stable stimulated supercontinuum can be generated by deriving both pump and seed radiation from a parametric process, e.g., the two-color output (signal and idler) of an optical parametric oscillator. Added input modulations have also been studied for changing the frequency of rare events and optical feedback can be employed to speed up the spectral broadening process. Stimulated supercontinuum radiation can also be generated using an independent continuous-wave seed, which avoids the need to control the timing, but the seed must instead have higher average power. A continuous-wave-seeded supercontinuum source has been employed in time stretch microscopy, yielding improved images compared with those obtained using unseeded sources. Stimulated supercontinuum generation can be slowed or frustrated by applying a second seed pulse with the proper frequency and timing to the mix. Thus, applying one seed pulse can accelerate the spectral broadening process, and the application of a second seed pulse can once again delay spectral broadening. This frustration effect occurs because the two seeds effectively compete for gain within the pump envelope, and it is a controlled version of the rare narrowband events known to occur stochastically in certain supercontinuum pulse trains (see above).

Stimulation has been harnessed for enhancing silicon-based supercontinuum generation at telecommunications wavelengths. Normally, spectral broadening in silicon is self-limiting because of strong nonlinear absorption effects: two-photon absorption and the associated free-carrier generation rapidly sap the pump, and increasing the pump power leads to more rapid depletion. In silicon nanowires, stimulated supercontinuum generation can greatly extend the broadening factor by circumventing the clamping effect of nonlinear loss, make broadening much more efficient, and yield coherent output radiation with the proper seed radiation.
Pulse-resolved spectra:
Complete single-shot spectral profiles of modulation instability and supercontinuum have been mapped into the time domain with the TS-DFT for capture at megahertz repetition rates. These experiments have been used to collect large volumes of spectral data very rapidly, permitting detailed statistical analyses of the underlying dynamics in ways that are exceedingly difficult or impossible to achieve with standard measurement techniques. Latent intrapulse correlations have been identified in modulation instability and supercontinuum spectra through such experiments. In particular, spectral measurements with the TS-DFT have been employed to reveal a number of key aspects of modulation instability in the pulsed (i.e., temporally-confined) scenario. Experimental data show that modulation instability amplifies discrete spectral modes, which exhibit mode asymmetry between Stokes and anti-Stokes wavelengths. Furthermore, the dynamics display prominent competition effects between these amplified modes, an interaction that favors domination of one mode over others. Such TS-DFT measurements have provided insights into the mechanism that often causes single patterns to dominate a given spatial or temporal region in the various contexts in which modulation instability appears. This type of exclusive mode growth is also influential in the initiation of optical rogue waves. Optically, these features become apparent in single-shot studies of pulse-driven modulation instability, but such effects are normally unrecognizable in time-averaged measurements due to inhomogeneous broadening of the modulation instability gain profile. The acquisition of a large number of such single-shot spectra also has a critical role in these analyses. This measurement technique has been used to measure supercontinuum spectra spanning an octave in bandwidth, and in such broadband measurements, rare rogue solitons have been observed at redshifted wavelengths. Single-shot spectral measurements with the TS-DFT have also recorded rogue-wave-like probability distributions caused by cascaded Raman dynamics in the process of intracavity Raman conversion in a partially mode-locked fiber laser. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sony E PZ 18-110mm F4 G OSS**
Sony E PZ 18-110mm F4 G OSS:
The Sony E PZ 18-110mm F4 G OSS is an advanced zoom lens with a constant maximum aperture for the Sony E-mount, announced by Sony on September 9, 2016. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aztec use of entheogens**
Aztec use of entheogens:
The ancient Aztecs employed a variety of entheogenic plants and animals within their society. The various species have been identified through their depiction on murals, vases, and other objects.
History:
There are many pieces of archaeological evidence in reference to the use of entheogens early in the history of Mesoamerica. Olmec burial sites with remains of the Bufo toad (Bufo marinus), Maya mushroom effigies, and Spanish writings all point to a heavy involvement with psychoactive substances in the Aztec lifestyle.
The Florentine Codex contains multiple references to the use of psychoactive plants among the Aztecs. The 11th book of the series contains identifications of five plant entheogens. R. Gordon Wasson, Richard Evans Schultes, and Albert Hofmann have suggested that the statue of Xochipilli, the Aztec 'Prince of Flowers,' contains effigies of a number of plant-based entheogens.
The plants were primarily used by the priests, or tlamacazqui, other nobility, and visiting dignitaries. They would use them for divination much as the indigenous groups of central Mexico do today. The priests would also ingest the entheogens to engage in prophecy, interpret visions, and heal.
Entheogens:
Ololiuqui and Tlitliltzin Ololiuqui (Coatl xoxouhqui) was identified as Rivea corymbosa in 1941 by Richard Evans Schultes. The name Ololiuqui refers to the brown seeds of the Rivea corymbosa (Morning Glory) plant.
Tlitliltzin was identified later as being Ipomoea violacea by R. Gordon Wasson. This variation contains black seeds and usually has bluish hued flowers.
The seeds of these plants contain the psychoactive d-lysergic acid amide, or LSA. The preparation of the seeds involved grinding them on a metate, then filtering them with water to extract the alkaloids. The resulting brew was then drunk to bring forth visions.
The Florentine Codex Book 11 describes the Ololiuqui intoxication: It makes one besotted; it deranges one, troubles one, maddens one, makes one possessed. He who eats it, who drinks it, sees many things which greatly terrify him. He is really frightened [by the] poisonous serpent which he sees for that reason.
The morning glory was also utilized in healing rituals by the ticitl. The ticitl would often take ololiuqui to determine the cause of diseases and illness. It was also used as an anesthetic to ease pain by creating a paste from the seeds and tobacco leaf, then rubbing it on the affected body part.
Entheogens:
Mushrooms Called "Teonanácatl" in Nahuatl (literally "god mushroom"—compound of the words teo(tl) (god) and nanácatl (mushroom))—the mushroom genus Psilocybe has a long history of use within Mesoamerica. The members of the Aztec upper class would often take teonanácatl at festivals and other large gatherings. According to Fernando Alvarado Tezozomoc, it was often a difficult task to procure mushrooms. They were quite costly as well as very difficult to locate, requiring all-night searches.
Entheogens:
Both Fray Bernardino de Sahagún and Fray Toribio de Benavente Motolinia describe the use of the mushrooms. The Aztecs would drink chocolate and eat the mushrooms with honey. Those partaking in the mushroom ceremonies would fast before ingesting the sacrament. The act of taking mushrooms is known as monanacahuia, meaning to "mushroom oneself".
Entheogens:
Some written observations, shaped by the doctrine of Catholicism, recount the use of the mushroom among the people of Montezuma. Allegedly, during the emperor's coronation ceremony, many prisoners were sacrificed, had their flesh eaten, and their hearts removed. Those who were invited guests to the feast ate mushrooms, which Diego Durán describes as causing those who ate them to go insane. After the defeat of the Aztecs, the Spanish forbade traditional religious practices and rituals that they considered "pagan idolatry", including ceremonial mushroom use.
Entheogens:
Sinicuichi Not much is known of the use of sinicuichi (alternate spelling sinicuiche) among the Aztecs. R. Gordon Wasson identified the flower on the statue of Xochipilli and suggested from its placement with other entheogens that it was probably used in a ritualistic context. Multiple alkaloids have been isolated from the plant; with cryogenine, lythrine, and nesodine being the most important.
Entheogens:
Sinicuichi could be the plant tonatiuh yxiuh, "the herb of the sun," from the Aztec Herbal of 1552 (tonatiuh means sun). Notably, in Central and South America today, sinicuichi is often called abre-o-sol, or the "sun opener." Tonatiuh yxiuh is described as being a summer-blooming plant, as is Heimia.
The Herbal also includes a recipe for a potion to conquer fear. It reads: Let one who is fear-burdened take as a drink a potion made of the herb tonatiuh yxiuh which throws out the brightness of gold.
One of the effects of sinicuichi is that it adds a golden halo or tinge to objects when ingested.
Entheogens:
Tlapatl and Mixitl Tlapatl and mixitl are both Datura species, Datura stramonium and Datura innoxia, with strong hallucinogenic (deliriant) properties. The plants typically have large, white or purplish, trumpet-shaped flowers and spiny seed capsules, that of D. stramonium being held erect and dehiscing by four valves and that of D. innoxia nodding downward and breaking up irregularly. The active principles are the tropane alkaloids atropine, scopolamine, and hyoscyamine.
Entheogens:
The use of datura spans millennia. It has been employed by many indigenous groups in North, Central, and South America for a variety of uses. Called toloache today in Mexico, datura species were used among the Aztec for medicine, divination, and malevolent purposes.
Entheogens:
For healing, tlapatl was made into an ointment which was spread over infected areas to cure gout, as well as applied as a local anesthetic. The plants were also utilized to cause harm to others. For example, it was believed that mixitl would cause a being to become paralyzed and mute, while tlapatl would cause those who take it to be disturbed and go mad.
Entheogens:
Peyotl The cactus known as peyotl, or more commonly peyote (Lophophora williamsii), has a rich history of use in Mesoamerica. Its use in northern Mexico among the Huichol has been written about extensively. It is thought that since peyote only grows in certain regions of Mexico, the Aztecs would receive dried buttons through long-distance trade. Peyote was viewed as being a protective plant by the Aztec. Sahagún suggested that the plant is what allowed the Aztec warriors to fight as they did.
Entheogens:
Pipiltzintzintli R. Gordon Wasson has posited that the plant known as pipiltzintzintli is in fact Salvia divinorum. It is not entirely known whether or not this plant was used by the Aztecs as a psychotropic, but Jonathan Ott (1996) argues that although there are competing species for the identification of pipiltzintzintli, Salvia divinorum is probably the "best bet." There are references to use of pipiltzintzintli in Spanish arrest records from the conquest, as well as a reference to the mixing of ololiuqui with pipiltzintzintli.
Entheogens:
Contemporaneously, the Mazatec, meaning "people of the deer" in Nahuatl, from the Oaxaca region of Mexico utilize Salvia divinorum when Psilocybe spp. mushrooms are not readily available. They chew and swallow the leaves of fresh salvia to enter into a shamanic state of consciousness. The Mazatec use the plant in both divination and healing ceremonies, perhaps as the Aztecs did 500 years ago. Modern users of Salvia have adapted the traditional method, forgoing the swallowing of juices due to Salvinorin A being readily absorbed by the mucous membranes of the mouth.
Entheogens:
Toloatzin Toloatzin refers to Datura innoxia specifically, although it is often confused with Datura stramonium in general.
Ad-hocs:
Cacahua Aztecs combined cacao with psilocybin mushrooms, a polysubstance combination referred to as "cacahua-xochitl".
Ad-hocs:
At the very first, mushrooms had been served...They ate no more food; they only drank chocolate during the night. And they ate the mushrooms with honey. When the mushrooms took effect on them, then they danced, then they wept. But some, while still in command of their senses, entered and sat there by the house on their seats; they did no more, but only sat there nodding.
Sources:
De Rios, Marlene Dobkin. "Hallucinogens, cross-cultural perspectives." University of New Mexico Press. Albuquerque, New Mexico, 1984.
Dibble, Charles E., et al. (trans). "Florentine Codex: Book 9." The University of Utah. Utah, 1959.
Dibble, Charles E., et al. (trans). "Florentine Codex: Book 11 - Earthly Things." The School of American Research. Santa Fe, New Mexico, 1963.
Elferink, Jan G. R., Flores, Jose A., Kaplan Charles D. "The Use of Plants and Other Natural Products for Malevolent Practices Among the Aztecs and Their Successors." Estudios de Cultura Nahuatl Volume 24, 1994.
Furst, Peter T. "Flesh of the Gods: The Ritual Use of Hallucinogens." Waveland Press, Prospect Heights, Illinois, 1972.
Gates, William. "The De La Cruz-Badiano Aztec Herbal of 1552." The Maya Society. Baltimore, Maryland, 1939.
Hofmann, Albert. "Teonanácatl and Ololiuqui, two ancient magic drugs of Mexico." UNODC Bulletin on Narcotics. Issue 1, pp. 3–14, 1971.
Ott, Jonathan. "On Salvia divinorum" Eleusis, n. 4, pp. 31–39, April 1996.
Schultes, Richard Evans. "The Plant Kingdom and the Hallucinogens." UNODC Bulletin on Narcotics. Issue 4, 1969.
Steck, Francis Borgia. "Motolinia's History of the Indians of New Spain." William Byrd Press, Inc. Richmond, Virginia, 1951.
Townsend, Richard F. "The Aztecs." Thames & Hudson Inc. New York, New York, 2000. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ergodicity**
Ergodicity:
In mathematics, ergodicity expresses the idea that a point of a moving system, either a dynamical system or a stochastic process, will eventually visit all parts of the space that the system moves in, in a uniform and random sense. This implies that the average behavior of the system can be deduced from the trajectory of a "typical" point. Equivalently, a sufficiently large collection of random samples from a process can represent the average statistical properties of the entire process. Ergodicity is a property of the system; it is a statement that the system cannot be reduced or factored into smaller components. Ergodic theory is the study of systems possessing ergodicity.
Ergodicity:
Ergodic systems occur in a broad range of settings in physics and in geometry. This can be roughly understood to be due to a common phenomenon: geodesics on a hyperbolic manifold diverge from one another; when that manifold is compact, that is, of finite size, those orbits return to the same general area, eventually filling the entire space.
Ergodicity:
Ergodic systems capture the common-sense, everyday notions of randomness: for example, that smoke might come to fill an entire room, that a block of metal might eventually come to have the same temperature throughout, or that flips of a fair coin may come up heads and tails half the time. A stronger concept than ergodicity is that of mixing, which aims to mathematically describe the common-sense notions of mixing, such as mixing drinks or mixing cooking ingredients.
Ergodicity:
The proper mathematical formulation of ergodicity is founded on the formal definitions of measure theory and dynamical systems, and rather specifically on the notion of a measure-preserving dynamical system. The origins of ergodicity lie in statistical physics, where Ludwig Boltzmann formulated the ergodic hypothesis.
Informal explanation:
Ergodicity occurs in broad settings in physics and mathematics. All of these settings are unified by a common mathematical description, that of the measure-preserving dynamical system. Equivalently, ergodicity can be understood in terms of stochastic processes. They are one and the same, despite using dramatically different notation and language.
Informal explanation:
Measure-preserving dynamical systems The mathematical definition of ergodicity aims to capture ordinary every-day ideas about randomness. This includes ideas about systems that move in such a way as to (eventually) fill up all of space, such as diffusion and Brownian motion, as well as common-sense notions of mixing, such as mixing paints, drinks, cooking ingredients, industrial process mixing, smoke in a smoke-filled room, the dust in Saturn's rings and so on. To provide a solid mathematical footing, descriptions of ergodic systems begin with the definition of a measure-preserving dynamical system. This is written as (X,A,μ,T).
Informal explanation:
The set X is understood to be the total space to be filled: the mixing bowl, the smoke-filled room, etc. The measure μ is understood to define the natural volume of the space X and of its subspaces. The collection of subspaces is denoted by A, and the size of any given subset A⊂X is μ(A); the size is its volume. Naively, one could imagine A to be the power set of X; this doesn't quite work, as not all subsets of a space have a volume (famously, the Banach–Tarski paradox). Thus, conventionally, A consists of the measurable subsets—the subsets that do have a volume. It is always taken to be the Borel σ-algebra—the collection of subsets that can be constructed by taking intersections, unions and set complements of open sets; these can always be taken to be measurable.
Informal explanation:
The time evolution of the system is described by a map T:X→X . Given some subset A⊂X , its map T(A) will in general be a deformed version of A – it is squashed or stretched, folded or cut into pieces. Mathematical examples include the baker's map and the horseshoe map, both inspired by bread-making. The set T(A) must have the same volume as A ; the squashing/stretching does not alter the volume of the space, only its distribution. Such a system is "measure-preserving" (area-preserving, volume-preserving).
Informal explanation:
A formal difficulty arises when one tries to reconcile the volume of sets with the need to preserve their size under a map. The problem arises because, in general, several different points in the domain of a function can map to the same point in its range; that is, there may be x≠y with T(x)=T(y) . Worse, a single point x∈X has no size. These difficulties can be avoided by working with the inverse map T−1:A→A ; it will map any given subset A⊂X to the parts that were assembled to make it: these parts are T−1(A)∈A . It has the important property of not losing track of where things came from. More strongly, it has the important property that any (measure-preserving) map A→A is the inverse of some map X→X . The proper definition of a volume-preserving map is one for which μ(A)=μ(T−1(A)) because T−1(A) describes all the pieces-parts that A came from.
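As a quick numerical illustration of the μ(A) = μ(T⁻¹(A)) convention, here is a sketch using the doubling map T(x) = 2x mod 1 on the unit interval (an example chosen here for concreteness; it is two-to-one, yet preimages preserve Lebesgue measure):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(1_000_000)             # uniform samples estimate Lebesgue measure

in_A = (x > 0.2) & (x < 0.5)          # A = (0.2, 0.5), so mu(A) = 0.3
Tx = (2 * x) % 1.0                    # doubling map T(x) = 2x mod 1
in_preimage = (Tx > 0.2) & (Tx < 0.5) # x lies in T^{-1}(A)

print(in_A.mean(), in_preimage.mean())  # both approximately 0.3
```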
Informal explanation:
One is now interested in studying the time evolution of the system. If a set A∈A eventually comes to fill all of X over a long period of time (that is, if T^n(A) approaches all of X for large n), the system is said to be ergodic. If every set A behaves in this way, the system is a conservative system, placed in contrast to a dissipative system, where some subsets A wander away, never to be returned to. An example would be water running downhill: once it's run down, it will never come back up again. The lake that forms at the bottom of this river can, however, become well-mixed. The Hopf decomposition states that every measure-preserving system can be split into two parts: the conservative part, and the dissipative part.
Informal explanation:
Mixing is a stronger statement than ergodicity. Mixing asks for this ergodic property to hold between any two sets A,B, and not just between some set A and X. That is, a system is said to be (topologically) mixing if, given any two non-empty open sets A,B, there is an integer N such that, for all n > N, one has T^n(A) ∩ B ≠ ∅. Here, ∩ denotes set intersection and ∅ is the empty set. Other notions of mixing include strong and weak mixing, which describe the notion that the mixed substances intermingle everywhere, in equal proportion. This can be non-trivial, as practical experience of trying to mix sticky, gooey substances shows.
Informal explanation:
Ergodic Processes The above discussion appeals to a physical sense of a volume. The volume does not have to literally be some portion of 3D space; it can be some abstract volume. This is generally the case in statistical systems, where the volume (the measure) is given by the probability. The total volume corresponds to probability one. This correspondence works because the axioms of probability theory are identical to those of measure theory; these are the Kolmogorov axioms.The idea of a volume can be very abstract. Consider, for example, the set of all possible coin-flips: the set of infinite sequences of heads and tails. Assigning the volume of 1 to this space, it is clear that half of all such sequences start with heads, and half start with tails. One can slice up this volume in other ways: one can say "I don't care about the first n−1 coin-flips; but I want the n 'th of them to be heads, and then I don't care about what comes after that". This can be written as the set (∗,⋯,∗,h,∗,⋯) where ∗ is "don't care" and h is "heads". The volume of this space is again one-half.
Informal explanation:
The above is enough to build up a measure-preserving dynamical system, in its entirety. The sets of h or t occurring in the n 'th place are called cylinder sets. The set of all possible intersections, unions and complements of the cylinder sets then form the Borel set A defined above. In formal terms, the cylinder sets form the base for a topology on the space X of all possible infinite-length coin-flips. The measure μ has all of the common-sense properties one might hope for: the measure of a cylinder set with h in the m 'th position, and t in the k 'th position is obviously 1/4, and so on. These common-sense properties persist for set-complement and set-union: everything except for h and t in locations m and k obviously has the volume of 3/4. All together, these form the axioms of a sigma-additive measure; measure-preserving dynamical systems always use sigma-additive measures. For coin flips, this measure is called the Bernoulli measure.
Informal explanation:
For the coin-flip process, the time-evolution operator T is the shift operator that says "throw away the first coin-flip, and keep the rest". Formally, if (x1,x2,⋯) is a sequence of coin-flips, then T(x1,x2,⋯)=(x2,x3,⋯) . The measure is obviously shift-invariant: as long as we are talking about some set A∈A where the first coin-flip x1=∗ is the "don't care" value, then the volume μ(A) does not change: μ(A)=μ(T(A)) . In order to avoid talking about the first coin-flip, it is easier to define T−1 as inserting a "don't care" value into the first position: T−1(x1,x2,⋯)=(∗,x1,x2,⋯) . With this definition, one obviously has that μ(T−1(A))=μ(A) with no constraints on A . This is again an example of why T−1 is used in the formal definitions.
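A sketch of ergodicity for this shift (simulated fair coin flips; the sample size is an arbitrary choice): the time average of the observable "the first symbol is heads" along one orbit of T is just the running mean of the sequence, and it approaches the ensemble probability of one-half.

```python
import numpy as np

rng = np.random.default_rng(7)
flips = rng.integers(0, 2, size=1_000_000)   # 1 = heads, 0 = tails

# Applying the shift T repeatedly and reading off the first symbol each
# time visits x1, x2, x3, ...; the Birkhoff time average is their mean.
print(flips.mean())                          # approximately 0.5
```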
Informal explanation:
The above development takes a random process, the Bernoulli process, and converts it to a measure-preserving dynamical system (X,A,μ,T).
Informal explanation:
The same conversion (equivalence, isomorphism) can be applied to any stochastic process. Thus, an informal definition of ergodicity is that a sequence is ergodic if it visits all of X ; such sequences are "typical" for the process. Another is that its statistical properties can be deduced from a single, sufficiently long, random sample of the process (thus uniformly sampling all of X ), or that any collection of random samples from a process must represent the average statistical properties of the entire process (that is, samples drawn uniformly from X are representative of X as a whole.) In the present example, a sequence of coin flips, where half are heads, and half are tails, is a "typical" sequence.
Informal explanation:
There are several important points to be made about the Bernoulli process. If one writes 0 for tails and 1 for heads, one gets the set of all infinite strings of binary digits. These correspond to the base-two expansion of real numbers. Explicitly, given a sequence (x1,x2,⋯), the corresponding real number is y = ∑_{n=1}^{∞} xn/2^n. The statement that the Bernoulli process is ergodic is equivalent to the statement that the real numbers are uniformly distributed. The set of all such strings can be written in a variety of ways: {h,t}^∞ = {h,t}^ω = {0,1}^ω = 2^ω = 2^ℕ.
Informal explanation:
This set is the Cantor set, sometimes called the Cantor space to avoid confusion with the Cantor function C(x) = ∑_{n=1}^{∞} xn/3^n. In the end, these are all "the same thing".
Informal explanation:
The Cantor set plays key roles in many branches of mathematics. In recreational mathematics, it underpins the period-doubling fractals; in analysis, it appears in a vast variety of theorems. A key one for stochastic processes is the Wold decomposition, which states that any stationary process can be decomposed into a pair of uncorrelated processes, one deterministic, and the other being a moving average process.
Informal explanation:
The Ornstein isomorphism theorem states that every stationary stochastic process is equivalent to a Bernoulli scheme (a Bernoulli process with an N-sided (and possibly unfair) gaming die). Other results include that every non-dissipative ergodic system is equivalent to the Markov odometer, sometimes called an "adding machine" because it looks like elementary-school addition, that is, taking a base-N digit sequence, adding one, and propagating the carry bits. The proof of equivalence is very abstract; understanding the result is not: by adding one at each time step, every possible state of the odometer is visited, until it rolls over, and starts again. Likewise, ergodic systems visit each state, uniformly, moving on to the next, until they have all been visited.
Informal explanation:
Systems that generate (infinite) sequences of N letters are studied by means of symbolic dynamics. Important special cases include subshifts of finite type and sofic systems.
History and etymology:
The term ergodic is commonly thought to derive from the Greek words ἔργον (ergon: "work") and ὁδός (hodos: "path", "way"), as chosen by Ludwig Boltzmann while he was working on a problem in statistical mechanics. At the same time it is also claimed to be a derivation of ergomonode, coined by Boltzmann in a relatively obscure paper from 1884. The etymology appears to be contested in other ways as well.

The idea of ergodicity was born in the field of thermodynamics, where it was necessary to relate the individual states of gas molecules to the temperature of a gas as a whole and its time evolution. In order to do this, it was necessary to state what exactly it means for gases to mix well together, so that thermodynamic equilibrium could be defined with mathematical rigor. Once the theory was well developed in physics, it was rapidly formalized and extended, so that ergodic theory has long been an independent area of mathematics in itself. As part of that progression, more than one slightly different definition of ergodicity and multitudes of interpretations of the concept in different fields coexist.

For example, in classical physics the term implies that a system satisfies the ergodic hypothesis of thermodynamics, the relevant state space being position and momentum space.
History and etymology:
In dynamical systems theory the state space is usually taken to be a more general phase space. On the other hand in coding theory the state space is often discrete in both time and state, with less concomitant structure. In all those fields the ideas of time average and ensemble average can also carry extra baggage as well—as is the case with the many possible thermodynamically relevant partition functions used to define ensemble averages in physics. As such the measure theoretic formalization of the concept also serves as a unifying discipline. In 1913 Michel Plancherel proved the strict impossibility of ergodicity for a purely mechanical system.
Ergodicity in physics and geometry:
A review of ergodicity in physics, and in geometry follows. In all cases, the notion of ergodicity is exactly the same as that for dynamical systems; there is no difference, except for outlook, notation, style of thinking and the journals where results are published.
Physical systems can be split into three categories: classical mechanics, which describes machines with a finite number of moving parts, quantum mechanics, which describes the structure of atoms, and statistical mechanics, which describes gases, liquids, solids; this includes condensed matter physics. These are presented below.
Ergodicity in physics and geometry:
In statistical mechanics This section reviews ergodicity in statistical mechanics. The above abstract definition of a volume is required as the appropriate setting for definitions of ergodicity in physics. Consider a container of liquid, or gas, or plasma, or other collection of atoms or particles. Each and every particle xi has a 3D position, and a 3D velocity, and is thus described by six numbers: a point in six-dimensional space R^6.
Ergodicity in physics and geometry:
If there are N of these particles in the system, a complete description requires 6N numbers. Any one system is just a single point in R^{6N}.
The physical system is not all of R^{6N}, of course; if it's a box of width, height and length W×H×L then a point is in (W×H×L×R^3)^N.
Nor can velocities be infinite: they are scaled by some probability measure, for example the Boltzmann–Gibbs measure for a gas. Nonetheless, for N close to the Avogadro number, this is obviously a very large space. This space is called the canonical ensemble.
Ergodicity in physics and geometry:
A physical system is said to be ergodic if any representative point of the system eventually comes to visit the entire volume of the system. For the above example, this implies that any given atom not only visits every part of the box W×H×L with uniform probability, but it does so with every possible velocity, with probability given by the Boltzmann distribution for that velocity (so, uniform with respect to that measure). The ergodic hypothesis states that physical systems actually are ergodic. Multiple time scales are at work: gases and liquids appear to be ergodic over short time scales. Ergodicity in a solid can be viewed in terms of the vibrational modes or phonons, as obviously the atoms in a solid do not exchange locations. Glasses present a challenge to the ergodic hypothesis; time scales are assumed to be in the millions of years, but results are contentious. Spin glasses present particular difficulties.
Ergodicity in physics and geometry:
Formal mathematical proofs of ergodicity in statistical physics are hard to come by; most high-dimensional many-body systems are assumed to be ergodic, without mathematical proof. Exceptions include the dynamical billiards, which model billiard ball-type collisions of atoms in an ideal gas or plasma. The first hard-sphere ergodicity theorem was for Sinai's billiards, which considers two balls, one of them taken as being stationary, at the origin. As the second ball collides, it moves away; applying periodic boundary conditions, it then returns to collide again. By appeal to homogeneity, this return of the "second" ball can instead be taken to be "just some other atom" that has come into range, and is moving to collide with the atom at the origin (which can be taken to be just "any other atom".) This is one of the few formal proofs that exist; there are no equivalent statements e.g. for atoms in a liquid, interacting via van der Waals forces, even if it would be common sense to believe that such systems are ergodic (and mixing). More precise physical arguments can be made, though.
Simple dynamical systems:
The formal study of ergodicity can be approached by examining fairly simple dynamical systems. Some of the primary ones are listed here.
The irrational rotation of a circle is ergodic: the orbit of a point is such that eventually every other point in the circle is visited. Such rotations are a special case of the interval exchange map. The beta expansions of a number are ergodic: beta expansions of a real number are done not in base-N, but in base-β for some β.
The reflected version of the beta expansion is the tent map; there are a variety of other ergodic maps of the unit interval. Moving to two dimensions, the arithmetic billiards with irrational angles are ergodic. One can also take a flat rectangle, squash it, cut it and reassemble it; this is the previously mentioned baker's map. Its points can be described by the set of bi-infinite strings in two letters, that is, extending both to the left and to the right; as such, it looks like two copies of the Bernoulli process. If one deforms sideways during the squashing, one obtains Arnold's cat map. In most ways, the cat map is prototypical of any other similar transformation.
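These interval maps lend themselves to a quick numerical check. The sketch below is illustrative only (the function names and the observable are my own choices); it iterates the logistic map x ↦ 4x(1−x), a smooth conjugate of the tent map, rather than the tent map itself, since direct floating-point iteration of the tent map degenerates by discarding one low-order mantissa bit per step. The logistic map's invariant density is 1/(π√(x(1−x))), under which the space average of x² is 3/8, and the Birkhoff time average along a typical orbit should match it.

```python
# Illustrative sketch (not from the source): Birkhoff time averages for
# the logistic map x -> 4x(1-x), a smooth conjugate of the tent map.
import random

def logistic(x):
    return 4.0 * x * (1.0 - x)

def time_average(f, x0, n):
    """Average the observable f along the first n points of the orbit of x0."""
    total, x = 0.0, x0
    for _ in range(n):
        total += f(x)
        x = logistic(x)
    return total / n

if __name__ == "__main__":
    avg = time_average(lambda x: x * x, random.random(), 1_000_000)
    # Space average of x^2 under the invariant density is 3/8 = 0.375.
    print(f"time average of x^2: {avg:.4f} (space average: 0.3750)")
```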
In classical mechanics and geometry:
Ergodicity is a widespread phenomenon in the study of symplectic manifolds and Riemannian manifolds. Symplectic manifolds provide the generalized setting for classical mechanics, where the motion of a mechanical system is described by a geodesic. Riemannian manifolds are a special case: the cotangent bundle of a Riemannian manifold is always a symplectic manifold. In particular, the geodesics on a Riemannian manifold are given by the solution of the Hamilton–Jacobi equations.
The geodesic flow of a flat torus following any irrational direction is ergodic; informally this means that when drawing a straight line in a square starting at any point, and with an irrational angle with respect to the sides, if every time one meets a side one starts over on the opposite side with the same angle, the line will eventually meet every subset of positive measure. More generally on any flat surface there are many ergodic directions for the geodesic flow.
For non-flat surfaces, one has that the geodesic flow of any negatively curved compact Riemann surface is ergodic. A surface is "compact" in the sense that it has finite surface area. The geodesic flow is a generalization of the idea of moving in a "straight line" on a curved surface: such straight lines are geodesics. One of the earliest cases studied is Hadamard's billiards, which describes geodesics on the Bolza surface, topologically equivalent to a donut with two holes. Ergodicity can be demonstrated informally if one has a sharpie and some reasonable example of a two-holed donut: starting anywhere, in any direction, one attempts to draw a straight line; rulers are useful for this. It doesn't take all that long to discover that one is not coming back to the starting point. (Of course, crooked drawing can also account for this; that's why we have proofs.)
These results extend to higher dimensions: the geodesic flow for negatively curved compact Riemannian manifolds is ergodic. A classic example is the Anosov flow, which is the horocycle flow on a hyperbolic manifold; this can be seen to be a kind of Hopf fibration. Such flows commonly occur in classical mechanics, which is the study in physics of finite-dimensional moving machinery, e.g. the double pendulum and so forth. Classical mechanics is constructed on symplectic manifolds. The flows on such systems can be deconstructed into stable and unstable manifolds; as a general rule, when this is possible, chaotic motion results. That this is generic can be seen by noting that the cotangent bundle of a Riemannian manifold is (always) a symplectic manifold; the geodesic flow is given by a solution to the Hamilton–Jacobi equations for this manifold. In terms of the canonical coordinates (q,p) on the cotangent bundle, the Hamiltonian (or energy) is given by H = (1/2)∑ij gij(q)pipj, with gij the inverse of the metric tensor and pi the momentum. The resemblance to the kinetic energy E = (1/2)mv2 of a point particle is hardly accidental; this is the whole point of calling such things "energy". In this sense, chaotic behavior with ergodic orbits is a more-or-less generic phenomenon in large tracts of geometry.
Ergodicity results have been provided for translation surfaces, hyperbolic groups and systolic geometry. Techniques include the study of ergodic flows, the Hopf decomposition, and the Ambrose–Kakutani–Krengel–Kubo theorem. An important class of systems are the Axiom A systems. A number of both classification and "anti-classification" results have been obtained. The Ornstein isomorphism theorem applies here as well; again, it states that most of these systems are isomorphic to some Bernoulli scheme. This rather neatly ties these systems back into the definition of ergodicity given for a stochastic process in the previous section. The anti-classification results state that there are more than a countably infinite number of inequivalent ergodic measure-preserving dynamical systems. This is perhaps not entirely a surprise, as one can use points in the Cantor set to construct similar-but-different systems. See measure-preserving dynamical system for a brief survey of some of the anti-classification results.
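For reference, the Hamiltonian quoted above and the associated Hamilton's equations, whose projections to the base manifold are the geodesics, can be written out in LaTeX as follows (standard formulas, not verbatim from the source):

```latex
% Geodesic flow as a Hamiltonian system on the cotangent bundle:
H(q,p) = \tfrac{1}{2} \sum_{ij} g^{ij}(q)\, p_i\, p_j, \qquad
\dot{q}^{\,i} = \frac{\partial H}{\partial p_i} = \sum_j g^{ij}(q)\, p_j, \qquad
\dot{p}_i = -\frac{\partial H}{\partial q^{\,i}}
          = -\tfrac{1}{2} \sum_{jk} \frac{\partial g^{jk}}{\partial q^{\,i}}\, p_j\, p_k .
```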
In quantum mechanics:
As to quantum mechanics, there is no universal quantum definition of ergodicity or even of chaos (see quantum chaos). However, there is a quantum ergodicity theorem stating that the expectation value of an operator converges to the corresponding microcanonical classical average in the semiclassical limit ℏ→0. Nevertheless, the theorem does not imply that all eigenstates of a Hamiltonian whose classical counterpart is chaotic are featureless and random. For example, the quantum ergodicity theorem does not exclude the existence of non-ergodic states such as quantum scars. In addition to conventional scarring, there are two other types of quantum scarring, which further illustrate weak ergodicity breaking in quantum chaotic systems: perturbation-induced and many-body quantum scars.
Definition for discrete-time systems:
Formal definition:
Let (X,B) be a measurable space. If T is a measurable function from X to itself and μ a probability measure on (X,B), then we say that T is μ-ergodic, or that μ is an ergodic measure for T, if T preserves μ and the following condition holds: for any A∈B such that T−1(A)=A, either μ(A)=0 or μ(A)=1. In other words, there are no T-invariant subsets up to measure 0 (with respect to μ). Recall that T preserving μ (or μ being T-invariant) means that μ(T−1(A))=μ(A) for all A∈B (see also measure-preserving dynamical system).
Some authors relax the requirement that T preserves μ to the requirement that T be a non-singular transformation, that is, one preserving the class of subsets of measure zero.
Examples:
The simplest example is when X is a finite set and μ the counting measure. Then a self-map of X preserves μ if and only if it is a bijection, and it is ergodic if and only if T has only one orbit (that is, for every x,y∈X there exists k∈N such that y=Tk(x)). For example, if X={1,2,…,n} then the cycle (12⋯n) is ergodic, but the permutation (12)(34⋯n) is not (it has the two invariant subsets {1,2} and {3,4,…,n}).
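This finite example is easy to verify by machine; a minimal sketch, using a 0-indexed convention and function names of my own:

```python
# Sketch: for counting measure on a finite set, ergodicity of a self-map
# reduces to being a bijection with a single orbit (an n-cycle).
def is_ergodic_permutation(perm):
    """perm[i] is the image of i; ergodic iff perm is one n-cycle."""
    n = len(perm)
    if sorted(perm) != list(range(n)):   # must be a bijection
        return False
    seen, i = {0}, 0
    while perm[i] not in seen:           # follow the orbit of 0
        i = perm[i]
        seen.add(i)
    return len(seen) == n                # ergodic iff the orbit is everything

print(is_ergodic_permutation([1, 2, 3, 4, 0]))  # True: the 5-cycle
print(is_ergodic_permutation([1, 0, 3, 4, 2]))  # False: {0, 1} is invariant
```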
Equivalent formulations:
The definition given above admits the following immediate reformulations: for every A∈B with μ(T−1(A)△A)=0 we have μ(A)=0 or μ(A)=1 (where △ denotes the symmetric difference); for every A∈B with positive measure we have μ(⋃n=1∞ T−n(A))=1; for every two sets A,B∈B of positive measure, there exists n>0 such that μ(T−n(A)∩B)>0; every measurable function f:X→R with f∘T=f is constant on a subset of full measure. Importantly for applications, the condition in the last characterisation can be restricted to square-integrable functions only: if f∈L2(X,μ) and f∘T=f, then f is constant almost everywhere.
Further examples:
Bernoulli shifts and subshifts:
Let S be a finite set and X=SZ, with μ the product measure (each factor S being endowed with its normalised counting measure). Then the shift operator T defined by T((sk)k∈Z)=(sk+1)k∈Z is μ-ergodic. There are many more ergodic measures for the shift map T on X. Periodic sequences give finitely supported measures. More interestingly, there are infinitely-supported ones which are subshifts of finite type.
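For the Bernoulli shift, the Birkhoff average of the coordinate function f((sk)) = s0 along an orbit is just the running mean of an i.i.d. sequence, so ergodicity here is visible as the strong law of large numbers. A small sketch, taking a fair coin on S = {0,1} as the factor measure (an arbitrary choice of mine):

```python
# Sketch: ergodic averaging for the Bernoulli shift. Applying the shift k
# times and then evaluating f((s_j)) = s_0 simply reads off s_k, so the
# time average is the sample mean of an i.i.d. sequence.
import random

n = 1_000_000
path = [random.randint(0, 1) for _ in range(n)]   # one sample path, truncated
time_avg = sum(path) / n
print(f"time average: {time_avg:.4f} (space average: 0.5000)")
```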
Irrational rotations:
Let X be the unit circle {z∈C : |z|=1}, with its Lebesgue measure μ. For any θ∈R, the rotation of X by angle 2πθ is given by Tθ(z)=e2iπθz. If θ∈Q then Tθ is not ergodic for the Lebesgue measure, as it has infinitely many finite orbits. On the other hand, if θ is irrational then Tθ is ergodic.
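Numerically, the ergodicity of an irrational rotation appears as Weyl equidistribution: the fraction of orbit points falling in an arc converges to the arc's Lebesgue measure. A minimal sketch, with an angle and arc of my own choosing (the rotation is written additively as x ↦ (x + θ) mod 1):

```python
# Sketch: equidistribution of the orbit of an irrational rotation.
import math

theta = math.sqrt(2)       # an irrational rotation number
a, b = 0.25, 0.40          # an arc of Lebesgue measure 0.15
x, hits, n = 0.0, 0, 1_000_000
for _ in range(n):
    if a <= x < b:
        hits += 1
    x = (x + theta) % 1.0
print(f"orbit fraction in [{a}, {b}): {hits / n:.4f} (arc measure: 0.1500)")
```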
Arnold's cat map:
Let X=R2/Z2 be the 2-torus. Then any element g∈SL2(Z) defines a self-map of X, since g(Z2)=Z2. When g is the matrix with rows (2 1) and (1 1), one obtains the so-called Arnold's cat map, which is ergodic for the Lebesgue measure on the torus.
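A short sketch of the cat map, written additively on [0,1)²; the coarse grid-occupancy count at the end is my own way of displaying how a small cluster of points spreads over the torus, as ergodicity (indeed mixing) predicts:

```python
# Sketch: Arnold's cat map (x, y) -> (2x + y, x + y) mod 1 on the 2-torus.
# The matrix has determinant 1, so Lebesgue measure is preserved.
def cat_map(x, y):
    return (2 * x + y) % 1.0, (x + y) % 1.0

# Start with a tight 10x10 cluster of points near (0.3, 0.3)...
points = [(0.3 + i * 1e-4, 0.3 + j * 1e-4) for i in range(10) for j in range(10)]
for _ in range(20):
    points = [cat_map(x, y) for x, y in points]
# ...and count how many cells of a coarse 4x4 grid the cluster now touches.
cells = {(int(4 * x), int(4 * y)) for x, y in points}
print(f"occupied 4x4 cells after 20 steps: {len(cells)} / 16")
```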
Ergodic theorems:
If μ is a probability measure on a space X which is ergodic for a transformation T, the pointwise ergodic theorem of G. Birkhoff states that for every measurable function f:X→R and for μ-almost every point x∈X, the time average on the orbit of x converges to the space average of f; formally, this is the first statement displayed below. The mean ergodic theorem of J. von Neumann is a similar, weaker statement about averaged translates of square-integrable functions.
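In symbols, the standard statements of the two theorems are:

```latex
% Birkhoff's pointwise ergodic theorem: for every f in L^1(X, mu) and
% mu-almost every x in X,
\lim_{n\to\infty} \frac{1}{n} \sum_{k=0}^{n-1} f\!\left(T^{k}(x)\right)
  = \int_X f \, d\mu .
% Von Neumann's mean ergodic theorem: for f in L^2(X, mu) and mu ergodic,
% the same averages converge to the right-hand side in the L^2 norm
% rather than pointwise almost everywhere.
```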
Related properties:
Dense orbits:
An immediate consequence of the definition of ergodicity is that, on a topological space X with B the σ-algebra of Borel sets, if T is μ-ergodic then μ-almost every orbit of T is dense in the support of μ. This is not an equivalence: for a transformation which is not uniquely ergodic, but for which there is an ergodic measure μ0 with full support, for any other ergodic measure μ1 the measure (1/2)(μ0+μ1) is not ergodic for T, but its orbits are dense in the support. Explicit examples can be constructed with shift-invariant measures.
Mixing:
A transformation T of a probability measure space (X,μ) is said to be mixing for the measure μ if for any measurable sets A,B⊂X the following holds: limn→∞ μ(T−n(A)∩B)=μ(A)μ(B). It is immediate that a mixing transformation is also ergodic (taking A to be a T-stable subset and B its complement). The converse is not true: for example, a rotation with irrational angle on the circle (which is ergodic per the examples above) is not mixing (for a sufficiently small interval, its successive images will not intersect the interval most of the time). Bernoulli shifts are mixing, and so is Arnold's cat map.
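The difference between mixing and mere ergodicity can be probed numerically by estimating μ(T−n(A)∩B) with uniform random sampling; Lebesgue measure is invariant for both maps below, and the sets, angle and sample size are arbitrary choices of mine:

```python
# Sketch: correlations mu(T^{-n}(A) ∩ B) for a mixing map (Arnold's cat
# map) versus a non-mixing but ergodic map (an irrational rotation).
# For mixing, the values approach mu(A)mu(B) = 1/4; for the rotation
# they keep oscillating with n.
import math, random

def cat(p):
    x, y = p
    return (2 * x + y) % 1.0, (x + y) % 1.0

def rot(x):
    return (x + math.sqrt(2)) % 1.0

def iterate(f, p, n):
    for _ in range(n):
        p = f(p)
    return p

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(20_000)]
for n in (1, 5, 10, 20):
    # A = B = {first coordinate < 1/2}, so mu(A) = mu(B) = 1/2.
    cat_corr = sum(p[0] < 0.5 and iterate(cat, p, n)[0] < 0.5
                   for p in samples) / len(samples)
    rot_corr = sum(x < 0.5 and iterate(rot, x, n) < 0.5
                   for (x, _) in samples) / len(samples)
    print(f"n={n:2d}  cat map: {cat_corr:.3f}  rotation: {rot_corr:.3f}")
```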
This notion of mixing is sometimes called strong mixing, as opposed to weak mixing, which means that limn→∞ (1/n)∑k=0n−1 |μ(T−k(A)∩B)−μ(A)μ(B)|=0 for all measurable sets A,B.
Proper ergodicity:
The transformation T is said to be properly ergodic if it does not have an orbit of full measure. In the discrete case this means that the measure μ is not supported on a finite orbit of T.
Definition for continuous-time dynamical systems:
The definition is essentially the same for continuous-time dynamical systems as for a single transformation. Let (X,B) be a measurable space. A continuous-time system is given by a family (Tt)t∈R+ of measurable functions from X to itself, such that for any t,s∈R+ the relation Ts+t=Ts∘Tt holds (usually it is also asked that the orbit map from R+×X→X be measurable). If μ is a probability measure on (X,B), then we say that Tt is μ-ergodic, or μ is an ergodic measure for T, if each Tt preserves μ and the following condition holds: for any A∈B, if for all t∈R+ we have Tt−1(A)⊂A, then either μ(A)=0 or μ(A)=1.
Examples:
As in the discrete case, the simplest example is that of a transitive action; for instance, the action on the circle given by Tt(z)=e2iπtz is ergodic for Lebesgue measure.
An example with infinitely many orbits is given by the flow along an irrational slope on the torus: let X=S1×S1 and α∈R, and let Tt(z1,z2)=(e2iπtz1,e2iπαtz2); then if α∉Q this is ergodic for the Lebesgue measure.
Ergodic flows:
Further examples of ergodic flows are: billiards in convex Euclidean domains; the geodesic flow of a negatively curved Riemannian manifold of finite volume (for the normalised volume measure); and the horocycle flow on a hyperbolic manifold of finite volume (for the normalised volume measure).
Ergodicity in compact metric spaces:
If X is a compact metric space, it is naturally endowed with the σ-algebra of Borel sets. The additional structure coming from the topology then allows a much more detailed theory for ergodic transformations and measures on X.
Functional analysis interpretation:
A very powerful alternate definition of ergodic measures can be given using the theory of Banach spaces. Radon measures on X form a Banach space of which the set P(X) of probability measures on X is a convex subset. Given a continuous transformation T of X, the subset P(X)T of T-invariant measures is a closed convex subset, and a measure is ergodic for T if and only if it is an extreme point of this convex set.
Existence of ergodic measures:
In the setting above, it follows from the Banach–Alaoglu theorem that there always exist extreme points in P(X)T. Hence a transformation of a compact metric space always admits ergodic measures.
Ergodic decomposition:
In general an invariant measure need not be ergodic, but as a consequence of Choquet theory it can always be expressed as the barycenter of a probability measure on the set of ergodic measures. This is referred to as the ergodic decomposition of the measure.
Example:
In the case of X={1,…,n} and T=(12)(34⋯n), the counting measure is not ergodic. The ergodic measures for T are the uniform measures μ1,μ2 supported on the subsets {1,2} and {3,…,n}, and every T-invariant probability measure can be written in the form tμ1+(1−t)μ2 for some t∈[0,1]. In particular, (2/n)μ1+((n−2)/n)μ2 is the ergodic decomposition of the counting measure.
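A quick machine check of this decomposition, with n = 5 and a 0-indexed version of the permutation (the variable names are mine):

```python
# Sketch: the ergodic decomposition of the (normalised) counting measure
# for T = (1 2)(3 4 5) on {1,...,5}, written 0-indexed as (0 1)(2 3 4).
from fractions import Fraction

n = 5
T = [1, 0, 3, 4, 2]                              # T[i] is the image of i
mu1 = [Fraction(1, 2)] * 2 + [Fraction(0)] * 3   # uniform on {0, 1}
mu2 = [Fraction(0)] * 2 + [Fraction(1, 3)] * 3   # uniform on {2, 3, 4}
t = Fraction(2, n)                               # weight of mu1
mix = [t * a + (1 - t) * b for a, b in zip(mu1, mu2)]
print(mix == [Fraction(1, n)] * n)               # True: recovers 1/n everywhere
# Invariance: mu(T^{-1}({i})) = mu({i}) for every point i.
print(all(mix[T.index(i)] == mix[i] for i in range(n)))  # True
```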
Continuous systems:
Everything in this section transfers verbatim to continuous actions of R or R+ on compact metric spaces.
Unique ergodicity:
The transformation T is said to be uniquely ergodic if there is a unique Borel probability measure μ on X which is ergodic for T. In the examples considered above, irrational rotations of the circle are uniquely ergodic; shift maps are not.
Probabilistic interpretation: ergodic processes:
If (Xn)n≥1 is a discrete-time stochastic process on a space Ω , it is said to be ergodic if the joint distribution of the variables on ΩN is invariant under the shift map (xn)n≥1↦(xn+1)n≥1 . This is a particular case of the notions discussed above.
The simplest case is that of an independent and identically distributed process which corresponds to the shift map described above. Another important case is that of a Markov chain which is discussed in detail below.
A similar interpretation holds for continuous-time stochastic processes though the construction of the measurable structure of the action is more complicated.
Ergodicity of Markov chains:
The dynamical system associated with a Markov chain:
Let S be a finite set. A Markov chain on S is defined by a matrix P∈[0,1]S×S, where P(s1,s2) is the transition probability from s1 to s2, so that for every s∈S we have ∑s′∈S P(s,s′)=1. A stationary measure for P is a probability measure ν on S such that νP=ν; that is, ∑s′∈S ν(s′)P(s′,s)=ν(s) for all s∈S. Using this data we can define a probability measure μν on the set X=SZ with its product σ-algebra by giving the measures of the cylinders as follows: μν({(sk)k∈Z : sm=t0,…,sm+n=tn})=ν(t0)P(t0,t1)⋯P(tn−1,tn). Stationarity of ν then means that the measure μν is invariant under the shift map T((sk)k∈Z)=(sk+1)k∈Z.
Criterion for ergodicity:
The measure μν is always ergodic for the shift map if the associated Markov chain is irreducible (any state can be reached with positive probability from any other state in a finite number of steps). The hypotheses above imply that there is a unique stationary measure for the Markov chain. In terms of the matrix P, a sufficient condition for this is that 1 be a simple eigenvalue of the matrix P and all other eigenvalues of P (in C) be of modulus <1.
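A short sketch of the stationary measure and the irreducibility test, assuming numpy is available (the transition matrix is an arbitrary example of mine):

```python
# Sketch: stationary measure of a finite Markov chain and the
# irreducibility criterion for ergodicity of the shift-invariant measure.
import numpy as np

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalised to a probability."""
    w, v = np.linalg.eig(P.T)
    nu = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return nu / nu.sum()

def is_irreducible(P):
    """(I + A)^(n-1) has no zero entries, with A the adjacency pattern of P."""
    n = len(P)
    reach = np.linalg.matrix_power(np.eye(n) + (P > 0), n - 1)
    return bool((reach > 0).all())

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(stationary(P))      # [0.8 0.2], since nu P = nu
print(is_irreducible(P))  # True, so mu_nu is ergodic for the shift
```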
Note that in probability theory the Markov chain is called ergodic if, in addition, each state is aperiodic (the times where the return probability is positive are not multiples of a single integer >1). This is not necessary for the invariant measure to be ergodic; hence the notions of "ergodicity" for a Markov chain and for the associated shift-invariant measure are different (the one for the chain is strictly stronger). Moreover, the criterion is an "if and only if" if all communicating classes in the chain are recurrent and we consider all stationary measures.
Examples:
Counting measure:
If P(s,s′)=1/|S| for all s,s′∈S, then the stationary measure is the normalised counting measure and μν is the product of normalised counting measures. The Markov chain is ergodic, so the shift example from above is a special case of the criterion.
Non-ergodic Markov chains:
Markov chains with more than one recurrent communicating class are not irreducible, and are not ergodic; this can be seen immediately as follows. If S1,S2⊊S are two distinct recurrent communicating classes, there are nonzero stationary measures ν1,ν2 supported on S1,S2 respectively, and the subsets S1Z and S2Z are both shift-invariant and of measure 1/2 for the invariant probability measure (1/2)(ν1+ν2). A very simple example of that is the chain on S={1,2} given by the 2×2 identity matrix (both states are fixed).
A periodic chain:
The Markov chain on S={1,2} given by the matrix with rows (0 1) and (1 0) is irreducible but periodic. Thus it is not ergodic in the Markov chain sense, though the associated measure μ on {1,2}Z is ergodic for the shift map. However, the shift is not mixing for this measure: for the sets A={(sk) : s0=1} and B={(sk) : s0=2} we have μ(A)=1/2=μ(B), but μ(T−n(A)∩B) alternates between 0 and 1/2 as n runs over even and odd integers, and so does not converge to μ(A)μ(B)=1/4.
Generalisations:
The definition of ergodicity also makes sense for group actions. The classical theory (for invertible transformations) corresponds to actions of Z or R. For non-abelian groups there might not be invariant measures even on compact metric spaces. However, the definition of ergodicity carries over unchanged if one replaces invariant measures by quasi-invariant measures.
Important examples are the action of a semisimple Lie group (or a lattice therein) on its Furstenberg boundary.
A measurable equivalence relation is said to be ergodic if all saturated subsets are either null or conull.
**2-Benzylpiperidine**
2-Benzylpiperidine:
2-Benzylpiperidine is a stimulant drug of the piperidine class. It is similar in structure to other drugs such as methylphenidate and desoxypipradrol but around one twentieth as potent, and while it boosts norepinephrine levels to around the same extent as d-amphetamine, it has very little effect on dopamine levels, with its binding affinity for the dopamine transporter around 175 times lower than for the noradrenaline transporter. 2-Benzylpiperidine is little used as a stimulant, with its main use being as a synthetic intermediate in the manufacture of other drugs.
**Neurological pupil index**
Neurological pupil index:
Clinicians routinely check the pupils of critically injured and ill patients to monitor neurological status. However, manual pupil measurements (performed using a penlight or ophthalmoscope) have been shown to be subjective, inaccurate, and not repeatable or consistent. Automated assessment of the pupillary light reflex has emerged as an objective means of measuring pupillary reactivity across a range of neurological diseases, including stroke, traumatic brain injury and edema, tumoral herniation syndromes, and sports or war injuries. Automated pupillometers are used to assess an array of objective pupillary variables including size, constriction velocity, latency, and dilation velocity, which are normalized and standardized to compute an indexed score such as the Neurological Pupil index (NPi).
Pupillary evaluation:
Pupillary evaluation involves the assessment of two components—pupil size and reactivity to light.
Pupil size is traditionally measured using a pupil gauge to estimate the diameter in millimeters of the pupil at rest before any light is shone into the eye—pupils should be of similar size. Manual measurement is subjective and prone to error.
Pupil light reactivity is evaluated by shining a light into a patient's eye to make the pupil constrict in reaction to the light. The pupil should dilate again when the light is moved away. The pupil's reaction is numerically graded, typically on scales from one to three, to translate how brisk the pupillary reflex is. These terms are subjective and applied without a standard clinical protocol or definition.
Neurological Pupil index (NPi):
The Neurological Pupil index, or NPi, is an algorithm developed by NeurOptics, Inc., that removes subjectivity from the pupillary evaluation. A patient's pupil measurement (including variables such as size, latency, constriction velocity, and dilation velocity) is obtained using a pupillometer, and the measurement is compared against a normative model of pupil reaction to light and automatically graded by the NPi on a scale of 0 to 4.9. Pupil reactivity is expressed numerically so that changes in both pupil size and reactivity can be trended over time, just like other vital signs.
The numeric scale of the NPi allows for a more rigorous interpretation and classification of the pupil response than subjective assessment.
Interpreting the Neurological Pupillary index (NPi):
Each NPi measurement taken is rated on a scale ranging from 0 to 4.9. A score equal to or above 3 means that the pupil measurement falls within the boundaries of normal pupil behavior as defined by the NPi, with values closer to 4.9 more normal than values closer to 3. An NPi score below 3 means the reflex is abnormal, i.e., weaker than a normal pupil response, with values closer to 0 more abnormal than values closer to 3. A difference in NPi between the right and left pupils of 0.7 or greater may also be considered an abnormal pupil reading.
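The interpretation rules above are simple thresholds, which can be encoded directly. The sketch below is a toy illustration of the published cutoffs only; the function name and labels are my own, and it is in no way a clinical tool:

```python
# Toy sketch of the NPi interpretation thresholds quoted in the text.
# Not a clinical tool; names and phrasing are illustrative only.
def interpret_npi(left, right):
    for score in (left, right):
        if not 0.0 <= score <= 4.9:
            raise ValueError("NPi scores range from 0 to 4.9")
    if min(left, right) < 3.0:
        return "abnormal: at least one pupil reflex is weaker than normal"
    if abs(left - right) >= 0.7:
        return "abnormal: left/right NPi difference of 0.7 or more"
    return "normal: both pupils within normal NPi boundaries"

print(interpret_npi(4.2, 4.0))  # normal
print(interpret_npi(2.8, 4.1))  # abnormal (one score below 3)
```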
Validity of score indices in pupillometry:
More than 100 studies published in peer-reviewed academic journals indicate the effectiveness of automated pupillometry and the NPi scale for use in critical care medicine, neurology, neurosurgery, emergency medicine, and applied research settings.
According to the new American Heart Association (AHA) guidelines, most deaths attributable to post-cardiac arrest brain injury are due to active withdrawal of life-sustaining treatment based on a predicted poor neurological outcome. The NPi and automated pupillometry have recently been included in the updated 2020 AHA Guidelines for Cardiopulmonary Resuscitation (CPR) and Emergency Cardiovascular Care (ECC) as an objective measurement supporting brain injury prognosis in patients following cardiac arrest.
Clinical Neurology and Neurosurgery published a study that found that intracerebral hemorrhage volume and shift of midline structures correlate with NPi, and abnormalities in NPi can be predicted by hematoma volume and other CT indicators of ICH severity.
A case study series published in the Journal of Neuroscience Nursing revealed that automated infrared pupillometry is an accurate tool that provides reliable data in patients with a poor baseline neurological examination after stroke.
A study published in Neurocritical Care found that automated pupillometry is more reliable than standard clinical assessments in detecting and tracking subtle changes in cerebral edema and pupillary reactivity during osmotic therapy.
Data published in the Journal of Neuroscience Nursing demonstrated significant differences in pupillary values for NPi, latency, and constriction velocity when stratified by age, sex, or severity of illness defined by the Glasgow Coma Scale score. The study provided a greater understanding of expected distributions for automated pupillometry values in a wide range of neurocritical care populations.
Another study in Neurocritical Care found that abnormal NPi and pupillary light reflex measurements by pupillometer are predictive of poor outcomes very early after resuscitation from cardiac arrest, and are not usually associated with dilated pupils.
Critical Care published a study examining the relationship between the NPi and invasive intracranial pressure (ICP) in 54 patients with severe traumatic brain injury (TBI). The study found that episodes of sustained elevated ICP were associated with a concomitant decrease in NPi in subjects with intracranial hypertension.
Data published in the Journal of Stroke and Cerebrovascular Diseases demonstrated that in patients with ischemic and hemorrhagic strokes, there is a significant correlation between the septum pellucidum shift (SPS) and the NPi, but not with pupil size. The authors concluded that NPi assessment by automated pupillometry could be considered a useful surrogate to non-invasively monitor midline shift in stroke patients, and could help in the utilization of imaging and assessing the need for intervention.
A study published in the Journal of Neurosurgery found that the NPi may signal an early warning of potential delayed cerebral ischemia and enable preemptive escalation of care.
A case report published in Brain Injury presented a patient who was "saved" by the use of NPi as part of his clinical assessment. Based on manual light assessment of the pupils, it was determined that the patient had suffered irreversible demise from an acute traumatic subdural hematoma. However, a subsequent assessment using an automated pupillometer with NPi showed that his pupils were in fact reactive—he then made a good recovery following treatment.
Intensive Care Medicine published a study that found quantitative NPi had excellent ability to predict an unfavorable outcome from day one after cardiac arrest, with no false positives, and significantly higher specificity than standard manual pupillary examination.
Four studies published in the Journal of Neuroscience Nursing concluded that automated pupillometry, using the NPi, enhanced clinical decision-making and added value to patient care in neuroscience/neurotrauma intensive care settings. One study showed that the integration of routine pupillometer assessments improves the accuracy of examinations and correlates with intracranial pressure values. Another study concluded that automated pupillometry represents an alternative to manual pupillary assessment, which may have greater interrater agreement and reliability.
The American Journal of Critical Care revealed that critical care and neurosurgical nurses consistently underestimated pupil size, were unable to identify anisocoria (unequal pupil size), and incorrectly assessed pupil reactivity. The study concluded that automated pupillometry is a necessary tool for accuracy and consistency, and may facilitate earlier detection of subtle pupil changes, allowing for more effective and timely diagnostic and treatment interventions.
It has been shown that quantitative NPi can predict a poor outcome in patients with cardiac arrest from day 1 after VA-ECMO insertion, with no false positives. Combining NPi and 12-h PREDICT-VA ECMO score increased the sensitivity of outcome prediction, while maintaining 100% specificity.
Researchers also concluded that the use of NPi provides important supplementary diagnostic, therapeutic, and prognostic information to guide the management of nonconvulsive status epilepticus and severe TBI patients. Additional studies published in peer-reviewed journals continue to demonstrate the effectiveness of NeurOptics' NPi in helping clinicians improve patient outcomes. Others: a study published in Vision Development & Rehabilitation demonstrates the effectiveness of automated pupillometry using Reflex, an iOS-based pupillometer that does not require capital equipment.
**Ehrlichia muris**
Ehrlichia muris:
Ehrlichia muris is a species of pathogenic bacteria first isolated from mice, with type strain AS145T. Its genome has been sequenced.
**Season creep**
Season creep:
In phenology, season creep refers to observed changes in the timing of the seasons, such as earlier indications of spring widely observed in temperate areas across the Northern Hemisphere.
Phenological records analyzed by climate scientists have shown significant temporal trends in the observed time of seasonal events, from the end of the 20th century and continuing into the 21st century. In Europe, season creep has been associated with the arrival of spring moving up by approximately one week in a recent 30-year period.
Other studies have put the rate of season creep measured by plant phenology in the range of 2–3 days per decade advancement in spring, and 0.3–1.6 days per decade delay in autumn, over the past 30–80 years. Observable changes in nature related to season creep include birds laying their eggs earlier and buds appearing on some trees in late winter. In addition to advanced budding, flowering trees have been blooming earlier, for example the culturally important cherry blossoms in Japan and Washington, D.C. Northern hardwood forests have also been trending toward leafing out sooner, and retaining their green canopies longer.
The agricultural growing season has also expanded by 10–20 days over the last few decades. The effects of season creep have been noted by non-scientists as well, including gardeners who have advanced their spring planting times and experimented with plantings of less hardy, warmer-climate varieties of non-native plants. While summer growing seasons are expanding, winters are getting warmer and shorter, resulting in reduced winter ice cover on bodies of water, earlier ice-out, earlier meltwater flows, and earlier spring lake level peaks.
Some spring events, or "phenophases", have become intermittent or unobservable; for example, bodies of water that once froze regularly most winters now freeze less frequently, and formerly migratory birds are now seen year-round in some areas.
Relationship to global warming:
The full impact of global warming is forecast to happen in the future, but climate scientists have cited season creep as an easily observable effect of climate change that has already occurred and continues to occur.
A large systematic phenological examination of data on 542 plant species in 21 European countries from 1971–2000 showed that 78% of all leafing, flowering, and fruiting records advanced while only 3% were significantly delayed, and these observations were consistent with measurements of observed warming.
Similar changes in the phenology of plants and animals are occurring across marine, freshwater, and terrestrial groups studied, and these changes are also consistent with the expected impact of global warming. While phenology fairly consistently points to an earlier spring across temperate regions of North America, a recent comprehensive study of the subarctic showed greater variability in the timing of green-up, with some areas advancing, and some having no discernible trend over a recent 44-year period.
Another 40-year phenological study in China found greater warming over that period in the more northerly sites studied, with sites experiencing cooling mostly in the south, indicating that the temperature variation with latitude is decreasing there.
This study also confirmed that season creep was correlated with warming, but the effect is non-linear: phenophases advanced less with greater warming, and were retarded more with greater cooling. Shorter winters and longer growing seasons may appear to be a benefit to society from global warming, but the effects of advanced phenophases may also have serious consequences for human populations. Modeling of snowmelt predicted that warming of 3 to 5 °C in the Western United States could cause snowmelt-driven runoff to occur as much as two months earlier, with profound effects on hydroelectricity, land use, agriculture, and water management.
Since 1980, earlier snowmelt and associated warming have also been associated with an increase in the length and severity of the wildfire season there. Season creep may also have adverse effects on plant species. Earlier flowering could occur before pollinators such as honey bees become active, which would have negative consequences for pollination and reproduction. Shorter and warmer winters may affect other environmental adaptations, including cold hardening of trees, which could result in frost damage during more severe winters.
Etymology:
Season creep was included in the 9th edition of the Collins English Dictionary published in London June 4, 2007.
The term was popularized in the media after the report titled "Season Creep: How Global Warming Is Already Affecting The World Around Us" was published by the American environmental organization Clear the Air on March 21, 2006.
In the "Season Creep" report, Jonathan Banks, Policy Director for Clear the Air, introduced the term as follows: While to some, an early arrival of spring may sound good, an imbalance in the ecosystem can wreak havoc. Natural processes like flowers blooming, birds nesting, insects emerging, and ice melting are triggered in large part by temperature. As temperatures increase globally, the delicately balanced system begins to fall into ecological disarray. We call this season creep.
Other uses:
The term "season creep" has been applied in other contexts as well: In professional sports, season creep refers to lengthening of the playing season, especially the extension of the MLB season to 162 games.
In college athletics, season creep refers to longer periods athletes spend training in their sport.
In American politics, campaign season creep refers to the need for candidates to start fund raising activities sooner.
In retailing, holiday season creep, also known as Christmas creep, refers to the earlier appearance of Christmas-themed merchandising, extending the holiday shopping season.
**Live, virtual, and constructive**
Live, virtual, and constructive:
Live, Virtual, & Constructive (LVC) Simulation is a broadly used taxonomy for classifying Modeling and Simulation (M&S). However, categorizing a simulation as a live, virtual, or constructive environment is problematic since there is no clear division among these categories. The degree of human participation in a simulation is infinitely variable, as is the degree of equipment realism. The categorization of simulations also lacks a category for simulated people working real equipment.
Categories:
The LVC categories are defined by the United States Department of Defense in the Modeling and Simulation Glossary as follows: Live - A simulation involving real people operating real systems. Military training events using real equipment are live simulations. They are considered simulations because they are not conducted against a live enemy.
Virtual - A simulation involving real people operating simulated systems. Virtual simulations inject a Human-in-the-Loop into a central role by exercising motor control skills (e.g., flying jet or tank simulator), decision making skills (e.g., committing fire control resources to action), or communication skills (e.g., as members of a C4I team).
Constructive - A simulation involving simulated people operating simulated systems. Real people stimulate (make inputs to) such simulations, but are not involved in determining the outcomes. A constructive simulation is a computer program. For example, a military user may input data instructing a unit to move and to engage an enemy target. The constructive simulation determines the speed of movement, the effect of the engagement with the enemy, and any battle damage that may occur. These terms should not be confused with specific constructive models such as Computer Generated Forces (CGF), a generic term referring to computer representations of forces in simulations that attempt to model human behavior. CGF is just one example of a model being used in a constructive environment; there are many types of constructive models that involve simulated people operating simulated systems.
Other associated terms are as follows: LVC Enterprise - The overall enterprise of resources in which LVC activities take place.
LVC Integration - The process of linking LVC simulations through a suitable technology or protocol to exploit simulation interoperability within a federated simulation environment such as the HLA or EuroSim.
LVC Integrating Architecture (LVC-IA) - The aggregate representation of the foundational elements of a LVC Enterprise, including hardware, software, networks, databases and user interfaces, policies, agreements, certifications/accreditations and business rules. LVC-IA is intrinsically an Enterprise Architecture, given the system-of-systems environment it must support. LVC-IA bridges M&S technology to the people who need and use the information gained through simulation. To accomplish this, a LVC-IA provides the following: Integration through simulation equipment, interoperability tools and support personnel (see also Enterprise integration and Enterprise architecture). Integration creates network-centric linkages to collect, retrieve and exchange data among live instrumentation, virtual simulators and constructive simulations, as well as between the joint military and specific service command systems. Integration also bridges together data management, exercise management, exercise collaboration and the updating of training support systems.
Interoperability through common protocols, specifications, standards and interfaces to standardize LVC components and tools for mission rehearsals and training, testing, acquisition, analysis, experimentation, and logistics planning.
Composability through common and reusable components and tools such as enterprise after action review, adapters, correlated terrain databases, multilevel security for multinational players, and hardware/software requirements.
Other definitions used in LVC discussions (from Webster's dictionary): Enterprise: a project or undertaking that is especially difficult, complicated, or risky; a unit of economic organization or activity, especially a business organization; a systematic purposeful activity. Environment: the aggregate of surrounding things, conditions or influences; surroundings. Construct: to make or form by combining or arranging components. Component: one of the parts of something.
Current and emerging technology to enable true LVC technology for Combat Air Forces ("CAF") training requires standardized definitions of CAF LVC events to be debated and developed. The dictionary terms used above provide a solid foundation for understanding the fundamental structure of the LVC topic as applied universally to DoD activities. The terms and use cases described below are a guidepost for doctrine that uses these terms to eliminate any misunderstanding. The following paragraph uses these terms to lay out the global view, which will be explained in detail throughout the rest of the document. In short: Training and Operational Test are conducted through the combined use of three separate Constructs (Live, Simulator and Ancillary), which are in turn made up of several enabling Components, to prepare, test and/or train warfighters in their respective disciplines. The LVC Enterprise, a component of the Live construct, is the totality of personnel, hardware and software that enables warfighters to combine three historically disparate Environments (Live, Virtual and Constructive) to improve performance in their combat role.
Central to a functionally accurate understanding of the paragraph above is a working knowledge of the Environment definitions, provided below for clarity:
Live Environment (L): Warfighters operating their respective disciplines' operational system in a real-world application.
Virtual Environment (V): Warfighters operating fielded simulators or trainers.
Constructive Environment (C): Computer Generated Forces (CGFs) used to augment and force-multiply Live and/or Virtual scenario development.
The Environments (L, V, and C) by themselves are generally well understood and apply universally to a diverse range of disciplines such as the medical field, law enforcement or operational military applications. Using the medical field as an example, the Live Environment can be a doctor performing CPR on a human patient in a critical real-world situation. In this same context, the Virtual Environment would include a doctor practicing CPR on a training mannequin, and the Constructive Environment is the software within the training mannequin that drives its behavior. In a second example, consider fighter pilot training or operational testing. The Live environment is the pilot flying the combat aircraft. The Virtual environment would include that same pilot flying a simulator. The Constructive environment includes the networks, computer generated forces, weapons servers, etc. that enable the Live and Virtual environments to be connected and interact. Although there are clearly secondary and tertiary training benefits, it is important to understand that combining one or more environments for the purpose of making Live real-world performance better is the sole reason the LVC concept was created. However, when referring to specific activities or programs designed to integrate the environments across the enterprise, the use and application of terms differ widely across the DoD. Therefore, the words that describe specifically how future training or operational testing will be accomplished require standardization as well. This is best described by backing away from technical terminology and thinking about how human beings actually prepare for their specific combat responsibilities. In practice, human beings prepare for their roles in one of three Constructs: Live (with actual combat tools), in a Simulator of some kind, or in other Ancillary ways (tests, academics, computer-based training, etc.). Actions within each of the Constructs are further broken down into Components that specify differing ways to get the job done or achieve training objectives. The three Constructs are described below:
Live Construct:
Live is one of three constructs, representing humans operating their respective disciplines' operational system. Operational system examples include a tank, a naval vessel, an aircraft, or eventually even a deployed surgical hospital. Three components of the Live Construct follow.
Live vs. Live: Traditional Live vs. Live training is a component of the Live Construct and occurs when Live operational systems interact with one another to augment scenario complexity (incidentally, this is how actual combat is accomplished as well, making this component the most fully immersive form of combat training available today).
LC: Live, Constructive is a component of the Live Construct whereby CGFs are injected into Live operational systems in a bi-directional, integrated, secure, dynamically adaptable network to augment scenario complexity.
LVC: Live, Virtual and Constructive (LVC) is a component of the Live Construct whereby Virtual entities and CGFs are injected into Live operational systems in an integrated, secure, dynamically adaptable network to augment scenario complexity.
Simulator Construct:
The Simulator Construct is a combination of Virtual and Constructive (VC), and is composed of humans operating simulated devices in lieu of Live operational systems. The Simulator Construct consists of three components: a locally networked set of identical simulators typical of the environment (stand-alone simulators); a networked set of disparate simulators (Distributed Mission Operations (DMO)); and a locally closed-loop networked enclave of multiple simulator devices in support of High-End Testing, Tactics, and Advanced Training (HET3).
Ancillary Construct:
The Ancillary Construct is the third construct, other than Live or Simulator, whereby training is accomplished via many components (not all-inclusive): computer-based instruction; self-study; platform-instructed academics.
Utilizing the definitions above, the following table provides a graphical representation of how the terms relate in the context of CAF Training or Operational Test. Using the table as a guide, it is clear that LVC activity is the use of the Virtual and Constructive environments to enhance scenario complexity for the Live environment, and nothing more. An LVC system must have a bi-directional, adaptable, ad-hoc and secure communication system between the Live environment and the VC environment. Most importantly, LVC used as a verb is an integrated interaction of the three environments with the Live environment always present. For example, a Simulator Construct VC event should be called something other than LVC (such as Distributed Mission Operations (DMO)). In the absence of the Live environment, LVC and LC do not exist, making the use of the LVC term wholly inappropriate as a descriptor.
As the LVC Enterprise pertains to a training program, LVC lines of effort are rightly defined as “a collaboration of OSD, HAF, MAJCOM, Joint and Coalition efforts toward a technologically sound and fiscally responsible path for training to enable combat readiness.” The “lines of effort,” in this case, would not include Simulator Construct programs and development but would be limited to the Construct that includes the LVC Enterprise. The other common term, “Doing LVC” would then imply “readiness training conducted utilizing an integration of Virtual and Constructive assets for augmenting Live operational system scenarios and mission objective outcomes.” Likewise, LVC-Operational Training (in a CAF fighter training context) or “LVC-OT” are the tools and effort required to integrate Live, Virtual and Constructive mission systems, when needed, to tailor robust and cost-efficient methods of Operational Training and/or Test.
Misused and extraneous terms:
To ensure clarity of discussions and eliminate misunderstanding, when speaking in the LVC context, only the terms in this document should be used to describe the environments, constructs, and components. Words like “synthetic” and “digi” should be replaced with “Constructive” or “Virtual” instead. Additionally, Embedded Training (ET) systems, defined as a localized or self-contained Live/Constructive system (like on the F-22 or F-35) should not be confused with or referred to as LVC systems.
History:
Prior to 1990, the field of M&S was marked by fragmentation and limited coordination between activities across key communities. In recognition of these deficiencies, Congress directed the Department of Defense (DoD) to "... establish an Office of the Secretary of Defense (OSD) level joint program office for simulation to coordinate simulation policy, to establish interoperability standards and protocols, to promote simulation within the military departments, and to establish guidelines and objectives for coordination [sic] of simulation, wargaming, and training." (ref Senate Authorization Committee Report, FY91, DoD Appropriations Bill, SR101-521, pp. 154–155, October 11, 1990) Consistent with this direction, the Defense Modeling and Simulation Office (DMSO) was created, and shortly afterwards many DoD Components designated organizations and/or points of contact to facilitate coordination of M&S activities within and across their communities.
For over a decade, the ultimate goal of the DoD in M&S has been to create a LVC-IA to assemble models and simulations quickly, creating operationally valid LVC environments to train, develop doctrine and tactics, formulate operational plans and assess warfighting situations. A common use of these LVC environments will promote closer interaction between the operations and acquisition communities. These M&S environments will be constructed from composable components interoperating through an integrated architecture. A robust M&S capability enables the DoD to meet operational and support objectives effectively across the diverse activities of the military services, combatant commands and agencies.
The number of available architectures has increased over time. M&S trends indicate that once a community of use develops around an architecture, that architecture is likely to be used regardless of new architectural developments. M&S trends also indicate that few, if any, architectures will be retired as new ones come online. When a new architecture is created to replace one or more of the existing set, the likely outcome is that one more architecture will be added to the available set. As the number of mixed-architecture events increases over time, the inter-architecture communication problem increases as well. M&S has made significant progress in enabling users to link critical resources through distributed architectures.
In the mid-1980s, SIMNET became the first successful implementation of large-scale, real-time, man-in-the-loop simulator networking for team training and mission rehearsal in military operations. The earliest success of the SIMNET program was the demonstration that geographically dispersed simulation systems could support distributed training by interacting with each other across network connections.
The Aggregate Level Simulation Protocol (ALSP) extended the benefits of distributed simulation to the force-level training community so that different aggregate-level simulations could cooperate to provide theater-level experiences for battle-staff training. The ALSP has supported an evolving "confederation of models" since 1992, consisting of a collection of infrastructure software and protocols for both inter-model communication through a common interface and time advance using a conservative Chandy-Misra-based algorithm.
At about the same time, the SIMNET protocol evolved and matured into the Distributed Interactive Simulation (DIS) standard. DIS allowed an increased number of simulation types to interact in distributed events, but was primarily focused on the platform-level training community. DIS provided an open network protocol standard for linking real-time platform-level wargaming simulations.
In the mid-1990s, the Defense Modeling and Simulation Office (DMSO) sponsored the High Level Architecture (HLA) initiative. Designed to support and supplant both DIS and ALSP, investigation efforts were started to prototype an infrastructure capable of supporting these two disparate applications. The intent was to combine the best features of DIS and ALSP into a single architecture that could also support uses in the analysis and acquisition communities while continuing to support training applications.
The DoD test community started development of alternate architectures based on its perception that HLA yielded unacceptable performance and included reliability limitations. The real-time test range community started development of the Test and Training Enabling Architecture (TENA) to provide low-latency, high-performance service in the hard-real-time application of integrating live assets in the test-range setting. TENA, through its common infrastructure, including the TENA Middleware and other complementary architecture components such as the TENA Repository, Logical Range Archive, and other TENA utilities and tools, provides the architecture, software implementation and capabilities necessary to quickly and economically enable interchangeability among range systems, facilities, and simulations.
Similarly, the U.S. Army started the development of the Common Training Instrumentation Architecture (CTIA) to link a large number of live assets requiring a relatively narrowly bounded set of data for purposes of providing After Action Reviews (AARs) on Army training ranges in support of large-scale exercises.
Other efforts that make the LVC architecture space more complex include universal interchangeability software packages such as OSAMS or CONDOR, developed and distributed by commercial vendors.
As of 2010, all of the DoD architectures remain in service with the exception of SIMNET. Of the remaining architectures (CTIA, DIS, HLA, ALSP and TENA), some are in early and growing use (e.g., CTIA, TENA) while others have seen a user-base reduction (e.g., ALSP). Each of the architectures provides an acceptable level of capability within the areas where it has been adopted. However, DIS, HLA, TENA, and CTIA-based federations are not inherently interoperable with each other. When simulations rely on different architectures, additional steps must be taken to ensure effective communication between all applications. These additional steps, typically involving interposing gateways or bridges between the various architectures, may introduce increased risk, complexity, cost, level of effort, and preparation time. Additional problems extend beyond the implementation of individual simulation events. As a single example, the ability to reuse supporting models, personnel (expertise), and applications across the different protocols is limited. The limited inherent interoperability between the different protocols introduces a significant and unnecessary barrier to the integration of live, virtual, and constructive simulations.
Challenges:
The current status of LVC interoperability is fragile and subject to several recurring problems that must be resolved (often anew) whenever live, virtual or constructive simulation systems are to be components in a mixed-architecture simulation event. Some of the attendant problems stem from simulation system capability limitations and other system-to-system incompatibilities. Other types of problems arise from the general failure to provide a framework which achieves a more complete semantic-level interoperability between disparate systems. Interoperability, integration and composability have been identified as the most technically challenging aspects of a LVC-IA since at least 1996. The Study on the Effectiveness of Modeling and Simulation in the Weapon System Acquisition Process identified cultural and managerial challenges as well. By definition a LVC-IA is a sociotechnical system, a technical system that interacts directly with people. The following table identifies the 1996 challenges associated with the technical, cultural and managerial aspects, together with the challenges or gaps found in a 2009 study. The table shows there is little difference between the challenges of 1996 and the challenges of 2009.
Approaches to a solution:
A virtual or constructive model usually focuses on the fidelity or accuracy of the element being represented. A live simulation, by definition, represents the highest fidelity, since it is reality. But a simulation quickly becomes more difficult when it is created from various live, virtual and constructive elements, or from sets of simulations with various network protocols, where each simulation consists of a set of live, virtual and constructive elements. LVC simulations are sociotechnical systems, due to the interaction between people and technology in the simulation. The users represent stakeholders from across the acquisition, analysis, testing, training, planning and experimentation communities. M&S occurs across the entire Joint Capabilities Integration Development System (JCID) lifecycle; see the "M&S in the JCID Process" figure. A LVC-IA is also considered an ultra-large-scale (ULS) system, due to its use by a wide variety of stakeholders with conflicting needs and its continuously evolving construction from heterogeneous parts. By definition, people are not just users but elements of a LVC simulation.
Approaches to a solution:
During the development of various LVC-IA environments, attempts to understand the foundational elements of integration, composability, and interoperability emerged. As of 2010, our understanding of these three elements is still evolving, just as software development continues to evolve. Consider software architecture: as a concept it was first identified in the research work of Edsger Dijkstra in 1968 and David Parnas in the early 1970s, yet the area was adopted by ISO only in 2007, as ISO/IEC 42010:2007. Integration is routinely described using the methods of architectural and software patterns. The functional elements of integration can be understood thanks to the universality of integration patterns, e.g. mediation (intra-communication) and federation (inter-communication), as well as process, data synchronization, and concurrency patterns.
Approaches to a solution:
An LVC-IA depends on the interoperability and composability attributes, not just in their technical aspects but in their social or cultural aspects as well. There are sociotechnical challenges, as well as ULS system challenges, associated with these features. An example of a cultural aspect is the problem of composition validity. In a ULS system, the ability to control all interfaces to ensure a valid composition is extremely difficult. The VV&A paradigms are challenged to identify a level of acceptable validity.
Approaches to a solution:
Interoperability
The study of interoperability concerns methodologies for making different systems, distributed over a network, interoperate. Andreas Tolk introduced the Levels of Conceptual Interoperability Model (LCIM), which identified seven levels of interoperability among participating systems as a method to describe technical interoperability and the complexity of interoperations. Bernard Zeigler's Theory of Modeling and Simulation builds on the three basic levels of interoperability: pragmatic, semantic, and syntactic. The pragmatic level focuses on the receiver's interpretation of messages in the context of application relative to the sender's intent. The semantic level concerns definitions and attributes of terms and how they are combined to provide shared meaning to messages. The syntactic level focuses on the structure of messages and adherence to the rules governing that structure. The linguistic interoperability concept supports a simultaneous testing environment at multiple levels.
Approaches to a solution:
The LCIM associates the lower layers with the problems of simulation interoperation, while the upper layers relate to the problems of reuse and composition of models. The authors conclude that "simulation systems are based on models and their assumptions and constraints. If two simulation systems are combined, these assumptions and constraints must be aligned accordingly to ensure meaningful results." This suggests that the levels of interoperability identified in the area of M&S can serve as guidelines for discussion of information exchange in general.
Approaches to a solution:
The Zeigler architecture provides an architecture description language, or conceptual model, in which to discuss M&S. The LCIM provides a conceptual model as a means to discuss integration, interoperability, and composability. The three linguistic elements relate the LCIM to the Zeigler conceptual model. Architectural and structural complexity are an area of research in systems theory that measures cohesion and coupling, based on the metrics commonly used in software development projects. Zeigler, Kim, and Praehofer present a theory of modeling and simulation which provides a conceptual framework and an associated computational approach to methodological problems in M&S. The framework provides a set of entities and relations among the entities that, in effect, present an ontology of the M&S domain.
Approaches to a solution:
Composability
Petty and Weisel formulated the current working definition: "Composability is the capability to select and assemble simulation components in various combinations into simulation systems to satisfy specific user requirements." Both technical capability and user interaction are required, indicating that a sociotechnical system is involved. The ability of a user to access data or models is an important factor when considering composability metrics. If the user does not have visibility into a repository of models, the aggregation of models becomes problematic.
Approaches to a solution:
In Improving the Composability of Department of Defense Models and Simulation, the factors associated with the ability to provide composability are as follows:
The complexity of the system being modeled.
The size (complexity) of the M&S environment.
The difficulty of the objective for the context in which the composite M&S will be used.
The flexibility of exploration; extensibility.
The strength of the underlying science and technology, including standards.
Human considerations, such as the quality of management, having a common community of interest, and the skill and knowledge of the work force.
Approaches to a solution:
Tolk introduced an alternative view on composability, focusing more on the need for conceptual alignment: The M&S community understands interoperability quite well as the ability to exchange information and to use the data exchanged in the receiving system. Interoperability can be engineered into a system or a service after definition and implementation. ...Composability is different from interoperability. Composability is the consistent representation of truth in all participating systems. It extends the ideas of interoperability by adding the pragmatic level to cover what happens within the receiving system based on the received information. In contrast to interoperability, composability cannot be engineered into a system after the fact. Composability often requires significant changes to the simulation.
Approaches to a solution:
In other words: propertied concepts, if they are modeled in more than one participating system, have to represent the same truth. Composable systems are not allowed to give different answers to the same question. The requirement for consistent representation of truth supersedes the requirement for meaningful use of received information known from interoperability.
Approaches to a solution:
LVC requires integratability, interoperability, and composability. Page et al. suggest that integratability contends with the physical/technical realms of connections between systems, which include hardware and firmware, protocols, networks, etc.; interoperability contends with the software and implementation details of interoperations, including the exchange of data elements via interfaces, the use of middleware, and mapping to common information exchange models; and composability contends with the alignment of issues at the modeling level. As captured, among others, by Tolk, successful interoperation of LVC components requires integratability of infrastructures, interoperability of systems, and composability of models. LVC architectures must address all three aspects holistically, in well-aligned systemic approaches.
Economic drivers:
To produce the greatest impact from its investments, the DoD needs to manage its M&S programs using an enterprise-type approach. This includes identifying gaps in M&S capabilities that are common across the enterprise, providing seed money to fund projects that have widely applicable payoffs, and conducting M&S investment across the Department in ways that are systematic and transparent. In particular, "Management processes for models, simulations, and data that … Facilitate the cost effective and efficient development of M&S systems and capabilities…," such as are cited in the vision statement, require comprehensive Departmental M&S best-practice investment strategies and processes. M&S investment management requires metrics, both for quantifying the extent of potential investments and for identifying and understanding the full range of benefits resulting from these investments. There is at this time no consistent guidance for such practice.
Economic drivers:
The development & use costs associated with LVC can be summarized as follows: Live - Relatively high cost since it is very human resource/materiel intensive and not particularly repeatable.
Virtual - Relatively medium cost since it is less human resource/materiel intensive, some reuse can occur, and repeatability is moderate.
Economic drivers:
Constructive - Relatively low cost since it is the least human resource/materiel intensive, reuse is high, and repeatability is high. In contrast, the fidelity of M&S is highest in Live, lower in Virtual, and lowest in Constructive. As such, DoD policy is a mixed use of LVC through the Military Acquisition life cycle, also known as the LVC Continuum. In the LVC Continuum figure to the right, the JCIDS process is related to the relative use of LVC through the Military Acquisition life cycle.
**Classical limit**
Classical limit:
The classical limit or correspondence limit is the ability of a physical theory to approximate or "recover" classical mechanics when considered over special values of its parameters. The classical limit is used with physical theories that predict non-classical behavior.
Quantum theory:
A heuristic postulate called the correspondence principle was introduced to quantum theory by Niels Bohr: in effect it states that some kind of continuity argument should apply to the classical limit of quantum systems as the value of the Planck constant normalized by the action of these systems becomes very small. Often, this is approached through "quasi-classical" techniques (cf. WKB approximation). More rigorously, the mathematical operation involved in classical limits is a group contraction, approximating physical systems where the relevant action is much larger than the reduced Planck constant ħ, so the "deformation parameter" ħ/S can be effectively taken to be zero (cf. Weyl quantization). Thus typically, quantum commutators (equivalently, Moyal brackets) reduce to Poisson brackets, in a group contraction.
Quantum theory:
In quantum mechanics, due to Heisenberg's uncertainty principle, an electron can never be at rest; it must always have a non-zero kinetic energy, a result not found in classical mechanics. For example, if we consider something very large relative to an electron, like a baseball, the uncertainty principle predicts that it cannot really have zero kinetic energy, but the uncertainty in kinetic energy is so small that the baseball can effectively appear to be at rest, and hence it appears to obey classical mechanics. In general, if large energies and large objects (relative to the size and energy levels of an electron) are considered in quantum mechanics, the result will appear to obey classical mechanics. The typical occupation numbers involved are huge: a macroscopic harmonic oscillator with ω = 2 Hz, m = 10 g, and maximum amplitude x₀ = 10 cm, has S ≈ E/ω ≈ mωx₀²/2 ≈ 10⁻⁴ kg·m²/s = ħn, so that n ≃ 10³⁰. Further see coherent states. It is less clear, however, how the classical limit applies to chaotic systems, a field known as quantum chaos.
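The arithmetic in this estimate is easy to verify; a quick check of the quoted numbers in Python:

```python
# Quick check of the occupation-number estimate quoted above.
hbar = 1.054571817e-34           # reduced Planck constant, J*s
m, omega, x0 = 0.010, 2.0, 0.10  # mass (kg), angular frequency (1/s), amplitude (m)

S = m * omega * x0**2 / 2        # action scale E/omega = m*omega*x0^2/2
n = S / hbar                     # occupation number n = S/hbar
print(f"S = {S:.1e} kg*m^2/s, n = {n:.1e}")  # S ~ 1e-4, n ~ 1e30
```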
Quantum theory:
Quantum mechanics and classical mechanics are usually treated with entirely different formalisms: quantum theory using Hilbert space, and classical mechanics using a representation in phase space. One can bring the two into a common mathematical framework in various ways. In the phase space formulation of quantum mechanics, which is statistical in nature, logical connections between quantum mechanics and classical statistical mechanics are made, enabling natural comparisons between them, including the violations of Liouville's theorem (Hamiltonian) upon quantization. In a crucial paper (1933), Dirac explained how classical mechanics is an emergent phenomenon of quantum mechanics: destructive interference among paths with non-extremal macroscopic actions S ≫ ħ obliterates amplitude contributions in the path integral he introduced, leaving the extremal action S_class, and thus the classical action path, as the dominant contribution, an observation further elaborated by Feynman in his 1942 PhD dissertation. (Further see quantum decoherence.)
Time-evolution of expectation values:
One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics. The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential V, the Ehrenfest theorem says
$$m\frac{d}{dt}\langle x\rangle = \langle p\rangle; \qquad \frac{d}{dt}\langle p\rangle = -\langle V'(X)\rangle.$$
Time-evolution of expectation values:
Although the first of these equations is consistent with classical mechanics, the second is not: if the pair (⟨X⟩, ⟨P⟩) were to satisfy Newton's second law, the right-hand side of the second equation would have read d⟨p⟩/dt = −V′(⟨X⟩). But in most cases, ⟨V′(X)⟩ ≠ V′(⟨X⟩). If, for example, the potential V is cubic, then V′ is quadratic, in which case we are talking about the distinction between ⟨X²⟩ and ⟨X⟩², which differ by (ΔX)². An exception occurs when the classical equations of motion are linear, that is, when V is quadratic and V′ is linear. In that special case, V′(⟨X⟩) and ⟨V′(X)⟩ do agree. In particular, for a free particle or a quantum harmonic oscillator, the expected position and expected momentum exactly follow solutions of Newton's equations.
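This distinction is easy to see numerically. The sketch below samples a Gaussian position distribution (standing in for a localized wave packet) and compares ⟨V′(X)⟩ with V′(⟨X⟩) for the quadratic V′ of a cubic potential; the gap is exactly the variance (ΔX)².

```python
# Monte Carlo comparison of <V'(X)> and V'(<X>) for V(x) = x^3/3, V'(x) = x^2,
# with X drawn from a Gaussian of mean mu and spread sigma (a localized state).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5
x = rng.normal(mu, sigma, size=1_000_000)  # samples of the position

lhs = np.mean(x**2)   # <V'(X)> = <X^2> = mu^2 + sigma^2
rhs = np.mean(x)**2   # V'(<X>) = <X>^2 = mu^2
print(lhs, rhs, lhs - rhs)  # difference approaches sigma^2 = (Delta X)^2 = 0.25
```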
Time-evolution of expectation values:
For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point x₀, then V′(⟨X⟩) and ⟨V′(X)⟩ will be almost the same, since both will be approximately equal to V′(x₀). In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position. Now, if the initial state is very localized in position, it will be very spread out in momentum, and thus we expect that the wave function will rapidly spread out, and the connection with the classical trajectories will be lost. When the Planck constant is small, however, it is possible to have a state that is well localized in both position and momentum. The small uncertainty in momentum ensures that the particle remains well localized in position for a long time, so that expected position and momentum continue to closely track the classical trajectories for a long time.
Relativity and other deformations:
Other familiar deformations in physics involve: The deformation of classical Newtonian into relativistic mechanics (special relativity), with deformation parameter v/c; the classical limit involves small speeds, so v/c → 0, and the systems appear to obey Newtonian mechanics.
Similarly for the deformation of Newtonian gravity into general relativity, with deformation parameter Schwarzschild-radius/characteristic-dimension, we find that objects once again appear to obey classical mechanics (flat space), when the mass of an object times the square of the Planck length is much smaller than its size and the sizes of the problem addressed. See Newtonian limit.
Wave optics might also be regarded as a deformation of ray optics for deformation parameter λ/a.
Likewise, thermodynamics deforms to statistical mechanics with deformation parameter 1/N.
**Bulk tank**
Bulk tank:
In dairy farming a bulk milk cooling tank is a large storage tank for cooling and holding milk at a cold temperature until it can be picked up by a milk hauler. The bulk milk cooling tank is an important piece of dairy farm equipment. It is usually made of stainless steel and used every day to store the raw milk on the farm in good condition. It must be cleaned after each milk collection. The milk cooling tank can be the property of the farmer or be rented from a dairy plant.
Bulk tank types:
Raw milk producers have a choice of either open (from 150 to 3000 litres) or closed (from 1000 to 10000 litres) tanks. The cost can vary considerably, depending on manufacturing norms and whether a new or second hand tank is purchased.
Milk silos (10,000 litres and more) are suitable for the very large producer. These are designed to be installed outside and adjacent to the dairy, with all controls and the milk outlet pipe situated in the dairy.
Tank Construction
A milk cooling tank, also known as a bulk tank or milk cooler, consists of an inner and an outer tank, both made of high quality stainless steel.
The space between the outer tank and the inner tank is insulated with polyurethane foam. In case of a power failure with an outside temperature of 30°C, the contents of the tank will warm up only 1°C in 24 hours.
To facilitate an adequate and rapid cooling of the entire content of a tank, every tank is equipped with at least one agitator. Stirring the milk ensures that all milk inside the tank is of the same temperature and that the milk stays homogeneous.
Bulk tank types:
On top of every closed milk cooling tank is a manhole of about 40 centimetres diameter. This enables thorough cleaning and inspection of the inner tank if necessary. The manhole is covered by a lid and sealed watertight with a rubber ring. Also on top are 2 or 3 small inlets. One is covered with an air-vent, the other(s) can be used to pump milk into the tank.
Bulk tank types:
A milk cooling tank usually stands on 4, 6, or 8 adjustable legs. The built-in tilt of the inner tank ensures that even the last drop of milk will eventually flow to the outlet.
At the bottom, every milk cooling tank has a threaded outlet, usually including a valve.
All tanks have a thermometer, allowing for immediate inspection of the inner temperature.
Most tanks include an automatic cleaning system. Using hot and cold water, an acid and/or alkaline cleaning fluid, a pump, and a spray lance, it cleans the inner tank, ensuring a hygienic inner environment each time the tank is emptied.
Almost every tank has a control box. It manages the cooling process by use of a thermostat. The user can turn the system on and off, allow for extra and immediate stirring, start the cleaning routine, and reset the entire system in case of a failure.
Bulk tank types:
New and bigger milk cooling tanks are now being equipped with monitoring and alarm systems. These systems guard the temperature of the milk inside the tank and check the functioning of the agitator and the cooling unit, as well as the temperature of the cleaning water. If any of these functions malfunctions, the alarm activates. The monitoring system also keeps a record of the temperature and of all malfunctions for a given period.
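The control logic such a system implements can be pictured with a short sketch. Everything here is a hypothetical placeholder: the sensor callables, the threshold, and the logging are stand-ins, not a real tank controller's API.

```python
# Hypothetical sketch of a tank monitoring pass; not a real controller API.
import time

MILK_TEMP_MAX_C = 4.0  # assumed alarm threshold for stored milk

def check_tank(read_milk_temp, agitator_running, temperature_log):
    """Run one monitoring pass: log the temperature, return any alarms."""
    temp = read_milk_temp()
    temperature_log.append((time.time(), temp))  # keep the required record
    alarms = []
    if temp > MILK_TEMP_MAX_C:
        alarms.append(f"milk too warm: {temp:.1f} C")
    if not agitator_running():
        alarms.append("agitator not running")
    return alarms

log = []
alarms = check_tank(lambda: 3.6, lambda: True, log)  # stand-in sensor readings
print(alarms or "all monitored functions normal")
```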
Bulk tank types:
Bulk tank manufacturing norms
Norms define, among other criteria: insulation, milk agitation, required cooling power, variations in milk quantity measurement, calibration, … Some are more demanding than others.
Bulk tank types:
ISO standard 5708 (Refrigerated bulk milk tanks), published in 1983
European standard EN 13732 (Food processing machinery – Bulk milk coolers on farms – Requirements for construction, performance, suitability for use, safety and hygiene), published in 2003, updated in 2009
North American sanitary standard 3-A 13-11 (3-A Sanitary Standards for Farm Milk Cooling and Holding Tanks, 13-11), effective July 23, 2012 (www.3-a.org)
Bulk tank outlet standards
Swedish outlet (SMS 1145), German outlet (DIN 11851), English RJT (BS 4825), IDF (ISO 2853), tri-clamp (ISO 2852), Danish outlet (DS 722), and others can be found, not to mention different diameters. They vary from country to country. Non-standard outlets make the milk collection process difficult, as the operator needs to adapt to each different standard/diameter.
Bulk tank types:
Cooling Systems
There are two primary methods of cooling milk entering the bulk tank, each with its own advantages and disadvantages. The tank capacity and type will depend on herd size, calving pattern, frequency of milk collection, required milk quality, energy and water availability, and future plans for development.
Bulk tank types:
Direct Expansion
A bulk tank with direct expansion cooling has pipes or pillow plates carrying refrigerant which are welded directly to the exterior of the milk chamber. A layer of insulation covers the exterior of the milk tank and the cooling lines, with an exterior metal shell over the insulation. Direct expansion cooling cannot run when the tank is empty or the inside walls of the tank would freeze. Instead, the tank is rapidly cooled as warm milk first enters the tank, and then the tank is cooled slowly just to maintain a low storage temperature. The rapid cooling during milking requires very large refrigeration compressors and condenser radiators to quickly expel heat from the milk, and is better suited for very large farming operations where three-phase electric power is available to operate the high-power cooling system.
Bulk tank types:
Ice Bank
A bulk tank using an ice builder or ice bank immerses the bottom of the inner milk chamber in an open pool of water with copper tubes containing refrigerant suspended in the water. Between milkings, a small low-power cooling system slowly builds up a coating of ice around the copper tubes, and prevents the pool from icing over by continuously circulating the water. After the ice has reached a thickness of 2-3 inches, the cooling system stops running.
Bulk tank types:
During milking, the milk entering the tank is primarily cooled by circulating the water in the pool around the walls of the inner milk chamber, and by the melting of the ice. After the ice has melted sufficiently, the cooling system restarts to assist the ice bank and rebuild the ice. Ice bank bulk tanks are better suited for small family farm operations where only single-phase electric power is available, and high-power cooling systems would be either too expensive or difficult to install.
Bulk tank types:
Milk pre-cooling
For energy savings and quality reasons it is advisable to pre-cool the milk before it enters the tank using a plate or a tube cooler (shell and tube heat exchanger) supplied with chilled water from the well, the ice builder, or the condensing unit. The quicker milk is cooled after leaving the cow, the better. This system achieves most of the cooling before the milk enters the tank, so that chilled milk, rather than warm milk, is added to the already cooled milk in the tank.
Bulk tank types:
Cooling temperature
The generic temperature for milk storage is 3 to 4°C. For raw milk cheese manufacturing, it is advisable to keep the milk at 12°C, as the milk's characteristics will be better preserved.
The milk cooling tank is usually not completely filled at once. A 2-milking tank is designed to cool 50% of its capacity at once, a 4-milking tank 25% of its capacity, and a 6-milking tank 16.7% of its capacity.
The cooling performance depends on the number of milkings it takes to completely fill the tank, the ambient temperature, and the cooling time.
Bulk tank cleaning systems:
There are two primary methods of cleaning bulk tanks: manual scrubbing or automatic washing. Both methods generally use four steps to clean the tank:
pre-rinsing with water to wet the surface and rinse off remaining milk residue
washing with hot soapy water
rinsing with water to remove the soap
a final sanitizing rinse with an approved bulk tank sanitizer solution
Manual Scrubbing
Manual scrubbing requires the bulk tank to have large hinged covers that can be lifted open to permit easy access to the interior surfaces of the tank. It tends to be much more thorough than automatic methods since it permits the tank to be carefully inspected during the washing process. If the tank is not found to be cleaned well enough, a troublesome area can be given additional cleansing attention.
Bulk tank cleaning systems:
Manual Scrubbing Limitations
This job is difficult to perform for very large tanks, and becomes more difficult as the overall cross-section or diameter of the tank increases, requiring either a longer brush or a raised work platform around the tank to lift the cleaning worker to reach over the side of a tall tank.
Bulk tank cleaning systems:
Automatic Washing
Automatic bulk tank washing systems are normally activated by the milk collection truck driver after each milk collection. The cleaning system operates similarly to a consumer dishwasher and consists of one or more free-spinning high-pressure spray nozzles with tangential jets, with the spray nozzle mounted on the end of a flexible whip suspended down into the center of the interior. As the cleaning solution sprays out of the jet, the force of the expelled water causes the jet to spin around and the whip to wildly swing back and forth, spraying the cleaning solution randomly all over the interior of the tank.
Bulk tank cleaning systems:
Automatic Washing Limitations
Because no physical scrubbing occurs with automatic wash systems, the cleanser relies on surfactants and detergents to dissolve the fats left on the interior of the tank by the cream in the milk. However, this is not sufficient to remove milkstone buildup, and the tank may need to be washed occasionally with milkstone remover to remove this scale buildup that can harbor bacteria and contaminants.
Bulk tank cleaning systems:
Automatic scrubbing only cleans the interior of the tank. It is not capable of cleaning the exterior of the tank, and it does not do a good job of washing around the cover seals. While it is possible to just clean the interior and call it good enough, it does not provide the maximum sanitation of manually washing down the exterior of the tank following or during the automatic wash process. Also, some components that contact the milk such as the drain valve cannot be properly cleaned automatically without disassembling the valve and retaining washer and directly scrubbing in soapy water.
Operating costs:
Substantial reductions in running costs can be made when an ice builder is used in conjunction with off-peak electricity. Pre-cooling milk using a plate or a tube cooler supplied with mains or well water can also reduce costs and add to the cooling capacity of the tank.
Operating costs:
Bulk tank condenser units, which are not an integral part of the tank, should be fitted in an adjacent, suitable and well ventilated place. If at all possible, condenser units should not be fitted on a wall facing the sun. They should be installed in a way which allows them to draw in and discharge adequate quantities of air for efficient operation.
Operating costs:
Bulk tanks should be easily accessible to large bulk collection tankers and positioned so that the tanker approaches can be kept clean and free from cow traffic at all times.
Although tanks are calibrated when first installed, bulk tank miscalibration is not uncommon, and in some cases it can result in significant loss of income. Milk tanks calibrated on the low side can cheat raw milk producers out of up to 22 litres on each shipment. It is therefore advisable to re-calibrate a bulk tank.
Other usage of bulk tanks:
Stainless steel bulk tanks are also used to heat or cool a fluid, or simply to keep it insulated and warm/cold. Because of the hygienic finish of the inner and outer sides of the tanks, almost any fluid can be stored: water, fruit juices, honey, wine, beer, ink, paint, cosmetics, aromatic food additives, bacterial cultures, cleansers, oil, or blood.
**Rattle GUI**
Rattle GUI:
Rattle GUI is a free and open source software (GNU GPL v2) package providing a graphical user interface (GUI) for data mining using the R statistical programming language. Rattle is used in a variety of situations. Currently, 15 different government departments in Australia, in addition to various other organisations around the world, use Rattle in their data mining activities and as a statistical package.
Rattle GUI:
Rattle provides considerable data mining functionality by exposing the power of the R statistical software through a graphical user interface. Rattle is also used as a teaching facility for learning the R language. There is a Log Code tab, which replicates the R code for any activity undertaken in the GUI and can be copied and pasted. Rattle can be used for statistical analysis or model generation. Rattle allows the dataset to be partitioned into training, validation, and testing sets. The dataset can be viewed and edited. There is also an option for scoring an external data file.
Features:
File Inputs = CSV, TXT, Excel, ARFF, ODBC, R Dataset, RData File, Library Packages Datasets, Corpus, and Scripts.
Statistics = Min, Max, Quartiles, Mean, St Dev, Missing, Median, Sum, Variance, Skewness, Kurtosis, Chi-square.
Statistical tests = Correlation, Kolmogorov-Smirnov, Wilcoxon Rank Sum, T-Test, F-Test, and Wilcoxon Signed Rank.
Clustering = KMeans, Clara, Hierarchical, and BiCluster.
Modeling = Decision Trees, Random Forests, ADA Boost, Support Vector Machine, Logistic Regression, and Neural Net.
Evaluation = Confusion Matrix, Risk Charts, Cost Curve, Hand, Lift, ROC, Precision, Sensitivity.
Charts = Box Plot, Histogram, Correlations, Dendrograms, Cumulative, Principal Components, Benford, Bar Plot, Dot Plot, and Mosaic.
Features:
Transformations = Rescale (Recenter, Scale 0-1, Median/MAD, Natural Log, and Matrix) - Impute (Zero/Missing, Mean, Median, Mode & Constant), Recode (Binning, Kmeans, Equal Widths, Indicator, Join Categories) - Cleanup (Delete Ignored, Delete Selected, Delete Missing, Delete Obs with Missing).
Rattle also uses two external graphical investigation / plotting tools. Latticist and GGobi are independent applications which provide highly dynamic and interactive graphic data visualisation for exploratory data analysis.
Packages:
The capabilities of R are extended through user-submitted packages, which allow specialized statistical techniques, graphical devices, as well as import/export capabilities to many external data formats. Rattle uses these packages - RGtk2, pmml, colorspace, ada, amap, arules, biclust, cba, descr, doBy, e1071, ellipse, fEcofin, fBasics, foreign, fpc, gdata, gtools, gplots, gWidgetsRGtk2, Hmisc, kernlab, latticist, Matrix, mice, network, nnet, odfWeave, party, playwith, psych, randomForest, reshape, RGtk2Extras, ROCR, RODBC, rpart, RSvgDevice, survival, timeDate, graph, RBGL, bitops,
**Formyl fluoride**
Formyl fluoride:
Formyl fluoride is the organic compound with the formula HC(O)F.
Preparation:
HC(O)F was first reported in 1934. Among the many preparations, a typical one involves the reaction of sodium formate with benzoyl fluoride (generated in situ from KHF2 and benzoyl chloride): HCOONa + C6H5C(O)F → FC(O)H + C6H5COONa
Structure:
The molecule is planar; the C-O and C-F distances are 1.18 and 1.34 Å, respectively.
Reactions:
HC(O)F decomposes autocatalytically near room temperature to carbon monoxide and hydrogen fluoride: HC(O)F → HF + CO. Because of the compound's sensitivity, reactions are conducted at low temperatures and samples are often stored over anhydrous alkali metal fluorides, e.g. potassium fluoride, which absorbs HF.
Reactions:
Benzene (and other arenes) react with formyl fluoride in the presence of boron trifluoride to give benzaldehyde. In a related reaction, formyl chloride is implicated in the Gattermann-Koch formylation reaction. The reaction of formyl fluoride/BF3 with perdeuteriobenzene (C6D6) exhibits a kinetic isotope effect of 2.68, similar to the isotope effect observed in Friedel-Crafts acetylation of benzene. Formylation of benzene with a mixture of CO and hexafluoroantimonic acid, however, exhibits no isotope effect (C6H6 and C6D6 react at the same rate), indicating that this reaction involves a more reactive formylating agent, possibly CHO+. Formyl fluoride undergoes the reactions expected of an acyl halide: alcohols and carboxylic acids are converted to formate esters and mixed acid anhydrides, respectively.
**3-hydroxypropionate dehydrogenase (NADP+)**
3-hydroxypropionate dehydrogenase (NADP+):
3-hydroxypropionate dehydrogenase (NADP+) (EC 1.1.1.298) is an enzyme with systematic name 3-hydroxypropionate:NADP+ oxidoreductase. This enzyme catalyses the following chemical reaction: 3-hydroxypropionate + NADP+ ⇌ malonate semialdehyde + NADPH + H+. This enzyme catalyses the reduction of malonate semialdehyde to 3-hydroxypropionate, which is a key step in the 3-hydroxypropionate and the 3-hydroxypropionate/4-hydroxybutyrate cycles, autotrophic CO2 fixation pathways found in some green non-sulfur phototrophic bacteria and archaea, respectively. The enzyme from Chloroflexus aurantiacus is bifunctional, and also catalyses the upstream reaction in the pathway, EC 1.2.1.75. It differs from EC 1.1.1.59, 3-hydroxypropionate dehydrogenase (NAD+), in its cofactor preference.
**Incomplete polylogarithm**
Incomplete polylogarithm:
In mathematics, the incomplete polylogarithm function is related to the polylogarithm function. It is sometimes known as the incomplete Fermi–Dirac integral or the incomplete Bose–Einstein integral. It may be defined by:
$$\operatorname{Li}_s(b,z)=\frac{1}{\Gamma(s)}\int_b^\infty \frac{x^{s-1}}{e^x/z-1}\,dx.$$
Expanding about z=0 and integrating gives a series representation:
$$\operatorname{Li}_s(b,z)=\sum_{k=1}^\infty \frac{z^k}{k^s}\,\frac{\Gamma(s,kb)}{\Gamma(s)}$$
where Γ(s) is the gamma function and Γ(s,x) is the upper incomplete gamma function. Since Γ(s,0) = Γ(s), it follows that:
$$\operatorname{Li}_s(0,z)=\operatorname{Li}_s(z)$$
where Li_s(·) is the polylogarithm function.
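The series representation translates directly into a short numerical routine. The sketch below uses SciPy's gammaincc, which returns the regularized upper incomplete gamma function Γ(s, x)/Γ(s), so each term of the sum can be computed directly; the tolerance and term cutoff are arbitrary choices.

```python
# Numerical sketch of the series above; converges for |z| < 1 and s > 0, b >= 0.
from scipy.special import gammaincc  # regularized upper incomplete gamma

def incomplete_polylog(s, b, z, tol=1e-15, kmax=100_000):
    total = 0.0
    for k in range(1, kmax + 1):
        term = z**k / k**s * gammaincc(s, k * b)  # z^k/k^s * Gamma(s,kb)/Gamma(s)
        total += term
        if abs(term) < tol:
            break
    return total

# At b = 0 the sum reduces to the ordinary polylogarithm:
# Li_2(0.5) = pi^2/12 - ln(2)^2/2 ~ 0.5822405.
print(incomplete_polylog(2, 0.0, 0.5))
```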
**Test vector**
Test vector:
In computer science and engineering, a test vector is a set of inputs provided to a system in order to test that system. In software development, test vectors are a methodology of software testing and software verification and validation.
Rationale:
In computer science and engineering, a system acts as a computable function. An example of a specific function could be y = f(x), where y is the output of the system and x is the input; however, most systems' inputs are not one-dimensional. When the inputs are multi-dimensional, we could say that the system takes the form y = f(x1, x2, ...); however, we can generalize this equation to a general form Y = C(X), where Y is the result of the system's execution, C belongs to the set of computable functions, and X is an input vector. While testing the system, various test vectors must be used to examine the system's behavior with differing inputs.
Example:
For example, consider a login page with two input fields: a username field and a password field. In that case, the login system can be described as: y = L(u, p), with y ∈ {true, false} and u, p ∈ {String}, with true designating a successful login and false a failed login, respectively.
Making things more generic, we can suggest that the function L takes input as a 2-dimensional vector and outputs a one-dimensional vector (scalar).
This can be written in the following way: Y = L(X), with X = [x1, x2] = [u, p] and Y = [y1]. In this case, X is called the input vector, and Y is called the output vector.
In order to test the login page, it is necessary to pass some sample input vectors {X1, X2, X3, ...}. In this context, Xi is called a test vector.
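In code, a set of test vectors is simply a list of input vectors paired with their expected outputs. The login function below is a hypothetical stand-in for the system L:

```python
# Test vectors for the login example; login() is a hypothetical stand-in for L.
def login(username: str, password: str) -> bool:
    return username == "alice" and password == "secret"  # toy credential check

# Each test vector X_i = [u, p] is paired with the expected output Y.
test_vectors = [
    (("alice", "secret"), True),   # valid credentials
    (("alice", "wrong"),  False),  # bad password
    (("bob",   "secret"), False),  # unknown user
    (("",      ""),       False),  # empty inputs
]

for (u, p), expected in test_vectors:
    assert login(u, p) == expected, f"failed for X = [{u!r}, {p!r}]"
print("all test vectors passed")
```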
**Asset tracking**
Asset tracking:
Asset tracking refers to the method of tracking physical assets, either by scanning barcode labels attached to the assets or by using tags that broadcast their location via GPS, BLE, LoRa, or RFID. These technologies can also be used for indoor tracking of persons wearing a tag.
RFID:
'Passive' RFID tags report their location when interrogated by a reader but have limited transmission range (typically a few meters). Longer-range "smart tags" use 'active' RFID, where a radio transmitter is powered by a battery and can transmit up to 2,000 meters (6,600 feet) in optimum conditions. RFID-based asset tracking requires an infrastructure to be put in place before the whereabouts of tags may be ascertained. An asset tracking system can record the location and usage of the assets and generate various reports. Unlike traditional barcode labels, RFID tags can be read faster and have higher durability. RFID is integrated in smart warehousing to track inventory and other assets. Real-time information about the tagged objects is obtained, including location, activity, and other characteristics.
Barcodes:
Assets can be tracked via manually scanning barcodes such as QR codes. QR codes can be scanned using smartphones with cameras and dedicated apps, as well as with barcode readers.
NFC:
The latest trend in asset tracking is the use of NFC. NFC technology simplifies tracking of assets: tapping an asset retrieves its details. This is an advantage for tracking critical assets where the user needs to see the condition of the asset being tracked.
GPS asset tracking:
Assets may also be tracked globally using devices which combine the GPS system with mobile phone and/or satellite phone technology. Such devices are known as GPS asset trackers and differ from other GPS tracking units in that they rely on an internal battery for power rather than being hard-wired to a vehicle's battery. The frequency with which the position of the device must be known or available dictates the quality, size, or type of GPS asset tracker required. Asset tracking devices commonly fail due to Faraday cage effects, as a huge proportion of the world's assets are moved via intermodal containers. However, modern tracking technology has seen advances in signal transmission that allow sufficient signal reception from the GPS satellite system, which can then be reported via GPRS to terrestrial networks.
GPS asset tracking:
Mobile phones are personal devices. Asset tracking apps for smart devices have been used as a means of personal tracking and rescue. For example, Find My iPhone is an app and service provided by Apple; it was used to rescue a missing person from a deep ravine after a car accident, when other attempts had failed. There are also tracking apps combined with viewing functions used for surveillance, similar to the FAA's Automatic Dependent Surveillance – Broadcast (ADS-B).
Wi-fi, IR, LoRa and Bluetooth:
For indoor asset tracking, Wi-Fi combined with another technology such as IR has been used. Bluetooth and LoRa technology have also been used, and may provide more accuracy even though these radio technologies were not primarily developed for location tracking. In order to position an asset via Bluetooth or LoRa, which is a basic requirement for asset tracking, the RSSI or TDOA is used to estimate the distance to the signalling transmitter from the signal strength. The principles for position determination are trilateration, triangulation, and fingerprinting. LoRa can be used for both indoor and outdoor tracking, as LoRa signals can travel longer distances (kilometers).
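A sketch of that pipeline: converting RSSI to a distance with a log-distance path-loss model, then estimating position from three beacon ranges by least-squares trilateration. The reference power at one meter, the path-loss exponent, and the beacon layout are all assumed values for illustration.

```python
# RSSI-based positioning sketch: log-distance path-loss model + trilateration.
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_at_1m=-59.0, n=2.0):
    """Log-distance path-loss model: d = 10^((P_1m - RSSI) / (10 n))."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * n))

def trilaterate(beacons, distances):
    """Least-squares position from >= 3 beacon positions and ranges."""
    (x1, y1), d1 = beacons[0], distances[0]
    A, b = [], []
    # Subtracting the first circle equation from each of the others
    # linearizes (x - xi)^2 + (y - yi)^2 = di^2 into A [x, y]^T = b.
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]     # assumed beacon layout
distances = [rssi_to_distance(r) for r in (-69.0, -75.0, -75.0)]
print(trilaterate(beacons, distances))               # estimated (x, y) of the tag
```

Fingerprinting, by contrast, skips the distance model entirely and matches a measured RSSI pattern against a pre-surveyed map, which is why it is often preferred indoors where path loss is unpredictable.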
**1,5-anhydro-D-fructose reductase**
1,5-anhydro-D-fructose reductase:
In enzymology, a 1,5-anhydro-D-fructose reductase (EC 1.1.1.263) is an enzyme that catalyzes the chemical reaction 1,5-anhydro-D-glucitol + NADP+ ⇌ 1,5-anhydro-D-fructose + NADPH + H+. Thus, the two substrates of this enzyme are 1,5-anhydro-D-glucitol and NADP+, whereas its 3 products are 1,5-anhydro-D-fructose, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 1,5-anhydro-D-glucitol:NADP+ oxidoreductase.
Structural studies:
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2GLX.
**Treacle sponge pudding**
Treacle sponge pudding:
A treacle sponge pudding is a traditional British dessert dish consisting of a steamed sponge cake with treacle cooked on top of it, sometimes also poured over it, and often served with hot custard. The dish has been mass-produced and imported into the United States, and provided to consumers as a canned product that can be cooked in a microwave oven.
**4-oxalocrotonate decarboxylase**
4-oxalocrotonate decarboxylase:
The enzyme 4-oxalocrotonate decarboxylase (EC 4.1.1.77) catalyzes the chemical reaction 4-oxalocrotonate ⇌ 2-oxopent-4-enoate + CO2. This enzyme belongs to the family of lyases, specifically the carboxy-lyases, which cleave carbon-carbon bonds. The systematic name of this enzyme class is 4-oxalocrotonate carboxy-lyase (2-oxopent-4-enoate-forming). This enzyme is also called 4-oxalocrotonate carboxy-lyase. This enzyme participates in 3 metabolic pathways: benzoate degradation via hydroxylation, toluene and xylene degradation, and fluorene degradation.
**Earring**
Earring:
An earring is a piece of jewelry attached to the ear via a piercing in the earlobe or another external part of the ear (except in the case of clip earrings, which clip onto the lobe). Earrings have been worn by people in different civilizations and historic periods, often with cultural significance.
Earring:
Locations for piercings other than the earlobe include the rook, tragus, and across the helix (see image at right). The simple term "ear piercing" usually refers to an earlobe piercing, whereas piercings in the upper part of the external ear are often referred to as "cartilage piercings". Cartilage piercings are more complex to perform than earlobe piercings and take longer to heal. Earring components may be made of any number of materials, including metal, plastic, glass, precious stone, beads, wood, bone, and other materials. Designs range from small hoops and studs to large plates and dangling items. The size is ultimately limited by the physical capacity of the earlobe to hold the earring without tearing. However, heavy earrings worn over extended periods of time may lead to stretching of the earlobe and the piercing.
History:
Ear piercing is one of the oldest known forms of body modification, with artistic and written references from cultures around the world dating back to early history. Gold earrings, along with other jewelry made of gold, lapis lazuli, and carnelian, were found at ancient sites in Lothal, India, and in the Sumerian Royal Cemetery at Ur from the Early Dynastic period. Gold, silver, and bronze hoop earrings were prevalent in the Minoan Civilization (2000–1600 BCE) and examples can be seen on frescoes on the Aegean island of Santorini, Greece. During the late Minoan and early Mycenaean periods of Bronze Age Greece, hoop earrings with conical pendants were fashionable. Early evidence of earrings worn by men can be seen in archeological evidence from Persepolis in ancient Persia. The carved images of soldiers of the Persian Empire, displayed on some of the surviving walls of the palace, show them wearing an earring.
History:
Howard Carter writes in his description of Tutankhamun's tomb that the Pharaoh's earlobes were perforated, but no earrings were inside the wrappings, although the tomb contained some. The burial mask's ears were perforated as well, but the holes were covered with golden discs. That implies that at the time, earrings were worn in Egypt only by children, much as in the Egypt of Carter's own time.
History:
Other early evidence of earring-wearing appears in the Biblical record. In Exodus 32:1–4, it is written that while Moses was up on Mount Sinai, the Israelites demanded that Aaron make a god for them. It is written that he commanded them to bring their sons' and daughters' earrings (and other pieces of jewelry) to him in order that he might comply with their demand (c. 1500 BCE). By the classical period, including in the Middle East, as a general rule, they were considered exclusively female ornaments. During certain periods in Greece and Rome also, earrings were worn mainly by women, though they were popular among men in early periods and resurfaced later on, as famous figures like Plato were known to have worn them. The practice of wearing earrings was a tradition for Ainu men and women, but the Government of Meiji Japan forbade Ainu men to wear earrings in the late 19th century. Earrings were also commonplace among nomadic Turkic tribes and in Korea. Lavish ear ornaments have remained popular in India from ancient times to the present day. It was also common for men and women to wear earrings from the Silla and Goryeo periods through Joseon.
History:
In Western Europe, earrings became fashionable among English courtiers and gentlemen in the 1590s during the English Renaissance. A document published in 1577 by clergyman William Harrison, Description of England, states "Some lusty courtiers and gentlemen of courage do wear either rings of gold, stones or pearls in their ears." Among sailors, a pierced earlobe was a symbol that the wearer had sailed around the world or had crossed the equator. By the late 1950s or early 1960s, the practice re-emerged in the Western world. Teenage girls were known to hold ear piercing parties, where they performed the procedure on one another. By the mid-1960s, some physicians offered ear piercing as a service. Simultaneously, Manhattan jewelry stores were some of the earliest commercial, non-medical locations for getting an ear piercing. In the late 1960s, ear piercing began to make inroads among men through the hippie and gay communities, although earrings had been popular among sailors for decades (or longer).
History:
By the early 1970s, ear piercing was common among women, thus creating a broader market for the procedure. Department stores throughout the country would hold ear piercing events, sponsored by earring manufacturers. At these events, a nurse or other trained person would perform the procedure, either pushing a sharpened and sterilized starter earring through the earlobe by hand, or using an ear-piercing instrument modified from the design used by physicians. In the late 1970s, amateur piercings, sometimes with safety pins or multiple piercings, became popular in the punk rock community. By the 1980s, the trend for male popular music performers to have pierced ears helped establish a fashion trend for men. This was later adopted by many professional athletes. British men started piercing both ears in the 1980s; George Michael of Wham! was a prominent example. It is now widely acceptable for teenage and pre-teen boys to have both ears pierced simply as a fashion statement. Multiple piercings in one or both ears first emerged in mainstream America in the 1970s. Initially, the trend was for women to wear a second set of earrings in the earlobes, or for men to double-pierce a single earlobe. Asymmetric styles with more and more piercings became popular, eventually leading to the cartilage piercing trend. Double ear piercing in newborn babies is a phenomenon in Central America, in particular in Costa Rica.
History:
A variety of specialized cartilage piercings have since become popular. These include the tragus piercing, antitragus piercing, rook piercing, industrial piercing, helix piercing, orbital piercing, daith piercing, and conch piercing. In addition, earlobe stretching, while common in indigenous cultures for thousands of years, began to appear in Western society in the 1990s, and is now a fairly common sight. However, these forms of ear piercing are still infrequent compared to standard ear piercing.
History:
Religious
According to Hindu dharma tradition, most girls and some boys (especially the "twice-born") get their ears pierced as part of a Dharmic rite known as Karnavedha before they are about five years old. Infants may get their ears pierced as early as several days after their birth.
Similar customs are practiced in other Asian countries, including Nepal, Sri Lanka, and Laos, although traditionally most males wait to get their ears pierced until they have reached young adulthood.
History:
Ear piercing is mentioned in the Bible in several contexts. The most familiar refers to a Hebrew slave who was to be freed in the seventh year of servitude but wishes to continue serving his master and refuses to go free: "…his master shall take him before God. He shall be brought to the door or the doorpost, and his master shall pierce his ear with an awl, and he shall then remain his slave for life" (Exodus 21:6).
Types of earrings:
Modern standard pierced earrings
Barbell earrings
Barbell earrings get their name from their resemblance to a barbell, generally coming in the form of a metal bar with an orb on either end. One of these orbs is affixed in place, while the other can be detached to allow the barbell to be inserted into a piercing. Several variations on this basic design exist, including barbells with curves or angles in the bar of the earring.
Types of earrings:
Claw earrings
The claw, talon, or pincher is essentially a curved taper which is worn in stretched earlobe piercings. The thickest end is generally flared and may be decorated, and a rubber O-ring may also be used to prevent the talon from becoming dislodged when worn. Common materials include acrylic and glass. A similar item of jewelry is the crescent, or pincher, which, as the name suggests, is shaped like a crescent moon and is tapered at both ends. Talons and claws may also be quite ornamental (e.g. carved in the form of a serpent or dragon). Consequently, they may prove to be an impractical choice of jewelry, as they may snag on hair, clothing, etc.
Types of earrings:
Statement earrings
Statement earrings can be defined as "earrings which invite attention from others by demonstrating bold, original, and unique designs with innovative construction and material combinations". They include one or more of the following design features: dangles, tassels, sparkles, bold or striking colours, and hoops.
Stud/minimal earrings
The main characteristic of stud earrings is the appearance of floating on the ear or earlobe without a visible (from the front) point of connection. Studs are invariably constructed on the end of a post, which penetrates straight through the ear or earlobe. The post is held in place by a removable friction back or clutch (also known as a butterfly scroll). A stud earring features a gemstone or other ornament mounted on a narrow post that passes through a piercing in the ear or earlobe, and is held in place by a fixture on the other side. Studs commonly come in the form of solitaire diamonds. Some stud earrings are constructed so that the post is threaded, allowing a screw back to hold the earring in place securely, which is useful in preventing the loss of expensive earrings containing precious stones, or made of precious metals.
Types of earrings:
Hoop earrings
Hoop earrings are circular or semi-circular in design and look very similar to a ring. Hoop earrings generally come in the form of a hoop of metal that can be opened to pass through the ear piercing. They are often constructed of metal tubing, with a thin wire attachment penetrating the ear. The hollow tubing is permanently attached to the wire at the front of the ear, and slips into the tube at the back. The entire device is held together by tension between the wire and the tube. Other hoop designs do not complete the circle, but penetrate through the ear in a post, using the same attachment techniques that apply to stud earrings. A variation is the continuous hoop earring. In this design, the earring is constructed of a continuous piece of solid metal, which penetrates through the ear and can be rotated almost 360°. One of the ends is permanently attached to a small piece of metallic tubing or a hollow metallic bead. The other end is inserted into the tubing or bead, and is held in place by tension. One special type of hoop earring is the sleeper earring, a circular wire normally made of gold, with a diameter of approximately one centimeter. Hinged sleepers, which were common in Britain in the 1960s and 1970s, comprise two semi-circular gold wires connected via a tiny hinge at one end, and fastened via a small clasp at the other, to form a continuous hoop whose fastening mechanism is effectively invisible to the naked eye. Because their small size makes them unobtrusive and comfortable, and because they are normally otherwise unadorned, sleepers are so-called because they were intended to be worn at night to keep a pierced ear from closing, and were often the choice for the first set of earrings immediately following the ear piercing in the decades before ear-piercing guns using studs became commonplace, but are often a fashion choice in themselves because of their attractive simplicity and because they subtly call attention to the fact that the ear is pierced.
Types of earrings:
A drop earring attaches to the earlobe and features a gemstone or ornament that dangles down from a chain, hoop, or similar object. The length of these ornaments vary from the very short to the extravagantly long. Such earrings are occasionally known as droplet earrings, dangle earrings, or pendant earrings. They also include chandelier earrings, which branch out into elaborate, multi-level pendants.
Types of earrings:
Dangle earrings
Dangle earrings (also known as drop earrings) are designed to suspend from the bottoms of the earlobes. Their lengths vary from a centimeter or two, all the way to brushing the wearer's shoulders. A pierced dangle earring is generally attached to the ear with a thin wire passing through the earlobe. It may connect to itself with a small hook at the back, or in the French hook design, the wire passes through the earlobe piercing without closure, although small plastic or silicone retainers are sometimes used on the ends. Rarely, dangle earrings use the post attachment design. There are also variants that attach without piercing.
Types of earrings:
Huggy earrings
Huggy earrings are hoops that closely follow the curve of the earlobe, instead of dangling down beneath it as in regular hoop earrings. Commonly, stones are channel-set in huggy earrings.
Ear thread
An ear thread, earthreader, ear string, or threader earring is a chain thin enough to slip into the ear hole and dangle down at the back. Sometimes, people add beads or other materials onto the chain, so the chain dangles with beads below the ear.
Jhumka earrings
A type of dangling, bell-shaped traditional earring mostly worn by women of the Indian subcontinent.
Chandelier earrings
Chandelier earrings have an appearance similar to that of chandeliers, with a design that dangles below the ear and is wider at the base than at the top.
Body piercing jewelry used as earrings
Body piercing jewelry is often used for ear piercings, and is selected for a variety of reasons including the availability of larger gauges, better piercing techniques, and a reduced risk of healing complications.
Types of earrings:
Captive bead rings – Captive bead rings, often abbreviated as CBRs and sometimes called ball closure rings, are a style of body piercing jewelry that is an almost 360° ring with a small gap for insertion through the ear. The gap is closed with a small bead that is held in place by the ring's tension. Larger gauge ball closure rings exhibit considerable tension, and may require ring expanding pliers for insertion and removal of the bead.
Types of earrings:
Barbells – Barbells are composed of a thin, straight metal rod with a bead permanently fixed to one end. The other end is threaded, either externally or tapped with an internal thread, and the other bead is screwed into place after the barbell is inserted through the ear. Since the threads on externally threaded barbells tend to irritate the piercing, internal threads have become the most common variety. Another variation is threadless barbells, or press-fit jewelry, with a hollow post, a fixed back disk, and a front end that is attached with a slightly bent pin that is inserted into the post.
Types of earrings:
Circular barbells – Circular barbells are similar to ball-closure rings, except that they have a larger gap, and have a permanently attached bead at one end, and a threaded bead at the other, like barbells. This allows for much easier insertion and removal than with ball closure rings, but at the loss of a continuous look.
Plugs – Earplugs are short cylindrical pieces of jewelry. Some plugs have flared ends to hold them in place, others require small elastic rubber rings (O-rings) to keep them from falling out. They are usually used in large-gauge piercings.
Flesh tunnels – Flesh tunnels, also known as eyelets or bullet holes, are similar to plugs; however, they are hollow in the middle. Flesh tunnels are most commonly used in larger gauge piercings either because weight is a concern to the wearer or for aesthetic reasons.
Gauges and other measuring systems For an explanation of how earring sizes are denoted, see the article Body jewelry sizes.
Clip-on and other non-pierced earrings Several varieties of non-pierced earrings exist.
Clip-on earrings – Clip-on earrings have existed longer than any other variety of non-pierced earrings. The clip itself is a two-part piece attached to the back of an earring. The two pieces close around the earlobe, using mechanical pressure to hold the earring in place.
Magnetic earrings – Magnetic earrings simulate the look of a (pierced) stud earring by attaching to the earlobe with a magnetic back that holds the earring in place by magnetic force.
Stick-on earrings – Stick-on earrings are adhesive-backed items which stick to the skin of the earlobe and simulate the look of a (pierced) stud earring. They are considered a novelty item.
Spring hoop earrings – Spring hoops are almost indistinguishable from standard hoop earrings and stay in place by means of spring force.
An alternative which is often used is bending a wire, or even just the ring portion of a CBR, onto the earlobe, where it stays in place by pinching the ear.
Ear hook earrings – A large hook, like a fish hook, that is big enough to hook over the whole ear and hang from it, with the ornament dangling below.
The hoop – A hoop threads over the ear and hangs from just inside the ear, above where ears are pierced. Mobiles or other dangles can be hung from the hoop to create a variety of styles.
Ear screws – Screwed onto the lobe, ear screws allow for exact adjustment and are an alternative for those who find clips too painful.
Ear cuffs – Wrap around the outer cartilage (similar to a conch piercing) and may be chained to a lobe piercing.
Types of earrings:
Permanent earrings While most earrings worn in the Western world are designed to be removed easily and changed at will, earrings can also be permanent (non-removable). They were once used as a mark of slavery or ownership. They appear today in the form of larger gauge rings which are difficult or impossible for a person to remove without assistance. Occasionally, hoop earrings are permanently installed by the use of solder, though this poses some risks due to the toxicity of metals used in soldering and the risk of burns from the heat involved. Besides permanent installations, locking earrings are occasionally worn for their personal symbolism or erotic value.
Ear piercing:
Pierced ears are earlobes or the cartilage portion of the external ears which have had one or more holes created in them for the wearing of earrings. The holes may be permanent or temporary. The holes become permanent when a fistula is created by scar tissue forming around the initial earring.
Conch piercing A conch piercing is a perforation of the part of the external human ear called the "concha", for the purpose of inserting and wearing jewelry. Conch piercings have become popular among young women in recent decades as part of a trend for multiple ear piercings.
Ear piercing:
Helix piercing The helix piercing is a perforation of the helix or upper ear cartilage for the purpose of inserting and wearing a piece of jewelry. The piercing itself is usually made with a small gauge hollow piercing needle, and typical jewelry would be a small diameter captive bead ring, or a stud. Sometimes, two helix piercings hold the same piece of jewelry, usually a barbell, which is called an industrial piercing.
Ear piercing:
The claim that helix piercings lack nerve endings and therefore do not hurt is false. They hurt like any other cartilage piercing, and accidentally bumping or tugging on them during healing can be extremely painful. When left alone and not irritated or touched, however, they typically do not hurt at all. Taking care not to shift the piercing during healing, which can take 6 to 9 months, is essential.
Ear piercing:
Snug piercing A snug (or antihelix) piercing is a piercing which passes through the anti-helix of the ear from the medial to lateral surfaces.
Ear piercing:
Spiral piercing An ear spiral is a thick spiral that is usually worn through the earlobe. It is worn in ears that have been stretched and normally held in place only by its own downward pressure. Glass ear spirals are shown but many materials are used. Some designs are quite ornate and may include decorative appendages flaring from the underlying concentric pattern.
Ear piercing:
Piercing techniques A variety of techniques are used to pierce ears, ranging from "do it yourself" methods using household items to medically sterile methods using specialized equipment.
Ear piercing:
A long-standing home method involves using ice as a local anesthetic, a sewing needle as a puncture instrument, a burning match and rubbing alcohol for disinfection, and a semi-soft object, such as a potato, cork, bar of soap or rubber eraser, as a push point. Sewing thread may be drawn through the piercing and tied, as a device for keeping the piercing open during the healing process. Alternatively, a gold stud or wire earring may be directly inserted into the fresh piercing as the initial retaining device. Home methods are often unsafe and risky due to issues of improper sterilization or placement.
Ear piercing:
Another method for piercing ears, first made popular in the 1960s, was the use of sharpened spring-loaded earrings known as self-piercers, trainers, or sleepers, which gradually pushed through the earlobe. However, these could slip from their initial placement position, often resulting in more discomfort, and many times would not go all the way through the earlobe without additional pressure being applied. This method has fallen into disuse due to the popularity of faster and more successful piercing techniques.
Ear piercing:
Ear piercing instruments, sometimes called ear-piercing guns, were originally developed for physician use but with modifications became available in retail settings. Today more and more people in the Western world have their ears pierced with an ear piercing instrument in specialty jewellery or accessory stores, or at home using disposable ear piercing instruments. An earlobe piercing performed with an ear piercing instrument is often described as feeling similar to being pinched, or being snapped by a rubber band. Piercing with this method, especially for cartilage piercings, is not recommended by many piercing professionals and physicians, as it can cause blunt force trauma to the skin, and takes far longer to heal than needle piercing. In addition, the vast majority of ear piercing instruments are made of plastic, which means they can never be truly sterilized in an autoclave, greatly increasing the chance of infection. In the case of cartilage piercing, using an ear piercing instrument can shatter the ear cartilage and lead to serious complications.
Ear piercing:
An alternative which is growing in practice is the use of a hollow piercing needle, as is done in body piercing. The piercer disinfects the earlobe with alcohol and marks the lobe with a pen, giving the client the opportunity to check that the position is correct. The piercer then holds the earlobe with a clamp that has flat ends with holes in them, centring the dot between the holes; this device supports the skin during the piercing process. A cork may be placed behind the earlobe to stop the movement of the needle after it passes through and to shield the tip of the needle for the client's comfort. The piercer places the hollow needle perpendicular to the skin's surface and checks its position, so as to pierce at the desired place and at the right angle, then pushes the needle through the earlobe until it comes out the other side; the client must remain still throughout the process. The clamp is then removed, the jewellery is placed in the hollow needle, and the needle is pushed through until the jewellery enters the lobe. Finally, the needle is removed and disposed of properly, the jewellery is fastened to the lobe, and the piercer disinfects the lobe again.
Ear piercing:
In tribal cultures and among some neo-primitive body piercing enthusiasts, the piercing is made using other tools, such as animal or plant organics.
Ear piercing:
Initial healing time for an earlobe piercing performed with an ear piercing instrument is typically six to eight weeks. After that time, earrings can be changed, but if the hole is left unfilled for an extended period of time, there is some risk of the piercing closing. Piercing professionals recommend wearing earrings in the newly pierced ears for at least six months, and sometimes even a full year. Cartilage piercing will usually require more healing time than earlobe piercing, sometimes two to three times as long. After healing, earlobe piercings will shrink to smaller gauges in the prolonged absence of earrings, and in most cases will completely disappear.
Ear piercing:
Health risks The health risks with conventional earlobe piercing are common but tend to be minor, particularly if proper technique and hygienic procedures are followed. One study found that up to 35 percent of persons with pierced ears had one or more complications, including minor infection (77 percent of pierced ear sites with complications), allergic reaction (43 percent), keloids (2.5 percent), and traumatic tearing (2.5 percent). Pierced ears are a significant risk factor for contact allergies to the nickel in jewelry. Earlobe tearing during the healing period or after healing is complete can be minimized by not wearing earrings, especially wire-based dangle earrings, during activities in which they are likely to become snagged, such as while playing sports. Also, larger gauge jewelry will lessen the chance of the earring being torn out. With cartilage piercing, the blunt force of an ear piercing instrument will traumatize the cartilage, and therefore make healing more difficult. Also, because there is substantially less blood flow in ear cartilage than in the earlobe, infection is a much more serious issue. There have been several documented cases of severe infections of the upper ear following piercing with an ear piercing instrument, which required courses of antibiotics to clear up. There are many ways that an infection can occur: the most common is when the person who was pierced removes the jewelry too early. According to the A.M.A., the proper waiting period before changing or removing a piercing, with substantially less risk of infection, is three weeks.
Ear piercing:
For all ear piercings, the use of a sterilized hollow piercing needle tends to minimize the trauma to the tissue and minimize the chances of contracting a bacterial infection during the procedure. As with any invasive procedure, there is always a risk of infection from blood borne pathogens such as hepatitis and HIV. However, modern piercing techniques make this risk extremely small (the risk being greater to the piercer than to the pierced due to the potential splash-back of blood). There has never been a documented case of HIV transmission due to ear/body piercing or tattooing, although there have been instances of the Hepatitis B virus being transmitted through these practices.
Ear piercing:
Negative effects of wearing earrings in light of research The most frequent complications connected with wearing earrings are: inflammation, keloids, loss of tissue by tearing, mechanical division of the earlobes, and potential skin disorders. Researchers have observed a correlation between the piercing of young girls' earlobes and the subsequent development of allergies. In the view of Professor Ewa Czarnobilska (the leader of the research team), the main cause of the allergy, as listed by allergists, is the presence of nickel as a component of the alloys used in the production of earrings; the ingredients declared by the producer are not significant, because nickel is a standard component of jewellery. Symptoms of the allergy are visible as eczema. This symptom is often attributed to food allergy (e.g. to milk), when the actual cause is contact between the earring (specifically nickel ions) and the lymphatic system. Children ceasing to wear earrings does not make the allergy symptoms vanish: the immune system remembers the nickel ions that were present in the blood and lymph. Even after children have stopped wearing earrings, an allergic reaction can appear on contact with: metal parts of clothing, dental braces, dental prostheses, orthotics, meals cooked in pots containing nickel, margarine (nickel is a catalyst in the hydrogenation of unsaturated fats), coins, chocolate, nuts, leguminous vegetables, wine, and beer. Research studying a sample of 428 pupils, aged seven to eight and sixteen to seventeen, found that thirty percent of the population was allergic to nickel, and that the allergy occurred in many girls who had started wearing earrings in early childhood. Other symptoms of nickel allergy are recurring infections, asthma attacks, and chronic laryngitis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**EGLN1**
EGLN1:
Hypoxia-inducible factor prolyl hydroxylase 2 (HIF-PH2), or prolyl hydroxylase domain-containing protein 2 (PHD2), is an enzyme encoded by the EGLN1 gene. It is also known as Egl nine homolog 1. PHD2 is an α-ketoglutarate/2-oxoglutarate-dependent hydroxylase, a member of a superfamily of non-haem iron-containing proteins. In humans, PHD2 is one of the three isoforms of hypoxia-inducible factor-proline dioxygenase, which is also known as HIF prolyl-hydroxylase.
The hypoxia response:
HIF-1α is a ubiquitous, constitutively synthesized transcription factor responsible for upregulating the expression of genes involved in the cellular response to hypoxia. These gene products may include proteins such as glycolytic enzymes and angiogenic growth factors. In normoxia, HIF alpha subunits are marked for the ubiquitin-proteasome degradation pathway through hydroxylation of proline-564 and proline-402 by PHD2. Prolyl hydroxylation is critical for promoting pVHL binding to HIF, which targets HIF for polyubiquitylation.
Structure:
PHD2 is a 46-kDa enzyme that consists of an N-terminal domain homologous to MYND zinc finger domains, and a C-terminal domain homologous to the 2-oxoglutarate dioxygenases. The catalytic domain consists of a double-stranded β-helix core that is stabilized by three α-helices packed along the major β-sheet. The active site, which is contained in the pocket between the β-sheets, chelates iron(II) through histidine and aspartate coordination. 2-oxoglutarate displaces a water molecule to bind iron as well. The active site is lined by hydrophobic residues, possibly because such residues are less susceptible to potential oxidative damage by reactive species leaking from the iron center. PHD2 catalyses the hydroxylation of two sites on HIF-α, which are termed the N-terminal oxygen dependent degradation domain (residues 395-413, NODD) and the C-terminal oxygen dependent degradation domain (residues 556-574, CODD). These two HIF substrates are usually represented by 19 amino acid long peptides in in vitro experiments. X-ray crystallography and NMR spectroscopy showed that both peptides bind to the same binding site on PHD2, in a cleft on the PHD2 surface. An induced fit mechanism was indicated from the structure, in which residues 237-254 adopt a closed loop conformation, whilst in the structure without CODD or NODD, the same residues adopted an open finger-like conformation. Such conformational change was confirmed by NMR spectroscopy, X-ray crystallography and molecular dynamics calculations. A recent study found a second peptide binding site on PHD2, although peptide binding to this alternative site did not seem to affect the catalytic activity of the enzyme. Further studies are required to fully understand the biological significance of this second peptide binding site.
Structure:
The enzyme has a high affinity for iron(II) and 2-oxoglutarate (also known as α-ketoglutarate), and forms a long-lived complex with these factors. It has been proposed that cosubstrate and iron concentrations poise the HIF hydroxylases to respond to an appropriate "hypoxic window" for a particular cell type or tissue. Studies have revealed that PHD2 has a KM for dioxygen slightly above its atmospheric concentration, and PHD2 is thought to be the most important sensor of the cell's oxygen status.
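The sensing role implied by this K_M can be illustrated with basic Michaelis–Menten kinetics: when the K_M for dioxygen sits above the oxygen concentrations the cell actually experiences, the hydroxylation rate varies nearly linearly with oxygen across the physiological range, so enzyme activity tracks the cell's oxygen status. A minimal sketch in Python; the K_M and concentration values are illustrative assumptions, not measured parameters:

```python
# Illustrative Michaelis-Menten model of PHD2 oxygen sensing.
# KM_O2 and VMAX are placeholder values for demonstration only.

KM_O2 = 250.0   # assumed KM for dioxygen, in micromolar (illustrative)
VMAX = 1.0      # normalized maximal hydroxylation rate

def phd2_rate(o2_um: float) -> float:
    """Relative HIF-1a hydroxylation rate at a given O2 concentration."""
    return VMAX * o2_um / (KM_O2 + o2_um)

# Because physiological tissue O2 sits well below this assumed KM,
# the rate responds nearly linearly to changes in O2 -- the property
# that lets PHD2 report the cell's oxygen status.
for o2 in (10, 30, 60, 100, 250):
    print(f"O2 = {o2:4d} uM -> relative rate = {phd2_rate(o2):.2f}")
```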
Mechanism:
The enzyme incorporates one oxygen atom from dioxygen into the hydroxylated product, and one oxygen atom into the succinate coproduct. Its interactions with HIF-1α rely on a mobile loop region that helps to enclose the hydroxylation site and helps to stabilize binding of both iron and 2-oxoglutarate. A feedback regulation mechanism that involves the displacement of HIF-1α by hydroxylated HIF-1α when 2-oxoglutarate is limiting has also been proposed.
Biological role and disease relevance:
PHD2 is the primary regulator of HIF-1α steady state levels in the cell. A PHD2 knockdown showed increased levels of HIF-1α under normoxia, and an increase in HIF-1α nuclear accumulation and HIF-dependent transcription. HIF-1α steady state accumulation was dependent on the amount of PHD silencing effected by siRNA in HeLa cells and a variety of other human cell lines. However, although it would seem that PHD2 downregulates HIF-1α and thus also tumorigenesis, there have been suggestions of paradoxical roles of PHD2 in tumor proliferation. For example, one animal study showed tumor reduction in PHD2-deficient mice through activation of antiproliferative TGF-β signaling. Other in vivo models showed tumor-suppressing activity for PHD2 in pancreatic cancer as well as liver cancer. A study of 121 human patients revealed PHD2 as a strong prognostic marker in gastric cancer, with PHD2-negative patients having shortened survival compared to PHD2-positive patients. Recent genome-wide association studies have suggested that EGLN1 may be involved in the low hematocrit phenotype exhibited by the Tibetan population and hence that EGLN1 may play a role in the heritable adaptation of this population to live at high altitude.
As a therapeutic target:
HIF's important role as a homeostatic mediator implicates PHD2 as a therapeutic target for a range of disorders involving angiogenesis, erythropoiesis, and cellular proliferation. There has been interest both in potentiating and in inhibiting the activity of PHD2. For example, methylselenocysteine (MSC) inhibition of HIF-1α led to tumor growth inhibition in renal cell carcinoma in a PHD-dependent manner. It is thought that this phenomenon relies on PHD stabilization, but the mechanistic details of this process have not yet been investigated. On the other hand, screens of small-molecule chelators have revealed hydroxypyrones and hydroxypyridones as potential inhibitors of PHD2. Recently, dihydropyrazoles, a class of triazole-based small molecules, were found to be potent inhibitors of PHD2, active both in vitro and in vivo. Substrate analog peptides have also been developed to exhibit inhibitory selectivity for PHD2 over factor inhibiting HIF (FIH), for which some other PHD inhibitors show overlapping specificity. Gasotransmitters, including carbon monoxide and nitric oxide, also inhibit PHD2 by competing with molecular oxygen for binding at the active site Fe(II) ion.
**Register (phonology)**
Register (phonology):
In phonology, a register, or pitch register, is a prosodic feature of syllables in certain languages in which tone, vowel phonation, glottalization or similar features depend upon one another.
It occurs in Burmese, Vietnamese, Wu Chinese and Zulu.
Burmese:
In Burmese, differences in tone correlate with vowel phonation and so neither exists independently. There are three registers in Burmese, which have traditionally been considered three of the four "tones". (The fourth is not actually a register but is a closed syllable, and is similar to the so-called "entering tone" in Middle Chinese phonetics.) Jones (1986) views the differences as "resulting from the intersection of both pitch registers and voice registers.... Clearly Burmese is not tonal in the same sense as such other languages and therefore requires a different concept, namely that of pitch register."
Vietnamese:
In Vietnamese, which has six tones, two tones are largely distinguished by phonation instead of pitch. Specifically, the ngã and sắc tones are both high rising but ngã is distinguished by the presence of a glottal stop in the middle of the vowel. The nặng and huyền tones are both pronounced as low falling, but distinguished primarily by nặng being short and pronounced with creaky voice, and huyền being noticeably longer and pronounced with breathy voice.
Khmer:
Khmer is sometimes considered to be a register language. It has also been called a "restructured register language" because both its pitch and phonation can be considered allophonic. If they are ignored, the phonemic distinctions that they carry remain as differences in diphthongs and vowel length.
Latvian:
An example of a non-Asian language with register distinctions is Latvian, at least in the central dialects. Long vowels in stressed syllables are often said to take one of three pitch accents that are conventionally called "rising", "falling", and "broken". However, the "broken tone" is distinguished not by pitch but by glottalization, and is similar to the ngã register of Northern Vietnamese.
**Electrofusion**
Electrofusion:
Electrofusion is a method of joining MDPE, HDPE and other plastic pipes using special fittings that have built-in electric heating elements which are used to weld the joint together.
Electrofusion:
The pipes to be joined are cleaned, inserted into the electrofusion fitting (with a temporary clamp if required) and a voltage (typically 40V) is applied for a fixed time depending on the fitting in use. The built-in heater coils then melt the inside of the fitting and the outside of the pipe wall, which weld together producing a very strong homogeneous joint. The assembly is then left to cool for a specified time. Electrofusion welding is beneficial because it does not require the operator to use dangerous or sophisticated equipment. After some preparation, the electrofusion welder will guide the operator through the steps to take. Welding heat and time depend on the type and size of the fitting. Not all electrofusion fittings are created equal: precise positioning of the energising coils of wire in each fitting ensures uniform melting for a strong joint and minimises welding and cooling time.
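As a rough illustration of the quantities involved, the heat delivered to the fitting's coil follows from the applied voltage, the coil resistance, and the fusion time via simple Joule heating. The resistance and weld time below are hypothetical values chosen only for demonstration, since real fittings specify their own welding parameters:

```python
# Rough Joule-heating estimate for an electrofusion weld.
# Coil resistance and weld time are assumed, illustrative values;
# real parameters come from the fitting manufacturer.

VOLTAGE = 40.0         # typical applied voltage, volts
COIL_RESISTANCE = 1.5  # assumed coil resistance, ohms
WELD_TIME = 90.0       # assumed fusion time, seconds

power_w = VOLTAGE ** 2 / COIL_RESISTANCE  # P = V^2 / R
energy_kj = power_w * WELD_TIME / 1000.0  # E = P * t

print(f"Coil power: {power_w:.0f} W")
print(f"Energy delivered over {WELD_TIME:.0f} s: {energy_kj:.1f} kJ")
```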
Electrofusion:
The operator must be qualified according to local and national laws. In Australia, an electrofusion course can be completed within 8 hours. Electrofusion welding training focuses on the importance of accurately fusing EF fittings. Learning both manual and automatic methods of calculating electrofusion time gives operators the skills they need in the field. There is much to learn about the importance of preparation, timing, pressure, temperature, cool-down time, handling, and so on. Training and certification are very important in this field of welding, as the product can become dangerous under certain circumstances. There have been cases of major harm and death, including molten polyethylene spurting out of the edge of a misaligned weld and causing skin burns. In another case, a tapping saddle was incorrectly installed on a gas line, causing the deaths of the two welders in the trench from gas inhalation. There are many critical parts of electrofusion welding that can cause weld failures, most of which can be greatly reduced by using welding clamps and correct scraping equipment. To keep their qualification current, a trained operator can have their fitting tested, which involves cutting open the fitting and examining the integrity of the weld.
**Conceptual dictionary**
Conceptual dictionary:
A conceptual dictionary (also ideographic or ideological dictionary) is a dictionary that groups words by concept or semantic relation instead of arranging them in alphabetical order. Examples of conceptual dictionaries are picture dictionaries, thesauri, and visual dictionaries. Onelook.com and Diccionario Ideológico de la Lengua Española (for Spanish) are specific online and print examples.
Conceptual dictionary:
This is sometimes called a reverse dictionary because it is organized by concepts, phrases, or definitions rather than by headwords. This is similar to a thesaurus, where one can look up a concept by some common, general word, and then find a list of near-synonyms of that word. (For example, in a thesaurus one could look up "doctor" and be presented with such words as healer, physician, surgeon, M.D., medical man, medicine man, academician, professor, scholar, sage, master, expert.) In theory, a reverse dictionary might go further than this, allowing you to find a word by its definition only (for example, to find the word "doctor" knowing only that it means a "person who cures disease"). Such dictionaries have become more practical with the advent of computerized information-storage and retrieval systems (i.e. computer databases).
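A minimal sketch of such a database-backed reverse lookup: index each headword by the words of its definition and rank headwords by overlap with the query. The four-entry lexicon here is a toy example, not a real dictionary:

```python
# Minimal reverse-dictionary sketch: find headwords whose definitions
# best match the words of a query. The lexicon here is a toy example.

lexicon = {
    "doctor":  "person who cures disease",
    "surgeon": "doctor who performs operations",
    "teacher": "person who instructs pupils",
    "nurse":   "person who cares for the sick",
}

def reverse_lookup(query: str, top_n: int = 3):
    """Rank headwords by overlap between query words and definition words."""
    q = set(query.lower().split())
    scored = [
        (len(q & set(definition.lower().split())), word)
        for word, definition in lexicon.items()
    ]
    scored.sort(reverse=True)
    return [word for score, word in scored[:top_n] if score > 0]

print(reverse_lookup("person who cures disease"))  # 'doctor' ranks first
```

A production system would use stemming, stop-word removal, or semantic similarity rather than raw word overlap, but the lookup structure is the same.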
Conceptual dictionary:
An example of this type of reverse dictionary is the Diccionario Ideológico de la Lengua Española (Spanish Language Ideological Dictionary). This allows the user to find words based on a small set of general concepts.
Examples:
(English) Bernstein, Theodore, Bernstein's Reverse Dictionary, Crown, New York, 1975.
Edmonds, David (ed.), The Oxford Reverse Dictionary, Oxford University Press, Oxford, 1999.
Kahn, John, Reader's Digest Reverse Dictionary, Reader's Digest, London, 1989.
https://web.archive.org/web/20180125074625/https://tradisho.com/ – Concept-based dictionary for the 170+ languages in the Philippines
Onelook Reverse Dictionary
ReverseDictionary.org | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Open field (animal test)**
Open field (animal test):
Developed by Calvin S. Hall, the open field test is an experimental test used to assay general locomotor activity levels, anxiety, and willingness to explore in animals (usually rodents) in scientific research. However, the extent to which behavior in the open field measures anxiety is controversial. The open field test can be used to assess memory by evaluating the ability of the animal to recognize a stimulus or object. Another animal test that is used to assess memory using the same concept is the novel object recognition test.
Concept:
Animals such as rats and mice display a natural aversion to brightly lit open areas. However, they also have a drive to explore a perceived threatening stimulus. Decreased levels of anxiety lead to increased exploratory behavior. Increased anxiety will result in less locomotion and a preference to stay close to the walls of the field (thigmotaxis).
Experimental design:
The open field is an arena with walls to prevent escape. Commonly, the field is marked with a grid and square crossings. The center of the field is marked with a different color to differentiate from the other squares. In the modern open field apparatus, infrared beams or video cameras with associated software can be used to automate the assessment process.
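With automated tracking, measures such as center-square duration and thigmotaxis reduce to simple computations over the animal's coordinate trace. A minimal sketch, assuming video tracking yields (x, y) positions at a fixed frame rate; the arena dimensions, center bounds, and wall margin below are hypothetical:

```python
# Sketch: compute center-square duration and thigmotaxis from tracked
# (x, y) positions. Arena bounds, center bounds, and frame rate are
# hypothetical values for illustration.

FRAME_RATE = 30.0                 # frames per second (assumed)
ARENA = (0.0, 0.0, 100.0, 100.0)  # x_min, y_min, x_max, y_max (cm)
CENTER = (35.0, 35.0, 65.0, 65.0) # central square bounds (cm)
WALL_MARGIN = 10.0                # distance defining "near wall" (cm)

def in_box(x, y, box):
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def summarize(track):
    """track: list of (x, y) positions, one per video frame."""
    center_frames = sum(in_box(x, y, CENTER) for x, y in track)
    wall_frames = sum(
        min(x - ARENA[0], ARENA[2] - x, y - ARENA[1], ARENA[3] - y) < WALL_MARGIN
        for x, y in track
    )
    return {
        "center_duration_s": center_frames / FRAME_RATE,
        "thigmotaxis_fraction": wall_frames / len(track),
    }

# Example: an animal hugging the left wall, briefly crossing the center.
example = [(5.0, 50.0)] * 60 + [(50.0, 50.0)] * 15 + [(5.0, 20.0)] * 45
print(summarize(example))
```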
Experimental design:
Behavioral patterns measured in the open field test include: Line crossings – Frequency with which the rodent crosses grid lines with all four paws (a measure of locomotor activity), sometimes divided into activity near the wall and activity in the center.
Center square entries – Frequency with which the rodent enters the center square with all four paws.
Center square duration – Duration of time spent in the central square.
Rearing – Frequency with which the rodent stands on its hind legs in the field. Rearing-up behavior in which the forepaws of the animal are unsupported, and the similar behavior in which the forepaws rest against the walls of the enclosure, have different underlying genetic and neural mechanisms, and unsupported rearing may be a more direct measure of anxiety.
Stretch attend postures – Frequency with which the rodent demonstrates forward elongation of the head and shoulders followed by retraction to the original position. A high frequency indicates high levels of anxiety.
Defecation and urination – The interpretation of defecation and urination frequency is controversial. Some scientists argue that an increase in defecation shows increased anxiety. Others disagree, stating that defecation and urination show signs of emotionality but cannot be assumed to indicate anxiety.
Criticisms:
The assumption that the test is based on conflict has been heavily criticized. Critics point out that when measuring anxiety each choice should have both positive and negative outcomes; this leads to more dependable observations, which the OFT does not provide. When the test was first developed, it was pharmacologically validated through the use of benzodiazepines, a common anxiety medication. Newer drugs such as 5-HT-1A partial agonists and selective serotonin reuptake inhibitors, which have also been proven to treat anxiety, show inconsistent results in this test. Due to the idiopathic nature of anxiety, animal models have flaws that cannot be controlled. Because of this, it is better to run the open field test in conjunction with other tests, such as the elevated plus maze and the light-dark box test. Different results can be obtained depending on the strain of the animal, and different equipment and grid lines may also produce different results. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Geoffrey J. Gordon**
Geoffrey J. Gordon:
Geoffrey J. Gordon is a professor at the Machine Learning Department at Carnegie Mellon University in Pittsburgh and director of research at the Microsoft Montréal lab. He is known for his research in statistical relational learning (a subdiscipline of artificial intelligence and machine learning) and on anytime dynamic variants of the A* search algorithm. His research interests include multi-agent planning, reinforcement learning, decision-theoretic planning, statistical models of difficult data (e.g. maps, video, text), computational learning theory, and game theory.
Geoffrey J. Gordon:
Gordon received a B.A. in computer science from Cornell University in 1991, and a PhD at Carnegie Mellon in 1999. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tubulin polymerization promoting protein family member 3**
Tubulin polymerization promoting protein family member 3:
Tubulin polymerization promoting protein family member 3 is a protein that in humans is encoded by the TPPP3 gene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Affinity diagram**
Affinity diagram:
The affinity diagram is a business tool used to organize ideas and data. It is one of the Seven Management and Planning Tools. People have been grouping data into groups based on natural relationships for thousands of years; however, the term affinity diagram was devised by Jiro Kawakita in the 1960s and is sometimes referred to as the KJ Method.
Affinity diagram:
The tool is commonly used within project management and allows large numbers of ideas stemming from brainstorming to be sorted into groups, based on their natural relationships, for review and analysis. It is also frequently used in contextual inquiry as a way to organize notes and insights from field interviews. It can also be used for organizing other freeform comments, such as open-ended survey responses, support call logs, or other qualitative data.
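For freeform inputs like open-ended survey responses, the card-sorting step is sometimes approximated in software by bucketing items that share theme keywords. A toy sketch of that idea follows; the themes and comments are invented for illustration, and real affinity grouping remains a human judgment process that software only assists:

```python
# Toy affinity-style grouping: bucket freeform comments by the first
# theme keyword they contain. Themes and comments are invented examples.

from collections import defaultdict

themes = {
    "pricing": ["price", "cost", "expensive"],
    "support": ["support", "help", "response"],
    "usability": ["confusing", "easy", "interface"],
}

comments = [
    "The interface is confusing to navigate",
    "Support took days to respond",
    "Too expensive for what it offers",
    "Customer help was excellent",
]

groups = defaultdict(list)
for comment in comments:
    words = comment.lower()
    theme = next(
        (t for t, keys in themes.items() if any(k in words for k in keys)),
        "unsorted",
    )
    groups[theme].append(comment)

for theme, items in groups.items():
    print(theme, "->", items)
```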
Process:
The affinity diagram organizes ideas with the following steps: Record each idea on cards or notes.
Look for ideas that seem to be related.
Process:
Sort cards into groups until all cards have been used. Once the cards have been sorted into groups, the team may sort large clusters into subgroups for easier management and analysis. Once completed, the affinity diagram may be used to create a cause-and-effect diagram. In many cases, the best results tend to be achieved when the activity is completed by a cross-functional team, including key stakeholders. The process requires becoming deeply immersed in the data, which has benefits beyond the tangible deliverables. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Technoparade**
Technoparade:
A technoparade (taken from the German word "Technoparade") is a parade of vehicles equipped with strong loudspeakers and amplifiers playing electronic dance music. It resembles a carnival parade in some respects, but the vehicles (called lovemobiles) are usually less elaborately decorated. Unlike some carnival parades, a technoparade does not share the tradition of bombarding the spectators with sweets. However, the revellers do occasionally throw confetti (usually larger and more sparkly than that in a carnival parade) and spray foam from the vehicles onto the crowd.
Technoparade:
Nearly all of the vehicles are converted trucks. In order to power the amplifiers, the trucks are frequently equipped with an additional electrical generator. For safety reasons, horse-drawn floats are never used in technoparades: there would be a danger of horses panicking from the noise and chaos. However, there are occasional human-drawn floats equipped with generators, record players, amplifiers and loudspeakers. Some of the vehicles allow people to ride along, for a fee. For those on the sidelines, or travelling alongside on foot or bicycles, attendance is free.
Official program:
The official program of a technoparade is generally not as important as what happens informally. In contrast to a carnival parade, the vehicles are little more than flatbed trucks with sound equipment, rather than elaborately decorated floats. There are usually no fireworks or other traditional elements of large celebrations. Technoparades are rarely linked to anniversaries of historical events: they usually simply take place in the summer to take advantage of the good weather.
Official program:
However, in Germany technoparades are usually officially registered as a political demonstration and thus have an appropriate motto. That way techno fans have a constitutional right to dance in the streets, and any objections from the authorities to noise and traffic obstructions are overruled and the cities also have to pay for security and cleaning up the streets afterwards.
Character:
Technoparades generally have a carnival atmosphere, where social rules (and some laws, or at least their enforcement) are at least loosened, and sometimes broken outright. An atmosphere of chaos and tolerance prevails as bystanders dance to the shifting sounds of successive vehicles rolling by them: the music blasting from one vehicle blends into that from another, which can mean a sudden change of dance style in the area where the spheres of influence overlap. The music coming from two sound trucks overlaps with approximately equal intensity, and people can dance to either of two competing rhythms. In the technoparade subculture they call this the Verwirrungsgebiet ("overlap zone") by analogy to a concept in radio frequency engineering.
Character:
The street allows for a type of dancing that would be literally impossible in cramped German nightclubs, and the breadth of some people's dancing is further exaggerated as they throw their clothes outwards. Some in the crowd generally climb up to any high point that can possibly be scaled, more and more as the event continues. The spirit is usually continued at after-parties in the local nightclubs, sometimes including unofficial after-parties at venues having no official connection to the parade.
Problems:
Technoparades are not without problems: They are regarded by some participants as a license to consume illegal drugs.
A few technoparades have been exploited by makers of pornographic films.
There is an inherent danger of accidents as people climb on and off moving vehicles.
Technoparades usually have big crowds and few toilets, especially because the companies that rent out chemical toilets will not happily rent to technoparades; the crowds tend to climb on the roofs of the toilet structures, which are not built to withstand the weight. Some well-constructed toilets do exist, but not enough to serve the entire technoparade.
Major events:
World's largest: Street Parade, Zurich, with around 1,000,000 participants annually as of 2018; the Love Parade was previously the world's largest, with 1.5 million in 1999.
Germany – the 5 big technoparades of the 1990s and early 2000s: Love Parade, Berlin / Ruhr area (1989–2010); Union Move, Munich (1995–2001); Generation Move, Hamburg / Kiel (1995–2007); Reincarnation, Hanover (1995–2006); Vision Parade, Bremen (2002–2006).
Parades with a political character: Fuckparade, Berlin (since 1997); Krachparade, Munich (since 2014); Zug der Liebe, Berlin (since 2015); Rave the Planet Parade, Berlin (since 2022); Nachttanzdemos (many cities).
Small-town or one-time moves: Night Move, Cologne (1994); Future Parade, Brockhagen (1997–2003); Beatparade, Empfingen (since 1998); numerous others.
Austria: Freeparty, Vienna (1994–1999); Unite Parade, Salzburg (1999–2008); Loveparade, Vienna (2000–2006); Freeparade, Vienna (2001–2010); Snow Move, Sölden.
Switzerland: Street Parade, Zurich (since 1992); Jungle Street Groove/Beat on the Street, Basel (since 1995); Antiparade, Zurich (since 1996); Lake Parade, Geneva (1997–2015; 2017; 2023–); Tanz Dich Frei (Dance Yourself Free), Bern (2011–2013).
Other: FFWD Heineken Dance Parade, Rotterdam (Netherlands) (1997–2007); Techno Parade, Paris (France) (since 1998); Love Parade, Santiago, Chile / Tel Aviv / Mexico City (2000–2006); Budapest Parade, Budapest (Hungary) (2000–2006); Elektro Parade, Oporto (Portugal) (2001–2003); Cityparade, Ghent/Liège/Brussels/Mons/Charleroi (Belgium) (2001–2006); Freedom Parade, Tartu (Estonia) (2002–2009); LovEvolution, San Francisco/Oakland (United States) (2004–2011).
Similar events:
Similar to technoparades, electronic dance events have also been organized using other moving vehicles such as boats and trams. In contrast to technoparades, which are characterized by free participation on the street, in this case only the passengers on the vessels or inside the trams are part of the event. An example of a boat parade is the Berlin Beats & Boats event, which has taken place annually since 2009 and involves up to 14 floating dancefloors. A regular Housetram event has been organized by Monika Kruse in Munich since 1995. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Anatomical neck of humerus**
Anatomical neck of humerus:
The anatomical neck of the humerus is obliquely directed, forming an obtuse angle with the body of the humerus. It represents the fused epiphyseal plate.
Structure:
The anatomical neck divides the head of the humerus from the greater and lesser tubercles of the humerus. It gives attachment to the capsular ligament of the shoulder joint except at the upper inferior-medial aspects. It is best marked in the lower half of its circumference; in the upper half it is represented by a narrow groove separating the head of the humerus from the two tubercles, the greater tubercle and the lesser tubercle. It affords attachment to the articular capsule of the shoulder-joint, and is perforated by numerous vascular foramina. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Biomechanics of sprint running**
Biomechanics of sprint running:
Sprinting involves a quick acceleration phase followed by a velocity maintenance phase. During the initial stage of sprinting, runners keep their upper body tilted forward in order to direct ground reaction forces more horizontally. As they reach their maximum velocity, the torso straightens into an upright position. The goal of sprinting is to reach and maintain high top speeds to cover a set distance in the shortest possible time. A great deal of research has gone into quantifying the biological factors and mathematics that govern sprinting. It has been found that, to achieve the desired acceleration and reach these high velocities, sprinters must apply a large amount of force to the ground rather than take more rapid steps.
Quantifying sprinting mechanics and governing equations:
Human legs during walking have been mechanically simplified in previous studies to a set of inverted pendulums, while distance running (characterized as a bouncing gait) has modeled the legs as springs. Until recently, it had been long believed that faster sprinting speeds are promoted solely by physiological features that increase stride length and frequency; while these factors do contribute to sprinting velocities, it has also been found that the runner's ability to produce ground forces is also very important.
Quantifying sprinting mechanics and governing equations:
Weyand et al. (2000) came up with the following equation for determining sprint velocity: V = f_step · (F_avg / W_b) · L_c, where V is the sprint velocity (m/s), f_step the step frequency (1/s), F_avg the average force applied to the ground (N), W_b the body weight (N), and L_c the contact length (m).
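Plugging representative numbers into this relation shows how the three factors combine multiplicatively. A minimal sketch; the runner's weight, force ratio, step frequency, and contact length below are illustrative assumptions, not values from the study:

```python
# Sprint velocity from the Weyand et al. (2000) relation:
#   V = f_step * (F_avg / W_b) * L_c
# All input values are illustrative assumptions.

def sprint_velocity(f_step, f_avg, w_b, l_c):
    """V (m/s) from step frequency (1/s), average ground force (N),
    body weight (N), and contact length (m)."""
    return f_step * (f_avg / w_b) * l_c

# Example: a runner weighing ~687 N (70 kg) applying twice body weight
# on average, at 4.5 steps/s with a 1.05 m contact length.
v = sprint_velocity(f_step=4.5, f_avg=2.0 * 687.0, w_b=687.0, l_c=1.05)
print(f"Predicted sprint velocity: {v:.1f} m/s")  # ~9.4 m/s
```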
Quantifying sprinting mechanics and governing equations:
In short, sprint velocity depends on three main factors: step frequency (how many steps can be taken per second), average vertical force applied to the ground, and contact length (the distance the center of mass translates over the course of one contact period). The formula was tested by having subjects run on a force treadmill (a treadmill that contains a force plate to measure ground reaction forces (GRF)). Figure 1 shows approximately what the force plate readout looks like for the duration of three steps. While this equation has proved to be fairly accurate, the study was limited in the sense that data were collected by a force plate that only measured vertical GRF rather than horizontal GRF. This led some people to the false impression that simply exerting a greater vertical (perpendicular) force on the ground would lead to greater acceleration, which is far from correct (see the Morin studies below).
Quantifying sprinting mechanics and governing equations:
In 2005, Hunter et al. conducted a study that determined relationships between sprint velocity and relative impulses, in which gait and ground reaction force data were collected and analyzed. It was found that during accelerated runs, a typical support phase is characterized by a braking phase followed by a propulsive phase (−F_H followed by +F_H). A common trend in the fastest subjects tested was that there was only a moderate to low amount of vertical force and a large amount of horizontal force produced. After the study, the author hypothesized that braking forces are necessary to store elastic energy in muscle and tendon tissue. This study loosely confirmed the importance of horizontal as well as vertical GRF during the acceleration phase of sprinting. Unfortunately, since data were collected at the 16-m mark, they were insufficient to draw definite conclusions regarding the entire acceleration phase.
Quantifying sprinting mechanics and governing equations:
Morin et al. (2011) performed a study to investigate the importance of ground reaction forces by having sprinters run on a force treadmill that measured both horizontal and vertical ground reaction forces. Belt velocity was measured for each step and calculations were performed to find the “index of force application technique”, which determines how well subjects are able to apply force in the horizontal direction.
Quantifying sprinting mechanics and governing equations:
The second half of the test involved subjects performing a 100-m sprint on a man-made track using radar to measure the forward speed of runners to create velocity-time curves. The main result of this study showed that the force application technique (rather than simply the total amount of force applied) is the key determinant factor in predicting a sprinter's velocity. This has yet to be integrated into the governing equation of sprinting.
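One way to make "force application technique" concrete is the ratio of the horizontal component to the total ground reaction force on each step; how that ratio declines as speed rises can then be summarized as a technique index. The sketch below computes such a per-step ratio; the sample forces are invented, and the exact index definition used by Morin et al. may differ in detail:

```python
import math

def force_ratio(f_horizontal, f_vertical):
    """Ratio of horizontal force to total GRF for one step (Morin-style RF)."""
    f_total = math.hypot(f_horizontal, f_vertical)
    return f_horizontal / f_total

# Invented per-step forces (N) over an acceleration: the horizontal
# component shrinks relative to the vertical one as the runner
# straightens up and approaches top speed.
steps = [(600, 1200), (450, 1400), (300, 1600), (150, 1800)]
for i, (fh, fv) in enumerate(steps, start=1):
    print(f"step {i}: RF = {force_ratio(fh, fv):.2f}")
```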
Kinetics:
The kinetics of running describes the motion of a runner using the effects of forces acting on or out of the body. The majority of contributing factors to internal forces comes from leg muscle activation and arm swing.
Kinetics:
Leg Muscle Activation The muscles responsible for accelerating the runner forward are required to contract with increasing speed to accommodate the increasing velocity of the body. During the acceleration phase of sprinting, the contractile component of muscles is the main component responsible for the power output. Once a steady state velocity has been reached and the sprinter is upright, a sizable fraction of the power comes from the mechanical energy stored in the ‘series elastic elements’ during stretching of the contractile muscles that is released immediately after the positive work phase. As the velocity of the runner increases, inertia and air resistance effects become the limiting factors on the sprinter's top speed.
Kinetics:
It was previously believed that there was an intramuscular viscous force that increased proportionally to the velocity of muscle contraction and opposed the contractile force; this theory has since been disproved. In a study conducted in 2004, the gait patterns of distance runners, sprinters, and non-runners were measured using video recording. Each group ran a 60-meter run at 5.81 m/s (to represent distance running) and at maximal running speed. The study showed that non-sprinters ran with an inefficient gait for the maximal speed trial while all groups ran with energetically efficient gaits for the distance trial. This indicates that the development of an economical distance running form is a natural process, while sprinting is a learned technique that requires practice.
Kinetics:
Arm Swing Contrary to the findings of Mann et al. (1981), arm swing plays a vital role in both stabilizing the torso and vertical propulsion. Regarding torso stabilization, arm swing serves to counterbalance the rotational momentum created by leg swing, as suggested by Hinrichs et al. (1987). In short, the athlete would have a hard time controlling the rotation of their trunk without arm swing.
Kinetics:
The same study also suggested that, contrary to popular belief, the horizontal force production capabilities of the arms are limited because the backward swing follows the forward swing, so the two components cancel each other out. This is not to suggest, however, that arm swing does not contribute to propulsion at all during sprinting; in fact, it can contribute up to 10% of the total vertical propulsive forces that a sprinter can apply to the ground. The reason for this is that, unlike the forward-backward motion, both arms are synchronized in their upward-downward movement. As a result, there is no cancellation of forces. Efficient sprinters have an arm swing that originates from the shoulder and has a flexion and extension action of the same magnitude as the flexion and extension occurring at the ipsilateral shoulder and hip.
Energetics:
Di Prampero et al. mathematically quantified the cost of the acceleration phase (first 30 m) of sprint running through experimental testing. The subjects sprinted repeatedly on a track while radar determined their velocity. Additionally, it has been found in previous literature that the energetics of sprinting on flat terrain is analogous to uphill running at a constant speed. The mathematical derivation proceeds loosely as follows. In the initial phase of sprint running, the total acceleration acting on the body (g′) is the vector sum of the forward acceleration and the acceleration due to gravity: g′ = (a_f^2 + g^2)^0.5. The "equivalent slope" (ES) when sprinting on flat ground is: ES = tan(90° − arctan(g / a_f)). The "equivalent normalized body mass" (EM) is then found to be: EM = g′/g = (a_f^2/g^2 + 1)^0.5. Following the data collection, the cost of sprinting (C_sr) was found to be: C_sr = (155.4·ES^5 − 30.4·ES^4 − 43.3·ES^3 + 46.3·ES^2 + 19.5·ES + 3.6)·EM. The above equation does not take wind resistance into account, so considering the cost of running against wind resistance (C_aer), which is known to be C_aer = k′·v^2, we combine the two equations to arrive at: C_sr = (155.4·ES^5 − 30.4·ES^4 − 43.3·ES^3 + 46.3·ES^2 + 19.5·ES + 3.6)·EM + k′·v^2, where g′ is the total acceleration of the runner's body, a_f the forward acceleration, g the acceleration of gravity, k′ a proportionality constant, and v the velocity.
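A quick numeric sketch of these relations, using an assumed forward acceleration and an assumed air-resistance constant k′ chosen only for illustration:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def sprint_cost(a_f, v, k=0.01):
    """Energy cost of accelerated sprint running (di Prampero et al.).
    a_f: forward acceleration (m/s^2), v: velocity (m/s),
    k: air-resistance proportionality constant (assumed value).
    Returns the cost in the study's units (roughly J per kg per m)."""
    es = math.tan(math.radians(90) - math.atan(G / a_f))  # equivalent slope
    em = math.sqrt(a_f**2 / G**2 + 1)                     # equivalent body mass
    polynomial = (155.4 * es**5 - 30.4 * es**4 - 43.3 * es**3
                  + 46.3 * es**2 + 19.5 * es + 3.6)
    return polynomial * em + k * v**2

# Early acceleration: high forward acceleration, modest speed.
print(f"C_sr at a_f=6 m/s^2, v=5 m/s:   {sprint_cost(6.0, 5.0):.1f}")
# Near top speed: little acceleration, high speed.
print(f"C_sr at a_f=0.5 m/s^2, v=10 m/s: {sprint_cost(0.5, 10.0):.1f}")
```

The first case dominates through the equivalent-slope polynomial, the second through the k′·v² air-resistance term, matching the intuition that acceleration is energetically like running uphill.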
Fatigue effects:
Fatigue is a prominent factor in sprinting, and it is already widely known that it hinders maximal power output in muscles, but it also affects the acceleration of runners in the ways listed below.
Fatigue effects:
Submaximal muscle coordination A study on muscle coordination in which subjects performed repeated 6-second cycling sprints, or intermittent sprints of short duration (ISSD), showed a correlation between a decrease in maximal power output and changes in motor coordination. In this case, motor coordination refers to the ability to coordinate muscle movements in order to optimize a physical action, so submaximal coordination indicates that the muscles are no longer activating in sync with one another. The results of the study showed a delay between the activation of the vastus lateralis (VL) and biceps femoris (BF) muscles. Since the decrease in power during ISSD occurred in tandem with changes in VL-BF coordination, changes in inter-muscle coordination appear to be one of the contributing factors to the reduced power output resulting from fatigue. The study was done using bicycle sprinting, but the principles carry over to sprinting from a runner's perspective.
Fatigue effects:
Hindrance of effective force application techniques Morin et al. explored the effects of fatigue on force production and force application techniques in a study where sprinters performed four sets of five 6-second sprints using the same treadmill setup as previously mentioned. Data were collected on their ability to produce ground reaction forces as well as their ability to coordinate the ratio of ground forces (horizontal to vertical) to allow for greater horizontal acceleration. The immediate results showed a significant decrease in performance with each sprint, with the rate of decline sharpening with each subsequent set. In conclusion, it was clear that both the total force production capability and the technical ability to apply ground forces were greatly affected.
Fatigue effects:
Injury Prevention Running gait (biomechanics) is important not only for efficiency but also for injury prevention. Approximately 25 to 65% of all runners experience running-related injuries each year. Abnormal running mechanics are often cited as the cause of injuries. However, few suggest altering a person's running pattern in order to reduce the risk of injury. Wearable technology companies like I Measure U are creating solutions that use biomechanics data to analyse the gait of a runner in real time and provide feedback on how to change the running technique to reduce injury risk. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Acute stress disorder**
Acute stress disorder:
Acute stress disorder (ASD, also known as acute stress reaction, psychological shock, mental shock, or simply shock) is a psychological response to a terrifying, traumatic or surprising experience. It may bring about delayed stress reactions (better known as post-traumatic stress disorder, or PTSD) if not correctly addressed. Acute stress may present with reactions that include, but are not limited to, intrusive or dissociative symptoms and reactivity symptoms such as avoidance or arousal. Reactions may be exhibited for days or weeks after the traumatic event.
Diagnostic criteria:
According to the DSM-V, acute stress disorder requires exposure to actual or threatened death, serious injury, or sexual violation, whether by directly experiencing it, witnessing it in person, learning that it occurred to a close family member or friend, or experiencing repeated exposure to aversive details of a traumatic event. In addition to the initial exposure, individuals may also present with a variety of different symptoms that fall within several clusters, including intrusion, negative mood, dissociation, avoidance of distressing memories, and emotional arousal. Intrusion symptoms include recurring and distressing dreams, flashbacks, or memories related to the traumatic event and related somatic symptoms. Negative mood refers to one's inability to experience positive emotions such as happiness or satisfaction. Dissociative symptoms include a sense of numbing or detachment from emotional reactions, a sense of physical detachment, decreased awareness of one's surroundings, the perception that one's environment is unreal or dreamlike, and the inability to recall critical aspects of the traumatic event (dissociative amnesia). Emotional arousal symptoms include sleep disturbances, hypervigilance, difficulties with concentration, an exaggerated startle response, and irritability. Symptom presentation must last for at least three consecutive days after trauma exposure to be classified as acute stress disorder; if symptoms persist past one month, the diagnosis of PTSD should be considered. The presenting symptoms must also cause significant impairment in multiple domains of one's life for the disorder to be diagnosed. Additional diagnoses that may develop from acute stress disorder include depression, anxiety, mood disorders, and substance abuse problems. Untreated acute stress disorder can also lead to the development of post-traumatic stress disorder.
Diagnostic assessment:
Evaluation of patients is done through close examination of emotional response. Using self-report from patients is a large part of diagnosing acute stress disorder, as acute stress is the result of reactions to stressful situations.
Development and course:
There are several theoretical perspectives on trauma response, including cognitive, biological, and psycho-biological. While PTSD-specific, these theories are still useful in understanding acute stress disorder, as the two disorders share many symptoms. A recent study found that even a single stressful event may have long-term consequences on cognitive function. This result calls the traditional distinction between the effects of acute and chronic stress into question.
Risk factors:
Risk factors for developing acute stress disorder include a previously existing mental health diagnosis, avoidant coping mechanisms, and exaggerated appraisals of events. Additional factors also include prior trauma history and heightened emotional reactivity. The DSM-V specifies that there is a higher prevalence rate of acute stress disorder among females compared to males due to higher risk of experiencing traumatic events and neurobiological gender differences in stress response.
Types:
Sympathetic Sympathetic acute stress disorder is caused by the release of excessive adrenaline and norepinephrine into the nervous system. These hormones may speed up a person's pulse and respiratory rate, dilate pupils, or temporarily mask pain. This type of ASD developed as an evolutionary advantage to help humans survive dangerous situations. The "fight or flight" response may allow for temporarily-enhanced physical output, even in the face of severe injury. However, other physical illnesses become more difficult to diagnose, as ASD masks the pain and other vital signs that would otherwise be symptomatic.
Types:
Parasympathetic Parasympathetic acute stress disorder is characterised by feeling faint and nauseated. This response is fairly often triggered by the sight of blood. In this stress response, the body releases acetylcholine. In many ways, this reaction is the opposite of the sympathetic response, in that it slows the heart rate and can cause the patient to either regurgitate or temporarily lose consciousness. The evolutionary value of this is unclear, although it may have allowed for prey to appear dead to avoid being eaten.
Pathophysiology:
Stress is characterised by specific physiological responses to adverse or noxious stimuli.
Pathophysiology:
Hans Selye coined the term "general adaptation syndrome" to suggest that stress-induced physiological responses proceed through the stages of alarm, resistance, and exhaustion. The sympathetic branch of the autonomic nervous system gives rise to a specific set of physiological responses to physical or psychological stress. The body's response to stress is also termed a "fight or flight" response, and it is characterised by an increase in blood flow to the skeletal muscles, heart, and brain, a rise in heart rate and blood pressure, dilation of pupils, and an increase in the amount of glucose released by the liver. The onset of an acute stress response is associated with specific physiological actions in the sympathetic nervous system, both directly and indirectly through the release of adrenaline and, to a lesser extent, noradrenaline from the medulla of the adrenal glands. These catecholamine hormones facilitate immediate physical reactions by triggering increases in heart rate and breathing and constricting blood vessels. An abundance of catecholamines at neuroreceptor sites facilitates reliance on spontaneous or intuitive behaviours often related to combat or escape. Normally, when a person is in a serene, non-stimulated state, the firing of neurons in the locus ceruleus is minimal. A novel stimulus, once perceived, is relayed from the sensory cortex of the brain through the thalamus to the brain stem. That route of signalling increases the rate of noradrenergic activity in the locus ceruleus, and the person becomes more alert and attentive to their environment. If a stimulus is perceived as a threat, a more intense and prolonged discharge of the locus ceruleus activates the sympathetic division of the autonomic nervous system. The activation of the sympathetic nervous system leads to the release of norepinephrine from nerve endings acting on the heart, blood vessels, respiratory centres, and other sites. The ensuing physiological changes constitute a major part of the acute stress response. The other major player in the acute stress response is the hypothalamic-pituitary-adrenal axis. Stress activates this axis and produces neuro-biological changes. These chemical changes increase the chances of survival by bringing the physiological system back to homeostasis. The autonomic nervous system controls all automatic functions in the body and contains two subsections that aid the response to an acute stress reaction. These two subunits are the sympathetic nervous system and the parasympathetic nervous system. The sympathetic response is colloquially known as the "fight or flight" response, indicated by accelerated pulse and respiration rates, pupil dilation, and a general feeling of anxiety and hyper-awareness. This is caused by the release of epinephrine and norepinephrine from the adrenal glands. The epinephrine and norepinephrine bind to the beta receptors of the heart, stimulating the heart's sympathetic nerve fibres to increase the strength of heart muscle contraction; as a result, more blood gets circulated, increasing the heart rate and respiratory rate. The sympathetic nervous system also stimulates the skeletal and muscular systems, pumping more blood to those areas to handle the acute stress. Simultaneously, the sympathetic nervous system inhibits the digestive system and the urinary system to optimise blood flow to the heart, lungs, and skeletal muscles. This plays a role in the alarm reaction stage.
The parasympathetic response is colloquially known as the "rest and digest" response, indicated by reduced heart and respiration rates, and, more obviously, by a temporary loss of consciousness if the system is fired at a rapid rate. The parasympathetic nervous system stimulates the digestive system and urinary system to send more blood to those systems to increase the process of digestion. To do this, it must inhibit the cardiovascular system and respiratory system to optimise blood flow to the digestive tract, causing low heart and respiratory rates. The parasympathetic nervous system plays no role in the acute stress response. Studies have shown that patients with acute stress disorder have overactive right amygdalae and prefrontal cortices; both structures are involved in the fear-processing pathway.
Treatment:
This disorder may resolve itself with time or may develop into a more severe disorder, such as PTSD. However, the results of Creamer, O'Donnell, and Pattison's (2004) study of 363 patients suggest that a diagnosis of acute stress disorder has only limited predictive validity for PTSD. Creamer et al. found that re-experiencing of the traumatic event and arousal were better predictors of PTSD. Early pharmacotherapy may prevent the development of post-traumatic symptoms. Additionally, early trauma-focused cognitive behavioural therapy (TF-CBT) for those with a diagnosis of ASD can protect an individual from developing chronic PTSD. Studies have been conducted to assess the efficacy of counselling and psychotherapy for people with acute stress disorder. Cognitive behavioural therapy, which includes exposure and cognitive restructuring, was found to be effective in preventing PTSD in patients diagnosed with acute stress disorder, with clinically significant results at six-month follow-up appointments. A combination of relaxation, cognitive restructuring, imaginal exposure, and in-vivo exposure was superior to supportive counselling. Mindfulness-based stress reduction programmes also appear to be effective for stress management. The pharmacological approach has made some progress in lessening the effects of ASD. To relax patients and allow for better sleep, prazosin can be given, which regulates their sympathetic response. Hydrocortisone has shown some success as an early preventative measure following a traumatic event, typically in the treatment of PTSD. In a wilderness context where counselling, psychotherapy, and cognitive behavioural therapy are unlikely to be available, the treatment for acute stress reaction is very similar to the treatment of cardiogenic shock, vascular shock, and hypovolemic shock: allowing the patient to lie down, providing reassurance, and removing the stimulus that prompted the reaction. In traditional shock cases, this generally means relieving injury pain or stopping blood loss. In an acute stress reaction, this may mean pulling a rescuer away from the emergency to calm down or blocking the sight of an injured friend from a patient.
History:
The term "acute stress disorder" (ASD) was first used to describe the symptoms of soldiers during World War I and II, and it was therefore also termed "combat stress reaction" (CSR). Approximately 20% of U.S. troops displayed symptoms of CSR during WWII. It was assumed to be a temporary response of healthy individuals to witnessing or experiencing traumatic events. Symptoms include depression, anxiety, withdrawal, confusion, paranoia, and sympathetic hyperactivity.The American Psychiatric Association officially included ASD in the DSM-IV in 1994. Before that, symptomatic individuals within the first month of trauma were diagnosed with adjustment disorder. According to the DSM-IV, acute stress reaction refers to the symptoms experienced immediately to 48 hours after exposure to a traumatic event. In contrast, acute stress disorder is defined by symptoms experienced 48 hours to one month following the event. Symptoms experienced for longer than one month are consistent with a diagnosis of PTSD.Initially, being able to describe different ASRs was one of the goals of introducing ASD. Some criticisms surrounding ASD's focal point include issues with ASD recognising other distressing emotional reactions, like depression and shame. Emotional reactions similar to these may then be diagnosed as adjustment disorder under the current system of trying to diagnose ASD.Since its addition to the DSM-IV, questions about the efficacy and purpose of the ASD diagnosis have been raised. The diagnosis of ASD was criticized as an unnecessary addition to the progress of diagnosing PTSD, as some considered it more akin to a sign of PTSD than an independent issue requiring diagnosis. Also, the terms ASD and ASR have been criticized for not fully covering the range of stress reactions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hereford Hog**
Hereford Hog:
The Hereford Hog or Hereford is an American breed of domestic pig. It is named for its color and pattern, which is similar to that of the Hereford breed of cattle: red with a white face.: 195 It is of medium size, with a curly tail and lop ears.
History:
The first person to breed for the Hereford color pattern in pigs – and the first to describe it – was one R.U. Weber of LaPlata, Missouri.: 611 From about 1902 until 1925 a number of farmers in Nebraska and Iowa, among them John Schulte of Norway, Iowa, collaborated in the selection of pigs with this coloration.: 611 : 394 The principal breeds used were the Duroc and the Poland China; there may also have been some Chester White or Hampshire influence.: 394 They bred not only for color, but also for conformation and performance. In 1934 a selection of the hundred best animals from five herds was made, and became the foundation stock for the breed.: 611 A breed society, the National Hereford Hog Registry Association, was formed in that year under the sponsorship of the Polled Hereford Cattle Registry Association,: 611 : 394 and a herd-book was opened.: 198 The breed grew in numbers into the mid-twentieth century,: 198 particularly in Iowa, Illinois and Indiana, but from the 1960s, with the move of commercial pork operations to the modern system of three-way cross-breeding using American Yorkshire, Duroc and Hampshire, population numbers fell sharply.: 611 In 2013 the American Livestock Breeds Conservancy (now the Livestock Conservancy) estimated that fewer than 2000 breeding animals remained, and listed the conservation status of the breed as 'watch'; in 2022 its estimate of breed numbers was unchanged, but the conservation status of the breed had changed to 'recovering'. The population reported to DAD-IS for 1970 is 317, while that for 2011, the most recent year listed, is 2045; in 2022 the conservation status of the breed was shown as 'unknown'.
Characteristics:
It is a pig of medium size: mature sows weigh about 270 kg (600 lb) and boars about 360 kg (800 lb).: 611 The only allowable coat coloration is a deep red-brown covering at least two thirds of the body, with a pale face, ears, underbelly, and socks. The ears hang forwards over the face.: 394 : 197 It is hardy and docile, prolific and maternal, and suitable to either extensive or intensive management.
Use:
Hogs for market reach a slaughter weight of about 100 kg (220 lb) in five or six months.: 197
**NHydrate**
NHydrate:
nHydrate is an object-relational mapping (ORM) solution for the Microsoft .NET platform providing a framework for a relational database to be mapped to .NET objects. It is designed to alleviate the drudgery software developers experience writing persistence domains.
nHydrate is free as an open source project on GitHub.com under the MIT License.
nHydrate was originally created in 2003 as a private project to address shortcomings of the .NET Framework 1.1: using ADO.NET DataSets was cumbersome and error-prone, so a small generated framework was created to relieve developers of the CRUD work. As a private project it was later inspired by the work of the NHibernate group.
nHydrate was a private project from 2003 until 2009. It was used at various companies in the Atlanta, Georgia, area but never widely released. It was publicly released on September 10, 2009.
nHydrate is built on the .NET Framework 4.0.
From version 5.0 onward, the entire framework has been reworked to use only Entity Framework as its internal data access layer. The modeler is now visual, with a main diagram like other modeling products. All code interaction now goes through Entity Framework, so there is no learning curve for developers using the generated output.
Feature summary:
nHydrate's primary feature is mapping .NET objects to a Microsoft SQL Server database. The CRUD layer (create, read, update, delete) is also implemented, along with numerous retrieval mechanisms. nHydrate generates the SQL for all CRUD operations as well as advanced LINQ capabilities. The product is not database portable: the framework is designed to work exclusively with Microsoft SQL Server. There is an internal project to support MySQL, but this has not yet been publicly released.
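nHydrate's generated API is Entity Framework-based .NET code; purely as a language-neutral illustration of the object-to-SQL CRUD mapping described above, here is a minimal sketch in Python (the Customer class and insert helper are hypothetical examples, not part of nHydrate):

```python
import sqlite3
from dataclasses import dataclass, fields

# Hypothetical mapped entity; an ORM generator would emit one class per table.
@dataclass
class Customer:
    id: int
    name: str

def insert(conn, obj):
    # Generate the INSERT statement from the mapped class, as an ORM's CRUD layer does.
    cols = [f.name for f in fields(obj)]
    placeholders = ", ".join("?" for _ in cols)
    sql = f"INSERT INTO {type(obj).__name__} ({', '.join(cols)}) VALUES ({placeholders})"
    conn.execute(sql, tuple(getattr(obj, c) for c in cols))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (id INTEGER PRIMARY KEY, name TEXT)")
insert(conn, Customer(id=1, name="Ada"))
print(conn.execute("SELECT * FROM Customer").fetchall())  # [(1, 'Ada')]
```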
Feature summary:
The tool is sited entirely within Visual Studio .NET, and all model maintenance and generation is handled directly from the environment. There are no XML files or other complex configuration scenarios to navigate, unlike almost all other ORM tools. The VS.NET plugin GUI editor provides an interface for viewing and editing the visual model.
History:
nHydrate was started by Michael Knight, who was later joined by Chris Davis. By 2006, the platform had much of its current functionality, minus LINQ, and was being used in applications in the Atlanta area. By 2009, advanced functionality such as inheritance, LINQ, and VS.NET integration had been added.
**Microsoft Office Project Portfolio Server**
Microsoft Office Project Portfolio Server:
Microsoft Office Project Portfolio Server is a discontinued project portfolio management application from Microsoft. It was part of its Enterprise Project Management suite which includes Microsoft Office Project Server.
Versions for Windows:
2007 – Project Portfolio Server 2007
2006 – Project Portfolio Server 2006
Previous years – UMT Portfolio Manager
Description:
Microsoft Office Project Portfolio Server 2007 allows creation of a project portfolio, including workflows, hosted centrally, so that the information is available throughout the enterprise, even from a browser. It also aids in centralized aggregation of project planning and execution data, and in visualizing and analyzing the data to optimize the project plan. It can also support multiple portfolios per project, to track different aspects of it. It also includes reporting tools to create consolidated reports out of the project data. Office Project Portfolio Server was rolled up into Project Server 2010, which carries 64-bit requirements throughout the 2010 product family.
**Kapitza instability**
Kapitza instability:
In fluid dynamics, the Kapitza instability is an instability that occurs in fluid films flowing down walls. The instability is characterised by the formation of capillary waves on the free surface of the film. The instability is named after Pyotr Kapitsa, who described and analysed the instability in 1948. The free surface waves are known as roll waves.
Roll waves in granular materials:
A similar instability has been observed in granular flow, and this instability can be predicted using the Saint-Venant equations with appropriate modifications accounting for the frictional properties of granular flows.
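For reference, one standard one-dimensional form of the Saint-Venant (shallow-water) equations for a thin layer of depth h and depth-averaged velocity u flowing down a plane inclined at angle θ is the following, where g is gravity, ρ the density, and τ_b a basal shear-stress closure (the notation τ_b is ours; it is in this closure that the frictional modifications for granular flows enter):

$$\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0,$$
$$\frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\!\left(hu^2 + \tfrac{1}{2}\,g h^2 \cos\theta\right) = g h \sin\theta - \frac{\tau_b}{\rho}.$$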
**Mind uploading**
Mind uploading:
Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind. Substantial mainstream research in related areas is being conducted in neuroscience and computer science, including animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; however, they admit that others are, as yet, very speculative, while maintaining that they are still in the realm of engineering possibility.
Mind uploading:
Mind uploading may potentially be accomplished by either of two methods: copy-and-upload, or copy-and-delete by gradual replacement of neurons (which can be considered a gradual destructive uploading), until the original organic brain no longer exists and a computer program emulating the brain takes control of the body. In the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, and then storing and copying that information state into a computer system or another computational device. The biological brain may not survive the copying process or may be deliberately destroyed during it in some variants of uploading. The simulated mind could be within a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively, the simulated mind could reside in a computer inside, connected to, or remotely controlling a (not necessarily humanoid) robot or a biological or cybernetic body. Among some futurists and within part of the transhumanist movement, mind uploading is treated as an important proposed life extension or immortality technology (known as "digital immortality"). Some believe mind uploading is humanity's current best option for preserving the identity of the species, as opposed to cryonics. Other aims of mind uploading are to provide a permanent backup to our "mind-file", to enable interstellar space travel, and to give human culture a means of surviving a global disaster by making a functional copy of a human society in a computing device. Whole-brain emulation is discussed by some futurists as a "logical endpoint" of the topical computational neuroscience and neuroinformatics fields, both of which concern brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI (artificial general intelligence) and to at least weak superintelligence. Another approach is seed AI, which would not be based on existing brains. Computer-based intelligence such as an upload could think much faster than a biological human even if it were no more intelligent. A large-scale society of uploads might, according to futurists, give rise to a technological singularity, meaning a sudden decrease in the time constant of exponential technological development. Mind uploading is a central conceptual feature of numerous science fiction novels, films, and games.
Overview:
Many neuroscientists believe that the human mind is largely an emergent property of the information processing of its neuronal network. Neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum: Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.
Overview:
Eminent computer scientists and neuroscientists have predicted that advanced computers will be capable of thought and even attain consciousness, including Koch and Tononi, Douglas Hofstadter, Jeff Hawkins, Marvin Minsky, Randal A. Koene, and Rodolfo Llinás. Many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations. Using these models, some have estimated that uploading may become possible within decades if trends such as Moore's law continue.
Overview:
As of December 2022, this kind of technology is almost entirely theoretical. Scientists have yet to discover a way for computers to feel human emotions, and many assert that uploading consciousness is not possible.
Theoretical benefits and applications:
"Immortality" or backup In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby – from a purely mechanistic perspective – reducing or eliminating "mortality risk" of such information. This general proposal was discussed in 1971 by biogerontologist George M. Martin of the University of Washington.
Theoretical benefits and applications:
Space exploration An "uploaded astronaut" could be used instead of a "live" astronaut in human spaceflight, avoiding the perils of zero gravity, the vacuum of space, and cosmic radiation to the human body. It would allow for the use of smaller spacecraft, such as the proposed StarChip, and it would enable virtually unlimited interstellar travel distances.
Relevant technologies and techniques:
The focus of mind uploading, in the case of copy-and-transfer, is on data acquisition, rather than data maintenance of the brain. A set of approaches known as loosely coupled off-loading (LCOL) may be used in the attempt to characterize and copy the mental contents of a brain. The LCOL approach may take advantage of self-reports, life-logs and video recordings that can be analyzed by artificial intelligence. A bottom-up approach may instead focus on the specific resolution and morphology of neurons and on their spike times, that is, the times at which neurons produce action potential responses.
Relevant technologies and techniques:
Computational complexity Advocates of mind uploading point to Moore's law to support the notion that the necessary computing power is expected to become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious.
Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.
In 2004, Henry Markram, lead researcher of the Blue Brain Project, stated that "it is not [their] goal to build an intelligent neural network", based solely on the computational demands such a project would have.
It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today.
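As a purely illustrative back-of-envelope calculation (not taken from the sources above; every constant is an order-of-magnitude assumption), one can see how large the demands are even under a far coarser model than the molecular one Markram describes:

```python
# Order-of-magnitude sketch; all constants here are assumptions.
synapses = 1e15        # synapse count, consistent with the figure cited below
rate_hz = 10           # assumed average signalling events per second per synapse
flops_per_event = 10   # assumed arithmetic cost per synaptic event

flops = synapses * rate_hz * flops_per_event
print(f"{flops:.0e} FLOPS")  # ~1e17 FLOPS, before any molecular-level detail
```

Published estimates span many orders of magnitude beyond this, depending on the fidelity of the chosen model.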
Relevant technologies and techniques:
Five years later, after successful simulation of part of a rat brain, Markram was much more bold and optimistic. In 2009, as director of the Blue Brain Project, he claimed that "A detailed, functional artificial human brain can be built within the next 10 years". Less than two years into it, the project was recognized to be mismanaged and its claims overblown, and Markram was asked to step down. The required computational capacity depends strongly on the chosen scale of the simulation model: Scanning and mapping scale of an individual When modelling and simulating the brain of a specific individual, a brain map or connectivity database showing the connections between the neurons must be extracted from an anatomic model of the brain. For whole brain simulation, this network map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain, including synaptic details, is possible as of 2010. However, if short-term memory and working memory include prolonged or repeated firing of neurons, as well as intra-neural dynamic processes, the electrical and chemical signal state of the synapses and neurons may be hard to extract. The uploaded mind may then perceive a memory loss of the events and mental processes immediately before the time of brain scanning. A full brain map has been estimated to occupy less than 2 × 10¹⁶ bytes (20,000 TB); it would store the addresses of the connected neurons, the synapse type and the synapse "weight" for each of the brain's 10¹⁵ synapses. However, the biological complexities of true brain function (e.g. the epigenetic states of neurons, protein components with multiple functional states, etc.) may preclude an accurate prediction of the volume of binary data required to faithfully represent a functioning human mind.
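The storage figure above amounts to roughly 20 bytes of address, type, and weight data per synapse; a minimal sketch of the arithmetic (the per-synapse byte count is an inferred assumption, not a statement about any actual data format):

```python
synapses = 1e15          # synapse count cited above
bytes_per_synapse = 20   # assumed: neuron addresses + synapse type + weight

total_bytes = synapses * bytes_per_synapse
print(total_bytes / 1e12, "TB")  # 20000.0 TB, i.e. 2 x 10^16 bytes
```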
Relevant technologies and techniques:
Serial sectioning A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, which for frozen samples at nano-scale requires a cryo-ultramicrotome, thus capturing the structure of the neurons and their interconnections. The exposed surface of frozen nerve tissue would be scanned and recorded, and then the surface layer of tissue removed. While this would be a very slow and labor-intensive process, research is currently underway to automate the collection and microscopy of serial sections. The scans would then be analyzed, and a model of the neural net recreated in the system that the mind was being uploaded into.
Relevant technologies and techniques:
There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique. However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron's cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy. However, as the physiological genesis of 'mind' is not currently known, this method may not be able to access all of the necessary biochemical information to recreate a human brain with sufficient fidelity.
Relevant technologies and techniques:
Brain imaging It may be possible to create functional 3D maps of brain activity, using advanced neuroimaging technology, such as functional MRI (fMRI, for mapping change in blood flow), magnetoencephalography (MEG, for mapping of electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods. Today, fMRI is often combined with MEG for creating functional maps of human cortex during more complex cognitive tasks, as the methods complement each other. Even though current imaging technology lacks the spatial resolution needed to gather the information required for such a scan, important recent and future developments are predicted to substantially improve both spatial and temporal resolutions of existing technologies.
Relevant technologies and techniques:
Brain simulation There is ongoing work in the field of brain simulation, including partial and whole simulations of some animals. For example, the C. elegans roundworm, Drosophila fruit fly, and mouse have all been simulated to various degrees. The Blue Brain Project by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne, Switzerland is an attempt to create a synthetic brain by reverse-engineering mammalian brain circuitry.
Issues:
Philosophical issues Underlying the concept of "mind uploading" (more accurately "mind copying") is the broad philosophy that consciousness lies within the brain's information processing and is in essence an emergent feature that arises from large neural network high-level patterns of organization, and that the same patterns of organization can be realized in other processing devices. Mind uploading also relies on the idea that the human mind (the "self" and the long-term memory), just like non-human minds, is represented by the current neural network paths and the weights of the brain synapses rather than by a dualistic and mystic soul and spirit. The mind or "soul" can be defined as the information state of the brain, and is immaterial only in the same sense as the information content of a data file or the state of computer software currently residing in the working memory of the computer. Data specifying the information state of the neural network can be captured and copied as a "computer file" from the brain and re-implemented into a different physical form. This is not to deny that minds are richly adapted to their substrates. An analogy to the idea of mind uploading is to copy the temporary information state (the variable values) of a computer program from the computer memory to another computer and continue its execution. The other computer may perhaps have different hardware architecture but emulates the hardware of the first computer.
Issues:
These issues have a long history. In 1775, Thomas Reid wrote: "I would be glad to know... whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say that being will be me; or, if, two or three such beings should be formed out of my brain; whether they will all be me, and consequently one and the same intelligent being." A considerable portion of transhumanists and singularitarians place great hope in the belief that they may become immortal by creating one or many non-biological functional copies of their brains, thereby leaving their "biological shell". However, the philosopher and transhumanist Susan Schneider claims that at best, uploading would create a copy of the original person's mind. Schneider agrees that consciousness has a computational basis, but this does not mean we can upload and survive. According to her views, "uploading" would probably result in the death of the original person's brain, while only outside observers can maintain the illusion of the original person still being alive. For it is implausible to think that one's consciousness would leave one's brain and travel to a remote location; ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, etc.) are not simultaneously here and elsewhere. At best, a copy of the original mind is created. Research on the neural correlates of consciousness, a sub-branch of neuroscience, suggests that consciousness may be thought of as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system. Others have argued against such conclusions. For example, Buddhist transhumanist James Hughes has pointed out that this consideration only goes so far: if one believes the self is an illusion, worries about survival are not reasons to avoid uploading, and Keith Wiley has presented an argument wherein all resulting minds of an uploading procedure are granted equal primacy in their claim to the original identity, such that survival of the self is determined retroactively from a strictly subjective position. Some have also asserted that consciousness is a part of an extra-biological system that is yet to be discovered; therefore it cannot be fully understood under the present constraints of neurobiology. Without the transference of consciousness, true mind-upload or perpetual immortality cannot be practically achieved. Another potential consequence of mind uploading is that the decision to "upload" may then create a mindless symbol manipulator instead of a conscious mind (see philosophical zombie). Are we to assume that an upload is conscious if it displays behaviors that are highly indicative of consciousness? Are we to assume that an upload is conscious if it verbally insists that it is conscious? Could there be an absolute upper limit in processing speed above which consciousness cannot be sustained? The mystery of consciousness precludes a definitive answer to this question. Numerous scientists, including Kurzweil, strongly believe that the answer as to whether a separate entity is conscious (with 100% confidence) is fundamentally unknowable, since consciousness is inherently subjective (see solipsism). Regardless, some scientists strongly believe consciousness is the consequence of computational processes which are substrate-neutral.
On the contrary, numerous scientists believe consciousness may be the result of some form of quantum computation dependent on substrate (see quantum mind).
Issues:
In light of uncertainty on whether to regard uploads as conscious, Sandberg proposes a cautious approach: Principle of assuming the most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.
Issues:
Ethical and legal implications The process of developing emulation technology raises ethical issues related to animal welfare and artificial consciousness. The neuroscience required to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals before moving on to humans. Sometimes the animals would just need to be euthanized in order to extract, slice, and scan their brains, but sometimes behavioral and in vivo measures would be required, which might cause pain to living animals. In addition, the resulting animal emulations themselves might suffer, depending on one's views about consciousness. Bancroft argues for the plausibility of consciousness in brain simulations on the basis of the "fading qualia" thought experiment of David Chalmers. He then concludes: "If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations." Chalmers himself has argued that such virtual realities would be genuine realities. However, if mind uploading occurs and the uploads are not conscious, there may be a significant opportunity cost. In the book Superintelligence, Nick Bostrom expresses concern that we could build a "Disneyland without children." It might help reduce emulation suffering to develop virtual equivalents of anaesthesia, as well as to omit processing related to pain and/or consciousness. However, some experiments might require a fully functioning and suffering animal emulation. Animals might also suffer by accident due to flaws and lack of insight into what parts of their brains are suffering. Questions also arise regarding the moral status of partial brain emulations, as well as creating neuromorphic emulations that draw inspiration from biological brains but are built somewhat differently. Brain emulations could be erased by computer viruses or malware, without need to destroy the underlying hardware. This may make assassination easier than for physical humans. The attacker might take the computing power for its own use. Many questions arise regarding the legal personhood of emulations. Would they be given the rights of biological humans? If a person makes an emulated copy of themselves and then dies, does the emulation inherit their property and official positions? Could the emulation ask to "pull the plug" when its biological version was terminally ill or in a coma? Would it help to treat emulations as adolescents for a few years so that the biological creator would maintain temporary control? Would criminal emulations receive the death penalty, or would they be given forced data modification as a form of "rehabilitation"? Could an upload have marriage and child-care rights? If simulated minds become reality and are assigned rights of their own, it may be difficult to ensure the protection of "digital human rights". For example, social science researchers might be tempted to secretly expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same minds are exposed (serially or simultaneously) to different test conditions. Research led by cognitive scientist Michael Laakasuo has shown that attitudes towards mind uploading are predicted by an individual's belief in an afterlife; the existence of mind uploading technology may threaten religious and spiritual notions of immortality and divinity.
Issues:
Political and economic implications Emulations might be preceded by a technological arms race driven by first-strike advantages. Their emergence and existence may lead to an increased risk of war, involving inequality, power struggles, strong loyalty and willingness to die among emulations, and new forms of racism, xenophobia, and religious prejudice. If emulations run much faster than humans, there might not be enough time for human leaders to make wise decisions or negotiate. It is possible that humans would react violently against the growing power of emulations, especially if that depresses human wages. Emulations may not trust each other, and even well-intentioned defensive measures might be interpreted as offense. The book The Age of Em by Robin Hanson poses many hypotheses on the nature of a society of mind uploads, including that the most common minds would be copies of adults with personalities conducive to long hours of productive specialized work.
Issues:
Emulation timelines and AI risk Kenneth D. Miller, a professor of neuroscience at Columbia and a co-director of the Center for Theoretical Neuroscience, raised doubts about the practicality of mind uploading. His major argument is that reconstructing neurons and their connections is in itself a formidable task, but it is far from being sufficient. Operation of the brain depends on the dynamics of electrical and biochemical signal exchange between neurons; therefore, capturing them in a single "frozen" state may prove insufficient. In addition, the nature of these signals may require modeling down to the molecular level and beyond. Therefore, while not rejecting the idea in principle, Miller believes that the complexity of the "absolute" duplication of an individual mind is insurmountable for the next several hundred years. There are very few feasible technologies that humans have refrained from developing. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, and logically their development will continue into the future. We may also have brain emulations for a brief but significant period on the way to non-emulation-based human-level AI. Assuming that emulation technology will arrive, a question becomes whether we should accelerate or slow its advance. Arguments for speeding up brain-emulation research: If neuroscience is the bottleneck on brain emulation rather than computing power, emulation advances may be more erratic and unpredictable, based on when new scientific discoveries happen. Limited computing power would mean the first emulations would run slower and so would be easier to adapt to, and there would be more time for the technology to transition through society.
Issues:
Improvements in manufacturing, 3D printing, and nanotechnology may accelerate hardware production, which could increase the "computing overhang" from excess hardware relative to neuroscience.
Issues:
If one AI-development group had a lead in emulation technology, it would have more subjective time to win an arms race to build the first superhuman AI. Because it would be less rushed, it would have more freedom to consider AI risks. Arguments for slowing down brain-emulation research: Greater investment in brain emulation and associated cognitive science might enhance the ability of artificial intelligence (AI) researchers to create "neuromorphic" (brain-inspired) algorithms, such as neural networks, reinforcement learning, and hierarchical perception. This could accelerate risks from uncontrolled AI. Participants at a 2011 AI workshop estimated an 85% probability that neuromorphic AI would arrive before brain emulation. This was based on the idea that brain emulation would require understanding some brain components, and it would be easier to tinker with these than to reconstruct the entire brain in its original form. By a very narrow margin, the participants on balance leaned toward the view that accelerating brain emulation would increase expected AI risk.
Issues:
Waiting might give society more time to think about the consequences of brain emulation and develop institutions to improve cooperation. Emulation research would also speed up neuroscience as a whole, which might accelerate medical advances, cognitive enhancement, lie detectors, and capability for psychological manipulation. Emulations might be easier to control than de novo AI because human abilities, behavioral tendencies, and vulnerabilities are more thoroughly understood, thus control measures might be more intuitive and easier to plan for.
Issues:
Emulations could more easily inherit human motivations.
Issues:
Emulations are harder to manipulate than de novo AI, because brains are messy and complicated; this could reduce risks of their rapid takeoff. Also, emulations may be bulkier and require more hardware than AI, which would also slow the speed of a transition. Unlike AI, an emulation wouldn't be able to rapidly expand beyond the size of a human brain. Emulations running at digital speeds would have less intelligence differential vis-à-vis AI and so might more easily control AI. As counterpoint to these considerations, Bostrom notes some downsides: Even if we better understand human behavior, the evolution of emulation behavior under self-improvement might be much less predictable than the evolution of safe de novo AI under self-improvement.
Issues:
Emulations may not inherit all human motivations. Perhaps they would inherit our darker motivations or would behave abnormally in the unfamiliar environment of cyberspace.
Issues:
Even if there's a slow takeoff toward emulations, there would still be a second transition to de novo AI later on. Two intelligence explosions may mean more total risk. Because of the postulated difficulties that a whole brain emulation-generated superintelligence would pose for the control problem, computer scientist Stuart J. Russell in his book Human Compatible rejects creating one, simply calling it "so obviously a bad idea".
Advocates:
Moravec (1979) describes and endorses mind uploading performed by a brain surgeon. Moravec (1988) uses a similar description and calls it "transmigration". Ray Kurzweil, director of engineering at Google, has long predicted that people will be able to "upload" their entire brains to computers and become "digitally immortal" by 2045. Kurzweil made this claim for many years, e.g. during his speech in 2013 at the Global Futures 2045 International Congress in New York, a congress that subscribes to a similar set of beliefs. Mind uploading has also been advocated by a number of researchers in neuroscience and artificial intelligence, such as the late Marvin Minsky. In 1993, Joe Strout created a small web site called the Mind Uploading Home Page, and began advocating the idea in cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites including MindUploading.org, run by Randal A. Koene, who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure which could eventually save countless lives.
Advocates:
Many transhumanists look forward to the development and deployment of mind uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore's law. Michio Kaku, in collaboration with the Science Channel, hosted a documentary, Sci Fi Science: Physics of the Impossible, based on his book Physics of the Impossible. Episode four, titled "How to Teleport", mentions that mind uploading via techniques such as quantum entanglement and whole brain emulation using an advanced MRI machine may enable people to be transported vast distances at near light-speed.
Advocates:
The book Beyond Humanity: CyberEvolution and Future Minds by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle's Wetwares: Experiments in PostVital Living deals extensively with uploading from the perspective of distributed embodiment, arguing for example that humans are currently part of the "artificial life phenotype". Doyle's vision reverses the polarity on uploading, with artificial life forms such as uploads actively seeking out biological embodiment as part of their reproductive strategy.
**Solid Documents**
Solid Documents:
Solid Documents is a global productivity software company which creates document reconstruction and archival resources for businesses and individual consumers. Most notably, the same technology used by its Solid Framework SDK is licensed by Adobe for Acrobat X.
History:
Established in Redmond, Washington in 2001, Solid Documents, LLC was founded by CEO Michael Cartwright with the goal of becoming the leader in document productivity software. Cartwright leveraged his extensive software development, document management, internationalization, and localization experience to establish an organization with the vision of the Portable Document Format (PDF) as the universally accepted, standardized format for document interchange. Partnering with the PDF Association in 2008, Solid Documents assisted in the creation of a series of standardized tests used to ensure compliance of PDF/A validators and converters with ISO 19005-1 archival standards. Shortly after these standardized compliance tests were created, the company began offering a free service to analyze PDF documents and provide feedback on whether or not they comply with ISO 19005-1 archival standards. In 2011, co-founders Tamara and Michael Cartwright relocated to the South Island of New Zealand, establishing a base of operations as Solid Documents Limited from management offices in Nelson, New Zealand.
Products:
Solid Converter PDF The company's flagship product, Solid Converter PDF, was released in 2003 and has been translated into 15 languages and distributed in over 60 countries. This product allows documents to be converted into and out of PDF and securely archived in accordance with ISO standards. The last major update to this product, version 10.1, was released in May 2021 and included improvements to reconstruction and to editing of output. Solid PDF to Word for Mac (renamed Solid Converter Mac in 2015) was released in April 2010, allowing Apple users to convert documents out of PDF into Word, Excel, PowerPoint, HTML, text, or iWork formats. An updated version 2 of the tool was released to Mac users in September 2013. Solid Converter Mac version 2.1 was updated in May 2021 to include all the improvements included in Solid Framework.
Products:
Solid PDF Tools In 2007, Solid Documents released Solid Converter PDF version 3.0 and launched Solid PDF Tools, a premium product which included scanning and archival features in addition to conversion. Both products were a departure from the previous dialog-box-focused user interface, offering a distinctly WYSIWYG user experience. Version 4.0, released in 2008, added language localization for French, Spanish, and Chinese users as well as conversion to and from additional formats like Excel, PowerPoint, and HTML. In December 2010 version 7.0 was released with continued product enhancements, including selective conversion and table and workflow improvements. Version 9.0, released in June 2014, increased the number of languages supported by the OCR technology and included a number of conversion and reconstruction improvements. Version 10.1 was released in May 2021 and includes all improvements that are in Solid Documents' Solid Framework SDK.
Products:
Solid Framework SDK In September 2010, Solid Documents entered into a strategic partnership with activePDF, Inc., licensing Solid Framework SDK to convert PDFs into a variety of editable formats, in addition to the PDF/A validation and conversion already existing in their activePDF DocConverter™ product. Later that year, in November 2010, Adobe Inc. licensed Solid Framework SDK for Adobe Acrobat X and has continued this partnership with its most recent product, Acrobat DC. By harnessing the Solid Framework PDF to Word, PowerPoint, and Excel conversion capabilities, Acrobat users are able to reliably repurpose PDF content. In June 2014 an update to version 9.0 was released which provided enhanced flexibility for application development with the SDK. Version 10 of Solid Framework SDK was released in October 2018. The Solid Framework SDK is updated with reconstruction improvements on a continual basis. Details can be found at https://solidframework.net/release-notes/
**220 (number)**
220 (number):
220 (two hundred [and] twenty) is the natural number following 219 and preceding 221.
In mathematics:
It is a composite number, with its proper divisors being 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110. These sum to 284, whose proper divisors in turn sum to 220, making 220 and 284 an amicable pair. Every number up to 220 may be expressed as a sum of its divisors, making 220 a practical number. It is the sum of four consecutive primes (47 + 53 + 59 + 61). It is the smallest even number with the property that when represented as a sum of two prime numbers (per Goldbach's conjecture), both of the primes must be greater than or equal to 23.
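These properties are small enough to verify directly; a minimal Python check (the practical-number test brute-forces subset sums, which is feasible at this size):

```python
from itertools import combinations

def proper_divisor_sum(n):
    return sum(d for d in range(1, n) if n % d == 0)

# Amicable pair: each number equals the sum of the other's proper divisors.
assert proper_divisor_sum(220) == 284 and proper_divisor_sum(284) == 220

# Sum of four consecutive primes.
assert 47 + 53 + 59 + 61 == 220

# Practical number: every integer from 1 to 220 is a sum of distinct divisors of 220.
divisors = [d for d in range(1, 221) if 220 % d == 0]
sums = {sum(c) for r in range(len(divisors) + 1) for c in combinations(divisors, r)}
assert all(m in sums for m in range(1, 221))
print("all checks passed")
```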
In mathematics:
There are exactly 220 different ways of partitioning 64 = 8² into a sum of square numbers. It is a tetrahedral number, the sum of the first ten triangular numbers, and a dodecahedral number. If all of the diagonals of a regular decagon are drawn, the resulting figure will have exactly 220 regions. It is the sum of the sums of the divisors of the first 16 positive integers.
**Glossary of Riemannian and metric geometry**
Glossary of Riemannian and metric geometry:
This is a glossary of some terms used in Riemannian geometry and metric geometry — it doesn't cover the terminology of differential topology.
The following articles may also be useful; they either contain specialised vocabulary or provide more detailed expositions of the definitions given below.
Connection, Curvature, Metric space, Riemannian manifold.
See also: Glossary of general topology, Glossary of differential geometry and topology, List of differential geometry topics.
Unless stated otherwise, letters X, Y, Z below denote metric spaces, M, N denote Riemannian manifolds, and |xy| or $|xy|_X$ denotes the distance between points x and y in X. An italicized word denotes a self-reference to this glossary.
A caveat: many terms in Riemannian and metric geometry, such as convex function, convex set and others, do not have exactly the same meaning as in general mathematical usage.
A:
Alexandrov space a generalization of Riemannian manifolds with upper, lower or integral curvature bounds (the last one works only in dimension 2)
Almost flat manifold
Arc-wise isometry the same as path isometry.
Autoparallel the same as totally geodesic
B:
Barycenter, see center of mass.
bi-Lipschitz map. A map $f\colon X\to Y$ is called bi-Lipschitz if there are positive constants c and C such that for any x and y in X
$$c\,|xy|_X \le |f(x)\,f(y)|_Y \le C\,|xy|_X.$$
Busemann function given a ray γ : [0, ∞) → X, the Busemann function is defined by
$$b_\gamma(p) = \lim_{t\to\infty}\bigl(|\gamma(t)\,p| - t\bigr).$$
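A standard worked example, recoverable directly from the definition: for the ray $\gamma(t) = tv$ in Euclidean space with $|v| = 1$,
$$b_\gamma(p) = \lim_{t\to\infty}\bigl(|p - tv| - t\bigr) = \lim_{t\to\infty}\frac{|p|^2 - 2t\langle p, v\rangle}{|p - tv| + t} = -\langle p, v\rangle,$$
so the level sets of $b_\gamma$ (the horospheres, see below) are the hyperplanes orthogonal to v.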
C:
Cartan–Hadamard theorem is the statement that a connected, simply connected complete Riemannian manifold with non-positive sectional curvature is diffeomorphic to Rn via the exponential map; for metric spaces, the statement that a connected, simply connected complete geodesic metric space with non-positive curvature in the sense of Alexandrov is a (globally) CAT(0) space.
Cartan extended Einstein's General relativity to Einstein–Cartan theory, using Riemannian-Cartan geometry instead of Riemannian geometry. This extension provides affine torsion, which allows for non-symmetric curvature tensors and the incorporation of spin–orbit coupling.
Center of mass. A point q ∈ M is called the center of mass of the points $p_1, p_2, \dots, p_k$ if it is a point of global minimum of the function
$$f(x) = \sum_i |p_i x|^2.$$
Such a point is unique if all distances $|p_i p_j|$ are less than the radius of convexity.
Christoffel symbol
Collapsing manifold
Complete space
Completion
Conformal map is a map which preserves angles.
Conformally flat a manifold M is conformally flat if it is locally conformally equivalent to a Euclidean space; for example, the standard sphere is conformally flat.
Conjugate points two points p and q on a geodesic γ are called conjugate if there is a Jacobi field on γ which has a zero at p and q.
Convex function. A function f on a Riemannian manifold is convex if for any geodesic γ the function $f\circ\gamma$ is convex. A function f is called λ-convex if for any geodesic γ with natural parameter t, the function $f\circ\gamma(t) - \lambda t^2$ is convex.
Convex A subset K of a Riemannian manifold M is called convex if for any two points in K there is a shortest path connecting them which lies entirely in K, see also totally convex.
Cotangent bundle
Covariant derivative
Cut locus
D:
Diameter of a metric space is the supremum of distances between pairs of points.
Developable surface is a surface isometric to the plane.
Dilation of a map between metric spaces is the infimum of numbers L such that the given map is L-Lipschitz.
E:
Exponential map: Exponential map (Lie theory), Exponential map (Riemannian geometry)
F:
Finsler metric
First fundamental form for an embedding or immersion is the pullback of the metric tensor.
G:
Geodesic is a curve which locally minimizes distance.
Geodesic flow is a flow on a tangent bundle TM of a manifold M, generated by a vector field whose trajectories are of the form (γ(t),γ′(t)) where γ is a geodesic.
Gromov-Hausdorff convergence
Geodesic metric space is a metric space where any two points are the endpoints of a minimizing geodesic.
H:
Hadamard space is a complete simply connected space with nonpositive curvature.
Horosphere a level set of a Busemann function.
I:
Injectivity radius The injectivity radius at a point p of a Riemannian manifold is the largest radius for which the exponential map at p is a diffeomorphism. The injectivity radius of a Riemannian manifold is the infimum of the injectivity radii at all points. See also cut locus.
I:
For complete manifolds, if the injectivity radius at p is a finite number r, then either there is a geodesic of length 2r which starts and ends at p, or there is a point q conjugate to p (see conjugate point above) at distance r from p. For a closed Riemannian manifold the injectivity radius is either half the minimal length of a closed geodesic or the minimal distance between conjugate points on a geodesic.
I:
Infranilmanifold Given a simply connected nilpotent Lie group N acting on itself by left multiplication and a finite group of automorphisms F of N one can define an action of the semidirect product N⋊F on N. An orbit space of N by a discrete subgroup of N⋊F which acts freely on N is called an infranilmanifold. An infranilmanifold is finitely covered by a nilmanifold.
I:
Isometry is a map which preserves distances.
Intrinsic metric
J:
Jacobi field A Jacobi field is a vector field on a geodesic γ which can be obtained in the following way: take a smooth one-parameter family of geodesics $\gamma_\tau$ with $\gamma_0 = \gamma$; then the Jacobi field is described by
$$J(t) = \left.\frac{\partial \gamma_\tau(t)}{\partial \tau}\right|_{\tau=0}.$$
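A standard example: on the unit sphere, along a unit-speed great circle γ, normal Jacobi fields have the form
$$J(t) = (a\sin t + b\cos t)\,E(t),$$
where $E(t)$ is a parallel unit normal field along γ. Such fields solve the Jacobi equation $J'' + J = 0$ (the sectional curvature is 1), and the solution $\sin t\,E(t)$ vanishes at $t = 0$ and $t = \pi$, exhibiting antipodal points as conjugate points (see conjugate points above).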
Jordan curve
L:
Length metric the same as intrinsic metric.
Levi-Civita connection is a natural way to differentiate vector fields on Riemannian manifolds.
Lipschitz convergence the convergence defined by Lipschitz metric.
Lipschitz distance between metric spaces is the infimum of numbers r such that there is a bijective bi-Lipschitz map between these spaces with constants exp(-r), exp(r).
Lipschitz map
Logarithmic map is a right inverse of the exponential map.
M:
Mean curvature
Metric ball
Metric tensor
Minimal surface is a submanifold with (vector of) mean curvature zero.
N:
Natural parametrization is the parametrization by length.
Net. A subset S of a metric space X is called an ϵ-net if for any point in X there is a point in S at distance ≤ ϵ. This is distinct from topological nets, which generalize limits.
Nilmanifold: An element of the minimal set of manifolds which includes a point, and has the following property: any oriented S1-bundle over a nilmanifold is a nilmanifold. It can also be defined as a factor of a connected nilpotent Lie group by a lattice.
Normal bundle: associated to an embedding of a manifold M into an ambient Euclidean space RN, the normal bundle is a vector bundle whose fiber at each point p is the orthogonal complement (in RN) of the tangent space TpM.
Nonexpanding map: same as short map.
P:
Parallel transport
Polyhedral space: a simplicial complex with a metric such that each simplex with induced metric is isometric to a simplex in Euclidean space.
Principal curvatures are the maximum and minimum normal curvatures at a point on a surface.
Principal direction is the direction of the principal curvatures.
Path isometry
Proper metric space is a metric space in which every closed ball is compact; equivalently, one in which every closed bounded subset is compact. Every proper metric space is complete.
Q:
Quasigeodesic has two meanings; here we give the most common. A map f : I → Y (where I ⊆ R is a subsegment) is called a quasigeodesic if there are constants K ≥ 1 and C ≥ 0 such that for every x, y ∈ I: (1/K)·d(x,y) − C ≤ d(f(x), f(y)) ≤ K·d(x,y) + C.
Note that a quasigeodesic is not necessarily a continuous curve.
Quasi-isometry. A map f : X → Y is called a quasi-isometry if there are constants K ≥ 1 and C ≥ 0 such that (1/K)·d(x,y) − C ≤ d(f(x), f(y)) ≤ K·d(x,y) + C,
and every point in Y has distance at most C from some point of f(X).
Note that a quasi-isometry is not assumed to be continuous; for example, any map between compact metric spaces is a quasi-isometry. If there exists a quasi-isometry from X to Y, then X and Y are said to be quasi-isometric.
R:
Radius of metric space is the infimum of radii of metric balls which contain the space completely.
Radius of convexity at a point p of a Riemannian manifold is the largest radius of a ball which is a convex subset.
Ray is a one-sided infinite geodesic which is minimizing on each interval.
Riemann curvature tensor
Riemannian manifold
Riemannian submersion is a map between Riemannian manifolds which is simultaneously a submersion and a submetry.
S:
Second fundamental form is a quadratic form on the tangent space of a hypersurface, usually denoted by II; it is an equivalent way to describe the shape operator of a hypersurface: II(v,w) = ⟨S(v), w⟩. It can also be generalized to arbitrary codimension, in which case it is a quadratic form with values in the normal space.
Shape operator for a hypersurface M is a linear operator on tangent spaces, Sp: TpM→TpM. If n is a unit normal field to M and v is a tangent vector then S(v)=±∇vn (there is no standard agreement whether to use + or − in the definition).
Short map is a distance non increasing map.
Smooth manifold
Sol manifold is a factor of a connected solvable Lie group by a lattice.
Submetry: a short map f between metric spaces is called a submetry if there exists R > 0 such that for any point x and radius r < R the image of the metric r-ball is an r-ball, i.e. f(Br(x)) = Br(f(x)).
Sub-Riemannian manifold
Systole. The k-systole of M, systk(M), is the minimal volume of a k-cycle nonhomologous to zero.
T:
Tangent bundle
Totally convex. A subset K of a Riemannian manifold M is called totally convex if for any two points in K any geodesic connecting them lies entirely in K, see also convex.
Totally geodesic submanifold is a submanifold such that all geodesics in the submanifold are also geodesics of the surrounding manifold.
U:
Uniquely geodesic metric space is a metric space where any two points are the endpoints of a unique minimizing geodesic.
W:
Word metric on a group is a metric of the Cayley graph constructed using a set of generators. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Testosterone stearate**
Testosterone stearate:
Testosterone stearate, also known as testosterone octadecanoate, testosterone 17β-stearate, and androst-4-en-17β-ol-3-one 17β-stearate, is an injected anabolic-androgenic steroid (AAS) and an androgen ester – specifically, the C17β stearate (octadecanoate) ester of testosterone – which was never marketed. It is a prodrug of testosterone and, when administered via intramuscular injection, is associated with a long-lasting depot effect and extended duration of action. Testosterone stearate may occur naturally in the body. It has been said that with longer-chain esters of testosterone like testosterone stearate, the duration of action may be so protracted that the magnitude of effect with typical doses may be too low to be appreciable.
**Cutoff voltage**
Cutoff voltage:
In electronics, the cut-off voltage is the voltage at which a battery is considered fully discharged, beyond which further discharge could cause harm. Some electronic devices, such as cell phones, will automatically shut down when the cut-off voltage has been reached.
Batteries:
In batteries, the cut-off (final) voltage is the prescribed lower-limit voltage at which battery discharge is considered complete. The cut-off voltage is usually chosen so that the maximum useful capacity of the battery is achieved. The cut-off voltage differs from one battery to another and is highly dependent on the type of battery and the kind of service in which the battery is used. When testing the capacity of a NiMH or NiCd battery, a cut-off voltage of 1.0 V per cell is normally used, whereas 0.9 V is normally used as the cut-off voltage of an alkaline cell. Devices with too high a cut-off voltage may stop operating while the battery still has significant capacity remaining.
Batteries:
Voltage cut-off in portable electronics
Some portable equipment does not fully use the low-end voltage spectrum of a battery: power to the equipment cuts off while a relatively large portion of the battery's capacity remains unused.
A high cut-off voltage is more widespread than perhaps assumed. For example, a certain brand of mobile phone powered by a single-cell lithium-ion battery cuts off at 3.3 V. A Li-ion cell can be discharged to 3 V and lower; however, with a discharge to 3.3 V (at room temperature), about 92–98% of the capacity is used.
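As an illustration of the shutdown behaviour described above, here is a minimal sketch of how a device might enforce a cut-off voltage in firmware. It is an assumption-based example, not taken from any particular product; the threshold, hysteresis margin, and function name are hypothetical.

```python
# Hypothetical low-battery cut-off check (illustrative only).
CUTOFF_V = 3.3       # per-cell cut-off for a single-cell Li-ion device
HYSTERESIS_V = 0.05  # margin so voltage sag under load does not toggle power

def should_shut_down(cell_voltage: float, already_low: bool) -> bool:
    """Return True once the cell voltage falls below the cut-off.

    Once the low state is latched, the voltage must recover above the
    cut-off plus a hysteresis margin before the device is considered OK.
    """
    if already_low:
        return cell_voltage < CUTOFF_V + HYSTERESIS_V
    return cell_voltage < CUTOFF_V

# Example: 3.28 V triggers shutdown; 3.32 V while latched still shuts down.
print(should_shut_down(3.28, False), should_shut_down(3.32, True))
```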
Batteries:
Importantly, particularly in the case of the lithium-ion batteries used in the vast majority of portable electronics today, a voltage cut-off below 3.2 V can lead to chemical instability in the cell, reducing battery lifetime. For this reason, electronics manufacturers tend to use higher cut-off voltages, removing the need for consumers to buy battery replacements before other failure mechanisms in a device take effect.
**Diving rebreather**
Diving rebreather:
A diving rebreather is an underwater breathing apparatus that absorbs the carbon dioxide of a diver's exhaled breath to permit the rebreathing (recycling) of the substantially unused oxygen content, and unused inert content when present, of each breath. Oxygen is added to replenish the amount metabolised by the diver. This differs from open-circuit breathing apparatus, where the exhaled gas is discharged directly into the environment. The purpose is to extend the breathing endurance of a limited gas supply, and, for covert military use by frogmen or observation of underwater life, to eliminate the bubbles produced by an open circuit system. A diving rebreather is generally understood to be a portable unit carried by the user, and is therefore a type of self-contained underwater breathing apparatus (scuba). A semi-closed rebreather carried by the diver may also be known as a gas extender. The same technology on a submersible or surface installation is more likely to be referred to as a life-support system.
Diving rebreather:
Diving rebreather technology may be used where breathing gas supply is limited, or where the breathing gas is specially enriched or contains expensive components, such as helium diluent. Diving rebreathers have applications for primary and emergency gas supply. Similar technology is used in life-support systems in submarines, submersibles, underwater and surface saturation habitats, and in gas reclaim systems used to recover the large volumes of helium used in saturation diving. The recycling of breathing gas comes at the cost of technological complexity and additional hazards, which depend on the specific application and type of rebreather used. Mass and bulk may be greater or less than equivalent open circuit scuba depending on circumstances. Electronically controlled diving rebreathers may automatically maintain a partial pressure of oxygen between programmable upper and lower limits, or set points, and be integrated with decompression computers to monitor the decompression status of the diver and record the dive profile.
Applications:
Diving rebreathers are generally used for scuba applications, where the amount of breathing gas carried by the diver is limited, but are also occasionally used as gas extenders for surface-supplied diving and as bailout systems for scuba or surface-supplied diving. Gas reclaim systems used for deep heliox diving use similar technology to rebreathers, as do saturation diving life support systems, but in these applications the gas recycling equipment is not carried by the diver. Atmospheric diving suits also carry rebreather technology to recycle breathing gas as part of the life-support system.
Applications:
Rebreathers are usually more complex to use than open circuit scuba, and have more potential points of failure, so acceptably safe use requires a greater level of skill, attention and situational awareness, which is usually derived from understanding the systems, diligent maintenance and overlearning the practical skills of operation and fault recovery. Fault tolerant design can make a rebreather less likely to fail in a way that immediately endangers the user, and reduces the task loading on the diver which in turn may lower the risk of operator error.
Applications:
Semi-closed rebreather technology is also used in diver carried surface supplied gas extenders, mainly to reduce helium use. Some units also function as an emergency gas supply using on-board bailout cylinders: The US Navy MK29 rebreather can extend the duration of the Flyaway Mixed Gas System diving operations by five times while retaining the original mixed-gas storage footprint on the support ship. The Soviet IDA-72 semi-closed rebreather has a scrubber endurance of 4 hours on surface supply, and bailout endurance at 200m of 40 minutes on on-board gas. The US Navy Mark V Mod 1 heliox mixed gas helmet has a scrubber canister mounted on the back of the helmet and an inlet gas injection system which recirculates the breathing gas through the scrubber to remove carbon dioxide and thereby conserve helium. The injector nozzle would blow 11 times the volume of the injected gas through the scrubber.
History:
The first attempts at making practical rebreathers were simple oxygen rebreathers, when advances in industrial metalworking made high-pressure gas storage cylinders possible. From 1878 on they were used for work in unbreathable atmospheres in industry and firefighting, at high altitude, for escape from submarines, and occasionally for swimming underwater; but the usual way to work underwater was in standard diving dress, breathing open-circuit surface-supplied air. The Italian Decima Flottiglia MAS, the first unit of combat frogmen, was founded in 1938 and went into action in 1940. WWII saw a great expansion of military-related use of rebreather diving. During and after WWII, needs arose in the armed forces to dive deeper than allowed by pure oxygen. That prompted, at least in Britain, the design of simple constant-flow "mixture rebreather" variants (using what is now called nitrox) of some of their diving oxygen rebreathers: the SCMBA from the SCBA (Swimmer Canoeist's Breathing Apparatus), and the CDMBA from the Siebe Gorman CDBA, made by adding an extra gas supply cylinder. Before a dive with such a set, the diver had to know the maximum or working depth of the dive and how fast his body used the oxygen supply, and from those calculate what to set the rebreather's gas flow rate to. During this long period before the modern age of automatic sport nitrox rebreathers, there were some sport oxygen diving clubs, mostly in the USA. Eventually the Cold War ended, and in 1989 the Communist Bloc collapsed; as a result the perceived risk of sabotage attacks by combat divers dwindled, Western armed forces had less reason to requisition civilian rebreather patents, and automatic and semi-automatic recreational diving rebreathers with ppO2 sensors started to appear.
General concept:
As a person breathes, the body consumes oxygen and produces carbon dioxide. Base metabolism requires about 0.25 L/min of oxygen from a breathing rate of about 6 L/min, and a fit person working hard may ventilate at a rate of 95 L/min but will only metabolise about 4 L/min of oxygen. The oxygen metabolised is generally about 4% to 5% of the inspired volume at normal atmospheric pressure, or about 20% of the available oxygen in the air at sea level. Exhaled air at sea level contains roughly 13.5% to 16% oxygen. The situation is even more wasteful of oxygen when the oxygen fraction of the breathing gas is higher, and in underwater diving, the compression of breathing gas due to depth makes the recirculation of exhaled gas even more desirable, as an even larger proportion of open-circuit gas is wasted. Continued rebreathing of the same gas will deplete the oxygen to a level which will no longer support consciousness, and eventually life, so gas containing oxygen must be added to the recycled breathing gas to maintain the required concentration of oxygen. However, if this is done without removing the carbon dioxide, it will rapidly build up in the recycled gas, resulting almost immediately in mild respiratory distress and developing rapidly into further stages of hypercapnia, or carbon dioxide toxicity. A high ventilation rate is usually necessary to eliminate the metabolic product carbon dioxide (CO2). The breathing reflex is triggered by carbon dioxide concentration in the blood, not by the oxygen concentration, so even a small buildup of carbon dioxide in the inhaled gas quickly becomes intolerable; if a person tries to directly rebreathe their exhaled breathing gas, they will soon feel an acute sense of suffocation, so rebreathers must chemically remove the carbon dioxide in a component known as a carbon dioxide scrubber. By adding sufficient oxygen to compensate for the metabolic usage, removing the carbon dioxide, and rebreathing the gas, most of the volume is conserved. There will still be minor losses when gas must be vented as it expands during ascent, and additional gas will be needed to make up volume as the gas is compressed during descent.
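To make the waste concrete, here is a small back-of-envelope sketch (added as an illustration; the ventilation and metabolic figures are representative values consistent with the ranges above, not measurements). Open-circuit consumption scales with ambient pressure, while metabolic oxygen use stays roughly constant with depth.

```python
# Compare surface-equivalent open-circuit gas use with metabolic O2 use.
def open_circuit_surface_lpm(ventilation_lpm: float, depth_m: float) -> float:
    """Surface-equivalent litres per minute consumed on open circuit."""
    ambient_bar = 1.0 + depth_m / 10.0  # roughly 1 bar per 10 m of seawater
    return ventilation_lpm * ambient_bar

VENTILATION = 30.0   # L/min at moderate work (representative assumption)
METABOLIC_O2 = 1.0   # L/min actually metabolised, nearly depth-independent

for depth in (0, 10, 30, 50):
    used = open_circuit_surface_lpm(VENTILATION, depth)
    print(f"{depth:>2} m: open circuit ~{used:5.0f} L/min, "
          f"metabolised ~{METABOLIC_O2} L/min O2")
```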
Design constraints:
The widest variety of rebreather types is used in diving, as the physical and physiological consequences of breathing under pressure complicate the requirements, and a large range of engineering options are available depending on the specific application and available budget. A diving rebreather is safety-critical life-support equipment – some modes of failure can kill the diver without warning, others can require immediate appropriate response for survival.
Design constraints:
General operational requirements include:
- waterproof and corrosion resistant construction
- reasonably close to neutrally buoyant after ballasting
- acceptably streamlined, to minimize added swimming resistance
- low work of breathing in all diver attitudes and over the full operating depth range
- the unit should not adversely affect the diver's trim and balance
- easy and quick release of harness and unaided removal of the unit from the diver
- accessibility of control and adjustment components
- unambiguous feedback to the diver of critical information
- no critical single-point failure modes – the user should be able to deal with any single reasonably foreseeable failure without outside help

Special applications may also require:
- low noise signal
- low emission of bubbles/small bubbles
- low electromagnetic signature
- rugged construction
- light weight in air
- minimal additional task-loading for normal operation

Oxygen rebreathers
As pure oxygen is toxic when inhaled at pressure, recreational diver certification agencies limit oxygen decompression to a maximum depth of 6 metres (20 ft), and this restriction has been extended to oxygen rebreathers; in the past they have been used deeper (up to 20 metres (66 ft)), but such dives were more risky than what is now considered acceptable. Oxygen rebreathers are also sometimes used when decompressing from a deep open-circuit dive, as breathing pure oxygen helps the nitrogen diffuse out of the body tissues more rapidly, and the use of a rebreather may be more convenient for long decompression stops.
Design constraints:
US Navy restrictions on oxygen rebreather use:
- Normal working limit: 25 feet (7.6 m) for 240 minutes (PO2 = 1.76 bar)
- Maximum working limit: 50 feet (15 m) for 10 minutes (PO2 = 2.5 bar)

Oxygen rebreathers are no longer commonly used in recreational diving because of the depth limit imposed by oxygen toxicity, but are extensively used for military attack swimmer applications where greater depth is not required, due to their simplicity, light weight and compact size.
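The depth limits quoted above follow from simple partial-pressure arithmetic. The sketch below shows the standard maximum-operating-depth calculation (a well-known diving formula, included here as an illustration; the PO2 limits chosen are common textbook values, not from this article):

```python
# Maximum operating depth (MOD) in metres of seawater, where ambient
# pressure rises by roughly 1 bar per 10 m: PO2 = FO2 * (1 + depth/10).
def mod_m(fo2: float, po2_limit_bar: float) -> float:
    return 10.0 * (po2_limit_bar / fo2 - 1.0)

print(f"Pure O2 at a 1.6 bar PO2 limit: {mod_m(1.00, 1.6):.0f} m")  # 6 m
print(f"Air (21% O2) at 1.4 bar: {mod_m(0.21, 1.4):.0f} m")         # ~57 m
```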
Design constraints:
Mixed gas rebreathers
Semi-closed circuit rebreathers (SCRs) used for diving may use active or passive gas addition, and the gas addition systems may be depth compensated. They use a mixed supply gas with a higher oxygen fraction than the steady-state loop gas mixture. Usually only one gas mixture is used, but it is possible to switch gas mixtures during a dive to extend the available depth range of some SCRs.

Operational scope and restrictions of SCRs:
Non-depth-compensated passive addition SCRs reduce the safe range of operating depths in inverse proportion to gas endurance extension. This can be compensated by gas switching, at the expense of complexity and an increased number of potential failure points.
Design constraints:
Constant mass flow SCRs provide a gas mixture whose composition varies with diver exertion. This also limits the safe operating depth range unless gas composition is monitored, again at the expense of increased complexity and additional potential failure points.
Demand-controlled active gas addition provides reliable gas mixtures throughout the potential operating depth range and does not require oxygen monitoring, but at the cost of more mechanical complexity.
Design constraints:
Depth-compensated passive addition provides a reliable gas mixture over the potential operating depth range, which is only slightly reduced from the open-circuit operational range for the gas in use, at the cost of more mechanical complexity. Closed circuit diving rebreathers may be manually or electronically controlled, and use both pure oxygen and a breathable mixed gas diluent.

Operational scope and restrictions of CCRs:
Closed circuit rebreathers are mainly restricted by physiological limitations on the diver, such as the maximum operating depth of the diluent mix while remaining breathable up to the surface, though this can be worked around by switching diluent. Work of breathing at depth can be a constraint, as there is a point where the breathing effort required to counter the metabolic carbon dioxide production rate exceeds the work capacity of the diver, after which hypercapnia increases and distress followed by loss of consciousness and death is inevitable. Work of breathing is affected by gas density, so use of a low-density helium-rich diluent can increase depth range at acceptable work of breathing for a given configuration. Work of breathing is also increased by turbulent flow, which is affected by flow velocity (Reynolds number). To some extent work of breathing can be reduced or limited by breathing circuit design, but there are physiological limits too, and the work of circulating the gas through the breathing loop and scrubber can be a large part of the total work of breathing.
Design constraints:
Recreational rebreathers
Some recreational diver certification agencies distinguish a class of rebreather which they deem suitable for recreational diving. These rebreathers are unsuitable for decompression diving, and when electronically controlled, will not allow the diver to do dives with obligatory decompression, thereby allowing an immediate ascent at any point of the planned dive without undue risk of developing symptomatic decompression sickness. This limitation reduces the necessity to carry offboard bailout gas, and the need for the skills to bail out with a staged decompression obligation. This class of rebreather diving provides an opportunity to sell training and certification which omits a large part of the more complex and difficult skills, and reduces the amount of equipment that the diver needs to carry. PADI criteria for "R" class rebreathers include electronic prompts for pre-dive checks, automatic setpoint control, status warnings, a heads-up display for warnings, a bailout valve, pre-packed scrubber canisters and a system for estimating scrubber duration. While these constraints do make the recreational class of rebreather inherently less hazardous, they do not reduce the risk to the same level as open circuit equipment for the same dive profile.
Design constraints:
Atmospheric diving suits
An atmospheric diving suit is a small one-man articulated submersible of roughly anthropomorphic form, with limb joints which allow articulation under external pressure while maintaining an internal pressure of one atmosphere. Breathing gas could be surface-supplied by umbilical, but the gas would then have to be exhausted back to the surface to maintain internal pressure below the external ambient pressure, which is possible but presents pressure-hull breach hazards if the umbilical hoses are damaged; alternatively it can be supplied from a rebreather system built into the suit. As there is a similar problem in venting excess gas, the simple and efficient solution is to make up oxygen as it is consumed and scrub out the carbon dioxide, with no change to the inert gas component, which simply recirculates. In effect, this is a simple closed circuit oxygen rebreather arrangement used as a life-support system. Since there is usually an adequate power supply for other services, powered circulation through the scrubber should not normally be an issue for normal service, and is more comfortable for the operator, as it keeps the face area clear and facilitates voice communication. As the internal pressure is maintained at one atmosphere, there is no risk of acute oxygen toxicity. Endurance depends on the scrubber capacity and oxygen supply. Circulation through the scrubber could be powered by the diver's breathing, and this is an option for an emergency backup rebreather, which may also be fitted to the suit. A breathing-driven system requires reduction of mechanical dead space by using a mouthpiece and counterlung to form a closed loop.
Architecture:
Essential components
Although there are several design variations of diving rebreather, all types have a gas-tight reservoir to contain the breathing gas at ambient pressure that the diver inhales from and exhales into. The breathing gas reservoir consists of several components connected together by water- and airtight joints. The diver breathes through a mouthpiece or a full-face diving mask with a shut-off valve, the dive/surface valve, which is closed when the diver is not breathing from the unit to prevent flooding if the set is in the water. This is connected to one or two breathing hoses ducting inhaled and exhaled gas between the diver and a counterlung or breathing bag, which expands to accommodate gas when it is not in the diver's lungs. The reservoir also includes a scrubber containing absorbent material to remove the carbon dioxide exhaled by the diver. There will also be at least one valve allowing addition of gas, such as oxygen, and often a diluent gas, from a gas storage container, into the reservoir. There may be valves allowing venting of gas, sensors to measure partial pressure of oxygen and possibly carbon dioxide, and a monitoring and control system. Critical components may be duplicated for engineering redundancy.
Architecture:
Breathing gas passage configuration
There are two basic gas passage configurations: the loop and the pendulum.
Architecture:
The loop configuration uses a one-directional circulation of the breathing gas which on exhalation leaves the mouthpiece, passes through a non-return valve into the exhalation hose, and then through the counterlung and scrubber, to return to the mouthpiece through the inhalation hose and another non-return valve when the diver inhales. The pendulum configuration uses a two-directional flow: exhaled gas flows from the mouthpiece through a single hose to the scrubber, into the counterlung, and on inhalation the gas is drawn back through the scrubber and the same hose back to the mouthpiece. The pendulum system is structurally simpler, but inherently contains a larger dead space of unscrubbed gas in the combined exhalation and inhalation tube, which is rebreathed. There are conflicting requirements for minimising the volume of dead space while minimising the flow resistance of the breathing passages.
Architecture:
Counterlung configuration
A pendulum rebreather only has one counterlung, on the far side of the scrubber from the single breathing hose. The diver blows exhaled gas through the scrubber, then sucks it back during inhalation. Gas flow rate through the scrubber is forced by the breathing rate of the diver.
Architecture:
A single counterlung in a loop rebreather can be either an exhalation or an inhalation counterlung. If it is an exhalation counterlung it is inflated on exhalation, but no gas flows through the scrubber until inhalation starts, at which point the diver sucks the gas through at a rate forced by the inhalation rate. If it is an inhalation counterlung, the diver must blow gas through the scrubber during exhalation, but inhales from the full inhalation counterlung, with no further flow through the scrubber. In both cases there is no buffer, so peak flow rates are relatively high and the peak flow resistance is concentrated in one half of the breathing cycle.
Architecture:
A twin counterlung rebreather has two breathing bags, so the exhaled gas inflates the exhalation counterlung while starting to pass through the scrubber and starting to inflate the inhalation counterlung. By the time the diver starts to inhale, the inhalation counterlung has built up a volume buffer, so there is less flow resistance as the gas continues to flow through the scrubber during inhalation, at a slower rate than if there were only one counterlung. This decreases the work of breathing, and also increases the dwell time of the gas in the scrubber, as gas flows through the scrubber during both exhalation and inhalation. Most mixed gas diving rebreathers use this arrangement.
Architecture:
General arrangement
Many rebreathers have their main components in a hard casing for support, protection and/or streamlining. This casing must be sufficiently vented and drained to let surrounding water or air in and out freely, to allow for volume changes as the counterlung inflates and deflates, and to prevent trapping large volumes of buoyant air as the diver submerges, and of water as the diver emerges into air. The components may be mounted on a frame or inside a casing to hold them together. Sometimes the structure of the scrubber canister forms part of the framework, particularly in side-mount configuration. The position of most parts is not critical to function, but the counterlungs must be positioned so that their centroid of volume is at a similar depth to the centroid of the diver's lungs at most times while underwater, and the breathing tubes to the mouthpiece should not encumber the diver more than necessary, and should allow free movement of the head as much as possible. Early oxygen rebreathers were often built without frame or casing, and relied on the harness and a strong counterlung to hold the components together.
Architecture:
The parts of a diving rebreather (counterlung, absorbent canister, gas cylinder(s), tubes and hoses linking them), can be arranged on the wearer's body in four basic ways, with the position of the counterlung having a major effect on work of breathing.
Architecture:
Back-mounted rebreathers
Back mount is common on the more bulky and heavier units. This is good for support of the weight out of the water, and keeps the front of the diver clear for working underwater. Back mount usually uses back or over-the-shoulder counterlungs, which have a centroid above the lungs in most common orientations of the diver, resulting in slight negative pressure breathing.
Architecture:
Chest-mounted rebreathers
Chest mount is fairly common for military oxygen rebreathers, which are usually relatively compact and light. It allows easy reach of the components underwater, and leaves the back free for other equipment for amphibious operations. The rebreather can be unclipped from a common harness without disturbing the load on the back. Front-mounted counterlungs have a centroid which is generally slightly below the lung centroid, and result in slight positive pressure breathing for most common orientations of the diver.
Architecture:
Side-mounted rebreathers
Sidemount allows a low profile to penetrate tight restrictions in cave and wreck diving, and is convenient for carrying a bailout rebreather. A sidemount rebreather as the main breathing apparatus can be mounted on one side of the diver's body and can be balanced weight-wise and hydrodynamically by a large bailout cylinder sidemounted on the other side. Sidemount rebreathers are sensitive to diver orientation, which can change hydrostatic work of breathing over a larger range than for back or chest mount, and the resistive work of breathing is also relatively large due to the long breathing hoses and multiple bends necessary to fit the components into a long narrow format. As of 2019, no sidemount rebreather had passed the CE test for work of breathing. Sidemount rebreathers may also be more susceptible to major loop flooding due to lack of a convenient exhalation counterlung position to form a water trap.
System variants:
Rebreathers can be primarily categorised as diving rebreathers, intended for hyperbaric use, and other rebreathers used at pressures from slightly more than normal atmospheric pressure at sea level to significantly lower ambient pressure at high altitudes and in space. Diving rebreathers must often deal with the complications of avoiding hyperbaric oxygen toxicity, while normobaric and hypobaric applications can use relatively simple oxygen rebreather technology, where there is no requirement to monitor oxygen partial pressure during use, provided the ambient pressure is sufficient.
System variants:
Oxygen rebreathers
This is the earliest type of rebreather and was commonly used by navies and for mine rescue from the early twentieth century. Oxygen rebreathers can be remarkably simple designs, and they were invented before open-circuit scuba. They only supply oxygen, so there is no requirement to control the gas mixture other than purging before use and removing the carbon dioxide.
System variants:
Oxygen feed options
In some rebreathers, e.g. the Siebe Gorman Salvus, the oxygen cylinder has two oxygen supply mechanisms in parallel. One is constant flow, and the other is a manual on-off valve called a bypass valve. Both feed into the counterlung. There is no necessity for a second stage, and the gas can be turned on and off at the cylinder valve.
System variants:
Others, such as the USN Mk25 UBA, are supplied automatically via a demand valve on the counterlung, which will add gas at any time that the counterlung is emptied and the diver continues to inhale. Oxygen can also be added manually by a button which activates the demand valve, equivalent to the purge button on an open-circuit demand valve. Some simple oxygen rebreathers had no automatic supply system, only the manual feed valve, and the diver had to operate the valve at intervals to refill the breathing bag as the volume of oxygen decreased below a comfortable level. This is task loading, but the diver cannot remain unaware of the need to top up. Control of the volume in the loop would also control buoyancy.
System variants:
Mixed gas rebreathers
All rebreathers other than oxygen rebreathers may be considered mixed gas rebreathers. These can be divided into semi-closed circuit, where the supply gas is a breathable mixture containing oxygen and inert diluents, usually nitrogen and helium, and which is replenished by adding more of the mixture as the oxygen is used up, sufficient to maintain a breathable partial pressure of oxygen in the loop, and closed circuit rebreathers, where two parallel gas supplies are used: the diluent, to provide the bulk of the gas, and which is recycled, and oxygen, which is metabolically expended.
System variants:
Semi-closed circuit rebreathers
These are almost exclusively used for underwater diving, as they are bulkier, heavier, and more complex than closed circuit oxygen rebreathers. Military and recreational divers use these because they provide better underwater duration than open circuit, have a deeper maximum operating depth than oxygen rebreathers and can be fairly simple and cheap. They do not rely on electronics for control of gas composition, but may use electronic monitoring for improved safety and more efficient decompression. An alternative term for this technology is "gas extender". Semi-closed circuit equipment generally supplies one breathing gas such as air, nitrox or trimix at a time. The gas is injected into the loop at a constant rate to replenish oxygen consumed from the loop by the diver. Excess gas must be constantly vented from the loop in small volumes to make space for fresh, oxygen-rich gas. As the oxygen in the vented gas cannot be separated from the inert gas, semi-closed circuit is wasteful of both oxygen and inert components. A gas mix which has a maximum operating depth that is safe for the depth of the dive being planned, and which will provide a breathable mixture at the surface, must be used, or it will be necessary to change mixtures during the dive. As the amount of oxygen required by the diver increases with work rate, the gas injection rate must be carefully chosen and controlled to prevent unconsciousness in the diver due to hypoxia. A higher gas addition rate reduces the likelihood of hypoxia but wastes more gas.
System variants:
Passive addition semi-closed circuit
This type of rebreather works on the principle of adding fresh gas to compensate for reduced volume in the breathing circuit. A portion of the respired gas is discharged, in some way proportional to oxygen consumption. Generally it is a fixed volumetric fraction of the respiratory flow, but more complex systems have been developed which exhaust a close approximation of a ratio to the surface respiratory flow rate. These are described as depth-compensated or partially depth-compensated systems. Gas addition is triggered by low counterlung volume, which activates a demand valve.
System variants:
The simple case of a fixed-ratio discharge can be achieved by concentric bellows counterlungs, where the exhaled gas expands both bellows; while the larger-volume outer bellows discharges back to the loop when the diver inhales the next breath, the inner bellows discharges its contents to the surroundings, using non-return valves to ensure a one-directional flow. The amount processed during each breath depends on the tidal volume of that breath.
System variants:
Towards the end of inhalation the bellows bottoms out and activates an addition valve, in much the way that a regulator diaphragm activates the demand valve, to make up the gas discharged by the inner bellows. This type of rebreather therefore tends to operate at a minimal volume.
System variants:
The fixed-ratio systems usually discharge between 10% (1/10) and 25% (1/4) of the volume of each breath overboard. As a result, gas endurance is from ten times to four times that of open circuit, and depends on breathing rate and depth in the same way as for open circuit. Oxygen fraction in the loop depends on the discharge ratio, and to a lesser extent on the breathing rate and work rate of the diver. As some gas is recycled after breathing, the oxygen fraction will always be lower than that of the make-up gas, but can closely approximate the make-up gas after a loop flush, so the gas is generally chosen to be breathable at maximum depth, which allows it to be used for open-circuit bailout. The loop gas oxygen fraction will increase with depth, as the mass rate of metabolic oxygen use remains almost constant with a change in depth. This is the opposite tendency of what is done in a closed circuit rebreather, where the oxygen partial pressure is controlled to be more or less the same within limits throughout the dive. The fixed-ratio system has been used in the DC55 and Halcyon RB80 rebreathers. Passive addition rebreathers with small discharge ratios may become hypoxic near the surface when moderate or low oxygen fraction supply gas is used, making it necessary to switch gases between deep and shallow diving.

The depth-compensating systems discharge a portion of the diver's tidal volume which varies in inverse proportion to the absolute pressure. At the surface they generally discharge between 20% (1/5) and 33% (1/3) of each breath, but that decreases with depth, to keep the oxygen fraction in the loop approximately constant and reduce gas consumption. A fully depth-compensated system will discharge a volume of gas inversely proportional to pressure, so that the volume discharged at 90 m depth (10 bar absolute pressure) will be 10% of the surface discharge. This system will provide an approximately fixed oxygen fraction regardless of depth, when used with the same make-up gas, because the effective mass discharge remains constant.
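The following sketch (an added illustration with representative numbers, not figures from any manual) contrasts the per-breath overboard discharge of a fixed-ratio system with a fully depth-compensated one, and shows why the compensated discharge at 90 m is 10% of its surface value:

```python
def ambient_bar(depth_m: float) -> float:
    return 1.0 + depth_m / 10.0  # ~1 bar per 10 m of seawater

def fixed_ratio_discharge(tidal_l: float, ratio: float) -> float:
    """Fixed volumetric fraction of each breath, independent of depth."""
    return tidal_l * ratio

def compensated_discharge(tidal_l: float, surface_ratio: float,
                          depth_m: float) -> float:
    """Discharge volume falls with absolute pressure, so the discharged
    mass per breath stays roughly constant at any depth."""
    return tidal_l * surface_ratio / ambient_bar(depth_m)

TIDAL = 1.5  # litres per breath (representative assumption)
for depth in (0, 30, 90):
    print(f"{depth:>2} m: fixed {fixed_ratio_discharge(TIDAL, 0.25):.3f} L, "
          f"compensated {compensated_discharge(TIDAL, 0.25, depth):.3f} L")
```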
System variants:
Partially depth compensating systems are part way between the fixed ratio and the depth compensating systems. They provide a high discharge ratio near the surface, but the discharge ratio is not fixed either as a proportion of respired volume or mass. Gas oxygen fraction is more difficult to calculate, but will be somewhere between the limiting values for fixed ratio and fully compensated systems. The Halcyon PVR-BASC uses a variable volume inner bellows system to compensate for depth.
System variants:
Active addition semi-closed circuit
An active addition system adds feed gas to the breathing circuit and excess gas is dumped to the environment. These rebreathers tend to operate near maximum volume.
System variants:
Constant mass flow gas addition
The most common system of active addition of make-up gas in semi-closed rebreathers is by use of a constant mass flow injector, also known as choked flow. This is easily achieved by using a sonic orifice, as, provided the pressure drop over the orifice is sufficient to ensure sonic flow, the mass flow for a specific gas will be independent of the downstream pressure. The mass flow through a sonic orifice is a function of the upstream pressure and the gas mixture, so the upstream pressure must remain constant over the working depth range of the rebreather to provide a reliably predictable mixture in the breathing circuit, and a modified regulator is used which is not affected by changes in ambient pressure. Gas addition is independent of oxygen use, and the gas fraction in the loop is strongly dependent on the exertion of the diver – it is possible to dangerously deplete the oxygen by excessive physical exertion.
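A sketch of the underlying physics (the standard ideal-gas choked-flow relation; the orifice size, supply pressure, discharge coefficient, and the use of air-like gas properties for nitrox are illustrative assumptions): once flow through the orifice is sonic, mass flow depends on upstream pressure and temperature only, not on the ambient pressure downstream.

```python
from math import pi, sqrt

def choked_mass_flow(p0_pa: float, t0_k: float, area_m2: float,
                     gamma: float, r_specific: float, cd: float = 0.8) -> float:
    """Mass flow (kg/s) of an ideal gas through a sonic (choked) orifice."""
    return (cd * area_m2 * p0_pa
            * sqrt(gamma / (r_specific * t0_k))
            * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

# Illustrative: a ~0.2 mm diameter orifice fed at 10 bar absolute upstream,
# gas properties approximated by air (gamma 1.4, R 287 J/(kg.K)).
area = pi * (0.1e-3) ** 2
flow = choked_mass_flow(10e5, 293.0, area, gamma=1.4, r_specific=287.0)
print(f"~{flow * 1000:.2f} g/s, independent of downstream (ambient) pressure")
```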
System variants:
Demand controlled gas addition
Only one model using this gas mixture control principle has been marketed: the Interspiro DCSC.
System variants:
The principle of operation is to add a mass of oxygen that is proportional to the volume of each breath. This approach is based on the assumption that the volumetric breathing rate of a diver is directly proportional to metabolic oxygen consumption, which experimental evidence indicates is close enough to work. The fresh gas addition is made by controlling the pressure in a dosage chamber proportional to the counterlung bellows volume. The dosage chamber is filled with fresh gas to a pressure proportional to bellows volume, with the highest pressure when the bellows is in the empty position. When the bellows fills during exhalation, the gas is released from the dosage chamber into the breathing circuit, proportional to the volume in the bellows during exhalation, and is fully released when the bellows is full. Excess gas is dumped to the environment through the overpressure valve after the bellows is full. The result is the addition of a mass of gas proportional to ventilation volume, and the oxygen fraction is stable over the normal range of exertion.
System variants:
The volume of the dosage chamber is matched to a specific supply gas mixture, and is changed when the gas is changed. The DCSC uses two standard mixtures of nitrox: 28% and 46%.
System variants:
Closed circuit mixed gas rebreathers
Military, photographic, and recreational divers use closed circuit rebreathers because they allow long dives and produce no bubbles. Closed circuit rebreathers supply two breathing gases to the loop: one is pure oxygen and the other is a diluent or diluting gas such as air, nitrox, heliox or trimix. A major function of the closed circuit rebreather is to control the oxygen partial pressure in the loop and to warn the diver if it becomes dangerously low or high. Too low a concentration of oxygen results in hypoxia, leading to unconsciousness and ultimately death. Too high a concentration of oxygen results in hyperoxia, leading to oxygen toxicity, a condition causing convulsions which can make the diver lose the mouthpiece when they occur underwater, and can lead to drowning. The monitoring system uses oxygen-sensitive electro-galvanic fuel cells to measure the partial pressure of oxygen in the loop. The partial pressure of oxygen in the loop can generally be controlled within reasonable tolerance of a fixed value. This set point is chosen to provide an acceptable risk of both long-term and acute oxygen toxicity, while minimizing the decompression requirements for the planned dive profile. The gas mixture is controlled by the diver in manually controlled closed circuit rebreathers. The diver can manually control the mixture by adding diluent gas or oxygen. Adding diluent can prevent the loop gas mixture becoming too oxygen rich, and adding oxygen is done to increase oxygen concentration.
System variants:
In fully automatic closed-circuit systems, an electronically controlled solenoid valve injects oxygen into the loop when the control system detects that the partial pressure of oxygen in the loop has fallen below the required level. Electronically controlled CCRs can be switched to manual control in the event of some control system failures. Addition of gas to compensate for compression during descent is usually done by an automatic diluent valve.
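A minimal control-loop sketch of the oxygen injection logic described above (an illustration only, not any manufacturer's algorithm; the setpoint value and cell-voting scheme are common practice, but the code and names are hypothetical):

```python
SETPOINT_BAR = 1.3  # a typical bottom setpoint for loop PO2

def voted_po2(cell_readings: list[float]) -> float:
    """Median of the (usually three) oxygen cell readings, so a single
    failed-high or failed-low cell is outvoted."""
    return sorted(cell_readings)[len(cell_readings) // 2]

def solenoid_should_fire(cell_readings: list[float]) -> bool:
    """Inject oxygen whenever the voted loop PO2 is below the setpoint."""
    return voted_po2(cell_readings) < SETPOINT_BAR

# Example: one cell reading high (possibly failed) is outvoted by the
# other two, so the controller still fires the solenoid.
print(solenoid_should_fire([1.21, 1.24, 1.80]))  # True; median is 1.24
```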
System variants:
Standard diving dress rebreathers
In 1912 the German firm Drägerwerk of Lübeck introduced a version of standard diving dress using a gas supply from an oxygen rebreather and no surface supply. The system used a copper diving helmet and standard heavy diving suit with a back-mounted set of cylinders and scrubber. The breathing gas was circulated by using an injector system in the loop powered by the added gas. This was developed further with the Modell 1915 "Bubikopf" helmet and the DM20 oxygen rebreather system for depths up to 20 m, and the DM40 mixed gas rebreather, which used an oxygen cylinder and an air cylinder for the gas supply, producing a nitrox mixture, for depths up to 40 m. The US Navy developed a variant of the Mark V system for heliox diving. These were successfully used during the rescue of the crew and salvage of the USS Squalus in 1939. The US Navy Mark V Mod 1 heliox mixed gas helmet is based on the standard Mark V helmet, with a scrubber canister mounted on the back of the helmet and an inlet gas injection system which recirculates the breathing gas through the scrubber to remove carbon dioxide and thereby conserve helium. The helium helmet uses the same breastplate as a standard Mark V, except that the locking mechanism is relocated to the front, there is no spitcock, there is an additional electrical connection for heated underwear, and on later versions a two- or three-stage exhaust valve was fitted to reduce the risk of flooding the scrubber. The gas supply at the diver was controlled by two valves. The "Hoke valve" controlled flow through the injector to the "aspirator", which circulated gas from the helmet through the scrubber, and the main control valve was used for bailout to open circuit, flushing the helmet, and for extra gas when working hard or descending. Flow rate of the injector nozzle was nominally 0.5 cubic foot per minute at 100 psi above ambient pressure, which would blow 11 times the volume of the injected gas through the scrubber. Both these systems were semi-closed and did not monitor partial pressures of oxygen. They both used an injector system to recirculate the breathing gas without increasing the work of breathing.
System variants:
Rebreathers using an absorbent that releases oxygen
There have been a few rebreather designs (e.g. the Oxylite) which had an absorbent canister filled with potassium superoxide, which gives off oxygen as it absorbs carbon dioxide: 4KO2 + 2CO2 = 2K2CO3 + 3O2. It had a very small oxygen cylinder to fill the loop at the start of the dive. This system is dangerous because of the explosively hot reaction that happens if water gets on the potassium superoxide. The Russian IDA71 military and naval rebreather was designed to be run in this mode or as an ordinary rebreather.
System variants:
Tests on the IDA71 at the United States Navy Experimental Diving Unit in Panama City, Florida showed that the IDA71 could give significantly longer dive time with superoxide in one of the canisters than without. This technology may be applied to both oxygen and mixed gas rebreathers, and can be used for diving and other applications.
System variants:
Rebreathers which use liquid oxygen
A liquid oxygen supply can be used for oxygen or mixed gas rebreathers. If used underwater, the liquid-oxygen container must be well insulated against heat transfer from the water. Industrial sets of this type may not be suitable for diving, and diving sets of this type may not be suitable for use out of water due to conflicting heat transfer requirements. The set's liquid oxygen tank must be filled immediately before use.
System variants:
Cryogenic rebreather
A cryogenic rebreather removes the carbon dioxide by freezing it out in a "snow box" by the low temperature produced as liquid oxygen evaporates to replace the oxygen used.
System variants:
A cryogenic rebreather prototype called the S-1000 was built by Sub-Marine Systems Corporation. It had a duration of 6 hours and a maximum dive depth of 200 metres (660 ft). Its ppO2 could be set to anything from 0.2 to 2 bars (3 to 30 psi) without electronics, by controlling the temperature of the liquid oxygen, thus controlling the equilibrium pressure of oxygen gas above the liquid. The diluent could be either nitrogen or helium depending on the depth of the dive. The partial pressure of oxygen was controlled by temperature, which was controlled by controlling the pressure at which liquid nitrogen was allowed to boil, which was controlled by an adjustable pressure relief valve. No control valves other than the nitrogen pressure relief valve were required. Low temperature was also used to freeze out up to 230 grams of carbon dioxide per hour from the loop, corresponding to an oxygen consumption of 2 litres per minute, as carbon dioxide will freeze out of the gaseous state at −43.3 °C or below. If oxygen was consumed faster due to a high workload, a regular scrubber was needed. No electronics were needed, as everything followed the setting of the nitrogen release pressure from the cooling unit, and the refrigeration by evaporation of liquid nitrogen maintained a steady temperature until the liquid nitrogen was exhausted. The loop gas flow was passed through a counterflow heat exchanger, which re-heated the gas returning to the diver by chilling the gas headed for the snow box (the cryogenic scrubber). The first prototype, the S-600G, was completed and shallow-water tested in October 1967. The S-1000 was announced in 1969, but the systems were never marketed. Cryogenic rebreathers were widely used in Soviet oceanography in the period 1980 to 1990.
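As a rough cross-check of the figures above (an added calculation, assuming a respiratory quotient near 1 so that CO2 output roughly equals O2 consumption by volume, and a molar volume of 22.4 L/mol):

$$\dot m_{\mathrm{CO_2}} \approx \frac{2\ \mathrm{L/min} \times 60\ \mathrm{min/h}}{22.4\ \mathrm{L/mol}} \times 44\ \mathrm{g/mol} \approx 236\ \mathrm{g/h},$$

which is consistent with the stated freeze-out capacity of 230 grams of carbon dioxide per hour.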
Components and subsystems:
Mouthpiece
The diver breathes from the rebreather circuit through a bite-grip mouthpiece or an oro-nasal mask which may be part of a full-face mask or diving helmet.
The mouthpiece is connected to the rest of the rebreather by breathing hoses. The mouthpiece of a diving rebreather will usually include a shutoff valve, and may incorporate a dive/surface valve or a bailout valve or both. On loop-configured rebreathers, the mouthpiece is usually the place where the non-return valves for the loop are fitted.
Components and subsystems:
Dive/surface valve
The dive/surface valve (DSV) is a valve on the mouthpiece which can switch between the loop and ambient surroundings. It is used to close the loop at the surface to allow the diver to breathe atmospheric air, and may also be used underwater to isolate the loop so that it will not flood if the mouthpiece is taken out of the mouth.
Components and subsystems:
Bailout valve
A dive/surface valve which can be switched to close the loop and simultaneously open a connection to an open circuit demand valve is known as a bailout valve, as its function is to switch over to open circuit bailout without having to remove the mouthpiece. This is an important safety device when carbon dioxide poisoning occurs.
Components and subsystems:
Breathing hoses
Flexible corrugated synthetic rubber hoses are used to connect the mouthpiece to the rest of the breathing circuit, as these allow free movement of the diver's head. These hoses are corrugated to allow greater flexibility while retaining a high resistance to collapse. The hoses are designed to provide low resistance to flow of the breathing gas. A single breathing hose is used for the pendulum (push-pull) configuration, and two hoses for a one-way loop configuration.
Components and subsystems:
Counterlungs
The counterlung is a part of the loop which is designed to change in volume by the same amount as the user's tidal volume when breathing. This lets the loop expand and contract when the user breathes, letting the total volume of gas in the lungs and the loop remain constant throughout the breathing cycle. The volume of the counterlung should allow for the maximum likely breath volume of a user, but does not generally need to match the vital capacity of all possible users. Underwater, the position of the counterlung – on the chest, over the shoulders, or on the back – has an effect on the hydrostatic work of breathing. This is due to the pressure difference between the counterlung and the diver's lung caused by the vertical distance between the two. Recreational, technical and many professional divers will spend most of their time underwater swimming face down and trimmed horizontally. Counterlungs should function well with low work of breathing in this position, and with the diver upright.
Components and subsystems:
Front mounted: When horizontal they are under greater hydrostatic pressure than the diver's lungs. Easier to inhale, harder to exhale.
Back mounted: When horizontal they are under less hydrostatic pressure than the diver's lungs. The amount varies, as some are closer to the back than others. Harder to inhale, easier to exhale.
Components and subsystems:
Over the shoulder: The hydrostatic pressure will vary depending on how much gas is in the counterlungs, and increases as the volume increases and the lowest part of the gas space moves downward. The resistive work of breathing often negates the gains of good positioning close to the lung centroid. The design of the counterlungs can also affect the swimming diver's streamlining due to location and shape of the counterlungs, if they are not in a casing.
Components and subsystems:
A rebreather which uses rubber counterlungs which are not in an enclosed casing should be sheltered from sunlight when not in use, to prevent the rubber from perishing due to ultraviolet light.
Concentric bellows counterlungs
Most passive addition semi-closed diving rebreathers control the gas mixture by removing a fixed volumetric proportion of the exhaled gas, and replacing it with fresh feed gas from a demand valve, which is triggered by low volume of the counterlung.
Components and subsystems:
This is done by using concentric bellows counterlungs – the counterlung is configured as a bellows with a rigid top and bottom, and has a flexible corrugated membrane forming the side walls. There is a second, smaller bellows inside, also connected to the rigid top and bottom surfaces of the counterlung, so that as the rigid surfaces move towards and away from each other, the volumes of the inner and outer bellows change in the same proportion.
Components and subsystems:
The exhaled gas expands the counterlungs, and some of it flows into the inner bellows. On inhalation, the diver only breathes from the outer counterlung – return flow from the inner bellows is blocked by a non-return valve. The inner bellows also connects to another non-return valve opening to the outside environment, and thus the gas from the inner bellows is dumped from the circuit in a fixed proportion of the volume of the inhaled breath. If the counterlung volume is reduced sufficiently for the rigid cover to activate the feed gas demand valve, gas will be added until the diver finishes that inhalation.
Components and subsystems:
Carbon dioxide scrubber
The exhaled gases are directed through the chemical scrubber, a canister full of a suitable carbon dioxide absorbent such as a form of soda lime, which removes the carbon dioxide from the gas mixture and leaves the oxygen and other gases available for re-breathing. Some of the absorbent chemicals are produced in granular format for diving applications, such as Atrasorb Dive, Sofnolime, Dragersorb, or Sodasorb. Other systems use a prepackaged Reactive Plastic Curtain (RPC) based cartridge: the term Reactive Plastic Curtain was originally used to describe Micropore's absorbent curtains for emergency submarine use by the US Navy, and more recently RPC has been used to refer to their Reactive Plastic Cartridges, which are claimed to provide better and more reliable performance than the same volume of granular absorbent material.

The carbon dioxide passing through the scrubber absorbent is removed when it reacts with the absorbent in the canister; this chemical reaction is exothermic. This reaction occurs along a "front" which is a region across the flow of gas through the soda lime in the canister. This front moves through the scrubber canister, from the gas input end to the gas output end, as the reaction consumes the active ingredients. This front would be a zone with a thickness depending on the grain size, reactivity, and gas flow velocity, because the carbon dioxide in the gas going through the canister needs time to reach the surface of a grain of absorbent, and then time to penetrate to the middle of each grain of absorbent as the outside of the grain becomes exhausted. Eventually gas with remaining carbon dioxide will reach the far end of the canister and "breakthrough" will occur. After this the carbon dioxide content of the scrubbed gas will tend to rise as the effectiveness of the scrubber falls, until it becomes noticeable to the user, then unbreathable. In rebreather diving, the typical effective endurance of the scrubber will be half an hour to several hours of breathing, depending on the granularity and composition of the absorbent, the ambient temperature, the size of the canister, the dwell time of the gas in the absorbent material, and the production of carbon dioxide by the diver.
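For a sense of scale, a back-of-envelope endurance estimate (an added illustration; the absorbent capacity and CO2 production figures are rough ballpark assumptions, not specifications for any product):

```python
def scrubber_endurance_min(absorbent_kg: float, capacity_l_per_kg: float,
                           co2_production_lpm: float) -> float:
    """Minutes until the nominal CO2 capacity of the charge is consumed."""
    return absorbent_kg * capacity_l_per_kg / co2_production_lpm

# Assumed: 2.5 kg soda lime charge, ~130 L CO2 absorbed per kg, and
# ~1.6 L/min CO2 produced at moderate work: roughly 200 minutes nominal.
print(f"{scrubber_endurance_min(2.5, 130.0, 1.6):.0f} min")
```

In practice the usable endurance is shorter than such a nominal figure, since breakthrough begins before the absorbent is fully consumed, and temperature and dwell time matter, as described above.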
Scrubber design and size Scrubber design and size is a compromise between bulk, cost of consumables, and work of breathing. Bulk affects the size of the unit and the amount of ballast weight needed, which affect the logistics of the dive. Work of breathing can be safety critical at greater depths, where it can become a significant part of the available work of the diver, and can be overwhelming when it reaches the diver's limit.
Gas venting – Overpressure valve and diffuser During ascent the gas in the breathing circuit will expand, and must have some way of escape before the pressure difference causes injury to the diver or damage to the loop. The simplest way to do this is for the diver to allow excess gas to escape around the mouthpiece or through the nose, but a simple overpressure valve is reliable and can be adjusted to control the permitted overpressure. The overpressure valve is typically mounted on the counterlung and in military diving rebreathers it may be fitted with a diffuser, which helps to conceal the diver's presence by masking the release of bubbles, by breaking them up to sizes which are less easily detected. A diffuser also reduces bubble noise.
Loop drainage Many rebreathers have "water traps" in the counterlungs or scrubber casing, to stop large volumes of water from entering the scrubber media if the diver removes the mouthpiece underwater without closing the valve, or if the diver's lips get slack and let water leak in. Some rebreathers have manual pumps to remove water from the water traps, and a few of the passive addition SCRs automatically pump water out along with the gas during the exhaust stroke of the bellows counterlung. Others use internal pressure to expel water through the manually overridden dump valve when it is in a low position.
Gas sources A rebreather must have a source of oxygen to replenish that which is consumed by the diver. Depending on the rebreather design variant, the oxygen source will either be pure or a breathing gas mixture which is almost always stored in a gas cylinder. In a few cases oxygen is supplied as liquid oxygen or from a chemical reaction.
Diluent gas Pure oxygen is not considered to be safe for recreational diving deeper than 6 meters, so closed circuit rebreathers for deeper use also have a cylinder of diluent gas. This diluent cylinder may be filled with compressed air or another diving gas mix such as nitrox, trimix, or heliox. The diluent reduces the percentage of oxygen breathed and increases the maximum operating depth of the rebreather. The diluent is not normally an oxygen-free gas, such as pure nitrogen or helium, and is breathable so it may be used in an emergency either to flush the loop with breathable gas of a known composition or as a bailout gas. Diluent gas is commonly referred to as diluent, dilutent, or just "dil" by divers.
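The 6 metre limit quoted above follows from the usual maximum operating depth (MOD) calculation. As a sketch, approximating ambient pressure as 1 bar at the surface plus 1 bar per 10 m of seawater:

```latex
P_{O_2} = F_{O_2}\,P_{\mathrm{ambient}},
\qquad
\mathrm{MOD} = \left(\frac{P_{O_2,\max}}{F_{O_2}} - 1\right)\times 10\ \mathrm{msw}
```

For pure oxygen ($F_{O_2}=1$) with a 1.6 bar limit this gives a MOD of 6 msw; a 40% nitrox with the same limit would give 30 msw.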
Gas addition valves Gas must be added to the breathing loop if the volume gets too small or if it is necessary to change the gas composition.
Automatic diluent valve (ADV) This has a similar function to an open circuit demand valve. It adds gas to the circuit if the volume in the circuit is too low. The mechanism is either operated by a dedicated diaphragm, as in a scuba second stage, or by the top of a bellows type counterlung reaching the bottom of its travel.
Manual addition Closed circuit rebreathers usually allow the diver to add gas manually. In oxygen rebreathers this is just oxygen, but mixed gas rebreathers usually have a separate manual addition valve for oxygen and diluent, as either might be required to correct the composition of the loop mixture, either as the standard operating method for manually controlled CCRs, or as a backup system on electronically controlled CCRs. The manual diluent addition is sometimes by a purge button on the ADV.
Constant mass flow Constant mass flow gas addition is used on active addition semi-closed rebreathers, where it is the normal method of addition at constant depth, and in many closed circuit rebreathers, where it is the primary method of oxygen addition, at a rate less than metabolically required by the diver at rest, and the rest is made up by the control system through a solenoid valve, or manually by the diver.
Constant mass flow is achieved by sonic flow through an orifice. The flow of a compressible fluid through an orifice is limited to the flow at sonic velocity in the orifice. This flow can be controlled by the upstream pressure and the orifice size and shape, but once the flow reaches the speed of sound in the orifice, any further reduction of downstream pressure has no influence on the flow rate. This requires a gas source at a fixed pressure, and it only works at depths where the ambient pressure is low enough to maintain sonic flow in the orifice.
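The standard compressible-flow result behind this behaviour: flow through the orifice is choked (sonic) whenever the downstream-to-upstream pressure ratio is below the critical ratio, and the choked mass flow then depends only on upstream conditions:

```latex
\frac{p_d}{p_u} \le \left(\frac{2}{\gamma+1}\right)^{\gamma/(\gamma-1)} \approx 0.528
\quad (\gamma = 1.4),
\qquad
\dot{m} = C_d\,A\,p_u\,\sqrt{\frac{\gamma}{R\,T_u}}\left(\frac{2}{\gamma+1}\right)^{\frac{\gamma+1}{2(\gamma-1)}}
```

With a regulated upstream pressure, the delivered mass flow therefore stays constant until the ambient (downstream) pressure at depth exceeds roughly half the supply pressure, after which the orifice is no longer choked and the flow falls off.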
Regulators which have their control components isolated from the ambient pressure are used to supply gas at a pressure independent of the depth.
Passive addition In passive addition semi-closed rebreathers, gas is usually added by a demand type valve actuated by the bellows counterlung when the bellows is empty. This is the same actuation condition as the automatic diluent valve of any rebreather, but the actual trigger mechanism is slightly different. A passive rebreather of this type does not need a separate ADV as the passive addition valve already serves this function.
Electronically controlled (solenoid valves) Electronically controlled closed circuit mixed gas rebreathers may have part of the oxygen feed provided by a constant mass flow orifice, but the fine control of partial pressure is done by solenoid operated valves actuated by the control circuits. Timed opening of the solenoid valve will be triggered when the oxygen partial pressure in the loop mix drops below the lower set-point.
If the constant mass flow orifice is compromised and does not deliver the correct flow, the control circuit will compensate by firing the solenoid valve more often.
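A minimal sketch of this control rule, with an illustrative set-point and firing time rather than any manufacturer's firmware values:

```python
# Minimal sketch of the timed-solenoid control rule described above.
# SETPOINT_LOW and FIRE_TIME_S are illustrative values, not real firmware.

SETPOINT_LOW = 1.25   # bar: ppO2 below this triggers an injection
FIRE_TIME_S = 1.0     # nominal timed opening of the oxygen solenoid

def solenoid_on_time(ppo2: float) -> float:
    """Seconds to open the O2 solenoid on this control cycle."""
    # A blocked constant-mass-flow orifice simply makes this branch fire
    # more often, as the loop ppO2 sags towards the set-point more quickly.
    return FIRE_TIME_S if ppo2 < SETPOINT_LOW else 0.0

print(solenoid_on_time(1.18))  # 1.0 -> inject oxygen
print(solenoid_on_time(1.31))  # 0.0 -> constant mass flow alone suffices
```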
Off-board gas On some technical diving rebreathers it is possible to connect an alternative gas supply into the rebreather, usually using a wet quick-connect system. This is usually a feature of bailout rebreathers and other side-mounted rebreathers, where the rebreather unit is intentionally kept as compact as possible, and the gas supply may be slung on the other side of the diver for convenience and balance. This facility also allows all of the gas carried by a diver to be potentially supplied via a rebreather.
Bailout gas Bailout gas and bailout procedure are closely linked. The procedure must be appropriate for the gas supply configuration. Initial bailout to open circuit is often the first step, even when a bailout rebreather is carried, as it is simple and robust, and some time is needed to get the bailout rebreather ready for use. Bailout gas supply must be sufficient for safe return to the surface from any point in the planned dive, including any required decompression, so it is not unusual for two bailout cylinders to be carried, and the diluent cylinder to be used as the first bailout to get to a depth where the other gas can be used. On a deep dive, or a long penetration, open circuit bailout can easily be heavier and more bulky than the rebreather, and for some dives a bailout rebreather is a more practical option.
Control of the breathing gas mix The fundamental requirements for the control of the gas mixture in the breathing circuit for any rebreather application are that the carbon dioxide is removed, and kept at a tolerable level, and that the partial pressure of oxygen is kept within safe limits. For rebreathers which are used at normobaric or hypobaric pressures, this only requires that there is sufficient oxygen, which is easily achieved in an oxygen rebreather. Hyperbaric applications, as in diving, also require that the maximum partial pressure of oxygen is limited, to avoid oxygen toxicity, which is technically a more complex process, and may require dilution of the oxygen with metabolically inert gas.
If not enough oxygen is added, the concentration of oxygen in the loop may be too low to support life. In humans, the urge to breathe is normally caused by a build-up of carbon dioxide in the blood, rather than lack of oxygen. Hypoxia can cause blackout with little or no warning, followed by death.
The method used for controlling the range of oxygen partial pressure in the breathing loop depends on the type of rebreather.
In an oxygen rebreather, once the loop has been thoroughly flushed, the mixture is effectively static at 100% oxygen, and the partial pressure is a function only of ambient pressure.
In a semi-closed rebreather the loop mix depends on a combination of factors:
the type of gas addition system and its setting, combined with the gas mixture in use, which control the rate at which oxygen is added;
work rate, and therefore the oxygen consumption rate, which controls the rate of oxygen depletion, and therefore the resulting oxygen fraction;
ambient pressure, as partial pressure varies in proportion to ambient pressure and oxygen fraction.
In manually controlled closed circuit rebreathers (MCCCR), also known as diver-controlled closed-circuit rebreathers (DCCCR), the diver monitors the loop mix using one or more oxygen sensors, and controls the gas mixture and volume in the loop by injecting the appropriate available gases and by venting the loop. In this application the diver needs to know the partial pressure of oxygen in the loop and correct it as it drifts away from the set point. A common method for increasing the time between corrections is to use a constant mass flow orifice, set to the diver's relaxed diving metabolic oxygen consumption rate, to add oxygen at a rate that is unlikely to increase the partial pressure at constant depth.
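For the semi-closed case, the factors listed above combine into a steady-state oxygen balance. For an active addition unit with a constant mass flow feed $Q_f$ of oxygen fraction $F_s$ and metabolic oxygen consumption $\dot V_{O_2}$, a commonly used approximation is:

```latex
F_{\mathrm{loop}} = \frac{Q_f\,F_s - \dot V_{O_2}}{Q_f - \dot V_{O_2}}
```

For example, a feed of 10 L/min of 40% nitrox with an oxygen consumption of 1.5 L/min gives a loop fraction of about 0.29, which drops further as the work rate rises; the loop fraction is always leaner than the feed gas.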
Most electronically controlled closed-circuit rebreathers (ECCCR) have electro-galvanic oxygen sensors and electronic control circuits, which monitor the ppO2, injecting more oxygen if necessary and issuing an audible, visual and/or vibratory warning to the diver if the ppO2 reaches dangerously high or low levels.
The volume in the loop is usually controlled by a pressure or volume triggered automatic diluent valve, and an overpressure relief valve. The automatic diluent valve works on the same principle as a demand valve to add diluent when the pressure in the loop is reduced below ambient pressure, such as during descent or if gas is lost from the loop. The set may also have a manual addition valve, sometimes called a bypass. In some early oxygen rebreathers the user had to manually open and close the valve to the oxygen cylinder to refill the counterlung each time the volume got low.
Instrumentation and displays Instrumentation may vary from the minimal depth, time and remaining gas pressure necessary for a closed circuit oxygen rebreather or semi-closed nitrox rebreather to redundant electronic controllers with multiple oxygen sensors, redundant integrated decompression computers, carbon dioxide monitoring sensors and a head-up display of warning and alarm lights with a sound and vibration alarm.
Alarms for malfunctions Alarms may be provided for a few malfunctions. The alarms are electronically controlled and may rely on input from a sensor and processing by the control circuitry.
These may include:
Failure of the control system.
Failure of one or more sensors.
Low partial pressure of oxygen in the loop.
High partial pressure of oxygen in the loop.
Gas other than pure oxygen in the oxygen supply system. (unusual)
High carbon dioxide levels in the loop. (unusual)
Impending scrubber breakthrough. (unusual)
Alarm displays may be:
Visible (digital screen displays, flashing LEDs)
Audible (buzzer or tone generator)
Tactile (vibrations)
Control panel displays (usually with digital readout of the value and status of the measured parameter, often with a blinking or flashing display)
Head-up displays (usually a colour coded LED display, sometimes providing more information by the rate of flashing)
If a rebreather alarm goes off there is a high probability that the gas mixture is deviating from the set mixture, and a high risk that the gas in the rebreather loop will soon be unsuitable to support consciousness. A good general response is to add diluent gas to the loop, as this is known to be breathable. This will also reduce carbon dioxide concentration if that is high.
Ascending while breathing off the loop without identifying the problem may increase risk of a hypoxia blackout.
If the partial pressure of oxygen is not known, the rebreather cannot be trusted to be breathable, and the diver should immediately bail out to open circuit to reduce the risk of losing consciousness without warning.
Work of breathing:
Work of breathing is the effort required to breathe. Part of the work of breathing is due to inherent physiological factors, part is due to the mechanics of the external breathing apparatus, and part is due to the characteristics of the breathing gas. A high work of breathing may result in carbon dioxide buildup in the diver, and reduces the diver's ability to produce useful physical effort. In extreme cases work of breathing may exceed the aerobic work capacity of the diver, with fatal consequences.
Work of breathing of a rebreather has two main components:
Resistive work of breathing is due to the flow restriction of the gas passages causing resistance to flow of the breathing gas, and exists in all applications where there is no externally powered ventilation.
Hydrostatic work of breathing is only applicable to diving applications, and is due to the difference in pressure between the lungs of the diver and the counterlungs of the rebreather. This pressure difference is generally due to a difference in hydrostatic pressure caused by a difference in depth between lung and counterlung, but can be modified by ballasting the moving side of a bellows counterlung.
Resistive work of breathing is the sum of all the restrictions to flow due to bends, corrugations, changes of flow direction, valve cracking pressures, flow through scrubber media, etc., and the resistance to flow of the gas, due to inertia and viscosity, which are influenced by density, which is a function of molecular weight and pressure. Rebreather design can limit the mechanical aspects of flow resistance, particularly by the design of the scrubber, counterlungs and breathing hoses. Diving rebreathers are affected by variations in work of breathing due to gas mixture choice and depth: helium content reduces work of breathing, and increased depth increases work of breathing. Work of breathing can also be increased by excessive wetness of the scrubber media, usually a consequence of a leak in the breathing loop, or by using a grain size of absorbent that is too small.
The semi-closed rebreather systems developed by Drägerwerk in the early 20th century as a scuba gas supply for Standard diving dress, using oxygen or nitrox, and the US Navy Mark V Heliox helmet developed in the 1930s for deep diving, circulated the breathing gas through the helmet and scrubber by using an injector system, in which the added gas entrained the loop gas and produced a stream of scrubbed gas past the diver inside the helmet. This eliminated external dead space and resistive work of breathing, but was not suitable for high breathing rates.
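The density effect can be illustrated with an ideal-gas estimate; the mixes and conditions below are illustrative:

```python
# Ideal-gas estimate of breathing gas density at depth, illustrating why
# helium-based mixes reduce work of breathing. Values are approximate.

R = 8.314          # J/(mol K), universal gas constant
T = 288.0          # K, about 15 degrees C
MOLAR_G = {"O2": 32.0, "N2": 28.0, "He": 4.0}   # g/mol

def density_kg_m3(mix: dict[str, float], depth_m: float) -> float:
    """Density of an ideal gas mix at depth in seawater (1 bar per 10 m)."""
    p_pa = (1.0 + depth_m / 10.0) * 1e5
    molar_kg = sum(f * MOLAR_G[g] for g, f in mix.items()) / 1000.0
    return p_pa * molar_kg / (R * T)

air = {"O2": 0.21, "N2": 0.79}
tmx = {"O2": 0.10, "He": 0.70, "N2": 0.20}      # trimix 10/70
print(density_kg_m3(air, 40))   # ~6.0 kg/m3 at 40 m
print(density_kg_m3(tmx, 40))   # ~2.4 kg/m3 at the same depth
```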
Safety:
There are safety issues specific to rebreather equipment, and these tend to be more severe in diving rebreathers. Methods of addressing these issues can be categorised as engineering and operational approaches. Development of engineering solutions to these issues is ongoing and has been relatively rapid, but depends on the affordable availability of suitable technology, and some of the engineering problems, such as reliability of oxygen partial pressure measurement, have been relatively intractable. Other problems, such as scrubber breakthrough monitoring and automated control of gas mixture have advanced considerably in the 21st century, but remain relatively expensive. Work of breathing is another issue that has room for improvement, and is a severe limitation on acceptable maximum depth of operation, as the circulation of gas through the scrubber is almost always powered by the lungs of the diver. Fault tolerant design can help with making failures survivable.
Hazards Some of the hazards are due to the way the equipment works, while others are related to the environment in which the equipment is used.
Hypoxia Hypoxia can occur in any rebreather which contains enough inert gas to allow breathing without triggering automatic gas addition.
In an oxygen rebreather this can occur if the loop is not sufficiently purged at the start of use. Purging should be done while breathing off the unit so that the inert gas from the user's lungs is also removed from the system.
Hyperoxia A dangerously high partial pressure of oxygen can occur in the breathing loop for several reasons: Descent below the maximum operating depth with an oxygen rebreather or a semi-closed rebreather.
Failure to correctly maintain the loop mixture within tolerance of the set point. This may be due to: Oxygen sensor malfunction: If the cell fails current limited, it will register a partial pressure lower than the actual value, and the control system may attempt to correct this by continuous injection of oxygen.
Voting logic error Where there are three or more oxygen cells in the system, the voting logic will assume that the two with the most similar outputs are correct. This may not be the case – there have been cases where two cells with almost identical history have failed in the same way at the same time, and the voting logic has dismissed the one remaining correctly functioning cell, with fatal consequences.
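A sketch of the basic voting scheme, with an illustrative agreement tolerance, which also makes the failure mode described above visible: two cells that drift together will outvote the single remaining good cell.

```python
# Sketch of simple three-cell voting logic as described above. The agreement
# tolerance is an assumed figure, not from any real control system.

def vote_ppo2(cells: list[float], tol: float = 0.1) -> float | None:
    """Average the closest-agreeing pair of cells, or None if no pair agrees."""
    pairs = [(abs(a - b), i, j)
             for i, a in enumerate(cells)
             for j, b in enumerate(cells) if i < j]
    diff, i, j = min(pairs)          # pick the pair with the smallest spread
    if diff > tol:
        return None                  # no quorum: treat the sensor set as failed
    return (cells[i] + cells[j]) / 2.0

print(round(vote_ppo2([1.21, 1.19, 0.72]), 2))  # 1.2; the outlier is dismissed
# If the 0.72 cell were the only healthy one, the same code would still
# dismiss it, which is exactly the hazard described in the text.
```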
Power supply malfunction.
Use of a diluent with too high an oxygen fraction for the planned depth in a CCR. In this case a diluent flush will not produce a breathable gas in the loop.
Carbon dioxide buildup Carbon dioxide buildup will occur if the scrubber medium is absent, badly packed, inadequate or exhausted. The normal human body is fairly sensitive to carbon dioxide partial pressure, and a buildup will be noticed by the user. However, there is not often much that can be done to rectify the problem except changing to another breathing gas supply until the scrubber can be repacked. Continued use of a rebreather with an ineffective scrubber is not possible for very long, as the levels will become toxic and the user will experience extreme respiratory distress, followed by loss of consciousness and death. The rate at which these problems develop depends on the volume of the circuit and the metabolic rate of the user.
Excessive work of breathing Carbon dioxide buildup can also occur when a combination of exertion and work of breathing exceeds the capacity of the user. If this occurs where the user cannot reduce exertion sufficiently, it may be impossible to correct. In this case it is not the scrubber that fails to remove carbon dioxide, but the inability of the diver to circulate gas efficiently through the scrubber against the frictional resistance of the circuit causing the problem. This is more likely to occur with diving rebreathers at depths where the density of the breathing gas is severely elevated, or when water in the scrubber obstructs gas flow.
Fire hazards of high concentration of oxygen High partial pressures of oxygen greatly increase fire hazard, and many materials which are self-extinguishing in atmospheric air will burn continuously in a high oxygen concentration. This is more of a risk for terrestrial applications such as rescue and firefighting than for diving, where the ignition risk is relatively low.
Caustic cocktail Caused by a loop flood reaching the absorbent canister, so only applicable in immersed applications.
Failure modes Diving rebreathers are susceptible to some failure modes which cannot occur in other breathing apparatus.
Scrubber failure The term "breakthrough" means the failure of the scrubber to continue removing sufficient carbon dioxide from the gas circulating in the loop. This will inevitably happen if the scrubber is used too long, but can happen prematurely in some circumstances. There are several ways that the scrubber may fail or become less efficient: Complete consumption of the active ingredient in a "general breakthrough". Depending on scrubber design and diver workload, this may be gradual, allowing the diver to become aware of the problem in time to make a controlled bailout to open circuit, or relatively sudden, triggering an urgent or emergency response.
Bypassing the absorbent. The absorbent granules must be packed closely so that all exhaled gas comes into contact with the surface of soda lime and the canister is designed to avoid any spaces or gaps between the absorbent granules or between the granules and the canister walls that would let gas bypass contact with the absorbent. If any of the seals, such as O-rings, or spacers that prevent bypassing of the scrubber, are not present or not fitted properly, or if the scrubber canister has been incorrectly packed or fitted, it may allow the exhaled gas to bypass the absorbent, and the scrubber will be less effective. This failure mode is also called "tunneling" when absorbent settles to form void spaces inside the canister. Bypass will cause an unexpected early breakthrough.
When the gas mix is under pressure at depth, the gas molecules are more densely packed and the mean free path of the carbon dioxide molecules between collisions is shorter, so they diffuse less freely to the absorbent surface and require a longer dwell time. Because of this effect, the scrubber must be bigger for deep diving than is needed for a shallow-water, industrial, or high altitude rebreather.
Carbon dioxide absorbent can be caustic and can cause burns to the eyes, mucous membranes and skin. A mixture of water and absorbent occurs when the scrubber floods, and depending on the chemicals used, can produce a chalky taste or a burning sensation if the contaminated water reaches the mouthpiece; this should prompt the diver to switch to an alternative source of breathing gas and rinse their mouth out with water. This is known to rebreather divers as a caustic cocktail. Excessive wetting of the sorb also reduces the rate of carbon dioxide removal, and can cause premature breakthrough even if no caustic liquid reaches the diver. Work of breathing may also increase. Many modern diving rebreather absorbents are designed not to produce this caustic fluid if they get wet.
In below-freezing surface conditions while preparing for diving, wet scrubber chemicals can freeze during a pause in use, when the exothermic absorption reaction is not generating heat. Freezing can prevent carbon dioxide from reaching the scrubber material, and slows the reaction when the unit is used again.
Flooding of the loop Flooding of the breathing loop can occur due to a leak at a low point in the loop where internal gas pressure is less than the external water pressure. One of the more common ways this can happen is if the mouthpiece is dislodged or removed from the diver's mouth without first closing the dive/surface valve or switching to bailout. This can happen due to accidental impact or through momentary inattention. Depending on the layout of the loop and the attitude of the rebreather in the water, the amount of water ingress can vary, as can the distance it travels into the air passages of the breathing loop. In some models of rebreather a moderate amount of water will be trapped at a low point in a counterlung or the scrubber housing, and prevented from reaching the absorbent in the scrubber. Some rebreathers have a system to expel water trapped in this way, either automatically through the vent valve, such as in the Halcyon RB80 and the Interspiro DCSC, or manually by using a small pump.
Gas leakage There are several places on a rebreather where gas leakage can cause problems. Leakage can occur from the high and intermediate pressure components, and from the loop, at pressure slightly above ambient. The effects on system integrity depend on severity of the leak. If only small volumes of gas are lost the leak may be tolerable for the rest of the dive, but a leak may become more severe, depending on the cause, and may in some cases deteriorate catastrophically.
Oxygen monitoring failure Failures may include:
Failure of electro-galvanic oxygen sensors.
Failure of voting logic.
Failure of display.
Oxygen monitoring failure can lead to incorrect partial pressure of oxygen in the breathing gas. The consequences can include hypoxia, hyperoxia, and incorrect decompression information, all three of which are potentially life-threatening.
Gas injection system failure Constant mass flow orifice blockage:– In a CCR, blockage of a CMF oxygen injection orifice will increase the frequency of manual or solenoid valve injection, which is an inconvenience rather than an emergency. In active addition SCRs the unnoticed failure of gas injection will lead to the mix becoming hypoxic. If there is instrumentation monitoring the partial pressure of oxygen in the loop, the diver can compensate by manual injection or forcing automatic injection via the ADV by dumping gas into the environment by exhaling through the nose.
Injector control circuit malfunction:– If the control circuit for oxygen injection fails, the usual mode of failure results in the oxygen injection valves being closed. Unless action is taken, the breathing gas will become hypoxic with potentially fatal consequences. An alternative mode of failure is one in which the injection valves are kept open, resulting in an increasingly hyperoxic gas mix in the loop, which may pose the danger of oxygen toxicity. Two basic approaches for preventing loss of availability are possible. Either a redundant independent control system may be used, or the risk of the single system failing may be accepted, and the diver takes the responsibility for manual gas mixture control in the event of failure. Both methods depend on continued reliable oxygen monitoring. Most (possibly all) electronically controlled CCRs have manual injection override. If the electronic injection fails, the user can take manual control of the gas mixture provided that the oxygen monitoring is still reliably functioning. Alarms are usually provided to warn the diver of failure.
Automatic diluent valve malfunction:– The ADV is the same technology as an open circuit demand valve, and as such is generally very reliable if maintained correctly. Two failure modes are possible: free flow, where the valve sticks open, and the less likely failure of the valve to open. As the diluent is usually chosen to be breathable at all or most depths of the planned dive, this is not usually immediately dangerous, but a free flow will use up the diluent rapidly, and unless rectified soon the diver will have to abort the dive and bail out. There may be a manual diluent valve which the diver can use to add gas if the valve fails closed.
Scrubber monitoring The methods available for monitoring the condition of the scrubber, and predicting and identifying imminent breakthrough, include: Dive planning and scheduled replacement. Divers are trained to monitor and plan the exposure time of the absorbent material in the scrubber and replace it within the recommended time limit. This method is necessarily very conservative, as the actual carbon dioxide produced during a dive is not accurately predictable and is not measured. Manufacturers' recommendations for replacement periods tend to allow for worst cases to reduce risk, and this is relatively uneconomical in absorbent usage.
An indicator dye that changes colour when the active ingredient is consumed may be included in the absorbent. For example, a rebreather absorbent called "Protosorb" supplied by Siebe Gorman had a red indicator dye, which was said to go white when the absorbent was exhausted. With a transparent canister, this may show the position of the reaction front. This is useful where the canister is visible to the user, which is seldom possible on diving equipment, where the canister is often inside the counterlung or a back mounted casing. Colour indicating dye was removed from US Navy fleet use in 1996 when it was suspected of releasing chemicals into the circuit.
Temperature monitoring. As the reaction between carbon dioxide and soda lime is exothermic, temperature sensors along the length of the scrubber can be used to track the position of the reaction front, and therefore to estimate the remaining life of the scrubber, as sketched after this list.
While carbon dioxide gas sensors exist, they are not useful as a tool for predicting remaining scrubber endurance, as they measure the carbon dioxide in the scrubbed gas, and the onset of scrubber breakthrough generally occurs quite rapidly. Such systems are fitted as a safety device to warn divers to bail off the loop immediately.
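A sketch of the temperature-front estimate mentioned in the list above; the sensor layout and the assumption of steady carbon dioxide production are hypothetical simplifications:

```python
# Hypothetical temperature-front estimator: given readings from sensors spaced
# along the canister, locate the hottest zone (the reaction front) and scale
# elapsed time by the fraction of the absorbent bed already behind the front.

def remaining_minutes(temps_c: list[float], elapsed_min: float) -> float:
    """Crude endurance estimate; assumes a steady CO2 production rate."""
    front = temps_c.index(max(temps_c))          # hottest sensor marks the front
    used_fraction = (front + 1) / len(temps_c)   # share of the bed consumed
    if used_fraction >= 1.0:
        return 0.0                               # front at the exit: breakthrough
    total = elapsed_min / used_fraction          # projected total endurance
    return total - elapsed_min

print(remaining_minutes([28, 41, 35, 24, 22], elapsed_min=60))  # ~90 min left
```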
Fault tolerant design Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of some of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system, in which even a small failure can cause total breakdown. Fault tolerance is particularly important in high availability or safety-critical systems. The ability to maintain functionality when portions of a system break down is referred to as graceful degradation.
The basic closed circuit oxygen rebreather is a very simple and mechanically reliable device, but it has severe operational limitations due to oxygen toxicity. The approaches to safely extending the depth range necessitate a variable breathing gas mixture. Semi-closed rebreathers tend to be inefficient for decompression, and not entirely predictable for gas composition, in comparison with a precisely controlled closed circuit rebreather. Monitoring the gas composition in the breathing loop can only be done by electrical sensors, bringing the underwater reliability of the electronic sensing system into the safety critical component category.
There are no formal statistics on underwater electronics failure rates, but it is likely that human error is more frequent than failure of electronic dive computers, which are the basic component of rebreather control electronics, processing information from multiple sources and running an algorithm for controlling the oxygen injection solenoid. The sealed dive computer package has been around for long enough for the better quality models to have become reliable and robust in design and construction.
An electronically controlled rebreather is a complex system. The control unit receives input from several sensors, evaluates the data, calculates the appropriate next action or actions, updates the system status and displays, and performs the actions, in some cases using real-time feedback to adapt the control signal. The inputs include signals from one or more of pressure, oxygen and temperature sensors, a clock, and possibly helium and carbon dioxide sensors. There is also a battery power source, a user interface in the form of a visual display, a user input interface in the form of button switches, and possibly audio and vibratory alarms.
In a minimal eCCR the system is very vulnerable. A single critical fault can necessitate manual procedures for fault recovery or the need to bail out to an alternative breathing gas supply. Some faults may have fatal consequences if not noticed and managed very quickly. Critical failures include the power supply, a non-redundant oxygen sensor, the solenoid valves, and the control unit.
The purely mechanical components are relatively robust and reliable, tend to degrade non-catastrophically, and are bulky and heavy, so the electronic sensors and control systems have been the components where improved fault tolerance has generally been sought. Oxygen cell failures have been a particular problem, with predictably serious consequences, so the use of multiple redundancy in oxygen partial pressure monitoring has been an important area of development for improving reliability.
A problem in this regard is the cost and relatively short lifespan of oxygen sensors, along with their relatively unpredictable time to failure, and sensitivity to the environment.
To automatically detect and identify oxygen sensor malfunction, either the sensors must be calibrated with a known gas, which is very inconvenient at most times during a dive, but is possible as an occasional test when a fault is suspected, or several cells can be compared and the assumption made that cells with near identical output are functioning correctly. This voting logic requires a minimum of three cells, and reliability increases with number. To combine cell redundancy with monitoring circuit, control circuit and display redundancy, the cell signals should all be available to all monitoring and control circuits in normal conditions. This can be done by sharing signals at the analog or digital stage – the cell output voltage can be supplied to the input of all monitoring units, or the voltages of some cells can be supplied to each monitor, and the processed digital signals shared. The sharing of digital signals may allow easier isolation of defective components if short circuits occur. The minimum number of cells in this architecture is two per monitoring unit, with two monitoring units for redundancy, which is more than the minimum three for basic voting logic capability.
The three aspects of a fault tolerant rebreather are hardware redundancy, robust software, and a fault detection system. The software is complex and comprises several modules with their own tasks, such as oxygen partial pressure measurement, ambient pressure measurement, oxygen injection control, decompression status calculation, and the user interface of status and information display and user inputs. It is possible to separate the user interface hardware from the control and monitoring unit, in a way that allows the control system to continue to operate if the relatively vulnerable user interface is compromised.
Operation:
Rebreathers are more complex to use than open circuit scuba, and have more potential points of failure, so acceptably safe use requires a greater level of skill, attention and situational awareness, which is usually derived from understanding the systems, diligent maintenance and overlearning the practical skills of operation and fault recovery. Fault tolerant design can make a rebreather less likely to fail in a way that immediately endangers the user, and reduces the task loading on the diver which in turn may lower the risk of operator error.
Recreational and scientific diving rebreather technological innovations:
Rebreather technology has advanced considerably, often driven by the growing market in recreational diving equipment. Innovations include: Bailout valves – a device in the mouthpiece of the loop which connects to a bailout demand valve and can be switched to provide gas from either the loop or the demand valve without the diver taking the mouthpiece from their mouth. An important safety device when carbon dioxide poisoning occurs.
Closed circuit bailout.
Active and passive oxygen sensor validation.
Hyperoxic linearity test.
Integrated decompression computers – input to a dive computer from the oxygen sensors of the rebreather allow divers to take advantage of the actual partial pressure of oxygen to generate an optimised schedule for decompression.
Gas integrated decompression computers – these allow divers to take advantage of the actual gas mixture, as measured by one or more oxygen cells in real time, to generate a schedule for decompression in real time.
Carbon dioxide scrubber life monitoring systems – temperature sensors monitor the progress of the reaction of the soda lime and provide an indication of when the scrubber will be exhausted.
Carbon dioxide level monitoring systems – Gas sensing cell and interpretive electronics which detect the concentration of carbon dioxide in the rebreather loop downstream from the scrubber.
Multiple set-points automatically selected by depth – Electronic rebreather control systems can be programmed to change set-point above and below selectable limiting depths to limit oxygen exposure during the working dive, but increase the limit during decompression above the limiting depth to accelerate decompression.
Automated pre-dive checklists and systems checks.
Head-up displays for status and alarms.
Data logging.
Active and passive oxygen sensor validation Accurate and reliable oxygen partial pressure measurement is one of the most problematic factors in rebreather diving safety. Control systems using this data have developed to the extent that they are robust and reliable, and the use of an independent backup improves the reliability to about as good as for any other component. The weakest point is the sensors, which are prone to several modes of failure, some of which are relatively insidious, as the cell may pass a normobaric calibration and fail when the partial pressure is near the high end of the acceptable working range, which is also the range in which constant partial pressure diving has the maximum benefit. Where it has been possible to infer the cause, the leading cause of rebreather fatalities is hypoxia, at approximately 17%, with hyperoxia assumed in an additional 4% of cases. If these trends extend into the range of indeterminate cases, it is possible that inappropriate oxygen content is involved in 30% of rebreather fatalities.
The standard method for improving reliability of oxygen monitoring has been multiple redundancy – the use of 3 or more sensors – and using the multiple data inputs with a voting logic system to try to identify failure of a sensor in time to make a controlled and safe termination of the dive. Voting logic normally assumes that if one sensor produces a reading significantly differing from two or more others when exposed to the same environment, the outlier is faulty, and the input of the others is assumed accurate. Unfortunately this is not always the case, and there have been cases where the outlier sensor was the most correct. It has been shown that the reliability of this system is lower than originally expected due to a lack of sufficient statistical independence of the three sensors, and that outcomes are not symmetrical – the effects of faulty low or high partial pressure readings are also depth dependent.
If a sensor gives relatively static output, with little response to variations in depth and temperature, or to changes in gas composition due to use, gas addition, incomplete mixing or loop turbulence, it is likely that the sensor is not responding correctly, and when two sensors follow a similar pattern of response this is a warning that both may be defective. Algorithms that track sensor output against expected output, taking known changes into account, can indicate the reliability of the sensors. This method of monitoring sensors is known as passive sensor validation (PSV); it can be used to improve the reliability of sensor integrity assessment, and can be used in the control system to make more reliable decisions on which sensors are most likely to be giving trustworthy output, in comparison with voting logic based only on calibration values for the sensors. PSV is an improvement on simple voting logic but is still susceptible to errors related to statistical independence of components.
Early work on the design of an automatic sensor validation system, in which the rebreather control system would periodically inject gas of known composition onto the oxygen sensors during the dive and use the output to determine the viability of the sensor response with greater precision and accuracy than a human diver, was started in 2002, and further developed to be used on the Poseidon/Cis-Lunar MK-VI rebreather.
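A minimal sketch of the passive validation idea, flagging a cell whose output stays static while the consensus responds; the threshold and the example traces are assumed figures:

```python
# Sketch of passive sensor validation: a healthy cell's output should track
# the ppO2 changes implied by depth and gas addition, so a cell whose signal
# barely moves while the consensus moves is suspect.

import statistics

def looks_static(cell: list[float], consensus: list[float],
                 min_ratio: float = 0.25) -> bool:
    """Flag a cell whose variability is a small fraction of the consensus'."""
    return (statistics.stdev(consensus) > 0.01 and
            statistics.stdev(cell) < min_ratio * statistics.stdev(consensus))

descent = [0.70, 0.85, 1.00, 1.15, 1.30]   # consensus ppO2 during a descent
frozen = [0.70, 0.71, 0.70, 0.70, 0.71]    # a cell that has stopped responding
print(looks_static(frozen, descent))        # True -> treat this cell as suspect
```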
This "Active Sensor Validation" (ASV) system has been refined over thousands of hours of field test diving in varied conditionsThe ASV system has become more sophisticated than the manual implementation in the Cis-Lunar MK-5P. It involves more than comparing the measured PO2 value from the sensor with the calculated PO2 of the diluent at the current depth. In the implementation in the Poseidon rebreathers the computer automatically injects either diluent or oxygen directly onto a single primary oxygen sensor every five minutes during a dive. The algorithm takes into account current depth, FO2 of the injected gas, ambient temperature, duration of gas injection, and calibration values for the sensor for that dive to predict how the sensor should respond over the next few seconds after each gas injection, and compares that with the measured results to produce a confidence level for correct sensor performance.This type of sensor validation test can identify several modes of failure by the ways the measured values deviate from expected values with variations of calculated partial pressure of the test gas, and is capable of detecting failures due to incorrect temperature readings, incorrect input of the FO2 of the diluent condensation on the oxygen sensor, a defective oxygen sensor, validation gas supply failure and other reasons that would not be detected by voting logic.
Hyperoxic linearity test The oxygen sensors for most rebreathers are calibrated at the surface before the dive using air or 100% oxygen at normal atmospheric pressure. These are reliable calibration points but the range of operational partial pressures may extend beyond these calibration points, and if the sensors are calibrated for a linear response between these conditions and the response is extrapolated, for set points above 1 bar, which is standard practice, the control system must operate outside of the range for which response is known to be linear. One of the most common modes of failure is for a cell to become current-limited as it ages. The internal impedance changes as the anode is consumed by the reaction which produces the output current, and the response becomes non-linear at higher oxygen partial pressures. The signal may indicate a lower partial pressure and does not increase proportionately as oxygen is added, leading to a loop oxygen partial pressure that may increase to dangerous levels without warning. A way of validating the sensors at high partial pressures is to expose the sensor to higher PO2 than the upper set point by exposing it to pure oxygen at a depth of 6 m, for a PO2 of 1.6 bar during the dive, or at 1.6 bar or more in a calibration pressure pot. Both these methods are cumbersome and the in-water method may cause spiking of the PO2 during descent. A variation of the ASV system using oxygen, called a hyperoxic linearity test (HLT), uses oxygen as the flushing gas at 6 m, which can check that the sensor is linear to 1.6 bar PO2, and if it fails, the set point can be automatically reduced to within the linear range established during calibration. A single sensor with PSV and ASV has been shown to be more reliable than three sensors with conventional voting logic. The effectiveness of cell validation algorithms is expected to improve with the acquisition of more field data gathered by the rebreather control systems.
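A sketch of the linearity check itself, assuming a cell calibrated at 1.0 bar ppO2 and flushed with oxygen at 6 m; the 5% tolerance is an assumed figure:

```python
# Sketch of a hyperoxic linearity check: calibrate at 1.0 bar ppO2 on the
# surface with pure oxygen, then flush with oxygen at 6 m (ppO2 1.6 bar) and
# test whether the cell output scales linearly. Tolerance is an assumption.

def linear_to_1_6_bar(mv_at_1_0: float, mv_at_1_6: float,
                      tol: float = 0.05) -> bool:
    """True if the 1.6 bar reading matches linear extrapolation from 1.0 bar."""
    expected = mv_at_1_0 * 1.6
    return abs(mv_at_1_6 - expected) <= tol * expected

print(linear_to_1_6_bar(50.0, 79.0))  # True: close enough to the expected 80 mV
print(linear_to_1_6_bar(50.0, 66.0))  # False: a current-limited cell flattening
```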
Carbon dioxide monitoring Hypercapnia has been identified as one of the most prevalent factors in rebreather diving fatalities. This is generally a consequence of scrubber failure to remove carbon dioxide as fast as it is produced, which may be caused by any one or a combination of spent, wet, or inadequately packed absorbent material, incorrectly designed or assembled canisters, mismatch of absorbent and canister design, or absorbent used beyond its operational range. Higher carbon dioxide partial pressure in the loop leads to higher levels of carbon dioxide in the blood and tissue, which can have a range of symptoms including respiratory distress, increased susceptibility to CNS oxygen toxicity, disorientation, and loss of consciousness.
Most rebreather designs have relied on very conservative time-based limits for absorbent duration, based on experimental testing using cold conditions, high workloads, and high depth pressures. The usually unnecessarily high conservatism encourages divers to stretch the absorbent duration, which works well enough until it doesn't, often without warning, which can have serious consequences. A more sophisticated method is to base absorbent duration limits on metabolic oxygen consumption, as a proxy for metabolic carbon dioxide production, which is reasonably stable for most people most of the time, and can compensate fairly well for variations in exertion and base metabolism, but does not compensate reliably for depth and pressure effects on absorbent function.
A more direct and empirical approach is to take advantage of the production of heat and the rise in temperature of the active zone of the absorbent in the scrubber. As the breathing gas passes through the scrubber, most carbon dioxide is absorbed by the first zone of relatively unused absorbent that it reaches, and this relatively active zone progresses through the canister as the zone first reached by the gas is exhausted and more of the reaction occurs further along. This reaction front is at a higher temperature than both the spent absorbent and the absorbent not yet exposed to high carbon dioxide levels, and the front progresses along the scrubber until part of it reaches the end of the absorbent and unscrubbed gas breaks through to the other side of the loop, after which there is a fairly constant and irreversible increase in inspired carbon dioxide. Some rebreather manufacturers have developed linear temperature probes which identify the position of the reactive front, allowing the user to estimate the remaining duration of the canister.
None of these methods can detect canister bypass, and they have little ability to identify completely spent absorbent, channeling, badly packed, or inappropriate absorbent material, but this can be done by a direct measurement of carbon dioxide partial pressure in the inhalation side of the loop.
Research and development of carbon dioxide sensors goes back at least as far as the early 1990s, when Teledyne Analytical Instruments and Cis-Lunar Development Laboratories worked on a sensor for the Cis-Lunar MK-III rebreather, which was accurate in laboratory conditions but in the field susceptible to high humidity and condensation causing unreliable readings, which was a recurring problem with real-time carbon dioxide measurement. High pressures also caused problems for depth compensation. In 2009 VR Technologies released a commercial CO2 sensor using hydrophobic membranes to keep the sensors dry without excessively reducing gas flow to the sensors. Since then other manufacturers have introduced their products to the market, but they have not gained widespread use. They are relatively expensive, give unreliable readings in some circumstances, can only detect failure of the scrubber, and do not predict remaining duration. A combination of temperature measurement and post-scrubber CO2 measurement can give both prediction and failure warning, for increased cost and complexity.
Placement of the sensor in the loop can affect sensitivity to the actual CO2 content of inspired gas. Measuring gas in the mouthpiece has problems due to dead space, and mounting in the inhalation hose near the mouthpiece makes the sensor sensitive to small leaks in the inhalation check valve, while also able to detect high CO2 due to major check valve leaks, which would cause a big increase in dead space, and which would not be detected if the sensor is further upstream in the loop. Furthermore, an increased level of CO2 in inspired gas is only one cause of hypercapnia. It is also affected by work of breathing, diver fitness, respiratory ventilation patterns, and other behavioural, physiological, and mechanical factors. A better option would be to measure both inhaled and exhaled CO2 levels, and this would require sensors that are fast and reliable in wet conditions, and reasonably inexpensive.
Automated pre-dive checklists Following the strong endorsement by Rebreather Forum 3 of the use of written checklists to improve safety, Cis-Lunar Development Laboratories programmed an electronic pre-dive checklist into their MK-5P rebreather operating system, as a way to prevent the user from neglecting to carry out the recommended checks before use. This was considered successful and implemented on later generations in the Poseidon MK-VI and SE7EN rebreathers, and developed to include robust internal diagnostics for the core electronic components and software, and automatic calibration of the oxygen sensor cells at normobaric pressures. Failure to complete the full checklist results in a range of alarms if the user attempts to dive with the unit. While not entirely foolproof – oxygen cells are not calibrated at hyperbaric working pressures – a number of safety critical errors will be picked up and the diver made aware of them. The software also logs the steps and data from the pre-dive check, and this has been valuable for accident analysis. The pre-dive checks also take less time and require no paper or user logging effort. This system has been shown to reduce risk and has been adopted by several manufacturers.
Head-up displays The user interface of the rebreather control system is where information is exchanged between the diver and the electronic control system, and is an area with several possibilities for errors, both of user input and data interpretation, some of which could have serious or fatal consequences. The intrinsically higher risk of mechanical failure due to high complexity can be compensated by engineering redundancy, both of the control system and bailout gas supply, and appropriate training. The design of the human–machine interface (HMI) can be improved to reduce the risk of misunderstanding and error, and training can focus on correct interpretation of the information and appropriate response. The HMI usually has two main components, displays and alarms, and many of the alarms are associated with specific visual information.
A challenge of designing effective alarms is to ensure that the diver is not distracted by irrelevant information and that they are not triggered too easily, which habituates the diver to paying less attention, and while possibly fulfilling legal requirements regarding warnings and alarms, may make the equipment functionally less safe to use. One strategy to avoid this problem is to target different senses – auditory, visual and tactile – sometimes based on a vibratory output to the mouthpiece.
An effective display ensures that the user gets the information they need when they need it, and the information they want when they want it, in a form that is immediately recognised and unambiguously understood. When too much information is presented at a time of stress, the user may be confused or unable to distinguish the useful information in time to use it effectively. At other times more detailed information may be useful or necessary to make a correct decision. Multiple displays, or multiple views on the same display, can help with this.
A trend in rebreather displays that is predicted to become more widespread is the use of advanced head-up displays, which can provide a wider range of information by using an array of coloured lights or more complex graphical or alphanumeric displays that remain peripherally visible to the diver at all times, and only require eye movement to become fully readable.
Closed circuit bailout A major logistical problem for long and deep rebreather dives is the volume of bailout equipment that must be carried to allow a safe return to the surface from any point of the dive after irrecoverable failure of the primary system. The open circuit option can become extremely bulky and awkward to manage, and while more compact and efficient, the rebreather option has its own set of logistic challenges.
One of the main design challenges in developing a closed circuit bailout system for rebreathers is to maintain the bailout set in a condition ready for use at all depths. This implies breathable gas for the depth, though not necessarily optimised, as the mix can be brought to set point quite rapidly after bailout, and a gas volume that does not vary excessively, so that buoyancy control is not unduly complicated. The bulk of the system must be manageable, and the bailout set mouthpiece must be easily accessible, but secure. Since bailout rebreathers are most likely to be used on dives with large decompression obligations, the switch to bailout must be accommodated by the decompression management system. If real-time monitoring of oxygen partial pressure is included in decompression computation, it must be possible to transfer this facility between units without compromising their independence. Task-loading of the diver in managing the two loops must not be excessive, as the diver is recognised as the least reliable aspect of the operation, and may be under significant stress when bailout becomes necessary.
Data logging Data logged from rebreather dives is useful for accident analysis, testing and development of rebreathers, and for diver educational purposes. Dive profile logging by integrated decompression computers is also of value for research into effectiveness of decompression schedules.
Aggregation of such data can provide insights into diving patterns across the population of users and help in analysing risk.
The control systems of electronic rebreathers have continued to increase in processing and storage capacity, and in parallel, their capacity for capturing data at increased granularity and precision has increased. In 1994 the Cis-Lunar Mk-IV data logging system recorded data at several hundred points per hour of dive time, and by 1997 the Cis-Lunar Mk-5P was logging over a thousand points per hour. By 2007 the Poseidon MK-VI Discovery was logging between 15,000 and 25,000 points per hour, and in 2016 the Poseidon SE7EN recorded more than double that quantity, in alignment with the recommendations of Rebreather Forum 3, which state: "The forum recommends that all rebreathers incorporate data-logging systems that record functional parameters relevant to the particular unit and dive data and that allow download of these data. Diagnostic reconstruction of dives with as many relevant parameters as possible is the goal of this initiative. An ideal goal would be to incorporate redundancy in data-logging systems and, as much as practical, to standardize the data to be collected."
Some of the logged data is specific to the rebreather model, and is not appropriate for general analysis, but some data is useful for external analysis of user population and diving practices, which could improve understanding of behaviour and safety analysis.
Manufacturers and models:
Oxygen rebreathers Aqua Lung/La Spirotechnique – French company manufacturing breathing apparatus and diving equipment
FROGS (Full Range Oxygen Gas System) – a model of closed circuit oxygen rebreather for intensive shallow water work and clandestine special forces operations made by AquaLung, which has been used in France since October 2002. The unit can be worn on the chest, or with an adaptor frame, on the back. The scrubber has an endurance of about 4 hours at 4°C and a respiratory minute volume of 40 litres per minute, and a 2.1 litre 207 bar cylinder is fitted. It is manufactured in non-magnetic and magnetic versions, and can use either 2.6 kg of granular sorb or a moulded carbon dioxide absorbent insert.
Manufacturers and models:
Dräger – German manufacturer of breathing equipment LAR-5 – Military oxygen rebreather by Drägerwerk LAR-6 – Military oxygen rebreather by Drägerwerk LAR-V – Military oxygen rebreather by Drägerwerk Lambertsen Amphibious Respiratory Unit – Early closed circuit oxygen diving rebreather Porpoise – Australian scuba manufacturer Porpoise (rebreather) – Australian oxygen rebreather – Ted Eldred's oxygen rebreather.
Siebe Gorman – British manufacturer of diving equipment and salvage contractor Mark IV Amphibian – British military oxygen rebreather CDBA – Type of diving rebreather used by the Royal Navy Davis Submerged Escape Apparatus – Early submarine escape oxygen rebreather also used for shallow water diving. – One of the first rebreathers to be produced in quantity.
Salvus – Industrial rescue and shallow water oxygen rebreather The "Universal" rebreather was a long-dive derivative of the Davis Submerged Escape Apparatus, intended to be used with the Sladen Suit.
IDA71 – Russian military rebreather for underwater and high altitude use SDBA – Special duty oxygen breathing apparatus, a military frogman's oxygen rebreather. It has a nitrox variant called ONBA.
Manufacturers and models:
Mixed gas rebreathers AP Diving – British manufacturer of underwater diving equipment Inspiration series – one of the first electronic closed circuit rebreathers to be mass produced for the recreational market. Aqua Lung/La Spirotechnique – French company manufacturing breathing apparatus and diving equipment DC55. BioMarine – BioMarine CCR 1000, BioMarine Mk-15 military rebreather, BioMarine Mk-16 military rebreather Carleton Life Support – British defense industry manufacturing company Siva – range of military rebreathers Viper – military rebreather.
Manufacturers and models:
Viper E – made by Carleton and Juergensen Defense Corporation. Carleton CDBA – military rebreather by Cobham plc – Clearance Diver's Breathing Apparatus.
Cis-Lunar – Manufacturer of electronically controlled closed-circuit rebreathers for scuba diving Divex – Scottish provider of diving equipment and related services.
Clearance Divers' Life Support Equipment (CDLSE) – An electronic closed circuit rebreather allowing diving to 60 metres (200 ft).
Manufacturers and models:
Dräger – German manufacturer of breathing equipment Dräger Dolphin – Semi-closed circuit recreational diving rebreather Dräger Ray – Semi-closed circuit diving rebreather Halcyon Dive Systems Halcyon PVR-BASC – Semi-closed circuit depth compensated passive addition diving rebreather Halcyon RB80 – Non-depth-compensated passive addition semi-closed circuit rebreather IDA71 – Russian military rebreather for underwater and high altitude use Interspiro DCSC – Military semi-closed circuit passive addition diving rebreather Jetsam Technologies KISS – line of manually operated closed circuit rebreathers designed by Gordon Smith.
Manufacturers and models:
JJ CCR – A technical diving rebreather built to allow mounting of large cylinders to enable carrying larger quantities of bailout gas on the rebreather frame.
Liberty rebreathers – Back and sidemount mixed gas technical diving rebreathers.
Mark 29 Underwater Breathing Apparatus Poseidon Diving Systems Poseidon MkVI – the world's first fully automatic closed circuit rebreather for recreational use, based on the Cis-Lunar MK5 design and further developed into the Poseidon SE7EN.
Prism 2 – Notable for a radial scrubber and high-current oxygen cells from the Navy MK15 unit enabling an analogue gauge to read the oxygen levels.
ScubaForce SF2 (rebreather) – Back or sidemount ECCR with bellows counterlung.
Manufacturers and models:
Siebe Gorman – British manufacturer of diving equipment and salvage contractor Siebe Gorman CDBA – type of diving rebreather used by the Royal Navy, also in CDMBA, SCBA, SCMBA and UBA variants. A type introduced in 1999 in the British Navy was an update of the BioMarine/Carleton MK16. Some military rebreathers (for example the US Navy MK-25 and the MK-16 mixed-gas rebreather), and the Phibian CCS50 and CCS100 rebreathers, were developed by Oceanic.
Manufacturers and models:
The current US Navy Mark 16 Mod 2 (Explosive Ordnance Disposal) and Mark 16 Mod 3 (Naval Special Warfare) units use the Juergensen Defense Corporation Mark V Control System.
The Orca ECR is a CCR design that has both carbon dioxide and oxygen monitoring. Other models include the Megalodon, the rEvo III, and the O2ptima CM. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Free box**
Free box:
A free box is a box or location used to allow people to rid themselves of excess items without the inconvenience of a garage sale. When someone has items they wish to be rid of, but which might be useful to another person, the items are set out and given to whoever wants them. If, after a period, no one has claimed the items, the contents of the box may be donated to a charity like Goodwill or The Salvation Army.
Free box:
Free boxes are used at the University of Saskatchewan; in Victoria, British Columbia; in Isla Vista, California; and in Crestone and Telluride, Colorado. An online version of a similar concept is The Freecycle Network.
The University of Guelph, in Guelph, Ontario, runs a free table for several weeks of the school year. New College of Florida is home to a student-run free table (open throughout the school year). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Speedplay (bicycle pedal)**
Speedplay (bicycle pedal):
Speedplay (stylized as SPEEDPLAY) is a brand of clipless road cycling pedals owned by Wahoo Fitness. Wahoo Fitness re-engineered the lollipop pedals for increased durability and easier setup and maintenance. The pedal features dual-sided entry, 0-15 degrees of free float and 3-axis adjustability, which contributes to enhanced comfort and performance. Speedplay pedals are used by many professional racers and triathletes, such as Jan Frodeno, Fabian Cancellara, Lucy Charles-Barclay and Daniela Ryf, and by the American professional cycling team EF Education–EasyPost.
History:
The first Speedplay pedal, the X, was patented in July 1989 by Richard Byrne. The company was founded in July 1991 in San Diego, California. In September 2019, Wahoo Fitness announced the acquisition of Speedplay.
Technology:
The Speedplay Advanced Road Pedal System is characterized by double-sided entry, rotational free float and 3-axis adjustability, allowing a rider to step on either side to engage the cleat and keep the foot free to rotate from side to side about the center of the pedal. The float and micro-adjustability are reported to be easier on the knee joints, increasing comfort and optimizing fit on the bike. In March 2021, Wahoo Fitness launched a redesigned Speedplay pedal range, which sealed the bearings to streamline setup and maintenance of the pedals. Simultaneously, Wahoo Fitness announced the POWRLINK ZERO power meter pedal set to be released in summer 2021. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Infiltration tactics**
Infiltration tactics:
In warfare, infiltration tactics involve small independent light infantry forces advancing into enemy rear areas, bypassing enemy frontline strongpoints, possibly isolating them for attack by follow-up troops with heavier weapons. Soldiers take the initiative to identify enemy weak points and choose their own routes, targets, moments and methods of attack; this requires a high degree of skill and training, and can be supplemented by special equipment and weaponry to give them more local combat options.
Infiltration tactics:
Forms of these infantry tactics were used by skirmishers and irregulars dating back to classical antiquity, but only as a defensive or secondary tactic; decisive battlefield victories were achieved by shock combat tactics with heavy infantry or heavy cavalry, typically charging en masse against the primary force of the opponent. By the time of early modern warfare, defensive firepower made this tactic increasingly costly. When trench warfare developed to its height in World War I, most such attacks were complete failures. Raiding by small groups of experienced soldiers, using stealth and cover was commonly employed and often successful, but these could not achieve decisive victory.
Infiltration tactics:
Infiltration tactics developed slowly through World War I and early World War II, partially as a way of turning these harassing tactics into a decisive offensive doctrine. At first, only special units were trained in these tactics, typified by German Stoßtruppen (storm troops). By the end of World War II, almost all regular ground forces of the major powers were trained and equipped to employ forms of infiltration tactics, though some specialize in this, such as commandos, long-range reconnaissance patrols, US Army Rangers, airborne and other special forces, and forces employing irregular warfare. While a specialist tactic during World War I, infiltration tactics are now regularly fully integrated as a standard part of modern maneuver warfare, down to basic fire and movement at the squad and section level, so the term has little distinct meaning today. Infiltration tactics may not be standard in modern combat where training is limited, such as for militia or rushed conscript units, or in desperate attacks where an immediate victory is required. Examples are German Volkssturm formations at the end of World War II, and Japanese banzai attacks of the same period.
Development during World War I:
These tactics emerged gradually during World War I. Several nations modified their existing tactics in ways that supported ideas that were later called infiltration tactics, with the German developments having the most impact, both during the war and afterwards.
Development during World War I:
Germany As far back as the 18th century, Prussian military doctrine stressed maneuver and force concentration to achieve a decisive battle (Vernichtungsgedanke). The German military searched for ways to apply this in the face of trench warfare. Captain Willy Rohr fought in the long Battle of Hartmannswillerkopf (1914–1915), starting with two Pionier (combat engineer) companies. Such engineers were often employed in assaulting fortifications, using non-standard weapons and tactics compared to the regular infantry. Rohr's initial efforts to use these as special advanced strike teams, to break French trench lines for following troops to exploit, achieved only limited success, with heavy losses. Rohr, working with his superiors, saw equipment improved, including the new Stahlhelme (steel helmets), ample supplies of hand grenades, flamethrowers, light mortars and light machine guns. Rohr's analysis was that much more training was needed to incorporate the new weapons and to coordinate separate attacks as needed to achieve the overall operational goals. His analysis got the attention of the Oberste Heeresleitung (OHL, German army high command). In December 1915, Rohr was given the task of training the army in "modern close combat", and soon promoted to major. During the next two years, special Stoßtruppen (stormtrooper) detachments were created in divisions throughout the army; select men were sent to Rohr for training, who became trainers when they returned to their units. These tactics were expanded and refined by many in the German military command, extending the Prussian military doctrine down to the smallest units – specially trained troops maneuvered and organised to strike selected positions, wherever opportunities were found.
Development during World War I:
German infiltration tactics are sometimes called Hutier tactics, after German General Oskar von Hutier, even though his role in developing the tactics was limited. Hutier, along with his artillery commander Colonel Georg Bruchmüller, improved the use of artillery in ways that suited infiltration tactics. Conventional mass-wave tactics were typically preceded by days of constant bombardment of all defender positions, attempting to gain advantage by attrition. Hutier favoured brief but intense hurricane bombardments that allowed the opponent little time to react and reinforce their line. The bombardment targeted the opponents' rear areas to destroy or disrupt roads, artillery, and command centres. This was done to suppress and confuse the defenders and reduce their capability to counterattack from their rearward defence lines. For maximum effect, the exact points of attack remained concealed until the last possible moment, and the infantry attacked immediately following the short bombardment.: 158–160 The German stormtrooper methods involved men rushing forward in small but mutually supporting groups, using whatever cover was available, and then laying down covering fire for the other groups as they moved. The tactics aimed to avoid attacking any strongpoints directly, by first breaching the weakest points of the defender's line, and using those to gain positional advantages on other points. Additionally, they acknowledged the futility of managing a grand detailed plan of operations from afar, opting instead for junior officers on the spot to exercise initiative, expanding the earlier Prussian doctrine of Auftragstaktik (mission-based tactics). Due to the extensive training needed, stormtroopers remained small elite forces. Regular infantry with heavy weapons would follow up, using more standard tactics, reducing isolated and weakened opposing strongpoints with flank attacks, as the stormtroopers continued the advance beyond them. Reserve troops following these had to consolidate gains against counterattacks.: 157 The Germans employed and improved infiltration tactics with increasing success, at first defensively in counterattacks as part of Germany's defence in depth and then offensively, leading up to the Battle of Caporetto against the Italians in 1917 and finally the massive German spring offensive in 1918 against the British and French.: 489 Initial German successes were stunning; of these, Hutier's 18th Army gained more than 50 km (30 mi) in less than a week – the farthest advance on the Western Front since the Race to the Sea had ended the war of movement in 1914. This advance would thereafter associate Hutier's name with infiltration tactics in Western Europe. The German armies began to stall after outrunning their supply, artillery and reinforcements, which could not catch up over the shell-torn ground left ruined by Allied attacks in the Battle of the Somme in 1916; the offensives failed to achieve a war-winning breakthrough dividing the French and British armies.: 137 The exhausted German forces lost the initiative and were soon pushed back in the Allied Hundred Days Offensive, ending in the German surrender. Though far more successful tactically than traditional attacks, infiltration tactics did not address supporting any resulting advances operationally, so they tended to bog down over time and allow the defender time to regroup. German artillery, critical during the initial assault, lagged far behind afterwards.
The elite stormtroopers took notable casualties in the initial attacks, and these could not be readily replaced. German forces lacked mobile forces such as cavalry to exploit and secure deep advances. Most importantly, German logistical capabilities, designed for a static front, failed to sustain troops advancing far into devastated enemy territory. The German military did not use the term infiltration tactics as a distinct new manner of warfare but more as a continuous improvement to their wide array of military tactics. When the "new" German tactics made headlines in Allied nations in 1918, the French published articles on the "Hutier tactics" as they saw them; this focused more on the operational surprise of the start of the attack and the effective hurricane bombardment than on the low-level tactics. In post-war years, although information on "Hutier tactics" was widely distributed in France, the US and Britain, most generals were skeptical about these new tactics, given the German defeat. In Germany, infiltration tactics were integrated into the Reichswehr and the Wehrmacht. Felix Steiner, former officer of the Reichswehr, introduced the principle of stormtroopers into the formation of the Waffen-SS, in order to shape it into a new type of army using this tactic. When combined with armoured fighting vehicles and aircraft to extend the tactics' operational capabilities, this contributed to what would be called Blitzkrieg in the Second World War.
Development during World War I:
France New French tactics that included an initial step for infiltration were published by the Grand Quartier Général (GQG, French General Headquarters) on 16 April 1915, in But et conditions d'une action offensive d'ensemble (Goal and Conditions for a General Offensive Action), its widely circulated version being Note 5779. It states that the first waves of infantry should penetrate as far as possible and leave enemy strongpoints to be dealt with by follow-up nettoyeurs de tranchée (trench cleaner) waves. The note covers weapons and close-combat tactics for the trench cleaners, but the tactics and weapons of preceding waves are unchanged, and there is little mention of any additional support for the now-detached advanced waves. The note contains annexes covering different subjects, including artillery, infantry defense, and infantry attacks. For attacks, Note 5779 continued to promote the pre-war French doctrine of la percée (the breakthrough), where an offensive is driven by a grand, single plan with continuous waves of reserves targeting the operation's distant and static objectives. It does not cover methods of adapting to local success or setbacks, nor the small-unit initiative, coordination and additional training this would require. The tactics were employed with some success on the opening day of the Second Battle of Artois, 9 May 1915, by the French XXXIII Corps; they advanced 4.5 kilometres (2.8 mi) in the first hour and a half of the attack but were unable to reinforce and consolidate to hold onto all these gains against German counterattacks. The battle was costly and inconclusive, taking a heavy toll in French troops and matériel. Later French infantry tactics moved away from the costly la percée towards a more practical grignotage (literally nibbling, taking in small bits) doctrine, which employed a series of smaller and more methodical operations with limited objectives; each of these was still planned at headquarters, rather than from immediate local initiative. Note 5779 also describes an early form of rolling barrage in its artillery annex; this was employed with success and continued to be developed by the French as well as by most other nations throughout the war. In August 1915, a young French infantry officer, Captain André Laffargue, put forward additional ideas in a pamphlet titled Étude sur l'attaque dans la période actuelle de la guerre (Study of the Attack in the Current Period of the War). Laffargue based his proposals in particular on his experiences of the initially successful but ultimately disappointing results of employing the tactics of Note 5779 at the Second Battle of Artois; he commanded a company of the 153rd Infantry Regiment, attacking immediately south of Neuville-Saint-Vaast on 9 May 1915. Laffargue was left wounded on the German front line but his regiment advanced another 1.5 kilometres (0.93 mi), only to be held up by two German machine guns. Laffargue's pamphlet focused primarily on the small-unit perspective, calling for mobile firepower to deal with local resistance such as machine guns, and advocating that the first waves of an attack advance through the intervals or gaps between centres of resistance, which should be temporarily neutralised on the edges by fire or heavy smoke. The points of resistance would then be encircled and dealt with by successive waves. This promotes coordinating local forces to deal with local resistance as it is encountered, an important second step in infiltration tactics.
Laffargue suggests that had these methods been followed the attack could have resulted in a complete breakthrough of the German defences and the capture of Vimy Ridge.
Development during World War I:
The French Army published Laffargue's pamphlet in 1915 and the following year a commercial edition found wide circulation, but as information rather than being officially adopted by the French military. The British translated and published Laffargue's pamphlet in December 1915 and, like others, continued to make frequent use of wave attacks. The US Infantry Journal published a translation in 1916. In contrast to the infiltration tactics then under development in the German army, the tactics of Note 5779, as expanded by Laffargue, remained firmly wedded to the use of the attack by waves, despite the high casualties which could ensue. Laffargue maintained that the psychological support of the attack in line was necessary to enable men to advance against heavy fire. In 1916, captured copies of Laffargue's pamphlet were translated and distributed by the German army. How much this may have influenced German infiltration tactics is not known; such influence has been dismissed by Gudmundsson. The Germans had started developing their own infiltration tactics in the spring of 1915, months before Laffargue's pamphlet was even published.
Development during World War I:
Russia The vast Eastern Front of World War I, much less confined than the Western Front, was much less affected by trench warfare, but trench lines still tended to take hold whenever the front became static. Still, about a third of all Russian divisions remained cavalry, including Cossack divisions. General Aleksei Brusilov, commanding the Russian Southwestern Front, promoted large-scale simultaneous attacks along a wide front in order to limit defenders' ability to respond to any one point, thus allowing the collapse of the entire defending line and returning to maneuver warfare. For the Brusilov Offensive of 1916, he meticulously prepared a massive surprise attack on a very wide 400 km (250 mi) front stretching from the Pripet Marshes to the Carpathian Mountains, with the objective of Lemberg, Galicia (now Lviv, Ukraine), 100 km (60 mi) behind the well-fortified Austro-Hungarian line. The Austro-German military command was confident that these deep and extensive entrenchments, equal to those of the Germans on the Western Front, could not be broken without significant Russian reinforcements.: 33–36 After a thorough reconnaissance, Brusilov directed preparations for several months. Forward trenches were dug as bridgeheads for the attack, which approached the Austro-Hungarian trench lines as closely as 70 m (230 ft). Select forces were trained and tasked with breaking through the defending lines, creating gaps to be widened by eight successive waves of infantry in total, allowing deep penetration. Brusilov committed all his reserves to the initial assault.: 51 Although Brusilov favoured shorter bombardments, the bombardment preparation for this offensive was more than two days long, from 3 am on June 4 (May 22 old style) to 9 am on June 6 (May 24). This bombardment disrupted the first defense zone and partially neutralized the defending artillery. The first infantry attacks made breakthroughs at 13 points, which were soon increased in width and depth. The Austro-Hungarian response to the unexpected offensive was slow and limited, in the belief that existing forces and defenses would prove sufficient; instead, reserve units sent forward to counterattack often found their routes already overrun by the Russians.: 40 The Russian 8th Army, commanded by Brusilov himself just a few months before his promotion to command the Southwestern Front, achieved the greatest success, advancing 48 km (30 mi) in less than a week. The 7th and 9th Armies achieved lesser gains, though the remaining 11th Army in the center made no initial breakthroughs. The performance of individual Austro-Hungarian units during the campaign, each raised from separate diverse societies within the Empire, was highly variable, with some units long standing firm despite the odds, like the Polish Legions at the Battle of Kostiuchnówka, whereas others readily retreated in panic or surrendered, as at the Battle of Lutsk.: 78–79 : 42–44 Though the campaign was devastating to the Austro-Hungarian Army, Russian losses were very high. German forces were sent to reinforce, and the initial Russian advantages waned. Though Russian attacks continued for months, their cost in Russian men and materiel increased while gains diminished. In the end, much like the bold French tactics of la percée at the Second Battle of Artois, these tactics were too costly to maintain.
The Imperial Russian Army never fully recovered, and the monumental losses of so many Russian soldiers helped fuel the Russian Revolution of 1917, leading to disbanding the Imperial Russian Army.: 53 Though the Brusilov Campaign impressed the German Army High Command, how this may have influenced their further development of infiltration tactics is not known. Elements of Brusilov's tactics were eventually used by the Red Army in developing their Deep Battle doctrine for World War II.
Development during World War I:
Britain The British Army pursued a doctrine of integrating new technologies and updating old ones to find advantages in trench warfare. At the Battle of Neuve Chapelle, March 1915, a well-planned British attack on German trenches, coordinated with a short but effective artillery bombardment, achieved a local breakthrough. Though ammunition shortages and command and control issues prevented exploiting the gains, this demonstrated the importance of a combined infantry-artillery doctrine. Initial experiences in trench warfare, shared between the British and French, led both to increase pre-bombardment (requiring dramatically increased artillery munitions production), and also to supply infantry with more firepower, such as light mortars, light machine guns, and rifle grenades. While the British hoped that this new combination of arms, once improved and properly executed, could achieve decisive breakthroughs, the French moved from their pre-war grand la percée doctrine to more limited and practical tactical objectives. At the same time, the Germans were learning the value of deep trenches, defense in depth, defensive artillery, and quick counter-attacks. This came to a head with the British Somme Offensive on 1 July 1916. Douglas Haig, commanding the British Expeditionary Force (BEF), planned an ambitious large-scale quick breakthrough, with an extensive artillery bombardment targeting the German front-line defenses, followed by a creeping barrage leading a mass infantry assault.: 106–10 : 117–21 Despite the planning, execution was flawed, perhaps resulting from the rapid expansion of the British Army. The British losses on the first days were horrific. British operations improved over the next several months of the campaign, however.: 113–6 : 183–282 Learning the limits of battle planning and bombardment, they abandoned single grand objectives and adopted a "bite and hold" doctrine (equivalent to the French grignotage) of limited, local objectives matched to what could be supported by available artillery in close cooperation.: 345–84 Combining this with new arms was still promoted; Britain's new secret weapon, the tank, made its first appearance midway through the Somme operations. Though not yet effective, they held out the promise of breakthroughs in the future. The British Third Army employed tactics giving platoons more independence at the Battle of Arras in April 1917 (most notably the capture of Vimy Ridge by the Canadian Corps), following the reorganisation of British infantry platoons according to the new Manual SS 143. This still advocated wave attacks, taking strongpoints and consolidating before advancing, part of "bite-and-hold" tactics, but it did allow for more local flexibility, and set the groundwork for low-level unit initiative, an important aspect of infiltration tactics.
Development during World War I:
Hurricane bombardment A new method of artillery use evolved during World War I, colloquially called hurricane bombardment. This is a very quick but intense artillery bombardment, in contrast to the prevailing artillery tactic of long bombardments. Various forms of quick bombardments were employed at several times and places during the war, but the most successful use of hurricane bombardment was when it was combined with German infiltration tactics in which local forces take immediate advantage of any enemy weak points they find.
Development during World War I:
After the start of trench warfare in World War I, when artillery moved from direct fire to indirect fire, the standard use of artillery preceding any friendly infantry attack became a very long artillery bombardment, often lasting several days, to destroy the opponent's defences and kill the defenders. But trenches were very soon extended to avoid this; they were dug deeper and connected by deep or even underground passages to bunkers far behind the lines, where defenders could safely wait out bombardments. When the bombardment stopped, this signalled the start of the attack to the defenders, and they quickly moved back to their forward positions.: 13–14 This practice of very long bombardments expanded over the course of the war, in the hope of at least causing a few casualties, damaging surface defences like barbed wire lines and machine gun nests, and exhausting and demoralizing the defenders through the stress of being forced underground for so long, the noise and the vibrations.
Development during World War I:
The Allies, led by the British, developed alternative artillery tactics using shorter bombardments; these sought to achieve success by surprise. (This was also to use limited ammunition supplies more efficiently.) The effectiveness of short bombardments was dependent on local conditions: the targets had to be identified and located beforehand, many artillery pieces were needed, each with ample ammunition, and the preparations had to be kept hidden from the defenders.
Development during World War I:
To further increase the chance of success, these short bombardments were sometimes followed by barrages. Many variations were devised, including moving barrages, block barrages, creeping barrages, standing and box barrages. The goal of an artillery barrage was to target a line of impact points repeatedly so as to impede infantry movement; these lines could be held in position or slowly moved to inhibit movement by the opponent or even force them into poor positions. The barrage plans were often quite complicated and could be very effective.
Development during World War I:
The Germans also experimented with short bombardments and barrages. German Colonel Georg Bruchmüller tailored these significantly to integrate well with infiltration tactics. He began to perfect this while serving as a senior artillery officer on the Eastern Front in 1916. Hurricane bombardments avoided giving the defender several days' warning of an impending attack – vital for infiltration tactics. Barrages had to be carefully limited for use with infiltration tactics, as both the barrage movement and infantry advance must keep to a timetable, necessarily very methodical and slow to avoid casualties from friendly artillery; this takes away almost all initiative from the advancing infantry. Barrages with infiltration tactics had to be more intense and precise, and quickly moved to deeper targets. Bruchmüller enforced having artillery aim from the map, avoiding the typical practice of firing several registering shells before a bombardment to adjust the aim of each gun by trial-and-error, which alerted the defenders before the full bombardment. Precise aiming without registering shells requires expertise in ballistics, with angles and elevation calculated from accurate maps expressly designed for artillery use, knowledge of the effects of altitude and local weather conditions, and also reliable and consistent manufacturing of guns and ammunition to eliminate uncontrolled variation.: 12–13 Bruchmüller devised intricate, centrally-controlled firing plans for intense bombardments with minimal delays. These plans typically had several bombardment phases. The first phase might be bombardment against enemy communications, telegraph lines, and headquarters, roads and bridges, to isolate and confuse the defenders and delay their reinforcements. The second phase might be against the defenders' artillery batteries and the third against their front-line trenches to drive them back just before the infantry assault on those positions. The last phase was typically a creeping barrage that moved forward of the advancing infantry to quickly bombard positions just before they were attacked. The phases were usually much more complicated, quickly switching between targets to catch defenders off guard; each bombardment plan was carefully tailored to local conditions. The type of shells depended on the target, such as shrapnel, high explosive, smoke, illumination, and short-term or lingering gas shells. The total bombardment time was usually from a couple of hours down to just minutes.: 74–76 The creeping barrage phase is often held out as a key part of infiltration tactics, but its use in infiltration attacks is limited by the fact that the rate of infantry advance cannot be predicted. The quickness, intensity, accuracy, and careful selection of targets for maximum effect are more important. Allied and German styles of bombardment could use tricks of irregular pauses and sudden switches between targets for short periods of time to avoid being predictable to the defenders.
Development during World War I:
Bruchmüller's hurricane bombardment tactics in close cooperation with infiltration tactics matured by the time of the German victory at Riga on 3 September 1917, where he served under General Hutier. These bombardment tactics were disseminated throughout the German Army. Hutier and Bruchmüller were transferred together to the Western Front to take part in the Spring Offensive of 1918, where Bruchmüller's artillery tactics had great effect on quickly breaking the British lines for Hutier's 18th Army. Following that initial attack, the artillery had less effect, as infantry forces advanced faster than artillery and munitions could keep up.
Development during World War I:
After World War I, use of radios to quickly redirect artillery fire as needed removed any exclusive reliance on time-table driven artillery bombardment.
Dien Bien Phu:
At the Battle of Dien Bien Phu, Major Marcel Bigeard, commander of the French 6th Colonial Parachute Battalion (6th BPC), used infiltration tactics to defend the besieged garrison against Viet Minh trench warfare tactics. Bigeard's parachute assault companies were supported by concentrated artillery and air support and received help from tanks, allowing two companies (the 1st under Lieutenant René Le Page and the 2nd under Lieutenant Hervé Trapp) numbering no more than 180 men to recapture the important hilltop position of Eliane 1 from a full Viet Minh battalion, on the early morning of 10 April 1954. Other parachute battalion and company commanders also used similar tactics during the battle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Snpstr**
Snpstr:
A SNPSTR is a compound genetic marker composed of one or more SNPs and one microsatellite (STR). Autosomal SNPSTRs, which contain an SNP and a microsatellite within 500 base pairs of one another, were discovered in 2002. More recently a database that contains all SNPSTRs in five model genomes, including human, has been created.
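The defining 500 base pair window lends itself to a simple positional check. Below is a minimal sketch, assuming invented coordinates rather than real genome data, that pairs each SNP with any STR lying within that distance:

```python
# Minimal sketch of the SNPSTR definition above: pair each SNP with any
# microsatellite (STR) lying within 500 bp. All positions are invented
# illustrative coordinates, not real genome data.
SNP_POSITIONS = [1_200, 5_000, 9_750]             # hypothetical SNP coordinates
STR_INTERVALS = [(1_500, 1_560), (9_100, 9_160)]  # hypothetical STR spans

MAX_DISTANCE = 500  # base pairs, from the SNPSTR definition

def distance(snp, str_start, str_end):
    """Distance from a SNP to an STR interval (0 if the SNP lies inside it)."""
    if str_start <= snp <= str_end:
        return 0
    return min(abs(snp - str_start), abs(snp - str_end))

for snp in SNP_POSITIONS:
    for start, end in STR_INTERVALS:
        if distance(snp, start, end) <= MAX_DISTANCE:
            print(f"SNPSTR: SNP@{snp} + STR@{start}-{end}")
```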
Usage and importance:
There has been widespread and growing interest in genetic markers suitable for drawing population genetic inferences about past demographic events and for detecting the effects of selection. Single nucleotide polymorphisms (SNPs) and microsatellites (or short tandem repeats, STRs) have received great attention in the analysis of human population history, even though both have disadvantages. It was thus suggested that the combination of these two markers could give rise to better conclusions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fine-grained reduction**
Fine-grained reduction:
In computational complexity theory, a fine-grained reduction is a transformation from one computational problem to another, used to relate the difficulty of improving the time bounds for the two problems.
Intuitively, it provides a method for solving one problem efficiently by using the solution to the other problem as a subroutine.
If problem A can be solved in time a(n) and problem B can be solved in time b(n), then the existence of an (a,b)-reduction from problem A to problem B implies that any significant speedup for problem B would also lead to a speedup for problem A.
Definition:
Let A and B be computational problems, specified as the desired output for each possible input.
Definition:
Let a and b both be time-constructible functions that take an integer argument n and produce an integer result. Usually, a and b are the time bounds for known or naive algorithms for the two problems, and often they are monomials such as n^2. Then A is said to be (a,b)-reducible to B if, for every real number ϵ > 0, there exists a real number δ > 0 and an algorithm that solves instances of problem A by transforming them into a sequence of instances of problem B, taking time O(a(n)^(1−δ)) for the transformation on instances of size n, and producing a sequence of instances whose sizes n_i are bounded by ∑_i b(n_i)^(1−ϵ) < a(n)^(1−δ). An (a,b)-reduction is given by the mapping from ϵ to the pair of an algorithm and δ.
Speedup implication:
Suppose A is (a,b)-reducible to B, and there exists ϵ > 0 such that B can be solved in time O(b(n)^(1−ϵ)). Then, with these assumptions, there also exists δ > 0 such that A can be solved in time O(a(n)^(1−δ)). Namely, let δ be the value given by the (a,b)-reduction, and solve A by applying the transformation of the reduction and using the fast algorithm for B for each resulting subproblem. Equivalently, if A cannot be solved in time significantly faster than a(n), then B cannot be solved in time significantly faster than b(n).
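A short worked bound makes the implication explicit. The derivation below is a sketch reconstructed from the definition above (not taken verbatim from any source), writing T_A(n) for the total time to solve A:

```latex
T_A(n) \le \underbrace{O\big(a(n)^{1-\delta}\big)}_{\text{transformation}}
        + \sum_i O\big(b(n_i)^{1-\epsilon}\big)
       \le O\big(a(n)^{1-\delta}\big)
        + O\Big(\sum_i b(n_i)^{1-\epsilon}\Big)
        = O\big(a(n)^{1-\delta}\big),
% since the reduction guarantees
% \sum_i b(n_i)^{1-\epsilon} < a(n)^{1-\delta}.
```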
History:
Fine-grained reductions were defined, in the special case that a and b are equal monomials, by Virginia Vassilevska Williams and Ryan Williams in 2010.
History:
They also showed the existence of (n^3, n^3)-reductions between several problems including all-pairs shortest paths, finding the second-shortest path between two given vertices in a weighted graph, finding negative-weight triangles in weighted graphs, and testing whether a given distance matrix describes a metric space. According to their results, either all of these problems have time bounds with exponents less than three, or none of them do. The term "fine-grained reduction" comes from later work by Virginia Vassilevska Williams in an invited presentation at the 10th International Symposium on Parameterized and Exact Computation. Although the original definition of fine-grained reductions involved deterministic algorithms, the corresponding concepts for randomized algorithms and nondeterministic algorithms have also been considered. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sin Nombre orthohantavirus**
Sin Nombre orthohantavirus:
Sin Nombre orthohantavirus (SNV) (from Spanish, meaning "without a name"), a member of the genus Orthohantavirus, is the prototypical etiologic agent of hantavirus cardiopulmonary syndrome (HCPS). Discovered in 1993 near the Cañon de la Muerte on the Navajo Reservation, it was originally named the Muerto Canyon hantavirus, in keeping with the convention for naming new pathogens. However, the Navajo Nation objected to the name in 1994. The discovery site was also near the Four Corners point in the United States, so the virologists then tried naming it the "Four Corners virus". That name was changed after local residents raised objections. In frustration, the virologists changed it to Sin Nombre, meaning "without a name" in Spanish.
History:
It was first isolated in 1993 from rodents collected near the home of one of the initial patients with hantavirus pulmonary syndrome (HPS) in the Four Corners region of the western United States. Isolation was achieved through a blind passage in Peromyscus maniculatus (deer mouse) and subsequent adaptation to growth in Vero E6 cells. Additional viral strains have also been isolated from P. maniculatus associated with a fatal case in California and P. leucopus from the vicinity of probable infection of a New York case. Black Creek Canal virus was isolated from S. hispidus collected near the residence of a human case in Dade County, Florida. Another etiologic agent of HCPS, Bayou virus, was first isolated from the vicinity of Monroe, Louisiana.
Epidemiology:
SNV occurs wherever its reservoir rodent carrier, the deer mouse Peromyscus maniculatus, is found, which includes essentially the entire populated area of North America, except for the far southeastern region from eastern Texas through Florida, Alaska, and the far northern reaches of Canada. SNV and HCPS are especially common in western states; peak incidences for HCPS have been reported in regions in which there is a lot of contact between humans and mice (New Mexico, Arizona) and in states with exceptionally large rural populations such as California. All of the western provinces of Canada have also reported cases. SNV can be contracted through the inhalation of virus-contaminated deer mouse excreta. While transmission from the deer mouse carrier to humans is understood to occur primarily through contact with mouse urine and feces, transmission within the vector population is believed to occur through direct contact, in contrast to the understood vector transmission for other species in the Orthohantavirus genus. The case fatality ratio of SNV-induced HCPS in the USA was reported to be about 66.7% (CDC, 1993). However, since that time the case fatality ratio has steadily declined as more mild cases came to be recognized. By 2007 the CFR had declined to about 35%.
Virus sequencing:
As with other Orthohantavirus species, SNV has a tripartite single-stranded negative-sense RNA genome. The entire genomic sequence of SNV has subsequently been determined by using RNA extracted from autopsy material as well as RNA extracted from cell culture-adapted virus. The L RNA is 6562 nucleotides (nt) in length; the M RNA is 3696 nt long; and the S RNA is 2059 to 2060 nt long. When the prototype sequence (NMH10) of SNV detected in tissues from an HPS case was compared with the sequence of the SNV isolate (NMR11; isolated in Vero E6 cells from Peromyscus maniculatus trapped in the residence of the same case), only 16 nucleotide changes were found, and none of these changes resulted in alterations in amino acid sequences of viral proteins. It had been assumed that in the process of adaptation to cell culture, selection of SNV variants which grow optimally in cell culture would occur, and that selected variants would differ genetically from the parental virus. Though NMH10 and NMR11 are identical in protein sequence, nucleotide substitutions in nontranslated regions of the genome could be responsible for altered viral phenotypes, as could changes in protein glycosylation or virus membrane components. The nested RT-PCR assay developed during the initial HCPS outbreak provided a rapid method for the genetic characterization of novel hantaviruses that did not require a virus isolate. Numerous new hantaviruses have been detected by RT-PCR in rodent tissues but have yet to be associated with human disease. These include El Moro Canyon virus associated with the western harvest mouse, Reithrodontomys megalotis, Tula virus with Microtus arvalis and M. rossiaemeridionalis, Rio Segundo virus with the Mexican harvest mouse, R. mexicanus, Isla Vista virus with the California vole, M. californicus, and Prospect Hill-like viruses in Microtus species.
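The observation that the 16 substitutions left the viral proteins unchanged is the kind of check that can be done mechanically on any pair of aligned coding sequences. A minimal sketch using Biopython follows; the sequences are short invented stand-ins, not actual SNV genome data:

```python
# Minimal sketch: count nucleotide differences between two aligned coding
# sequences and check whether they change the encoded protein. The example
# sequences are invented stand-ins, not actual SNV genome data.
from Bio.Seq import Seq  # Biopython

ref = Seq("ATGGCTGAAACCCTGTAA")  # hypothetical reference coding sequence
var = Seq("ATGGCAGAAACCTTGTAA")  # hypothetical variant with 2 substitutions

diffs = [i for i, (a, b) in enumerate(zip(ref, var)) if a != b]
print(f"{len(diffs)} nucleotide differences at positions {diffs}")

# Translate both and compare: identical proteins mean all changes are synonymous.
print("synonymous only:", ref.translate() == var.translate())
```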
Virion morphology:
In contrast to members of the Orthohantavirus genus endemic outside of the Americas, whose virions are predominately round or pleomorphic, SNV virions have a greater propensity for tubular and irregular virion morphologies. This finding suggests that the genus is more diverse in terms of morphology than previously assumed, which may help explain differences in epidemiology between species. Within the Sin Nombre species, morphologic variability exists between strains, with virions of an elongated phenotype associated with higher virulence. Sin Nombre virions have an average diameter of 90 nm for round particles and 85 nm for tubular particles, with an average length of 180 nm for tubular particles, making them somewhat smaller than closely related members of the genus. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fracking hose**
Fracking hose:
Fracking hoses are used in hydraulic fracturing (fracking). Hydraulic fracturing uses between 1.2 and 3.5 million US gallons (4.5 and 13 Ml) of water per well, with large projects using up to 5 million US gallons (19 Ml). Additional water is used when wells are refractured; this may be done several times.
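The gallon-to-megalitre figures quoted above can be verified with a one-line conversion; the sketch below uses the standard definition of the US gallon:

```python
# Quick unit-conversion check for the water volumes quoted above.
US_GALLON_L = 3.785411784  # litres per US gallon (exact definition)

for gallons in (1.2e6, 3.5e6, 5e6):
    megalitres = gallons * US_GALLON_L / 1e6
    print(f"{gallons:,.0f} US gal ≈ {megalitres:.1f} Ml")
```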
Hose materials:
The traditional solution is to use metal pipes to transfer water, but these are costly to deliver and assemble. Thermoplastic polyurethane (TPU) covered hoses and nitrile rubber (NBR) covered hoses have many advantages compared to metal pipes. Hoses are easy to deliver, assemble, and disassemble, and long sections of NBR- and TPU-covered hose reduce the possibility of leakage at connecting parts. These large-diameter NBR- and TPU-covered hoses are called fracking hoses.
Hose selection:
Fracking hoses are high-pressure, high-strength lay-flat hoses designed for pumping aggressive water around mine sites. Safety and efficiency are important factors when choosing products for these applications.
History:
Fracking hoses are derived from the fire hose industry. Fire hose manufacturers produce NBR- or TPU-covered hoses for fire fighting, water discharge, irrigation, and similar applications. The fracking industry borrowed the idea from the fire hose industry and uses these hoses in fracking. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**History of entheogenic drugs**
History of entheogenic drugs:
Entheogenic drugs have been used by various groups for thousands of years. There are numerous historical reports as well as modern, contemporary reports of indigenous groups using entheogens, chemical substances used in a religious, shamanic, or spiritual context.
Early history:
One of the oldest known entheogens is peyote, which has been used for at least 5,700 years by Native Americans in Mexico.
20th century:
In 1912, Merck laboratories synthesized the drug 3,4-methylenedioxymethamphetamine (MDMA), now known as ecstasy. It was used to enhance psychotherapy beginning in the 1970s and became popular as a street drug in the 1980s. In 1928, Heinrich Klüver was the first to systematically study the psychological effects of mescaline, in a small book called Mescal and Mechanisms of Hallucinations. The book stated that the drug could be used to research the unconscious mind. He coined the term "cobweb figure" in the 1920s to describe one of the four form constant geometric visual hallucinations experienced in the early stage of a mescaline trip: "Colored threads running together in a revolving center, the whole similar to a cobweb". The other three are the chessboard design, tunnel, and spiral. Klüver wrote that "many 'atypical' visions are upon close inspection nothing but variations of these form-constants." In 1954, Aldous Huxley published The Doors of Perception, in which he elaborates on his psychedelic experience under the influence of mescaline in May 1953. Huxley recalls the insights he experienced, ranging from the "purely aesthetic" to "sacramental vision".
20th century:
MK Ultra R. Gordon Wasson's 1956 expedition was funded by the CIA's MK-Ultra subproject 58, as was revealed by documents obtained by John Marks under the Freedom of Information Act. The documents state that Wasson was an 'unwitting' participant in the project. The funding was provided under the cover name of the Geschickter Fund for Medical Research (credited by Wasson at the end of his subsequent Life piece about the expedition).
20th century:
On May 13, 1957, Life magazine published an article titled "Seeking the Magic Mushroom", which introduced psychoactive mushrooms to a wide audience for the first time. It was written by R. Gordon Wasson, ethnomycologist and Vice President for Public Relations at J.P. Morgan & Co., about the use of psilocybin mushrooms in religious rites of the indigenous Mazatec people of Mexico. Just a few days later, a personal account by his wife about their research in Mexico was published in the magazine This Week.
20th century:
In 1957 Gastón Guzmán Huerta, a Mexican mycologist and anthropologist, was invited by the University of Mexico to assist Rolf Singer, who would arrive in Mexico in 1958 to study the hallucinogenic mushroom genus Psilocybe. While they were in the Huautla de Jiménez region, on the last day of their expeditions, they met R. Gordon Wasson. Guzmán became an authority on the genus Psilocybe.
20th century:
In 1958, the French mycologist Roger Heim brought psilocybin tablets to the Mazatec curandera María Sabina; this was the first velada in which the active principle of the mushrooms, rather than the raw mushrooms themselves, was used.
20th century:
In 1962, R. Gordon Wasson and Albert Hofmann went to Mexico to visit her. They also brought a bottle of psilocybin pills. Sandoz was marketing them under the brand name Indocybin—"indo" for both Indian and indole (the nucleus of their chemical structures) and "cybin" for the main molecular constituent, psilocybin. Hofmann gave his synthesized entheogen to the curandera. "Of course," Wasson recalled, "Albert Hofmann is so conservative he always gives too little a dose, and it didn't have any effect." Hofmann had a different interpretation: "activation of the pills, which must dissolve in the stomach, takes place after 30 to 45 minutes. In contrast, the mushrooms when chewed work faster, as the drug is absorbed immediately". To settle her doubts about the pills, more were distributed. María, her daughter, and the shaman, Don Aurelio, ingested up to 30 mg each, a moderately high dose by current standards but perhaps not for the more experienced practitioners. At dawn, their Mazatec interpreter reported that María Sabina felt there was little difference between the pills and the mushrooms. She thanked Hofmann for the bottle of pills, saying that she would now be able to serve people even when no mushrooms were available.
20th century:
Timothy Leary and Richard Alpert Timothy Leary and Richard Alpert studied and researched the effects of LSD and hallucinogenic mushrooms in the United States and Mexico.
20th century:
Harvard psilocybin project The Harvard Psilocybin Project was a series of experiments in psychology conducted by Timothy Leary and Richard Alpert. The founding board of the project consisted of Leary, Aldous Huxley, David McClelland (Leary's and Alpert's superior at Harvard University), Frank Barron, Ralph Metzner, and two graduate students who were working on a project with mescaline. In 1960, Timothy Leary and Richard Alpert ordered psilocybin from the Swiss-based company Sandoz with the intent to test whether different administration modes lead to different experiences. More broadly, they believed that psilocybin could be the solution for the emotional problems of Western man.
20th century:
Zihuatanejo Project The Zihuatanejo Project was a psychedelic training center and intentional community created during the beginning of the counterculture of the 1960s by Timothy Leary and Richard Alpert under the umbrella of their nonprofit group, the International Federation for Internal Freedom (IFIF). The community was located in Zihuatanejo, Guerrero, Mexico, and took up residence at the Hotel Catalina in the summers of 1962 and 1963. Leary and Alpert first discovered the town of Zihuatanejo in 1960. After the Marsh Chapel Experiment in 1962 they decided the area would make a good location for a training center. The idea for the community was influenced by Aldous Huxley's fictional novel, Island (1962). Thousands of people applied to the IFIF in the hopes of joining the project in Zihuatanejo. Out of this pool of applicants, a small, select group of people were chosen. Amenities cost $200 a month per person, including food and lodging in bungalows near a secluded beach. Fishermen supplied a bounty of fresh fish from the bay. During the first training session in 1962, Leary and 35 guests rented the Catalina Hotel for a month, using their own version of the Tibetan Book of the Dead as a guide book for LSD sessions; Ralph Metzner and Richard Alpert helped manage the group. Group LSD sessions began in the morning with the consumption of liquid LSD, with a dosage of 100 to 500 micrograms ingested by participating individuals; the experience would usually last until late afternoon.
20th century:
Concord Prison Experiment The Concord Prison Experiment was designed to evaluate whether the experiences produced by the psychoactive drug psilocybin, derived from psilocybin mushrooms, combined with psychotherapy, could inspire prisoners to leave their antisocial lifestyles behind once they were released. How well it worked was to be judged by comparing the recidivism rate of subjects who received psilocybin with the average for other Concord inmates.
20th century:
The experiment was conducted between February 1961 and January 1963 in Concord State Prison, a maximum-security prison for young offenders, in Concord, Massachusetts by a team of Harvard University researchers. The team were under the direction of Timothy Leary and included Michael Hollingshead, Allan Cohen, Alfred Alschuder, George Litwin, Ralph Metzner, Gunther Weil, and Ralph Schwitzgebel, with Madison Presnell as the medical and psychiatric adviser. The original study involved the administration of psilocybin manufactured by Sandoz Pharmaceuticals to assist group psychotherapy for 32 prisoners in an effort to reduce recidivism rates. The researchers would administer psilocybin to themselves along with the prisoners, on the grounds of "[creating] a sense of equality and shared experience, and to dispel the fear that often accompanies relationships between [experimenters] and [subjects]".
20th century:
Alexander Shulgin Alexander Shulgin obtained a DEA Schedule I license for an analytical laboratory, which allowed him to synthesize and possess otherwise illicit drugs in order to work with scheduled psychoactive chemicals, and set up a chemical synthesis laboratory in a small building behind his house, which gave him a great deal of career autonomy. Shulgin used this freedom to synthesize and test the effects of potentially psychoactive drugs. In the 1970s he developed new methods for the synthesis of MDMA, a drug commonly associated with dance parties, raves, and electronic dance music. It may be mixed with other substances such as ephedrine, amphetamine, and methamphetamine. In 2016, about 21 million people between the ages of 15 and 64 used ecstasy (0.3% of the world population). This was broadly similar to the percentage of people who use cocaine or amphetamines, but lower than for cannabis or opioids. In the United States, as of 2017, about 7% of people have used MDMA at some point in their lives and 0.9% have used it in the last year.
20th century:
Terence McKenna Terence McKenna, along with his brother Dennis, developed a technique for cultivating psilocybin mushrooms using spores they brought to America from the Amazon. In 1976, the brothers published what they had learned in the book Psilocybin: Magic Mushroom Grower's Guide, under the pseudonyms "O.T. Oss" and "O.N. Oeric". McKenna and his brother were the first to come up with a reliable method for cultivating psilocybin mushrooms at home. As ethnobiologist Jonathan Ott explains, "[the] authors adapted San Antonio's technique (for producing edible mushrooms by casing mycelial cultures on a rye grain substrate; San Antonio 1971) to the production of Psilocybe [Stropharia] cubensis. The new technique involved the use of ordinary kitchen implements, and for the first time the layperson was able to produce a potent entheogen in his [or her] own home, without access to sophisticated technology, equipment, or chemical supplies." When the 1986 revised edition was published, the Magic Mushroom Grower's Guide had sold over 100,000 copies.
20th century:
Heffter Research Institute In 1993, David E. Nichols, Mark Geyer, George Greer, Charles Grob, and Dennis McKenna founded the Heffter Research Institute, a 501(c)(3) nonprofit organization that promotes research with classic hallucinogens and psychedelics, predominantly psilocybin, to contribute to a greater understanding of the mind and to alleviate suffering. Founded as a virtual institute, Heffter primarily funds academic and clinical scientists and made more than $3.1 million in grants between 2011 and 2014. Heffter's recent clinical studies have focused on psilocybin-assisted treatment for end-of-life anxiety and depression in cancer patients, as well as alcohol and nicotine addiction. Several Heffter-supported studies on spiritual experiences and practices involving ayahuasca and psilocybin have been published.
Influence on artistic movements:
Some of the figures have were-eagle or bat-like features, and others are were-jaguars. Most common, however, is the jaguar-transformation figurine, which shows a wide variety of styles, ranging from human-like figurines to those that are almost completely jaguar, with several in which the subject appears to be in a stage of transformation.
Influence on religion:
The Maya displayed characteristic Mesoamerican mythology, with a strong emphasis on an individual being a communicator between the physical world and the spiritual world. Mushroom stone effigies, dated to 1000 BCE, give evidence that mushrooms were at least revered in a religious way. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Saccharopine dehydrogenase (NADP+, L-lysine-forming)**
Saccharopine dehydrogenase (NADP+, L-lysine-forming):
In enzymology, a saccharopine dehydrogenase (NADP+, L-lysine-forming) (EC 1.5.1.8) is an enzyme that catalyzes the chemical reaction

N6-(L-1,3-dicarboxypropyl)-L-lysine + NADP+ + H2O ⇌ L-lysine + 2-oxoglutarate + NADPH + H+

The 3 substrates of this enzyme are N6-(L-1,3-dicarboxypropyl)-L-lysine, NADP+, and H2O, whereas its 4 products are L-lysine, 2-oxoglutarate, NADPH, and H+.
Saccharopine dehydrogenase (NADP+, L-lysine-forming):
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-NH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is N6-(L-1,3-dicarboxypropyl)-L-lysine:NADP+ oxidoreductase (L-lysine-forming). Other names in common use include lysine-2-oxoglutarate reductase, lysine-ketoglutarate reductase, L-lysine-alpha-ketoglutarate reductase, lysine:alpha-ketoglutarate:TPNH oxidoreductase (epsilon-N-[glutaryl-2]-L-lysine forming), saccharopine (nicotinamide adenine dinucleotide phosphate, lysine-forming) dehydrogenase, and 6-N-(L-1,3-dicarboxypropyl)-L-lysine:NADP+ oxidoreductase (L-lysine-forming). This enzyme participates in lysine biosynthesis and lysine degradation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Exonuclease 5**
Exonuclease 5:
Exonuclease 5 is a protein that in humans is encoded by the EXO5 gene.
Function:
The protein encoded by this gene is a single-stranded DNA (ssDNA)-specific exonuclease that can slide along the DNA before cutting it. However, human replication protein A binds ssDNA and restricts sliding of the encoded protein, providing a 5'-directionality to the enzyme. This protein localizes to nuclear repair loci after DNA damage. [provided by RefSeq, Nov 2016]. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Levomethamphetamine**
Levomethamphetamine:
Levomethamphetamine is the levorotatory (L-enantiomer) form of methamphetamine. Levomethamphetamine is a sympathomimetic vasoconstrictor that is the active ingredient in some over-the-counter (OTC) nasal decongestant inhalers in the United States.
Pharmacology:
Levomethamphetamine crosses the blood–brain barrier and acts as a norepinephrine transporter inhibitor and TAAR1 agonist, functioning as a selective norepinephrine releasing agent (with limited effects on the release of dopamine); thus levomethamphetamine affects the central nervous system, although its effects are qualitatively distinct relative to those of dextromethamphetamine. It does not possess the same potential for euphoria or addiction that dextromethamphetamine possesses. Among its physiological effects is the vasoconstriction that makes it useful for nasal decongestion. The elimination half-life of levomethamphetamine is between 13.3 and 15 hours, whereas dextromethamphetamine has a half-life of about 10.5 hours.
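For intuition, an elimination half-life translates into a remaining dose fraction via simple first-order (exponential) decay. The sketch below is illustrative only: the function name and the 24-hour horizon are arbitrary choices, and real pharmacokinetics are more complex than a single-compartment model.

```python
import math

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of an initial dose still present after t_hours,
    assuming simple first-order (exponential) elimination."""
    return math.exp(-math.log(2) * t_hours / half_life_hours)

# Levomethamphetamine half-life range from the text: 13.3-15 hours.
for t_half in (13.3, 15.0):
    print(f"t1/2 = {t_half} h -> {fraction_remaining(24, t_half):.1%} left after 24 h")
```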
Pharmacology:
Selegiline Levomethamphetamine is an active metabolite of the antiparkinsonian drug selegiline. Selegiline, a selective monoamine oxidase B (MAOB) inhibitor at low doses, is metabolized into levomethamphetamine and levoamphetamine. This has caused users to test positive for amphetamines. Selegiline itself has neuroprotective and neuro-rescuing effects, but concern over the neurotoxicity of the resulting levomethamphetamine led to the development of alternative MAOB inhibitors, such as rasagiline, that do not produce toxic metabolites.
Side effects:
When the nasal decongestant is taken in excess, levomethamphetamine has potential side effects. These would be similar to those of other decongestants.
Non-medicinal usage:
As of 2006, there were no studies demonstrating "drug liking" scores of oral levomethamphetamine similar to those of racemic methamphetamine or dextromethamphetamine in either recreational or medicinal users. In recent years, tighter controls in Mexico on certain methamphetamine precursors like ephedrine and pseudoephedrine have led to a greater percentage of illicit meth from Mexican drug cartels consisting of a higher ratio of levomethamphetamine to dextromethamphetamine within batches of racemic meth.
Detection in body fluids:
Levomethamphetamine can register on urine drug screens as either methamphetamine, amphetamine, or both, depending on the subject's metabolism and dosage. L-methamphetamine metabolizes completely into L-amphetamine after a period of time. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chemical clock**
Chemical clock:
A chemical clock (or clock reaction) is a complex mixture of reacting chemical compounds in which the onset of an observable property (discoloration or coloration) occurs after a predictable induction time due to the presence of clock species at a detectable amount. In cases where one of the reagents has a visible color, crossing a concentration threshold can lead to an abrupt color change after a reproducible time lapse.
Types:
Clock reactions may be classified into three or four types:

Substrate-depletive clock reaction
The simplest clock reaction, featuring two reactions:
A → C (rate k1)
B + C → products (rate k2, fast)
When substrate (B) is present, the clock species (C) is quickly consumed in the second reaction. Only when substrate B is all used up or depleted can species C build up in amount, causing the color to change. An example of this clock reaction is the sulfite/iodate reaction or iodine clock reaction, also known as Landolt's reaction.
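The induction time of this substrate-depletive scheme is easy to see in a minimal numerical sketch. The rate constants, initial concentrations, and step size below are illustrative assumptions, not values for any real clock reaction:

```python
# Forward-Euler integration of the substrate-depletive clock scheme
# A -> C (rate k1) and B + C -> products (rate k2, fast).
k1, k2 = 0.05, 50.0          # k2 >> k1, as the scheme requires
a, b, c = 1.0, 0.2, 0.0      # illustrative initial concentrations of A, B, C
dt = 0.001
steps = 20000                # integrate up to t = 20

for n in range(steps + 1):
    if n % 1000 == 0:        # report once per time unit
        print(f"t={n*dt:5.1f}  [A]={a:.3f}  [B]={b:.4f}  [C]={c:.4f}")
    r1 = k1 * a              # A -> C
    r2 = k2 * b * c          # B + C -> products (fast)
    a, b, c = a - r1*dt, b - r2*dt, c + (r1 - r2)*dt
```

In the printed trace, [C] hovers near zero while B is still present and then climbs abruptly once B is depleted, which is exactly the induction time before the observable color change.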
Types:
Sometimes, a clock reaction involves the production of intermediate species in three consecutive reactions:
P + Q → R
R + Q → C
P + C → 2R
Given that Q is in excess, when substrate (P) is depleted, C builds up, resulting in the change in color.
Autocatalysis-driven clock reaction The basis of the reaction is similar to the substrate-depletive clock reaction, except that rate k2 is very slow, leading to the coexistence of substrate and clock species, so there is no need for the substrate to be depleted to observe the change in color. An example of this clock is the pentathionate/iodate reaction.
Pseudoclock behavior The reactions in this category behave like a clock reaction; however, they are irreproducible, unpredictable and hard to control. Examples are the chlorite/thiosulfate and iodide/chlorite reactions.
Types:
Crazy clock reaction The reaction is irreproducible in each run due to the initial inhomogeneity of the mixture, which results from variation in stirring rate, overall volume, and geometry of the reactor. Repeating the reaction a statistically meaningful number of times leads to a reproducible cumulative probability distribution curve. An example of this clock is the iodate/arsenous acid reaction. One reaction may fall into more than one classification above depending on the circumstances. For example, the iodate–arsenous acid reaction can be a substrate-depletive clock reaction, an autocatalysis-driven clock reaction and a crazy clock reaction.
Examples:
One class of example is the iodine clock reactions, in which an iodine species is mixed with redox reagents in the presence of starch. After a delay, a dark blue color suddenly appears due to the formation of a triiodide-starch complex. Additional reagents can be added to some chemical clocks to build a chemical oscillator. For example, the Briggs–Rauscher reaction is derived from an iodine clock reaction by adding perchloric acid, malonic acid and manganese sulfate. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Courant–Friedrichs–Lewy condition**
Courant–Friedrichs–Lewy condition:
In mathematics, the convergence condition by Courant–Friedrichs–Lewy is a necessary condition for convergence while solving certain partial differential equations (usually hyperbolic PDEs) numerically. It arises in the numerical analysis of explicit time integration schemes, when these are used for the numerical solution. As a consequence, the time step must be less than a certain value in many explicit time-marching computer simulations; otherwise, the simulation produces incorrect results. The condition is named after Richard Courant, Kurt Friedrichs, and Hans Lewy, who described it in their 1928 paper.
Heuristic description:
The principle behind the condition is that, for example, if a wave is moving across a discrete spatial grid and we want to compute its amplitude at discrete time steps of equal duration, then this duration must be less than the time for the wave to travel to adjacent grid points. As a corollary, when the grid point separation is reduced, the upper limit for the time step also decreases. In essence, the numerical domain of dependence of any point in space and time (as determined by initial conditions and the parameters of the approximation scheme) must include the analytical domain of dependence (wherein the initial conditions have an effect on the exact value of the solution at that point) to assure that the scheme can access the information required to form the solution.
Statement:
To make a reasonably formally precise statement of the condition, it is necessary to define the following quantities: Spatial coordinate: one of the coordinates of the physical space in which the problem is posed Spatial dimension of the problem: the number n of spatial dimensions, i.e., the number of spatial coordinates of the physical space where the problem is posed. Typical values are n=1 , n=2 and n=3 Time: the coordinate, acting as a parameter, which describes the evolution of the system, distinct from the spatial coordinatesThe spatial coordinates and the time are discrete-valued independent variables, which are placed at regular distances called the interval length and the time step, respectively. Using these names, the CFL condition relates the length of the time step to a function of the interval lengths of each spatial coordinate and of the maximum speed that information can travel in the physical space.
Statement:
Operatively, the CFL condition is commonly prescribed for those terms of the finite-difference approximation of general partial differential equations that model the advection phenomenon.
The one-dimensional case For the one-dimensional case, the continuous-time model equation (that is usually solved for $w$) is:

$$\frac{\partial w}{\partial t} = u \frac{\partial w}{\partial x}.$$
Statement:
The CFL condition then has the following form:

$$C = \frac{u\,\Delta t}{\Delta x} \le C_{\max}$$

where the dimensionless number $C$ is called the Courant number, u is the magnitude of the velocity (whose dimension is length/time), Δt is the time step (whose dimension is time), and Δx is the length interval (whose dimension is length). The value of $C_{\max}$ changes with the method used to solve the discretised equation, especially depending on whether the method is explicit or implicit. If an explicit (time-marching) solver is used then typically $C_{\max} = 1$. Implicit (matrix) solvers are usually less sensitive to numerical instability and so larger values of $C_{\max}$ may be tolerated.
Statement:
The two and general n-dimensional case In the two-dimensional case, the CFL condition becomes

$$C = \frac{u_x\,\Delta t}{\Delta x} + \frac{u_y\,\Delta t}{\Delta y} \le C_{\max}$$

with the obvious meanings of the symbols involved. By analogy with the two-dimensional case, the general CFL condition for the $n$-dimensional case is the following one:

$$C = \Delta t \sum_{i=1}^{n} \frac{u_{x_i}}{\Delta x_i} \le C_{\max}.$$
The interval length is not required to be the same for each spatial variable $\Delta x_i,\ i = 1, \dots, n$. This "degree of freedom" can be used to somewhat optimize the value of the time step for a particular problem, by varying the interval lengths to keep the time step from becoming too small. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
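As a worked example of the n-dimensional condition above, the sketch below computes the largest stable time step for given advection speeds and grid spacings. The function name and the sample velocities and spacings are illustrative assumptions, not values from the text:

```python
def max_timestep(speeds, spacings, c_max=1.0):
    """Largest time step allowed by the n-dimensional CFL condition
    C = dt * sum_i(u_i / dx_i) <= C_max; C_max defaults to 1,
    the typical value for an explicit time-marching scheme."""
    assert len(speeds) == len(spacings)
    return c_max / sum(u / dx for u, dx in zip(speeds, spacings))

# Illustrative 2-D advection problem: |u_x| = 2 m/s, |u_y| = 1 m/s
# on a grid with dx = 0.01 m and dy = 0.02 m.
dt = max_timestep(speeds=[2.0, 1.0], spacings=[0.01, 0.02])
print(f"dt <= {dt:.2e} s")   # 1 / (200 + 50) = 4.00e-03 s
```

Halving either grid spacing doubles that spacing's contribution to the sum, which is the corollary noted earlier: finer grids force smaller time steps.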
**Sony Vaio C1 series**
Sony Vaio C1 series:
The Vaio C1 PictureBook was a series of subnotebooks from Sony's Vaio lineup, branded 'PictureBook' for its webcam and video capture capabilities, a first for portable computers. PictureBooks were lightweight computers, weighing 1kg (2.2 lb). They featured 8.9" LCD displays, and were notable for being the first consumer laptop with a built-in webcam. The original model, the PCG-C1, was first released on September 19, 1998, in Japan only, with an initial production run of 5,000 units. Subsequent revisions were released through the early 2000s, with improved display resolution, CPU, RAM and hard drive.
Sony Vaio C1 series:
The C1 PictureBook series was succeeded by several other subnotebooks and UMPCs, most notably the UX series, and perhaps a direct successor to the C1's form factor, the P series.
Models:
Original revision These original models had a single built-in mono speaker by the keyboard. The Intel Pentium and Microsoft Windows stickers were affixed beside the top left corner of the screen.
PCG-C1 - Mobile Pentium MMX 233MHz CPU, 3.2GB hard drive, 64MB RAM, ultra-wide 8.9" 1024x480 TFT display, 0.27MP webcam, integrated modem, Windows 98 (September 1998). Japan market only.
PCG-C1X - Mobile Pentium MMX 266MHz CPU, 4.3GB hard drive, Windows 98 (1999). First model available in the U.S.
PCG-C1F - same as C1X, but with U.K. localization
PCG-C1R and PCG-C1S - same as C1X, but with Japan localization
2nd revision This revision featured built-in stereo speakers by the keyboard. The Intel and Microsoft Windows stickers were affixed beside the top right corner of the screen.
PCG-C1XE - 8.1GB hard drive, 64MB RAM, 266MHz Intel Pentium 2, Windows 98 (1999). Japan market only.
PCG-C1XN - 12GB hard drive, 64MB RAM, 233MHz Intel Celeron, Windows 98 (January 2000).
PCG-C1XS - 12GB hard drive, 64MB RAM, 400MHz Pentium 2, Windows 98 (January 2000).
PCG-C1XD - Same as the C1XS, but with German localization.
PCG-C1XG/BP - Same as the C1XS, but with Japan localization.
3rd revision Beginning with this revision, the C1 PictureBook shipped with Transmeta Crusoe processors.
Models:
PCG-C1VN - first machine with Transmeta Crusoe CPU - TM5600 600MHz CPU, 128MB RAM, 12GB hard drive, ATI Rage Mobility 8MB, Windows ME (September 2000)
PCG-C1VE - same as C1VN (September 2000)
PCG-C1VP - same as C1VN, except Crusoe TM5600 667MHz CPU, 15GB hard drive, Windows 2000 Professional option (March 2001)
PCG-C1VFK - same as C1VP, first model with integrated Bluetooth 1.0 adapter, Windows 2000 Pro (March 2001)
PCG-C1VSX - same as C1VP, except choice of 15 or 30GB hard drive, no external monitor support, Bluetooth 1.0 and Windows 2000 Pro only
PCG-C1VS/BW - same as C1VSX, except Crusoe 600MHz, 15GB hard drive, included PC Card CD-RW drive, support for external monitors, no Bluetooth and Windows ME only with Office XP preinstalled
4th revision Beginning with this revision, the C1 PictureBook shipped with improved displays, bumping the resolution to 1280x600.
Models:
PCG-C1MV - Crusoe TM5800 733MHz CPU, 256MB RAM, 20GB hard drive, ATI Mobility Radeon M 8MB, updated 1280x600 resolution, Windows XP Home or Professional (September 2001)
PCG-C1MW - same as C1MV, but with Crusoe 867MHz and 30GB hard drive (August 2002)
PCG-C1MGP - same as C1MV, but with 30GB hard drive, built-in Bluetooth 1.1, Windows XP Pro
PCG-C1MRX - same as C1MV, but with 30GB hard drive, built-in Bluetooth 1.1, bundled 802.11b Wi-Fi PC card and XP Home only
PCG-C1MR/BP - same as C1MRX, but with Crusoe TM5600 667MHz CPU, 128MB memory, 20GB hard drive, and forgoing the built-in Bluetooth and bundled Wi-Fi card.
Models:
PCG-C1MSX - Same as C1MW, but with Japanese localization.
PCG-C1MEL - Same as C1MW, but with Korean localization.
PCG-C1MAH - Same as C1MW, but with Hong Kong localization.
PCG-C1MHP - Same as C1MW, but with European localization.
PCG-C1MZX - Same as C1MSX, but with Crusoe 933MHz. This was the last and most powerful model of the C1 series, and it was available only in Japan. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Variable star**
Variable star:
A variable star is a star whose brightness as seen from Earth (its apparent magnitude) changes with time. This variation may be caused by a change in emitted light or by something partly blocking the light, so variable stars are classified as either: Intrinsic variables, whose luminosity actually changes; for example, because the star periodically swells and shrinks.
Variable star:
Extrinsic variables, whose apparent changes in brightness are due to changes in the amount of their light that can reach Earth; for example, because the star has an orbiting companion that sometimes eclipses it. Many, possibly most, stars have at least some variation in luminosity: the energy output of the Sun, for example, varies by about 0.1% over an 11-year solar cycle.
Discovery:
An ancient Egyptian calendar of lucky and unlucky days composed some 3,200 years ago may be the oldest preserved historical document of the discovery of a variable star, the eclipsing binary Algol. Aboriginal Australians are also known to have observed the variability of Betelgeuse and Antares, incorporating these brightness changes into narratives that are passed down through oral tradition. In modern astronomy, the first variable star was identified in 1638 when Johannes Holwarda noticed that Omicron Ceti (later named Mira) pulsated in a cycle taking 11 months; the star had previously been described as a nova by David Fabricius in 1596. This discovery, combined with supernovae observed in 1572 and 1604, proved that the starry sky was not eternally invariable as Aristotle and other ancient philosophers had taught. In this way, the discovery of variable stars contributed to the astronomical revolution of the sixteenth and early seventeenth centuries.
Discovery:
The second variable star to be described was the eclipsing variable Algol, by Geminiano Montanari in 1669; John Goodricke gave the correct explanation of its variability in 1784. Chi Cygni was identified in 1686 by G. Kirch, then R Hydrae in 1704 by G. D. Maraldi. By 1786, ten variable stars were known. John Goodricke himself discovered Delta Cephei and Beta Lyrae. Since 1850, the number of known variable stars has increased rapidly, especially after 1890 when it became possible to identify variable stars by means of photography.
Discovery:
The latest edition of the General Catalogue of Variable Stars (2008) lists more than 46,000 variable stars in the Milky Way, as well as 10,000 in other galaxies, and over 10,000 'suspected' variables.
Detecting variability:
The most common kinds of variability involve changes in brightness, but other types of variability also occur, in particular changes in the spectrum. By combining light curve data with observed spectral changes, astronomers are often able to explain why a particular star is variable.
Detecting variability:
Variable star observations Variable stars are generally analysed using photometry, spectrophotometry and spectroscopy. Measurements of their changes in brightness can be plotted to produce light curves. For regular variables, the period of variation and its amplitude can be very well established; for many variable stars, though, these quantities may vary slowly over time, or even from one period to the next. Peak brightnesses in the light curve are known as maxima, while troughs are known as minima.
Detecting variability:
Amateur astronomers can do useful scientific study of variable stars by visually comparing the star with other stars within the same telescopic field of view of which the magnitudes are known and constant. By estimating the variable's magnitude and noting the time of observation a visual lightcurve can be constructed. The American Association of Variable Star Observers collects such observations from participants around the world and shares the data with the scientific community.
Detecting variability:
From the light curve the following data are derived:
are the brightness variations periodical, semiperiodical, irregular, or unique?
what is the period of the brightness fluctuations?
what is the shape of the light curve (symmetrical or not, angular or smoothly varying, does each cycle have only one or more than one minima, etcetera)?
From the spectrum the following data are derived:
what kind of star is it: what is its temperature, its luminosity class (dwarf star, giant star, supergiant, etc.)?
is it a single star, or a binary? (the combined spectrum of a binary star may show elements from the spectra of each of the member stars)
does the spectrum change with time? (for example, the star may turn hotter and cooler periodically)
changes in brightness may depend strongly on the part of the spectrum that is observed (for example, large variations in visible light but hardly any changes in the infrared)
if the wavelengths of spectral lines are shifted this points to movements (for example, a periodical swelling and shrinking of the star, or its rotation, or an expanding gas shell) (Doppler effect)
strong magnetic fields on the star betray themselves in the spectrum
abnormal emission or absorption lines may be indication of a hot stellar atmosphere, or gas clouds surrounding the star.
In very few cases it is possible to make pictures of a stellar disk. These may show darker spots on its surface.
Detecting variability:
Interpretation of observations Combining light curves with spectral data often gives a clue as to the changes that occur in a variable star. For example, evidence for a pulsating star is found in its shifting spectrum because its surface periodically moves toward and away from us, with the same frequency as its changing brightness. About two-thirds of all variable stars appear to be pulsating. In the 1930s astronomer Arthur Stanley Eddington showed that the mathematical equations that describe the interior of a star may lead to instabilities that cause a star to pulsate. The most common type of instability is related to oscillations in the degree of ionization in outer, convective layers of the star. When the star is in the swelling phase, its outer layers expand, causing them to cool. Because of the decreasing temperature the degree of ionization also decreases. This makes the gas more transparent, and thus makes it easier for the star to radiate its energy. This in turn makes the star start to contract. As the gas is thereby compressed, it is heated and the degree of ionization again increases. This makes the gas more opaque, and radiation temporarily becomes captured in the gas. This heats the gas further, leading it to expand once again. Thus a cycle of expansion and compression (swelling and shrinking) is maintained. The pulsation of cepheids is known to be driven by oscillations in the ionization of helium (from He++ to He+ and back to He++).
Nomenclature:
In a given constellation, the first variable stars discovered were designated with letters R through Z, e.g. R Andromedae. This system of nomenclature was developed by Friedrich W. Argelander, who gave the first previously unnamed variable in a constellation the letter R, the first letter not used by Bayer. Letters RR through RZ, SS through SZ, up to ZZ are used for the next discoveries, e.g. RR Lyrae. Later discoveries used letters AA through AZ, BB through BZ, and up to QQ through QZ (with J omitted). Once those 334 combinations are exhausted, variables are numbered in order of discovery, starting with the prefixed V335 onwards.
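The scheme above is mechanical enough to generate by hand or by code. The sketch below enumerates designations in Argelander order; the function name is an illustrative choice, not established terminology:

```python
from string import ascii_uppercase

def variable_star_designations():
    """Yield variable-star labels in Argelander order: R..Z, then
    RR..RZ, SS..SZ, ... ZZ, then AA..AZ, BB..BZ, ... QZ (J omitted),
    then V335, V336, ... after the 334 letter codes run out."""
    letters = [c for c in ascii_uppercase if c != "J"]
    # Single letters R through Z
    for c in "RSTUVWXYZ":
        yield c
    # Double letters RR..ZZ (second letter never precedes the first)
    for i, first in enumerate("RSTUVWXYZ"):
        for second in "RSTUVWXYZ"[i:]:
            yield first + second
    # Double letters AA..QZ, with J omitted throughout
    for i, first in enumerate(letters[:letters.index("Q") + 1]):
        for second in letters[i:]:
            yield first + second
    # Numbered designations from V335 onwards
    n = 335
    while True:
        yield f"V{n}"
        n += 1

gen = variable_star_designations()
labels = [next(gen) for _ in range(340)]
print(labels[:12])   # ['R', 'S', ..., 'Z', 'RR', 'RS', 'RT']
print(labels[333])   # 'QZ'   (the 334th and last letter code)
print(labels[334])   # 'V335' (numbering starts where letters end)
```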
Classification:
Variable stars may be either intrinsic or extrinsic.
Intrinsic variable stars: stars where the variability is being caused by changes in the physical properties of the stars themselves. This category can be divided into three subgroups.
Pulsating variables, stars whose radius alternately expands and contracts as part of their natural evolutionary ageing processes.
Eruptive variables, stars who experience eruptions on their surfaces like flares or mass ejections.
Cataclysmic or explosive variables, stars that undergo a cataclysmic change in their properties like novae and supernovae.
Extrinsic variable stars: stars where the variability is caused by external properties like rotation or eclipses. There are two main subgroups.
Eclipsing binaries, double stars where, as seen from Earth's vantage point the stars occasionally eclipse one another as they orbit.
Classification:
Rotating variables, stars whose variability is caused by phenomena related to their rotation. Examples are stars with extreme "sunspots" which affect the apparent brightness or stars that have fast rotation speeds causing them to become ellipsoidal in shape.These subgroups themselves are further divided into specific types of variable stars that are usually named after their prototype. For example, dwarf novae are designated U Geminorum stars after the first recognized star in the class, U Geminorum.
Intrinsic variable stars:
Examples of types within these divisions are given below.
Pulsating variable stars Pulsating stars swell and shrink, affecting their brightness and spectrum. Pulsations are generally split into: radial, where the entire star expands and shrinks as a whole; and non-radial, where one part of the star expands while another part shrinks.
Intrinsic variable stars:
Depending on the type of pulsation and its location within the star, there is a natural or fundamental frequency which determines the period of the star. Stars may also pulsate in a harmonic or overtone which is a higher frequency, corresponding to a shorter period. Pulsating variable stars sometimes have a single well-defined period, but often they pulsate simultaneously with multiple frequencies and complex analysis is required to determine the separate interfering periods. In some cases, the pulsations do not have a defined frequency, causing a random variation, referred to as stochastic. The study of stellar interiors using their pulsations is known as asteroseismology.
Intrinsic variable stars:
The expansion phase of a pulsation is caused by the blocking of the internal energy flow by material with a high opacity, but this must occur at a particular depth of the star to create visible pulsations. If the expansion occurs below a convective zone then no variation will be visible at the surface. If the expansion occurs too close to the surface the restoring force will be too weak to create a pulsation. The restoring force to create the contraction phase of a pulsation can be pressure if the pulsation occurs in a non-degenerate layer deep inside a star, and this is called an acoustic or pressure mode of pulsation, abbreviated to p-mode. In other cases, the restoring force is gravity and this is called a g-mode. Pulsating variable stars typically pulsate in only one of these modes.
Intrinsic variable stars:
Cepheids and cepheid-like variables This group consists of several kinds of pulsating stars, all found on the instability strip, that swell and shrink very regularly caused by the star's own mass resonance, generally by the fundamental frequency. Generally the Eddington valve mechanism for pulsating variables is believed to account for cepheid-like pulsations. Each of the subgroups on the instability strip has a fixed relationship between period and absolute magnitude, as well as a relation between period and mean density of the star. The period-luminosity relationship was first established for Delta Cepheids by Henrietta Leavitt, and makes these high luminosity Cepheids very useful for determining distances to galaxies within the Local Group and beyond. Edwin Hubble used this method to prove that the so-called spiral nebulae are in fact distant galaxies.
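As a rough illustration of how a period-luminosity relationship yields distances, the sketch below combines a linear relation in log-period with the standard distance modulus. The relation's coefficients are assumed, order-of-magnitude values for classical Cepheids, not numbers given in this text, and interstellar extinction is ignored:

```python
import math

def cepheid_distance_pc(period_days: float, apparent_mag: float) -> float:
    """Rough distance to a classical Cepheid from its pulsation period
    and mean apparent magnitude, ignoring extinction.

    Uses an illustrative period-luminosity relation of the form
        M = a * (log10(P) - 1) + b
    with assumed coefficients a = -2.43, b = -4.05 (values of this
    order appear in the literature, but they are an assumption here).
    """
    a, b = -2.43, -4.05
    absolute_mag = a * (math.log10(period_days) - 1.0) + b
    # Distance modulus: m - M = 5 * log10(d / 10 pc)
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# A 10-day Cepheid observed at mean apparent magnitude 12:
print(f"{cepheid_distance_pc(10.0, 12.0):.0f} pc")  # ~16,000 pc
```

The period fixes the absolute magnitude, and comparing that with the apparent magnitude fixes the distance; this is the essence of Leavitt's method as used by Hubble.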
Intrinsic variable stars:
Note that the Cepheids are named only for Delta Cephei, while a completely separate class of variables is named after Beta Cephei.
Intrinsic variable stars:
Classical Cepheid variables Classical Cepheids (or Delta Cephei variables) are population I (young, massive, and luminous) yellow supergiants which undergo pulsations with very regular periods on the order of days to months. On September 10, 1784, Edward Pigott detected the variability of Eta Aquilae, the first known representative of the class of Cepheid variables. However, the namesake for classical Cepheids is the star Delta Cephei, discovered to be variable by John Goodricke a few months later.
Intrinsic variable stars:
Type II Cepheids Type II Cepheids (historically termed W Virginis stars) have extremely regular light pulsations and a luminosity relation much like the δ Cephei variables, so initially they were confused with the latter category. Type II Cepheids belong to the older Population II, unlike the type I Cepheids. The Type II have somewhat lower metallicity, much lower mass, somewhat lower luminosity, and a slightly offset period versus luminosity relationship, so it is always important to know which type of star is being observed.
Intrinsic variable stars:
RR Lyrae variables These stars are somewhat similar to Cepheids, but are not as luminous and have shorter periods. They are older than type I Cepheids, belonging to Population II, but of lower mass than type II Cepheids. Due to their common occurrence in globular clusters, they are occasionally referred to as cluster Cepheids. They also have a well established period-luminosity relationship, and so are also useful as distance indicators. These A-type stars vary by about 0.2–2 magnitudes (20% to over 500% change in luminosity) over a period of several hours to a day or more.
Intrinsic variable stars:
Delta Scuti variables Delta Scuti (δ Sct) variables are similar to Cepheids but much fainter and with much shorter periods. They were once known as Dwarf Cepheids. They often show many superimposed periods, which combine to form an extremely complex light curve. The typical δ Scuti star has an amplitude of 0.003–0.9 magnitudes (0.3% to about 130% change in luminosity) and a period of 0.01–0.2 days. Their spectral type is usually between A0 and F5.
Intrinsic variable stars:
SX Phoenicis variables These stars of spectral type A2 to F5, similar to δ Scuti variables, are found mainly in globular clusters. They exhibit fluctuations in their brightness in the order of 0.7 magnitude (about 100% change in luminosity) or so every 1 to 2 hours.
Rapidly oscillating Ap variables These are stars of spectral type A, or occasionally F0, a sub-class of δ Scuti variables found on the main sequence. They have extremely rapid variations with periods of a few minutes and amplitudes of a few thousandths of a magnitude.
Long period variables The long period variables are cool evolved stars that pulsate with periods in the range of weeks to several years.
Intrinsic variable stars:
Mira variables Mira variables are Asymptotic giant branch (AGB) red giants. Over periods of many months they fade and brighten by between 2.5 and 11 magnitudes, a 10 fold to roughly 25,000 fold change in luminosity. Mira itself, also known as Omicron Ceti (ο Cet), varies in brightness from almost 2nd magnitude to as faint as 10th magnitude with a period of roughly 332 days. The very large visual amplitudes are mainly due to the shifting of energy output between visual and infra-red as the temperature of the star changes. In a few cases, Mira variables show dramatic period changes over a period of decades, thought to be related to the thermal pulsing cycle of the most advanced AGB stars.
Intrinsic variable stars:
Semiregular variables These are red giants or supergiants. Semiregular variables may show a definite period on occasion, but more often show less well-defined variations that can sometimes be resolved into multiple periods. A well-known example of a semiregular variable is Betelgeuse, which varies from about magnitudes +0.2 to +1.2 (a factor 2.5 change in luminosity). At least some of the semi-regular variables are very closely related to Mira variables, possibly the only difference being pulsating in a different harmonic.
Intrinsic variable stars:
Slow irregular variables These are red giants or supergiants with little or no detectable periodicity. Some are poorly studied semiregular variables, often with multiple periods, but others may simply be chaotic.
Intrinsic variable stars:
Long secondary period variables Many variable red giants and supergiants show variations over several hundred to several thousand days. The brightness may change by several magnitudes, although it is often much smaller, with the more rapid primary variations superimposed. The reasons for this type of variation are not clearly understood, being variously ascribed to pulsations, binarity, and stellar rotation.
Intrinsic variable stars:
Beta Cephei variables Beta Cephei (β Cep) variables (sometimes called Beta Canis Majoris variables, especially in Europe) undergo short period pulsations in the order of 0.1–0.6 days with an amplitude of 0.01–0.3 magnitudes (1% to 30% change in luminosity). They are at their brightest during minimum contraction. Many stars of this kind exhibit multiple pulsation periods.
Slowly pulsating B-type stars Slowly pulsating B (SPB) stars are hot main-sequence stars slightly less luminous than the Beta Cephei stars, with longer periods and larger amplitudes.
Very rapidly pulsating hot (subdwarf B) stars The prototype of this rare class is V361 Hydrae, a 15th magnitude subdwarf B star. They pulsate with periods of a few minutes and may simultaneously pulsate with multiple periods. They have amplitudes of a few hundredths of a magnitude and are given the GCVS acronym RPHS. They are p-mode pulsators.
PV Telescopii variables Stars in this class are type Bp supergiants with a period of 0.1–1 day and an amplitude of 0.1 magnitude on average. Their spectra are peculiar, with weak hydrogen lines while, on the other hand, carbon and helium lines are extra strong; they are a type of extreme helium star.
Intrinsic variable stars:
RV Tauri variables These are yellow supergiant stars (actually low mass post-AGB stars at the most luminous stage of their lives) which have alternating deep and shallow minima. This double-peaked variation typically has periods of 30–100 days and amplitudes of 3–4 magnitudes. Superimposed on this variation, there may be long-term variations over periods of several years. Their spectra are of type F or G at maximum light and type K or M at minimum brightness. They lie near the instability strip, cooler than type I Cepheids and more luminous than type II Cepheids. Their pulsations are caused by the same basic mechanisms related to helium opacity, but they are at a very different stage of their lives.
Intrinsic variable stars:
Alpha Cygni variables Alpha Cygni (α Cyg) variables are nonradially pulsating supergiants of spectral classes Bep to AepIa. Their periods range from several days to several weeks, and their amplitudes of variation are typically of the order of 0.1 magnitudes. The light changes, which often seem irregular, are caused by the superposition of many oscillations with close periods. Deneb, in the constellation of Cygnus is the prototype of this class.
Intrinsic variable stars:
Gamma Doradus variables Gamma Doradus (γ Dor) variables are non-radially pulsating main-sequence stars of spectral classes F to late A. Their periods are around one day and their amplitudes typically of the order of 0.1 magnitudes.
Intrinsic variable stars:
Pulsating white dwarfs These non-radially pulsating stars have short periods of hundreds to thousands of seconds with tiny fluctuations of 0.001 to 0.2 magnitudes. Known types of pulsating white dwarf (or pre-white dwarf) include the DAV, or ZZ Ceti, stars, with hydrogen-dominated atmospheres and the spectral type DA; DBV, or V777 Her, stars, with helium-dominated atmospheres and the spectral type DB; and GW Vir stars, with atmospheres dominated by helium, carbon, and oxygen. GW Vir stars may be subdivided into DOV and PNNV stars.
Intrinsic variable stars:
Solar-like oscillations The Sun oscillates with very low amplitude in a large number of modes having periods around 5 minutes. The study of these oscillations is known as helioseismology. Oscillations in the Sun are driven stochastically by convection in its outer layers. The term solar-like oscillations is used to describe oscillations in other stars that are excited in the same way and the study of these oscillations is one of the main areas of active research in the field of asteroseismology.
Intrinsic variable stars:
BLAP variables A Blue Large-Amplitude Pulsator (BLAP) is a pulsating star characterized by changes of 0.2 to 0.4 magnitudes with typical periods of 20 to 40 minutes.
Fast yellow pulsating supergiants A fast yellow pulsating supergiant (FYPS) is a luminous yellow supergiant with pulsations shorter than a day. They are thought to have evolved beyond a red supergiant phase, but the mechanism for the pulsations is unknown. The class was named in 2020 through analysis of TESS observations.
Eruptive variable stars Eruptive variable stars show irregular or semi-regular brightness variations caused by material being lost from the star, or in some cases being accreted to it. Despite the name, these are not explosive events.
Protostars Protostars are young objects that have not yet completed the process of contraction from a gas nebula to a veritable star. Most protostars exhibit irregular brightness variations.
Herbig Ae/Be stars Variability of more massive (2–8 solar mass) Herbig Ae/Be stars is thought to be due to gas-dust clumps, orbiting in the circumstellar disks.
Orion variables Orion variables are young, hot pre–main-sequence stars usually embedded in nebulosity. They have irregular periods with amplitudes of several magnitudes. A well-known subtype of Orion variables are the T Tauri variables. Variability of T Tauri stars is due to spots on the stellar surface and gas-dust clumps, orbiting in the circumstellar disks.
Intrinsic variable stars:
FU Orionis variables These stars reside in reflection nebulae and show gradual increases in their luminosity in the order of 6 magnitudes followed by a lengthy phase of constant brightness. They then dim by 2 magnitudes (six times dimmer) or so over a period of many years. V1057 Cygni for example dimmed by 2.5 magnitude (ten times dimmer) during an eleven-year period. FU Orionis variables are of spectral type A through G and are possibly an evolutionary phase in the life of T Tauri stars.
Intrinsic variable stars:
Giants and supergiants Large stars lose their matter relatively easily. For this reason variability due to eruptions and mass loss is fairly common among giants and supergiants.
Intrinsic variable stars:
Luminous blue variables Also known as the S Doradus variables, the most luminous stars known belong to this class. Examples include the hypergiants η Carinae and P Cygni. They have permanent high mass loss, but at intervals of years internal pulsations cause the star to exceed its Eddington limit and the mass loss increases hugely. Visual brightness increases although the overall luminosity is largely unchanged. Giant eruptions observed in a few LBVs do increase the luminosity, so much so that they have been tagged supernova impostors, and may be a different type of event.
Intrinsic variable stars:
Yellow hypergiants These massive evolved stars are unstable due to their high luminosity and position above the instability strip, and they exhibit slow but sometimes large photometric and spectroscopic changes due to high mass loss and occasional larger eruptions, combined with secular variation on an observable timescale. The best known example is Rho Cassiopeiae.
Intrinsic variable stars:
R Coronae Borealis variables While classed as eruptive variables, these stars do not undergo periodic increases in brightness. Instead they spend most of their time at maximum brightness, but at irregular intervals they suddenly fade by 1–9 magnitudes (2.5 to 4000 times dimmer) before recovering to their initial brightness over months to years. Most are classified as yellow supergiants by luminosity, although they are actually post-AGB stars, but there are both red and blue giant R CrB stars. R Coronae Borealis (R CrB) is the prototype star. DY Persei variables are a subclass of R CrB variables that have a periodic variability in addition to their eruptions.
Intrinsic variable stars:
Wolf–Rayet variables Classic population I Wolf–Rayet stars are massive hot stars that sometimes show variability, probably due to several different causes including binary interactions and rotating gas clumps around the star. They exhibit broad emission line spectra with helium, nitrogen, carbon and oxygen lines. Variations in some stars appear to be stochastic while others show multiple periods.
Gamma Cassiopeiae variables Gamma Cassiopeiae (γ Cas) variables are non-supergiant fast-rotating B class emission line-type stars that fluctuate irregularly by up to 1.5 magnitudes (4 fold change in luminosity) due to the ejection of matter at their equatorial regions caused by the rapid rotational velocity.
Intrinsic variable stars:
Flare stars In main-sequence stars major eruptive variability is exceptional. It is common only among the flare stars, also known as the UV Ceti variables, very faint main-sequence stars which undergo regular flares. They increase in brightness by up to two magnitudes (six times brighter) in just a few seconds, and then fade back to normal brightness in half an hour or less. Several nearby red dwarfs are flare stars, including Proxima Centauri and Wolf 359.
Intrinsic variable stars:
RS Canum Venaticorum variables These are close binary systems with highly active chromospheres, including huge sunspots and flares, believed to be enhanced by the close companion. Variability ranges in timescale from days, close to the orbital period and sometimes also with eclipses, to years as sunspot activity varies.
Intrinsic variable stars:
Cataclysmic or explosive variable stars Supernovae Supernovae are the most dramatic type of cataclysmic variable, being some of the most energetic events in the universe. A supernova can briefly emit as much energy as an entire galaxy, brightening by more than 20 magnitudes (over one hundred million times brighter). The supernova explosion is caused by a white dwarf or a star core reaching a certain mass/density limit, the Chandrasekhar limit, causing the object to collapse in a fraction of a second. This collapse "bounces" and causes the star to explode and emit this enormous energy quantity. The outer layers of these stars are blown away at speeds of many thousands of kilometers per second. The expelled matter may form nebulae called supernova remnants. A well-known example of such a nebula is the Crab Nebula, left over from a supernova that was observed in China and elsewhere in 1054. The progenitor object may either disintegrate completely in the explosion, or, in the case of a massive star, the core can become a neutron star (generally a pulsar) or a black hole.
Intrinsic variable stars:
Supernovae can result from the death of an extremely massive star, many times heavier than the Sun. At the end of the life of this massive star, a non-fusible iron core is formed from fusion ashes. This iron core is pushed towards the Chandrasekhar limit till it surpasses it and therefore collapses. One of the most studied supernovae of this type is SN 1987A in the Large Magellanic Cloud.
Intrinsic variable stars:
A supernova may also result from mass transfer onto a white dwarf from a star companion in a double star system. The Chandrasekhar limit is surpassed from the infalling matter. The absolute luminosity of this latter type is related to properties of its light curve, so that these supernovae can be used to establish the distance to other galaxies.
Luminous red nova Luminous red novae are stellar explosions caused by the merger of two stars. They are not related to classical novae. They have a characteristic red appearance and very slow decline following the initial outburst.
Intrinsic variable stars:
Novae Novae are also the result of dramatic explosions, but unlike supernovae do not result in the destruction of the progenitor star. Also unlike supernovae, novae ignite from the sudden onset of thermonuclear fusion, which under certain high pressure conditions (degenerate matter) accelerates explosively. They form in close binary systems, one component being a white dwarf accreting matter from the other ordinary star component, and may recur over periods of decades to centuries or millennia. Novae are categorised as fast, slow or very slow, depending on the behaviour of their light curve. Several naked eye novae have been recorded, Nova Cygni 1975 being the brightest in the recent history, reaching 2nd magnitude.
Intrinsic variable stars:
Dwarf novae Dwarf novae are double stars involving a white dwarf in which matter transfer between the component gives rise to regular outbursts. There are three types of dwarf nova: U Geminorum stars, which have outbursts lasting roughly 5–20 days followed by quiet periods of typically a few hundred days. During an outburst they brighten typically by 2–6 magnitudes. These stars are also known as SS Cygni variables after the variable in Cygnus which produces among the brightest and most frequent displays of this variable type.
Intrinsic variable stars:
Z Camelopardalis stars, in which occasional plateaux of brightness called standstills are seen, part way between maximum and minimum brightness.
SU Ursae Majoris stars, which undergo both frequent small outbursts, and rarer but larger superoutbursts. These binary systems usually have orbital periods of under 2.5 hours.
Intrinsic variable stars:
DQ Herculis variables DQ Herculis systems are interacting binaries in which a low-mass star transfers mass to a highly magnetic white dwarf. The white dwarf spin period is significantly shorter than the binary orbital period and can sometimes be detected as a photometric periodicity. An accretion disk usually forms around the white dwarf, but its innermost regions are magnetically truncated by the white dwarf. Once captured by the white dwarf's magnetic field, the material from the inner disk travels along the magnetic field lines until it accretes. In extreme cases, the white dwarf's magnetism prevents the formation of an accretion disk.
Intrinsic variable stars:
AM Herculis variables In these cataclysmic variables, the white dwarf's magnetic field is so strong that it synchronizes the white dwarf's spin period with the binary orbital period. Instead of forming an accretion disk, the accretion flow is channeled along the white dwarf's magnetic field lines until it impacts the white dwarf near a magnetic pole. Cyclotron radiation beamed from the accretion region can cause orbital variations of several magnitudes.
Intrinsic variable stars:
Z Andromedae variables These symbiotic binary systems are composed of a red giant and a hot blue star enveloped in a cloud of gas and dust. They undergo nova-like outbursts with amplitudes of up to 4 magnitudes. The prototype for this class is Z Andromedae.
AM CVn variables AM CVn variables are symbiotic binaries where a white dwarf is accreting helium-rich material from either another white dwarf, a helium star, or an evolved main-sequence star. They undergo complex variations, or at times no variations, with ultrashort periods.
Extrinsic variable stars:
There are two main groups of extrinsic variables: rotating stars and eclipsing stars.
Rotating variable stars Stars with sizeable sunspots may show significant variations in brightness as they rotate, and brighter areas of the surface are brought into view. Bright spots also occur at the magnetic poles of magnetic stars. Stars with ellipsoidal shapes may also show changes in brightness as they present varying areas of their surfaces to the observer.
Non-spherical stars Ellipsoidal variables These are very close binaries, the components of which are non-spherical due to their tidal interaction. As the stars rotate the area of their surface presented towards the observer changes and this in turn affects their brightness as seen from Earth.
Stellar spots The surface of the star is not uniformly bright, but has darker and brighter areas (like the sun's solar spots). The star's chromosphere too may vary in brightness. As the star rotates we observe brightness variations of a few tenths of magnitudes.
Extrinsic variable stars:
FK Comae Berenices variables These stars rotate extremely rapidly (~100 km/s at the equator); hence they are ellipsoidal in shape. They are (apparently) single giant stars with spectral types G and K and show strong chromospheric emission lines. Examples are FK Com, V1794 Cygni and UZ Librae. A possible explanation for the rapid rotation of FK Comae stars is that they are the result of the merger of a (contact) binary.
Extrinsic variable stars:
BY Draconis variable stars BY Draconis stars are of spectral class K or M and vary by less than 0.5 magnitudes (70% change in luminosity).
Magnetic fields Alpha-2 Canum Venaticorum variables Alpha-2 Canum Venaticorum (α2 CVn) variables are main-sequence stars of spectral class B8–A7 that show fluctuations of 0.01 to 0.1 magnitudes (1% to 10%) due to changes in their magnetic fields.
SX Arietis variables Stars in this class exhibit brightness fluctuations of some 0.1 magnitude caused by changes in their magnetic fields due to high rotation speeds.
Optically variable pulsars Few pulsars have been detected in visible light. These neutron stars change in brightness as they rotate. Because of the rapid rotation, brightness variations are extremely fast, from milliseconds to a few seconds. The first and the best known example is the Crab Pulsar.
Extrinsic variable stars:
Eclipsing binaries Extrinsic variables have variations in their brightness, as seen by terrestrial observers, due to some external source. One of the most common reasons for this is the presence of a binary companion star, so that the two together form a binary star. When seen from certain angles, one star may eclipse the other, causing a reduction in brightness. One of the most famous eclipsing binaries is Algol, or Beta Persei (β Per).
Extrinsic variable stars:
Algol variables Algol variables undergo eclipses with one or two minima separated by periods of nearly constant light. The prototype of this class is Algol in the constellation Perseus.
Double Periodic variables Double periodic variables exhibit cyclical mass exchange which causes the orbital period to vary predictably over a very long period. The best known example is V393 Scorpii.
Beta Lyrae variables Beta Lyrae (β Lyr) variables are extremely close binaries, named after the star Sheliak. The light curves of this class of eclipsing variables are constantly changing, making it almost impossible to determine the exact onset and end of each eclipse.
W Serpentis variables W Serpentis is the prototype of a class of semi-detached binaries in which a giant or supergiant transfers material to a more massive, more compact star. They are characterised, and distinguished from the similar β Lyr systems, by strong UV emission from accretion hotspots on a disc of material.
W Ursae Majoris variables The stars in this group show periods of less than a day. The stars are so closely situated to each other that their surfaces are almost in contact with each other.
Planetary transits Stars with planets may also show brightness variations if their planets pass between Earth and the star. These variations are much smaller than those seen with stellar companions and are only detectable with extremely accurate observations. Examples include HD 209458 and GSC 02652-01324, and all of the planets and planet candidates detected by the Kepler Mission. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
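The scale of these transit variations can be illustrated with the approximate transit-depth relation: the fractional dip in brightness is roughly the squared ratio of the planet's radius to the star's radius. The radii below are rounded reference values, and the comparison is a sketch, not a model of any particular system:

```python
# Fractional dip in brightness during a transit is approximately the
# ratio of the projected disk areas: depth ~ (R_planet / R_star)^2.
R_SUN_KM = 695_700.0
R_JUPITER_KM = 69_911.0
R_EARTH_KM = 6_371.0

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    return (r_planet_km / r_star_km) ** 2

# Illustrative cases for a Sun-like star:
print(f"Jupiter-size planet: {transit_depth(R_JUPITER_KM, R_SUN_KM):.3%}")  # ~1%
print(f"Earth-size planet:   {transit_depth(R_EARTH_KM, R_SUN_KM):.4%}")   # ~0.008%
```

A roughly 1% dip for a Jupiter-size planet, and far less for an Earth-size one, shows why these signals are much smaller than the eclipses of stellar companions and demand very accurate photometry.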
**(2-Hydroxyethyl) dimethylsulfoxonium chloride**
(2-Hydroxyethyl) dimethylsulfoxonium chloride:
(2-Hydroxyethyl) dimethylsulfoxonium chloride is an organic salt found in sea chervils and sponges that causes the Dogger Bank itch.
Properties:
(2-Hydroxyethyl) dimethylsulfoxonium chloride is colourless. It dissolves in dioxane, methanol, chloroform or water. (2-Hydroxyethyl) dimethylsulfoxonium chloride is a sulfoxonium compound, with a sulfur atom bearing a positive charge. Attached to the sulfur are two methyl groups, an oxygen atom, and an ethyl group with a hydroxyl group attached at the number 2 carbon. As a solid, its crystal structure is orthorhombic, with unit cell dimensions a = 11.033, b = 13.847 and c = 9.871 Å. The space group is Pbca. There are eight formula units per unit cell. The density is 1.251 g/cm3.
Natural occurrence:
(2-Hydroxyethyl) dimethylsulfoxonium chloride has been discovered so far in Alcyonidium gelatinosum and a sea sponge. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bretagnolle–Huber inequality**
Bretagnolle–Huber inequality:
In information theory, the Bretagnolle–Huber inequality bounds the total variation distance between two probability distributions P and Q by a concave and bounded function of the Kullback–Leibler divergence DKL(P∥Q). The bound can be viewed as an alternative to the well-known Pinsker's inequality: when DKL(P∥Q) is large (larger than 2, for instance), Pinsker's inequality is vacuous, while Bretagnolle–Huber remains bounded and hence non-vacuous. It is used in statistics and machine learning to prove information-theoretic lower bounds relying on hypothesis testing.
Formal statement:
Preliminary definitions Let P and Q be two probability distributions on a measurable space $(\mathcal{X}, \mathcal{F})$. Recall that the total variation between P and Q is defined by

$$d_{TV}(P,Q) = \sup_{A \in \mathcal{F}} \{|P(A) - Q(A)|\}.$$
The Kullback–Leibler divergence is defined as follows:

$$D_{KL}(P \parallel Q) = \begin{cases} \int_{\mathcal{X}} \log\left(\frac{dP}{dQ}\right) dP & \text{if } P \ll Q, \\ +\infty & \text{otherwise}. \end{cases}$$
In the above, the notation $P \ll Q$ stands for absolute continuity of P with respect to Q, and $\frac{dP}{dQ}$ stands for the Radon–Nikodym derivative of P with respect to Q.

General statement The Bretagnolle–Huber inequality says:

$$d_{TV}(P,Q) \le \sqrt{1 - \exp(-D_{KL}(P \parallel Q))} \le 1 - \frac{1}{2}\exp(-D_{KL}(P \parallel Q)).$$
Alternative version:
The following version is directly implied by the bound above but some authors prefer stating it this way. Let $A \in \mathcal{F}$ be any event. Then

$$P(A) + Q(\bar{A}) \ge \frac{1}{2}\exp(-D_{KL}(P \parallel Q)),$$

where $\bar{A} = \Omega \setminus A$ is the complement of A. Indeed, by definition of the total variation, for any $A \in \mathcal{F}$,

$$P(A) + Q(\bar{A}) = P(A) + 1 - Q(A) \ge 1 - d_{TV}(P,Q) \ge \frac{1}{2}\exp(-D_{KL}(P \parallel Q)).$$

Rearranging, we obtain the claimed lower bound on $P(A) + Q(\bar{A})$.
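For discrete distributions, both forms of the bound are easy to check numerically. The sketch below does so for a pair of Bernoulli distributions; the function names are illustrative choices:

```python
import math

def tv(p, q):
    """Total variation distance between two discrete distributions,
    using the identity d_TV = (1/2) * sum |p_i - q_i|."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl(p, q):
    """Kullback-Leibler divergence D_KL(P || Q), assuming P << Q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two Bernoulli distributions, written as [P(0), P(1)]:
p, q = [0.5, 0.5], [0.2, 0.8]
lhs = tv(p, q)
bh_bound = math.sqrt(1.0 - math.exp(-kl(p, q)))
weaker_bound = 1.0 - 0.5 * math.exp(-kl(p, q))
print(f"d_TV = {lhs:.4f} <= {bh_bound:.4f} <= {weaker_bound:.4f}")
# Here: 0.3000 <= 0.4472 <= 0.6000, so both inequalities hold.
```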
Proof:
We prove the main statement following the ideas in Tsybakov's book (Lemma 2.6, page 89), which differ from the original proof (see C.Canonne's note for a modernized retranscription of their argument).
Proof:
The proof is in two steps:

1. Prove, using Cauchy–Schwarz, that the total variation is related to the Bhattacharyya coefficient (right-hand side of the inequality):

$$1 - d_{TV}(P,Q)^2 \ge \left(\int \sqrt{PQ}\right)^2.$$

2. Prove, by a clever application of Jensen's inequality, that

$$\left(\int \sqrt{PQ}\right)^2 \ge \exp(-D_{KL}(P \parallel Q)).$$

Step 1: First notice that

$$d_{TV}(P,Q) = 1 - \int \min(P,Q) = \int \max(P,Q) - 1.$$

To see this, denote $A^* = \arg\max_{A \in \mathcal{F}} |P(A) - Q(A)|$ and, without loss of generality, assume that $P(A^*) > Q(A^*)$, so that $d_{TV}(P,Q) = P(A^*) - Q(A^*)$. Then we can rewrite

$$d_{TV}(P,Q) = \int_{A^*} \max(P,Q) - \int_{A^*} \min(P,Q).$$

Adding and removing $\int_{\bar{A^*}} \max(P,Q)$ or $\int_{\bar{A^*}} \min(P,Q)$, we obtain both identities. Then, since $PQ = \min(P,Q)\max(P,Q)$, Cauchy–Schwarz gives

$$\left(\int \sqrt{PQ}\right)^2 = \left(\int \sqrt{\min(P,Q)\max(P,Q)}\right)^2 \le \int \min(P,Q) \int \max(P,Q) = (1 - d_{TV}(P,Q))(1 + d_{TV}(P,Q)) = 1 - d_{TV}(P,Q)^2.$$
Proof:
Step 2: We write $(\cdot) = \exp(\log(\cdot))$ and apply Jensen's inequality:

$$\left(\int \sqrt{PQ}\right)^2 = \exp\left(2 \log \int \sqrt{PQ}\right) = \exp\left(2 \log \int P \sqrt{\frac{Q}{P}}\right) \ge \exp\left(2 \int P \log \sqrt{\frac{Q}{P}}\right) = \exp\left(-\int P \log \frac{P}{Q}\right) = \exp(-D_{KL}(P \parallel Q)).$$

Combining the results of steps 1 and 2 leads to the claimed bound on the total variation.
Examples of applications:
Sample complexity of biased coin tosses The question is: How many coin tosses do I need to distinguish a fair coin from a biased one? Assume you have 2 coins, a fair coin (Bernoulli distributed with mean $p_1 = 1/2$) and an $\varepsilon$-biased coin ($p_2 = 1/2 + \varepsilon$). Then, in order to identify the biased coin with probability at least $1 - \delta$ (for some $\delta > 0$), at least

$$n \ge \frac{2}{\log\left(\frac{1}{1 - 4\varepsilon^2}\right)} \log\left(\frac{1}{2\delta}\right)$$

coin tosses are needed.
Examples of applications:
In order to obtain this lower bound we impose that the total variation distance between two sequences of $n$ samples is at least $1 - 2\delta$. This is because the total variation upper-bounds the probability of under- or over-estimating the coins' means. Denote $P_1^n$ and $P_2^n$ the respective joint distributions of the $n$ coin tosses for each coin. Then, by Bretagnolle–Huber, we have $$1 - 2\delta \le d_{TV}(P_1^n, P_2^n) \le \sqrt{1 - \exp\left(-D_{KL}(P_1^n \parallel P_2^n)\right)}, \qquad D_{KL}(P_1^n \parallel P_2^n) = n \, \frac{\log\left(1/(1 - 4\varepsilon^2)\right)}{2}.$$ The result is obtained by rearranging the terms.
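Plugging numbers into the rearranged bound gives a feel for the scaling; a minimal sketch (the function name is illustrative):

```python
import math

def min_tosses(eps, delta):
    """Lower bound on n obtained by rearranging 1 - 2*delta <= sqrt(1 - exp(-n*KL)),
    where KL = log(1/(1 - 4*eps**2)) / 2 is the per-toss divergence."""
    kl_per_toss = 0.5 * math.log(1 / (1 - 4 * eps**2))
    return math.log(1 / (2 * delta)) / kl_per_toss

# Detecting a 1% bias with 95% confidence already requires on the order of 10^4 tosses.
print(round(min_tosses(0.01, 0.05)))  # ~11511
```

Since $\log(1/(1 - 4\varepsilon^2)) \approx 4\varepsilon^2$ for small $\varepsilon$, the bound behaves like $\frac{1}{2\varepsilon^2}\log\frac{1}{2\delta}$: halving the bias roughly quadruples the number of tosses needed.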
Examples of applications:
Information-theoretic lower bound for k-armed bandit games In multi-armed bandit problems, a lower bound on the minimax regret of any bandit algorithm can be proved using Bretagnolle–Huber and its consequence for hypothesis testing (see Chapter 15 of Bandit Algorithms).
History:
The result was first proved in 1979 by Jean Bretagnolle and Catherine Huber, and published in the proceedings of the Strasbourg Probability Seminar. Alexandre Tsybakov's book features an early re-publication of the inequality and its attribution to Bretagnolle and Huber, which is presented as an early and less general version of Assouad's lemma (see notes 2.8). A constant improvement on Bretagnolle–Huber was proved in 2014 as a consequence of an extension of Fano's Inequality. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Indoor netball**
Indoor netball:
Indoor netball is a variation of netball, played exclusively indoors, in which the playing court is surrounded on each side and overhead by a net. The net prevents the ball from leaving the court, reducing the number of playing stoppages. This gives indoor netball a faster pace than netball.
Indoor netball:
There are two main types of indoor netball, "6-a-side" and "7-a-side". Indoor netball has a larger focus on mixed-gender matches than netball does, although ladies' games, and to a smaller extent men's games, are ever-present. While the sport does not have as large a following as netball does, its popularity is growing in countries such as England, South Africa, Australia and New Zealand. The sport is administered at an international level by the World Indoor Netball Association (WINA).
Overview:
The rules of indoor netball are similar to those of netball, with two teams aiming to score as many goals as possible. An indoor netball game usually consists of four quarters of 10 minutes. There are two umpires, one for each half of the court. The winning team is the one with the most points at the end of the match. In case of a tie in elimination games, two straight 5-minute overtimes are played; if the game is still tied, the first team to go up by two points wins.
Overview:
6-a-side In this version the court is divided into halves rather than thirds, and there are six players per team rather than seven. The team is made up of two centre (link) players, two attack players and two defence players. The two attack players are located in one half and the two defence players in the opposite half. Both attackers and defenders are allowed anywhere in their half of the court, including the shooting circle. The centre/link players run the full court but are not allowed into the shooting circle at either end. Scoring is also different: attackers can shoot from inside the circle, with a successful shot worth 1 point, or from the circle edge or further back, with a successful shot from that distance worth 2 points (much like 3-pointers in basketball). Because the 2-point line is the circle edge in the attacking zone, the centre/link players can also shoot for 2 points from outside, along with the attackers. This changes the game play significantly from normal 7-a-side netball, with the attack players working to set up the centre/link players for the 2-point shot.
Overview:
Once a goal is scored, a defence player from the opposition team takes a throw-off from the top of their circle. This makes the game even faster, as the ball doesn't have to be sent back to the centre third for a centre pass as in 7-a-side.
7-a-side This version is a lot like original netball, with the court divided into thirds and with seven players similarly positioned. Only 1-point shots are possible, and only from inside the shooting circle.
Equipment and uniforms:
The ball shall be a universally accepted Netball or Association Football Size 5 and shall be supplied by the centre.
There are various rules for players: 1a) Players must wear a form of rubber-soled sport shoe or boot which shall be non-marking for indoor competition and acceptable to the Netball Coordinator.
b) Teams must wear a uniform which must be registered with the centre. It shall consist of matching shirts/tops and matching skirts/shorts (Men only).
c) Singlet tops, jumpers or wind-cheaters will be permitted with the Netball Co-ordinators approval.
d) All players must wear bibs identifying their court position. Players' initials are to be included on both the front and back of the bibs. The initials must be a minimum of 200 mm in height and clearly visible above the waist when the bibs are worn.
2) In the event of two teams having similar or identical uniforms, including bibs, team captains shall determine which team shall wear the neutral bibs supplied by the centre.
e) Advertising by team sponsors is permitted on the playing bibs but shall in no way encroach upon the initials on the bibs. Advertising is permitted on any other item of the playing uniform.
f) No jewellery shall be worn with the exception of a wedding ring or medical bracelet which must be taped to the satisfaction of the umpire.
Equipment and uniforms:
g) Fingernails shall be cut short or taped (band-aids and the like and electrical tape excluded) to the satisfaction of the Umpire. The Umpire may, at any time, request a player to re-tape their nails. (Gloves may be worn with the Umpire's/Netball Co-ordinator's approval.) Penalty: Players in breach of the preceding requirements shall be penalised. The offending player may be removed from the court or a three-goal penalty will be awarded to the non-offending team.
International competition:
Internationally, Australia and New Zealand have contested the Trans-Tasman Shield on a number of occasions. Following this series, South Africa joined the World Indoor Netball Association, and plans were put in place for the 2001 Indoor Netball World Cup in Australia. In June 2002 Australia and England travelled to South Africa for the WINA Tri-Series; Open Ladies, Open Mixed and Open Men divisions were contested at this tournament.
International competition:
In 2003 New Zealand hosted the World Cup in Auckland, contested between Australia, New Zealand and South Africa. This was the first time that 21-&-Under Ladies was contested at a World Cup level, which has appeared in all subsequent Open events. Also at the 2003 World Cup, the World Indoor Netball Association introduced Over-30 Ladies, Over-30 Mixed and 18-&-Under divisions to their calendar of tournaments. In February 2004, Selected Masters & 18s Australian teams travelled to South Africa. The tour was a success and set the foundations for bi-annual tours to continue, with the next International series being held in 2006.
International competition:
In 2007 a squad of netball players was selected by the (English) Indoor Netball Association (INA) to represent England at the Tri-Nations Cup, which was held in November 2007 in South Africa. Teams were entered in the U-19, Open Ladies and Open Mixed Categories. Australia won at all three levels of the tournament. Australia hosted the 2008 Indoor Netball World Cup in June on the Gold Coast in Queensland, where all four countries played the inaugural World Cup Series.
International competition:
In 2010, South Africa held a Tri-Nations tournament for Open Men's, Women's and Mixed as well as U21 teams from South Africa, Australia and England.
In 2012 Australia again hosted the Indoor Netball World Cup, in Brisbane, Queensland. Australia won every division (Ladies, Mens, Mixed and U21s) in both the 6-a-side and 7-a-side competitions. New Zealand teams were competitive in all divisions, dominating the grand finals.
In 2013, South Africa held the Tri-Nations Masters Series for Over 30s Mixed, Ladies and U18s Mixed and Ladies. South Africa and Australia competed in all divisions, with England competing in the Ladies O30s and U18s.
National competitions:
Australia In Australia there are two national championships held annually, the Open National Championships and the Aged National Championships. The Opens have four divisions: Men's, Ladies, Mixed and Under-21 Ladies, whilst the Aged Nationals have four divisions: Over 30 Ladies, Over 30 Mixed, Under 18 Mixed and Under 18 Ladies. From these tournaments the respective All Star teams are chosen as a reflection of the best players in Australia in each division. Indoor netball is played socially throughout Australia. Various districts, centres or arenas take part in these competitions, including the Rec Club (Miranda), which is one of Sydney's oldest indoor sports centres.
National competitions:
New Zealand In New Zealand there are three major tournaments, held annually: the Northern Superleague competition (based in Auckland, also including Hamilton), the Central Superleague competition (based in Wellington, also including Manawatu, Taranaki and Napier) and the Southern Superleague competition (based in Christchurch).
At the end of these seasons there is a national tournament which the best teams from these three competitions enter. This competition consists of a Mixed Grade, Men's Grade, Ladies Grade, Over 40s Grade, Over 35s Grade, Over 30s Grade, Under 21s Grade and an Under 19s Grade. It takes place on the first weekend of March each year.
National competitions:
South Africa In South Africa there are three national championships held annually: the Open & Mixed National Championships, the Overs and Unders Aged National Championships and the Juniors National Championships. The Opens & Mixed have six divisions: Men's, Ladies A & B, Mixed A & B and Under-21 Ladies, whilst the Overs & Unders Aged Nationals have five divisions: Over 30 Ladies, Over 35 Ladies, Over 30 Mixed, Under 18 Ladies and Under 19 Ladies. All Star teams are chosen as a reflection of the best performers during the tournament in each division.
National competitions:
England In 2007 INA England competed in its first Tri Nation Tournament in South Africa, against South Africa and Australia. Following on from this success INA England competed in the 2008 Indoor Netball World Cup in Australia.
In 2010 INA competed in its third major international tournament, again in South Africa, with the Women's Open team gaining a silver medal and an INA England player winning 'Player of the Tournament'.
In the 2012 World Championships held in Brisbane, England's U21s squad gained silver medals.
In 2013, England competed in the Masters Tri-Nations tournament, entering an Over 30s ladies team and an Under 18s women's squad. The Over 30s ladies squad reached the semi-finals, narrowly missing out on a finals place to Australia. INA England won 'Player of the Tournament' in both the Under 18 and Over 30s sections of the 7-a-side version. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Irish grid reference system**
Irish grid reference system:
The Irish grid reference system is a system of geographic grid references used for paper mapping in Ireland (both Northern Ireland and the Republic of Ireland). The Irish grid partially overlaps the British grid, and uses a similar co-ordinate system but with a meridian more suited to its westerly location.
Usage:
In general, neither Ireland nor Great Britain uses latitude or longitude in describing internal geographic locations. Instead grid reference systems are used for mapping.
Usage:
The national grid referencing system was devised by the Ordnance Survey and is heavily used in their survey data, and in maps (whether published by the Ordnance Survey of Ireland, the Ordnance Survey of Northern Ireland or commercial map producers) based on those surveys. Additionally, grid references are commonly quoted in other publications and data sources, such as guide books or government planning documents.
2001 recasting: the ITM grid:
In 2001, the Ordnance Survey of Ireland and the Ordnance Survey of Northern Ireland jointly implemented a new coordinate system for Ireland called Irish Transverse Mercator, or ITM, a location-specific optimisation of UTM, which runs in parallel with the existing Irish grid system. In both systems, the true origin is at 53° 30' N, 8° W — a point in Lough Ree, close to the western (Co. Roscommon) shore, whose grid reference is N000500. The ITM system was specified so as to provide precise alignment with modern high-precision global positioning receivers.
Grid letters:
The area of Ireland is divided into 25 squares, measuring 100 by 100 km (62 by 62 mi), each identified by a single letter. The squares are lettered A to Z, with I omitted. Seven of the squares do not actually cover any land in Ireland: A, E, K, P, U, Y and Z.
Eastings and northings:
Within each square, eastings and northings from the origin (south west corner) of the square are given numerically. For example, G0305 means 'square G, 3 km (1.9 mi) east, 5 km (3.1 mi) north'. A location can be indicated to varying resolutions numerically, usually from two digits in each coordinate (for a 1 km (0.62 mi) square) through to five (for a 1 m (3 ft 3 in) square); the most common usage is the six figure grid reference, employing three digits in each coordinate to determine a 100 m (330 ft) square.
Eastings and northings:
Coordinates may also be given relative to the origin of the entire 500 by 500 km (310 by 310 mi) grid (in the format easting, northing). For example, the location of the Spire of Dublin on O'Connell Street may be given as 315904, 234671 as well as O1590434671. Coordinates in this format must never be truncated, because, for example, 31590, 23467 is also a valid location.
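The arithmetic behind both formats is simple; here is a minimal sketch (the function names are illustrative, and the 5×5 letter table is the standard Irish Grid lettering with I omitted):

```python
# 100 km square letters, rows listed from south to north, west to east; I is omitted.
ROWS = ["VWXYZ", "QRSTU", "LMNOP", "FGHJK", "ABCDE"]

def to_grid_ref(easting, northing, digits=3):
    """Full coordinates (metres) -> letter reference at the requested resolution."""
    col, row = easting // 100_000, northing // 100_000
    letter = ROWS[row][col]
    step = 10 ** (5 - digits)                 # metres per unit at this resolution
    e = (easting % 100_000) // step
    n = (northing % 100_000) // step
    return f"{letter}{e:0{digits}d}{n:0{digits}d}"

def from_grid_ref(ref):
    """Letter reference -> (easting, northing) of the referenced square's SW corner."""
    letter, rest = ref[0], ref[1:]
    digits = len(rest) // 2
    e, n = int(rest[:digits]), int(rest[digits:])
    row = next(r for r, letters in enumerate(ROWS) if letter in letters)
    col = ROWS[row].index(letter)
    step = 10 ** (5 - digits)
    return col * 100_000 + e * step, row * 100_000 + n * step

# Spire of Dublin, from the article: O1590434671 <-> (315904, 234671)
assert to_grid_ref(315904, 234671, digits=5) == "O1590434671"
assert from_grid_ref("O1590434671") == (315904, 234671)
```

The six figure form of the Spire reference is O159346 (`digits=3`), naming the 100 m square that contains it.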
Summary parameters of the Irish Grid coordinate system:
Spheroid: Airy Modified. Datum: Ireland 1965. Map projection: Transverse Mercator. Latitude of origin: 53° 30' 00" N. Longitude of origin: 8° 00' 00" W. Scale factor: 1.000035. False easting: 200,000 m. False northing: 250,000 m. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Packet (container)**
Packet (container):
A packet or sachet is a small bag or pouch, made from paper, foil, plastic film or another type of packing material, often used to contain single-use quantities of foods or consumer goods such as ketchup or shampoo. Packets are commonly opened by making a small rip or tear in part of the package, and then squeezing out the contents.
Uses:
Condiments distributed in packets include ketchup, mustard, mayonnaise, salad cream, HP sauce, relish, tartar sauce, vinegar and soy sauce. They provide a simple and low-cost way of distributing small amounts of condiment with ready-to-eat packaged food such as hot dogs, French fries, or hamburgers, and are common in fast food restaurants. The packets produce less contamination and mess than freely available condiments dispensed into small disposable cups or other containers, especially if the food will be in transit before dining. Potpourri fragrances are also sold in sachets. Potpourri sachet envelopes are filled with scented herbs and flowers or use vermiculite containing aromatic fragrance oil. These are known as potpourri wardrobe sachets. In Argentina and Uruguay, milk and yogurts are also sold in packets.
Uses:
In 1983, the Indian company Cavin Kare began selling shampoo in small plastic packets instead of large bottles in order to make it more affordable to the poor. The sale of small amounts of shampoo and detergent in plastic packets is very popular throughout the Philippines, India and other Asian countries. In 2011, 87% of shampoo sold in India was in sachets.
Porous pouch:
Some packets are made of materials with known porosity to allow vapors from the pouch to escape. These pouches, also called sachets, can be placed in other packages to help control the atmosphere. Uses include: volatile corrosion inhibitors, desiccants, oxygen scavengers, etc.
History:
Benjamin Eisenstadt invented a machine that produced the modern sugar packet after a failed endeavor to package and sell tea bags, later packaging other items, including sauces.
Variants:
Sanford Redmond designed the no-mess dispenSRpak for one-handed operation. Introduced into Australia in 1990, it is used in other countries, but the design has not been widely licensed in the USA. In 2010, the H. J. Heinz Company designed a new ketchup packet. The new design was made with a cup and an easy-tear opening, making it easier to dip food without a plate while holding three times as much ketchup. It has not been widely adopted.
Records:
In Collinsville, Illinois, the largest ketchup packet was created by H. J. Heinz Company for a fundraiser for the Collinsville Christian Academy. People could buy a bottle of ketchup for $1 to add to the ketchup packet. After it was filled, it weighed 1,500 lbs and was 8 ft × 4 ft (2.4 m × 1.2 m) across and 9.5 in (240 mm) thick. Annual production of ketchup packets by Heinz alone is 11 billion.
Pollution:
Plastic sachets are a major contributor to litter and pollution, especially in low-income countries where many more household goods are sold in sachets in small quantities. In June 2022, a Reuters report revealed that Unilever had lobbied the governments of India and the Philippines to stop legislation which would ban the sale of cosmetics in single-use plastic sachets, despite vowing in 2020 to stop using them. The design of these sachets had been called 'evil' by Hanneke Faber, Unilever's President for Global Food and Refreshments, 'because you cannot recycle it'. The bans were then dropped by lawmakers. In Sri Lanka, the company pressed the government to reconsider a proposed ban on sachets, and then tried to manoeuvre around the ban after regulations were implemented. Unilever sells 40 billion plastic sachets each year. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Clave (rhythm)**
Clave (rhythm):
The clave (Spanish: [ˈklaβe]) is a rhythmic pattern used as a tool for temporal organization in Cuban music. In Spanish, clave literally means key, clef, code, or keystone. It is present in a variety of genres such as Abakuá music, rumba, conga, son, mambo, salsa, songo, timba and Afro-Cuban jazz. The five-stroke clave pattern represents the structural core of many Cuban rhythms. The clave pattern originated in sub-Saharan African music traditions, where it serves essentially the same function as it does in Cuba. In ethnomusicology, clave is also known as a key pattern, guide pattern, phrasing referent, timeline, or asymmetrical timeline. The clave pattern is also found in the African diaspora music of Haitian Vodou drumming, Afro-Brazilian music, African-American music, Louisiana Voodoo drumming, and Afro-Uruguayan music (candombe). The clave pattern (or hambone, as it is known in the United States) is used in North American popular music as a rhythmic motif or simply a form of rhythmic decoration.
Clave (rhythm):
The historical roots of the clave are linked to transnational musical exchanges within the African diaspora. For instance, influences of the African “bomba” rhythm are reflected in the clave. In addition to this, the emphasis and role of the drum within the rhythmic patterns speaks further to these diasporic roots.The clave is the foundation of reggae, reggaeton, and dancehall. In this sense, it is the “heartbeat” that underlies the essence of these genres. The rhythms and vibrations are universalized in that they demonstrate a shared cultural experience and knowledge of these roots. Ultimately, this embodies the diasporic transnational exchange.
Clave (rhythm):
In considering the clave as this basis of cultural understanding, relation, and exchange, this speaks to the transnational influence and interconnectedness of various communities. This musical fusion is essentially what constitutes the flow and foundational “heartbeat” of a variety of genres.
Etymology:
Clave is a Spanish word meaning 'code,' 'key,' as in key to a mystery or puzzle, or 'keystone,' the wedge-shaped stone in the center of an arch that ties the other stones together. Clave is also the name of the patterns played on claves: two hardwood sticks used in Afro-Cuban music ensembles.
The key to Afro-Cuban rhythm:
The clave pattern holds the rhythm together in Afro-Cuban music. The two main clave patterns used in Afro-Cuban music are known in North America as son clave and rumba clave. Both are used as bell patterns across much of Africa. Son and rumba clave can be played in either a triple-pulse (12/8 or 6/8) or duple-pulse (4/4, 2/4 or 2/2) structure. The contemporary Cuban practice is to write the duple-pulse clave in a single measure of 4/4. It is also written in a single measure in ethnomusicological writings about African music. Although they subdivide the beats differently, the 12/8 and 4/4 versions of each clave share the same pulse names. The correlation between the triple-pulse and duple-pulse forms of clave, as well as other patterns, is an important dynamic of sub-Saharan-based rhythm. Every triple-pulse pattern has its duple-pulse correlative.
The key to Afro-Cuban rhythm:
Son clave has strokes on 1, 1a, 2&, 3&, 4.
The key to Afro-Cuban rhythm:
4/4: 1 e & a 2 e & a 3 e & a 4 e & a || X . . X . . X . . . X . X . . . || 12/8: 1 & a 2 & a 3 & a 4 & a || X . X . X . . X . X . . || Rumba clave has strokes on 1, 1a, 2a, 3&, 4.
The key to Afro-Cuban rhythm:
4/4: 1 e & a 2 e & a 3 e & a 4 e & a || X . . X . . . X . . X . X . . . || 12/8: 1 & a 2 & a 3 & a 4 & a || X . X . . X . X . X . . || Both clave patterns are used in rumba. What we now call son clave (also known as Havana clave) used to be the key pattern played in Havana-style yambú and guaguancó. Some Havana-based rumba groups still use son clave for yambú. The musical genre known as son probably adopted the clave pattern from rumba when it migrated from eastern Cuba to Havana at the beginning of the 20th century.
The key to Afro-Cuban rhythm:
During the nineteenth century, African music and European music sensibilities were blended in original Cuban hybrids. Cuban popular music became the conduit through which sub-Saharan rhythmic elements were first codified within the context of European ('Western') music theory. The first written music rhythmically based on clave was the Cuban danzón, which premiered in 1879. The contemporary concept of clave with its accompanying terminology reached its full development in Cuban popular music during the 1940s. Its application has since spread to folkloric music as well. In a sense, the Cubans standardized their myriad rhythms, both folkloric and popular, by relating nearly all of them to the clave pattern. The veiled code of African rhythm was brought to light due to the clave’s omnipresence. Consequently, the term clave has come to mean both the five-stroke pattern and the total matrix it exemplifies. In other words, the rhythmic matrix is the clave matrix. Clave is the key that unlocks the enigma; it de-codes the rhythmic puzzle. It is commonly understood that the actual clave pattern does not need to be played for the music to be 'in clave'—Peñalosa (2009). One of the most difficult applications of the clave is in the realm of composition and arrangement of Cuban and Cuban-based dance music. Regardless of the instrumentation, the music for all of the instruments of the ensemble must be written with a very keen and conscious rhythmic relationship to the clave . . . Any 'breaks' and/or 'stops' in the arrangements must also be 'in clave'. If these procedures are not properly taken into consideration, then the music is 'out of clave' which, if not done intentionally, is considered an error. When the rhythm and music are 'in clave', a great natural 'swing' is produced, regardless of the tempo. All musicians who write and/or interpret Cuban-based music must be 'clave conscious', not just the percussionists—Santos (1986).
Clave theory:
There are three main branches of what could be called clave theory.
Clave theory:
Cuban popular music First is the set of concepts and related terminology, which were created and developed in Cuban popular music from the mid-19th to the mid-20th centuries. In Popular Cuban Music, Emilio Grenet defines in general terms how the duple-pulse clave pattern guides all members of the music ensemble. An important Cuban contribution to this branch of music theory is the concept of the clave as a musical period, which has two rhythmically opposing halves. The first half is antecedent and moving, and the second half is consequent and grounded.
Clave theory:
Ethnomusicological studies of African rhythm The second branch comes from the ethnomusicological studies of sub-Saharan African rhythm. In 1959, Arthur Morris Jones published his landmark work Studies in African Music, in which he identified the triple-pulse clave as the guide pattern for many pieces of music from ethnic groups across Africa. An important contribution of ethnomusicology to clave theory is the understanding that the clave matrix is generated by cross-rhythm.
Clave theory:
The 3–2/2–3 clave concept and terminology The third branch comes from the United States. An important North American contribution to clave theory is the worldwide propagation of the 3–2/2–3 concept and terminology, which arose from the fusion of Cuban rhythms with jazz in New York City. Only in the last couple of decades have the three branches of clave theory begun to reconcile their shared and conflicting concepts. Thanks to the popularity of Cuban-based music and the vast amount of educational material available on the subject, many musicians today have a basic understanding of clave. Contemporary books that deal with clave share a certain fundamental understanding of what clave means.
Clave theory:
Chris Washburne considers the term to refer to the rules that govern the rhythms played with the claves. Bertram Lehman regards the clave as a concept with wide-ranging theoretical syntactic implications for African music in general, and for David Peñalosa, the clave matrix is a comprehensive system for organizing music—Toussaint (2013).
Clave theory:
Mathematical analysis In addition to these three branches of theory, clave has in recent years been thoroughly analyzed mathematically. The structure of clave can be understood in terms of cross-rhythmic ratios, above all, three-against-two (3:2). Godfried Toussaint, a Research Professor of Computer Science, has published a book and several papers on the mathematical analysis of clave and related African bell patterns. Toussaint uses geometry and the Euclidean algorithm as a means of exploring the significance of clave.
Types:
Son clave The most common clave pattern used in Cuban popular music is called the son clave, named after the Cuban musical genre of the same name. Clave is the basic period, composed of two rhythmically opposed cells, one antecedent and the other consequent. Clave was initially written in two measures of 2/4 in Cuban music. When written this way, each cell or clave half is represented within a single measure.
Types:
Three-side / two-side The antecedent half has three strokes and is called the three-side of the clave. The consequent half (second measure above) of clave has two strokes and is called the two-side.
Going only slightly into the rhythmic structure of our music we find that all its melodic design is constructed on a rhythmic pattern of two measures, as though both were only one, the first is antecedent, strong, and the second is consequent, weak—Grenet (1939).
Types:
[With] clave... the two measures are not at odds, but rather, they are balanced opposites like positive and negative, expansive and contractive or the poles of a magnet. As the pattern is repeated, an alternation from one polarity to the other takes place creating the pulse and rhythmic drive. Were the pattern to be suddenly reversed, the rhythm would be destroyed as in a reversing of one magnet within a series... the patterns are held in place according to both the internal relationships between the drums and their relationship with clave... Should the drums fall out of clave (and in contemporary practice they sometimes do) the internal momentum of the rhythm will be dissipated and perhaps even broken—Amira and Cornelius (1992).
Types:
Tresillo In Cuban popular music, the first three strokes of son clave are also known collectively as tresillo, a Spanish word meaning triplet, i.e. three almost equal beats in the same time as two main beats. However, in the vernacular of Cuban popular music, the term refers to the figure shown here.
Types:
Rumba clave The other main clave pattern is the rumba clave. Rumba clave is the key pattern used in Cuban rumba. The use of the triple-pulse form of the rumba clave in Cuba can be traced back to the iron bell (ekón) part in abakuá music. The form of rumba known as columbia is culturally and musically connected with abakuá, an Afro-Cuban cabildo descended from the Kalabari of Cameroon. Columbia also uses this pattern. Sometimes 12/8 rumba clave is clapped in the accompaniment of Cuban batá drums. The 4/4 form of rumba clave is used in yambú, guaguancó and popular music.
Types:
There is some debate as to how the 4/4 rumba clave should be notated for guaguancó and yambú. In actual practice, the third stroke on the three-side and the first stroke on the two-side often fall in rhythmic positions that do not fit neatly into music notation. Triple-pulse strokes can be substituted for duple-pulse strokes. Also, the clave strokes are sometimes displaced in such a way that they don't fall within either a triple-pulse or duple-pulse "grid". Therefore, many variations are possible.
Types:
The first regular use of the rumba clave in Cuban popular music began with the mozambique, created by Pello el Afrokán in the early 1960s. When used in popular music (such as songo, timba or Latin jazz) rumba clave can be perceived in either a 3–2 or 2–3 sequence.
Standard bell pattern The seven-stroke standard bell pattern contains the strokes of both clave patterns. Some North American musicians call this pattern clave. Other North American musicians refer to the triple-pulse form as the 6/8 bell because they write the pattern in two measures of 6/8.
Like clave, the standard pattern is expressed in both triple and duple-pulse. The standard pattern has strokes on: 1, 1a, 2& 2a, 3&, 4, 4a.
Types:
12/8: 1 & a 2 & a 3 & a 4 & a || X . X . X X . X . X . X || 4/4: 1 e & a 2 e & a 3 e & a 4 e & a || X . . X . . X X . . X . X . . X || The ethnomusicologist A.M. Jones observes that what we call son clave, rumba clave, and the standard pattern are the most commonly used key patterns (also called bell patterns, timeline patterns and guide patterns) in sub-Saharan African music traditions, and he considers all three to be basically the same pattern. Clearly, they are all expressions of the same rhythmic principles. The three key patterns are found within a large geographic belt extending from Mali in northwest Africa to Mozambique in southeast Africa.
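The pulse grids above translate directly into a binary onset representation, which makes the family resemblance between the patterns easy to inspect. Below is a minimal sketch (Python; the pattern data is transcribed from the grids in this article, and the helper names are illustrative):

```python
# Clave patterns as onset indices in a cycle of pulses, transcribed from the
# X/. grids above: 16 pulses for duple-pulse (4/4), 12 for triple-pulse (12/8).
PATTERNS = {
    "son clave (4/4)":         (16, [0, 3, 6, 10, 12]),
    "son clave (12/8)":        (12, [0, 2, 4, 7, 9]),
    "rumba clave (4/4)":       (16, [0, 3, 7, 10, 12]),
    "rumba clave (12/8)":      (12, [0, 2, 5, 7, 9]),
    "standard pattern (4/4)":  (16, [0, 3, 6, 7, 10, 12, 15]),
    "standard pattern (12/8)": (12, [0, 2, 4, 5, 7, 9, 11]),
}

def render(n_pulses, onsets):
    """Render a pattern in the X/. grid notation used in this article."""
    return " ".join("X" if i in onsets else "." for i in range(n_pulses))

for name, (n, onsets) in PATTERNS.items():
    print(f"{name:26} {render(n, onsets)}")
```

In this representation son and rumba clave differ by a single onset on the three-side, and the standard pattern contains the onsets of both claves, as described above.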
"68 clave" as used by North American musicians:
In Afro-Cuban folkloric genres the triple-pulse (12/8 or 6/8) rumba clave is the archetypal form of the guide pattern. Even when the drums are playing in duple-pulse (4/4), as in guaguancó, the clave is often played with displaced strokes that are closer to triple-pulse than duple-pulse. John Santos states: "The proper feel of this [rumba clave] rhythm, is closer to triple [pulse].” Conversely, in salsa and Latin jazz, especially as played in North America, 4/4 is the basic framework and 6/8 is considered something of a novelty and in some cases, an enigma. The cross-rhythmic structure (multiple beat schemes) is frequently misunderstood to be metrically ambiguous. North American musicians often refer to Afro-Cuban 6/8 rhythm as a feel, a term usually reserved for those aspects of musical nuance not practically suited for analysis. As used by North American musicians, "6/8 clave" can refer to one of three types of triple-pulse key patterns.
"68 clave" as used by North American musicians:
Triple-pulse standard pattern When one hears triple-pulse rhythms in Latin jazz the percussion is most often replicating the Afro-Cuban rhythm bembé. The standard bell is the key pattern used in bembé, and so with compositions based on triple-pulse rhythms, it is the seven-stroke bell, rather than the five-stroke clave, that is the most familiar to jazz musicians. Consequently, some North American musicians refer to the triple-pulse standard pattern as "6/8 clave".
"68 clave" as used by North American musicians:
Triple-pulse rumba clave Some refer to the triple-pulse form of rumba clave as "6/8 clave". When rumba clave is written in 6/8 the four underlying main beats are counted: 1, 2, 1, 2.
"68 clave" as used by North American musicians:
1 & a 2 & a |1 & a 2 & a || X . X . . X |. X . X . . || Claves... are not usually played in Afro-Cuban 6/8 feels... [and] the clave [pattern] is not traditionally played in 6/8 though it may be helpful to do so to relate the clave to the 6/8 bell pattern—Thress (1994). The main exceptions are: the form of rumba known as columbia, and some performances of abakuá by rumba groups, where the 6/8 rumba clave pattern is played on claves.
"68 clave" as used by North American musicians:
Triple-pulse son clave Triple-pulse son clave is the least common form of clave used in Cuban music. It is, however, found across an enormously vast area of sub-Saharan Africa. The first published example (1920) of this pattern identified it as a hand-clap part accompanying a song from Mozambique.
Cross-rhythm and the correct metric structure:
Because 6/8 clave-based music is generated from cross-rhythm, it is possible to count or feel the 6/8 clave in several different ways. The ethnomusicologist Arthur Morris Jones correctly identified the importance of this key pattern, but he mistook its accents as indicators of meter rather than the counter-metric phenomena they are. Similarly, while Anthony King identified the triple-pulse "son clave" as the ‘standard pattern’ in its simplest and most basic form, he did not correctly identify its metric structure. King represented the pattern in a polymetric 7+5/8 time signature.
Cross-rhythm and the correct metric structure:
It wasn't until African musicologists like C.K. Ladzekpo entered the discussion in the 1970s and 80s that the metric structure of sub-Saharan rhythm was unambiguously defined. The writings of Victor Kofi Agawu and David Locke must also be mentioned in this regard. In the diagram below, 6/8 (son) clave is shown on top and a beat cycle is shown below it. Any or all of these structures may be the emphasis at a given point in a piece of music using the "6/8 clave".
Cross-rhythm and the correct metric structure:
The example on the left (6/8) represents the correct count and ground of the "6/8 clave". The four dotted quarter-notes across the two bottom measures are the main beats. All clave patterns are built upon four main beats. The bottom measures on the other two examples (3/2 and 6/4) show cross-beats. Observing the dancer's steps almost always reveals the main beats of the music. Because the main beats are usually emphasized in the steps and not the music, it is often difficult for an "outsider" to feel the proper metric structure without seeing the dance component. Kubik states: "To understand the emotional structure of any music in Africa, one has to look at the dancers as well and see how they relate to the instrumental background" (2010: 78).
Cross-rhythm and the correct metric structure:
For cultural insiders, identifying the... ‘dance feet’ occurs instinctively and spontaneously. Those not familiar with the choreographic supplement, however, sometimes have trouble locating the main beats and expressing them in movement. Hearing African music on recordings alone without prior grounding in its dance-based rhythms may not convey the choreographic supplement. Not surprisingly, many misinterpretations of African rhythm and meter stem from a failure to observe the dance.
3–2/2–3 clave concept and terminology:
In Cuban popular music, a chord progression can begin on either side of the clave. When the progression begins on the three-side, the song or song section is said to be in 3–2 clave. When the chord progression begins on the two-side, it is in 2–3 clave. In North America, salsa and Latin jazz charts commonly represent clave in two measures of cut-time (2/2); this is most likely the influence of jazz conventions. When clave is written in two measures, changing from one clave sequence to the other is a matter of reversing the order of the measures.
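Since the change of direction is just a swap of the two measures, it can be expressed as rotating the onset pattern by half a cycle; a minimal sketch in the same representation as before (the function name is illustrative):

```python
def flip_direction(onsets, n_pulses=16):
    """Swap the two measures of a clave pattern, e.g. 3-2 son clave -> 2-3."""
    half = n_pulses // 2
    return sorted((i + half) % n_pulses for i in onsets)

son_3_2 = [0, 3, 6, 10, 12]     # 3-2 son clave on a 16-pulse grid
print(flip_direction(son_3_2))  # [2, 4, 8, 11, 14]: the two-side now comes first
```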
3–2/2–3 clave concept and terminology:
Chord progression begins on the three-side (3–2) A guajeo is a typical Cuban ostinato melody, most often consisting of arpeggiated chords in syncopated patterns. Guajeos are a seamless blend of European harmonic and African rhythmic structures. Most guajeos have a binary structure that expresses clave.
3–2/2–3 clave concept and terminology:
Clave motif Kevin Moore states: "There are two common ways that the three-side is expressed in Cuban popular music. The first to come into regular use, which David Peñalosa calls 'clave motif,' is based on the decorated version of the three-side of the clave rhythm." The following guajeo example is based on a clave motif. The three-side (first measure) consists of the tresillo variant known as cinquillo.
3–2/2–3 clave concept and terminology:
Since this chord progression begins on the three-side, the song or song section is said to be in 3–2 clave.
3–2/2–3 clave concept and terminology:
Offbeat/onbeat motif Moore: "By the 1940s [there was] a trend toward the use of what Peñalosa calls the 'offbeat/onbeat motif.' Today, the offbeat/onbeat motif method is much more common." With this type of guajeo motif, the three-side of clave is expressed with all offbeats. The following I–IV–V–IV progression is in a 3–2 clave sequence. It begins with an offbeat pick-up on the pulse immediately before beat 1. With some guajeos, offbeats at the end of the two-side, or beats at the end of the three-side serve as pick-ups leading into the next measure (when clave is written in two measures).
3–2/2–3 clave concept and terminology:
Chord progression begins on the two-side (2–3) Clave motif A chord progression can begin on either side of the clave. One can, therefore, be on either the three-side or the two-side because the harmonic progression, rather than the rhythmic progression, is the primary referent. The following guajeo is based on the clave motif in a 2–3 sequence. The cinquillo rhythm is now in the second measure.
3–2/2–3 clave concept and terminology:
Onbeat/offbeat motif This guajeo is in 2–3 clave because it begins on the downbeat, emphasizing the onbeat quality of the two-side. The figure has the same harmonic sequence as the earlier offbeat/onbeat example, but rhythmically, the attack-point sequence of the two measures is reversed. Most salsa is in 2–3 clave and most salsa piano guajeos are based on the 2–3 onbeat/offbeat motif.
3–2/2–3 clave concept and terminology:
Going from one side of clave to the other within the same song The 3–2/2–3 concept and terminology were developed in New York City during the 1940s by Cuban-born Mario Bauzá while he was the music director of Machito and his Afro-Cubans. Bauzá was a master at moving the song from one side of the clave to the other.
3–2/2–3 clave concept and terminology:
The following melodic excerpt is taken from the opening verses of "Que vengan los rumberos" by Machito and his Afro-Cubans. Notice that the melody goes from one side of clave to the other and then back again. A measure of 2/4 moves the chord progression from the two-side (2–3) to the three-side (3–2). Later, another measure of 2/4 moves the start of the chord progression back to the two-side (2–3).
3–2/2–3 clave concept and terminology:
According to David Peñalosa: The first 4+1⁄2 claves of the verses are in 2–3. Following the measure of 2/4 (half clave) the song flips to the three-side. It continues in 3–2 on the V7 chord for 4+1⁄2 claves. The second measure of 2/4 flips the song back to the two-side and the I chord.
3–2/2–3 clave concept and terminology:
In songs like "Que vengan los rumberos", the phrases continually alternate between a 3–2 framework and a 2–3 framework. It takes a certain amount of flexibility to repeatedly reorder your orientation in this way. The most challenging moments are the truncations and other transitional phrases where you "pivot" to move your point of reference from one side of clave to the other.
3–2/2–3 clave concept and terminology:
Working in conjunction with the chord and clave changes, vocalist Frank "Machito" Grillo creates an arc of tension/release spanning more than a dozen measures. Initially, Machito sings the melody straight (first line), but soon expresses the lyrics in the freer and more syncopated inspiración of a folkloric rumba (second line). By the time the song changes to 3–2 on the V7 chord, Machito has developed a considerable amount of rhythmic tension by contradicting the underlying meter. That tension is then resolved when he sings on three consecutive main beats (quarter-notes), followed by tresillo. In the measure immediately following tresillo the song returns to 2–3 and the I chord (fifth line).
3–2/2–3 clave concept and terminology:
Tito Puente learned the concept from Bauzá. Tito Puente's "Philadelphia Mambo" is an example of a song that moves from one side of clave to the other. The technique eventually became a staple of composing and arranging in salsa and Latin jazz. According to Kevin Moore: Clave direction is relative while clave alignment is absolute. If you walk from New York to Miami, you're walking south; if you walk from Miami to New York, you're walking north. But if you put your left shoe on your right foot, (i.e., if your shoes are cruzado), it's going to be a very awkward walk in either direction. Your shoes remain "aligned" (or misaligned) with your feet regardless of the direction your feet are taking you, and regardless of how poorly they fit.
3–2/2–3 clave concept and terminology:
Cuban folkloric musicians do not use the 3–2/2–3 system. Many Cuban performers of popular music do not use it either. The great Cuban conga player and bandleader Mongo Santamaría said, "Don’t tell me about 3–2 or 2–3! In Cuba, we just play. We feel it, we don’t talk about such things." In another book, Santamaría said, "In Cuba, we don’t think about [clave]. We know that we’re in a clave. Because we know that we have to be in clave to be a musician." According to Cuban pianist Sonny Bravo, the late Charlie Palmieri would insist that "There’s no such thing as 3–2 or 2–3, there’s only one clave!" The contemporary Cuban bassist, composer and arranger Alain Pérez flatly states: "In Cuba, we do not use that 2–3, 3–2 formula... 2–3, 3–2 [is] not used in Cuba. That is how people learn Cuban music outside Cuba."
In non-Cuban music:
Controversy over use and origins Perhaps the greatest testament to the musical vitality of the clave is the spirited debate it engenders, both in terms of musical usage and historical origins. This section presents examples from non-Cuban music, which some musicians (not all) hold to be representative of the clave. The most common claims, those of Brazilian and subsets of American popular music, are described below.
In non-Cuban music:
In Africa A widely used bell pattern Clave is a Spanish word and its musical usage as a pattern played on claves was developed in the western part of Cuba, particularly the cities of Matanzas and Havana. Some writings have claimed that the clave patterns originated in Cuba. One frequently repeated theory is that the triple-pulse African bell patterns morphed into duple-pulse forms as a result of the influence of European musical sensibilities. "The duple meter feel [of 4/4 rumba clave] may have been the result of the influence of marching bands and other Spanish styles..."—Washburne (1995). However, the duple-pulse forms have existed in sub-Saharan Africa for centuries. The patterns the Cubans call clave are two of the most common bell parts used in sub-Saharan African music traditions. Natalie Curtis, A.M. Jones, Anthony King and John Collins document the triple-pulse forms of what we call "son clave" and "rumba clave" in West, Central, and East Africa. Francis Kofi and C.K. Ladzekpo document several Ghanaian rhythms that use the triple or duple-pulse forms of "son clave". Percussion scholar royal hartigan identifies the duple-pulse form of "rumba clave" as a timeline pattern used by the Yoruba and Ibo of Nigeria, West Africa. He states that this pattern is also found in the high-pitched boat-shaped iron bell known as atoke played in the Akpese music of the Ewe people of Ghana. There are many recordings of traditional African music where one can hear the five-stroke "clave" used as a bell pattern.
In non-Cuban music:
Popular dance music Cuban music has been popular in sub-Saharan Africa since the mid-twentieth century. To the Africans, clave-based Cuban popular music sounded both familiar and exotic. Congolese bands started doing Cuban covers and singing the lyrics phonetically. Soon, they were creating their original Cuban-like compositions, with lyrics sung in French or Lingala, a lingua franca of the western Congo region. The Congolese called this new music rumba, although it was based on the son. The Africans adapted guajeos to electric guitars and gave them their regional flavor. The guitar-based music gradually spread out from the Congo, increasingly taking on local sensibilities. This process eventually resulted in the establishment of several different distinct regional genres, such as soukous.
In non-Cuban music:
Soukous The following soukous bass line is an embellishment of clave.
Banning Eyre distills down the Congolese guitar style to this skeletal figure, where clave is sounded by the bass notes (notated with downward stems).
In non-Cuban music:
Highlife Highlife was the most popular genre in Ghana and Nigeria during the 1960s. This arpeggiated highlife guitar part is essentially a guajeo. The rhythmic pattern is known in Cuba as baqueteo. The pattern of attack-points is nearly identical to the 3–2 clave motif guajeo shown earlier in this article. The bell pattern known in Cuba as clave, is indigenous to Ghana and Nigeria, and is used in highlife.
In non-Cuban music:
Afrobeat The following afrobeat guitar part is a variant of the 2–3 onbeat/offbeat motif. Even the melodic contour is guajeo-based. The 2–3 clave is shown above the guitar for reference only. The clave pattern is not ordinarily played in afrobeat.
In non-Cuban music:
Guide-patterns in Cuban versus non-Cuban music There is some debate as to whether or not clave, as it appears in Cuban music, functions in the same way as its sister rhythms in other forms of music (Brazilian, North American and African). Certain forms of Cuban music demand a strict relationship between the clave and other musical parts, even across genres. This same structural relationship between the guide-pattern and the rest of the ensemble is easily observed in many sub-Saharan rhythms, as well as rhythms from Haiti and Brazil. However, the 3–2/2–3 concept and terminology are limited to certain types of Cuban-based popular music and are not used in the music of Africa, Haiti, Brazil or in Afro-Cuban folkloric music. In American pop music, the clave pattern tends to be used as an element of rhythmic color, rather than a guide-pattern and as such is superimposed over many types of rhythms.
In non-Cuban music:
In Brazilian music Both Cuba and Brazil imported Yoruba, Fon and Congolese slaves. Therefore, it is not surprising that we find the bell pattern the Cubans call clave in the Afro-Brazilian music of Macumba and Maculelê (dance). "Son clave" and "rumba clave" are also used as a tamborim part in some batucada arrangements. The structure of Afro-Brazilian bell patterns can be understood in terms of the clave concept (see below). Although a few contemporary Brazilian musicians have adopted the 3–2/2–3 terminology, it is traditionally not a part of the Brazilian rhythmic concept.
In non-Cuban music:
Bell pattern 1 is used in maculelê (dance) and some Candomblé and Macumba rhythms. Pattern 1 is known in Cuba as son clave. Bell 2 is used in afoxê and can be thought of as pattern 1 embellished with four additional strokes. Bell 3 is used in batucada. Pattern 4 is the maracatu bell and can be thought of as pattern 1 embellished with four additional strokes.
In non-Cuban music:
Example in a Pixinguinha choro. Bossa nova pattern The so-called "bossa nova clave" (or "Brazilian clave") has a similar rhythm to that of the son clave, but the second note on the two-side is delayed by one pulse (subdivision). The rhythm is typically played as a snare rim pattern in bossa nova music. The pattern is shown below in 2/4, as it is written in Brazil. In North American charts it is more likely to be written in cut-time.
In non-Cuban music:
According to drummer Bobby Sanabria the Brazilian composer Antonio Carlos Jobim, who developed the pattern, considers it to be merely a rhythmic motif and not a clave (guide pattern). Jobim later regretted that Latino musicians misunderstood the role of this bossa nova pattern.
Other Brazilian examples The examples below are transcriptions of several patterns resembling the Cuban clave that are found in various styles of Brazilian music, on the ago-gô and surdo instruments.
Legend: Time signature: 2/4; L = low bell, H = high bell, O = open surdo hit, X = muffled surdo hit, and | divides the measure:
Style: Samba 3:2; LL.L.H.H|L.L.L.H. (More common 3:2: .L.L.H.H|L.L.L.H.)
Style: Maracatu 3:2; LH.HL.H.|L.H.LH.H
Style: Samba 3:2; L|.L.L..L.|..L..L.L|
Instrument: 3rd Surdo 2:3; X...O.O.|X...OO.O
Variation of samba style: Partido Alto 2:3; L.H..L.L|.H..L.L.
Style: Maracatu 2:3; L.H.L.H.|LH.HL.H.
Style: Samba-Reggae or Bossanova 3:2; O..O..O.|..O..O..
Style: Ijexa 3:2; LL.L.LL.|L.L.L.L. (HH.L.LL.|H.H.L.L.) For the 3rd example above, the clave pattern is based on a common accompaniment pattern played by the guitarist. B = bass note played by guitarist's thumb, C = chord played by fingers.
In non-Cuban music:
&|1 & 2 & 3 & 4 &|1 & 2 & 3 & 4 &|| C|B C . C B . C .|B . C . B C . C|| The singer enters on the wrong side of the clave and the ago-gô player adjusts accordingly. This recording cuts off the first bar so that it sounds like the bell comes in on the third beat of the second bar. This is suggestive of a pre-determined rhythmic relationship between the vocal part and the percussion and supports the idea of a clave-like structure in Brazilian music.
In non-Cuban music:
In Jamaican and French Caribbean music The son clave rhythm is present in Jamaican mento music, and can be heard on 1950s-era recordings such as "Don’t Fence Her In", "Green Guava" or "Limbo" by Lord Tickler, "Mango Time" by Count Lasher, "Linstead Market/Day O" by The Wigglers, "Bargie" by The Tower Islanders, "Nebuchanezer" by Laurel Aitken and others. The Jamaican population is part of the same origin (Congo) as many Cubans, which perhaps explains the shared rhythm. It is also heard frequently in Martinique's biguine and Dominica's Jing ping. Just as likely however is the possibility that claves and the clave rhythm spread to Jamaica, Trinidad and the other small islands of the Caribbean through the popularity of Cuban son recordings from the 1920s onward.
In non-Cuban music:
Experimental clave music Art music The clave rhythm and clave concept have been used in some modern art music ("classical") compositions. "Rumba Clave" by Cuban percussion virtuoso Roberto Vizcaiño has been performed in recital halls around the world. Another clave-based composition that has "gone global" is the snare drum suite "Cross" by Eugene D. Novotney.
In non-Cuban music:
Odd meter "clave" Technically speaking, the term odd meter clave is an oxymoron. Clave consists of two even halves, in a divisive structure of four main beats. However, in recent years jazz musicians from Cuba and outside of Cuba have been experimenting with creating new "claves" and related patterns in various odd meters. Clave which is traditionally used in a divisive rhythm structure, has inspired many new creative inventions in an additive rhythm context.
In non-Cuban music:
. . . I developed the concept of adjusting claves to other time signatures, with varying degrees of success. What became obvious to me quite quickly was that the closer I stuck to the general rules of clave the more natural the pattern sounded. Clave has a natural flow with a certain tension and resolves points. I found if I kept these points in the new meters they could still flow seamlessly, allowing me to play longer phrases. It also gave me many reference points and reduced my reliance on "one"—Guilfoyle (2006: 10).
In non-Cuban music:
Recommended listening for odd-meter "clave" Here are some examples of recordings that use odd meter clave concepts.
Dafnis Prieto About the Monks (Zoho).
Sebastian Schunke Symbiosis (Pimienta Records).
Paoli Mejias Mi Tambor (JMCD).
John Benitez Descarga in New York (Khaeon).
Deep Rumba A Calm in the Fire of Dances (American Clave).
Nachito Herrera Bembe en mi casa (FS Music).
Bobby Sanabria Quarteto Aché (Zoho).
Julio Barretto Iyabo (3d).
Michel Camilo Triangulo (Telarc).
Samuel Torres Skin Tones (www.samueltorres.com).
Horacio "el Negro" Hernandez Italuba (Universal Latino).
Tony Lujan Tribute (Bella Records).
Edward Simon La bikina (Mythology).
Jorge Sylvester In the Ear of the Beholder (Jazz Magnet).
Uli Geissendoerfer "The Extension" (CMO).
Manuel Valera In Motion (Criss Cross Jazz).
General references:
Mauleón, Rebeca (1993). Salsa Guidebook for Piano and Ensemble. Petaluma, California: Sher Music. ISBN 0-9614701-9-4.
Moore, Kevin (2012). Understanding Clave and Clave Changes: Singing, Clapping and Dancing Exercises. Santa Cruz: Moore Music. ISBN 978-1-4664-6230-4.
Novotney, Eugene N. (1998). The 3:2 Relationship as the Foundation of Timelines in West African Musics. Thesis. Urbana, IL: University of Illinois. UnlockingClave.com.
Ortiz, Fernando (1950). La Africana De La Musica Folklorica De Cuba. Ediciones Universales, en español. Hardcover illustrated edition. ISBN 84-89750-18-1.
Palmer, Robert (1979). A Tale of Two Cities: Memphis Rock and New Orleans Roll. Brooklyn. ISBN 978-0914678120.
Peñalosa, David (2009). The Clave Matrix; Afro-Cuban Rhythm: Its Principles and African Origins. Redway, CA: Bembe Inc. ISBN 1-886502-80-3.
Peñalosa, David (2010). Rumba Quinto. Redway, CA: Bembe Books. ISBN 1-4537-1313-1.
Stewart, Alexander (2000). "'Funky Drummer': New Orleans, James Brown and the Rhythmic Transformation of American Popular Music". Popular Music, v. 19, n. 3 (Oct. 2000), pp. 293–318. JSTOR 853638. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**NanoSight**
NanoSight:
NanoSight Ltd is a company that designs and manufactures instruments for the scientific analysis of nanoparticles that are between approximately ten nanometers (nm) and one micron (μm) in diameter. The company was founded in 2003 by Bob Carr and John Knowles to further develop a technique Bob Carr had invented to visualize nanoparticles suspended in liquid. The company has since developed the technique of Nanoparticle Tracking Analysis (NTA), and it produces a series of instruments to count, size and visualize nanoparticles in liquid suspension using this patented technology. NanoSight has 25 employees in the UK and has received several awards and recognitions. More than 450 instruments had been sold as of 2012. The technology has been cited in over 1300 scientific publications, presentations and reports.
NanoSight:
NanoSight was acquired by Malvern Instruments on 30 September 2013.
Product overview:
NanoSight develops and produces instruments that visualize, characterize and measure small particles in suspension. Detected particles may be as small as 10 nm in diameter, depending on composition. NanoSight instruments can analyze particle size, concentration, aggregation, and zeta potential. An optional fluorescence mode, employing an optical filter, allows speciation of fluorescently labeled particles.
Product overview:
Each instrument comprises a scientific camera, a microscope, and a sample viewing unit (LM12 or LM14). The viewing unit uses a laser diode to illuminate particles in liquid suspension that are held within or advanced through a flow chamber within the unit. The instrument is used in conjunction with a computer control unit that runs a custom-designed Nanoparticle Tracking Analysis (NTA) software package. NTA analyzes videos captured using the instrument, giving a particle size distribution and particle count based upon tracking of each particle's Brownian motion. Tracking is carried out for all particles in the laser scattering volume to produce a particle size distribution using the Stokes-Einstein equation, relating the Brownian motion of a particle to a sphere-equivalent hydrodynamic radius.
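As a rough numerical illustration of the sizing step (not NanoSight's actual code; the temperature, viscosity and frame-time values below are assumptions), the sphere-equivalent hydrodynamic diameter follows from the tracked mean squared displacement via the Stokes–Einstein relation:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(msd_m2, dt_s, temp_k=298.15, viscosity_pa_s=0.89e-3):
    """Sphere-equivalent hydrodynamic diameter (m) from the mean squared
    displacement of a particle tracked in two dimensions over time dt_s."""
    diffusion = msd_m2 / (4.0 * dt_s)  # 2-D Brownian motion: MSD = 4 * D * dt
    return K_B * temp_k / (3.0 * math.pi * viscosity_pa_s * diffusion)  # Stokes-Einstein

# Illustrative numbers: a 100 nm sphere in water at 25 degrees C diffuses at
# roughly 4.9e-12 m^2/s, giving an MSD of about 6.5e-13 m^2 per 33 ms video frame.
print(hydrodynamic_diameter(msd_m2=6.5e-13, dt_s=0.033))  # ~1.0e-7 m, i.e. ~100 nm
```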
Instruments:
Several instruments are currently available.
General specifications:
Nanoparticle analysis range: typically 10–1000 nm, depending on particle material
Particle type: any
Solvent: water and any non-corrosive solvent; a range of solvent-resistant seals is available
Power requirement (own adapter supplied): 110–220 V
Laser output: various; 40 mW at 640 nm (Class 1 Laser Product) for the basic LM10 and LM20 models; other lasers are available
Instruments:
Viewing chamber volume requirements: 0.3 ml (most models) or 0.1 ml (NS500, although larger volumes must be loaded into the fluidics system if the sample is not directly injected)
LM10
NanoSight's LM10 instrument is based upon a conventional optical microscope fitted with a scientific camera (CCD, EMCCD or sCMOS) and either the LM12 or LM14 viewing unit. Using a laser light source with a wavelength of 405 nm (blue), 532 nm (green), or 638 nm (red), the particles in the sample are illuminated and the scattered light is captured by the camera and displayed on the connected personal computer running Nanoparticle Tracking Analysis (NTA) software.
Instruments:
Using NTA, the particles are automatically tracked and sized. Results are displayed as a frequency size distribution graph and are exported in various user-selected formats, including spreadsheets and video files. Additionally, information-rich video clips may be captured and archived for future reference and alternative analyses. The LM10 has been proven with most nanoparticle classes down to 10 nm (dependent upon particle density) dispersed in a wide range of solvents.
Instruments:
LM10-HS
The LM10-HS instrument is similar to the standard LM10 unit but has a higher-sensitivity sCMOS camera (EMCCD in earlier models). This allows smaller, lower-refractive-index particles to be analyzed.
The LM10-HS is more commonly used for sizing biological samples including viruses and vaccines.
Instruments:
LM20
NanoSight's LM20 is, in essence, a 'boxed-up' LM10, designed for greater ease of use. Using the same standard LM12 viewing unit as the LM10, this instrument provides identical results to those obtained in analyses run on an LM10 system. Typically, the LM20 is used in more industrial applications, such as analyzing particles used in paints, pigments, cosmetics, and foodstuffs. The LM20 is ideal for users unfamiliar with using a microscope.
Instruments:
NS500
The NS500 incorporates multiple automated features, including computer-controlled peristaltic pumps and stage positioning, for reproducibility and ease of use. Through the interface of the NTA Software Suite, the fluidics system may be used to inject samples into a small viewing chamber, dilute samples to a specified degree, flush the system between samples, or clean and dry the viewing chamber. In contrast with earlier models, the NS500 does not require manual cleaning of the viewing chamber between each sample, thus increasing throughput. Optical stage positions may be set for optical and fluorescent readings, improving reproducibility. Sample temperature control is also programmable. The NS500 can be used for both static and flow measurements using the additional syringe pump. A sample changer can provide enhanced throughput for static measurements, and, using scripts, high-throughput measurements under flow are available as well.
Instruments:
NS200
Like the LM20, but with a high-sensitivity camera like that of the NS500, the NS200 has an enclosed housing and is ideal for use in industrial settings, such as manufacture of inks, paints, pigments, petrochemicals, and vaccines. Its configuration is designed for study of small or otherwise weakly scattering nanoparticles, such as viruses, phage, liposomes and other drug delivery nanoparticles, and protein aggregates. It can be used in a non-laboratory environment by individuals unfamiliar with microscopes.
Applications:
NanoSight instruments are used for a variety of applications, including:
Ceramic and metallic nanoparticles
Pigments, paints and sun creams
Exosomes, microvesicles, outer membrane vesicles, and other small biological particles
Pharmaceutical particles - liposomes
Viruses
Carbon nanotubes (multi-walled)
Colloidal suspensions and polymer particles
Cosmetics and foodstuffs
Particles in fuels and oils (soot, catalyst, wax etc.)
Wear debris in lubricants
Chemical mechanical polishing slurries
Nanotoxicology studies
In 2011, the European Union (EU) announced that companies that use nanoparticles in their products may be required to report the quantity and size of their nanomaterials. NanoSight suggests that its products will be important for companies seeking to satisfy the new requirements.
Recognition:
Queen's Award for Enterprise: Innovation (2013)
Queen's Award for Enterprise: International Trade (2012)
Technology World Business Innovation Award (2011)
Winner, Deloitte Technology Fast 50 (2011) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Guanabenz**
Guanabenz:
Guanabenz (pronounced GWAHN-a-benz, sold under the trade name Wytensin) is an alpha agonist selective for the alpha-2 adrenergic receptor. Guanabenz is used as an antihypertensive drug, i.e., to treat high blood pressure (hypertension). The most common side effects during guanabenz therapy are dizziness, drowsiness, dry mouth, headache and weakness. Guanabenz can make one drowsy or less alert, so driving or operating dangerous machinery is not recommended.
Other possible uses:
Guanabenz also has some anti-inflammatory properties in different pathological situations, including multiple sclerosis. Guanabenz was found in one study to exert an inhibitory effect by decreasing the abundance of the enzyme CH25H, a cholesterol hydroxylase linked to antiviral immunity. It has therefore been suggested that the drug and similar compounds could be used to treat type I interferon-dependent pathologies, and that the CH25H enzyme could be a therapeutic target to control these diseases, including amyotrophic lateral sclerosis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Boron phosphide**
Boron phosphide:
Boron phosphide (BP) (also referred to as boron monophosphide, to distinguish it from boron subphosphide, B12P2) is a chemical compound of boron and phosphorus. It is a semiconductor.
History:
Crystals of boron phosphide were synthesized by Henri Moissan as early as 1891.
Appearance:
Pure BP is almost transparent, n-type crystals are orange-red whereas p-type ones are dark red.
Chemical properties:
BP is not attacked by acids or boiling aqueous alkali solutions. It is attacked only by molten alkalis.
Physical properties:
BP is known to be chemically inert and to exhibit very high thermal conductivity. Some properties of BP are listed below:
lattice constant: 0.45383 nm
coefficient of thermal expansion: 3.65×10^−6 /°C (400 K)
heat capacity: C_P ≈ 0.8 J/(g·K) (300 K)
Debye temperature: 985 K
bulk modulus: 152 GPa
microhardness: relatively high, 32 GPa (100 g load)
electron and hole mobilities: a few hundred cm²/(V·s) (up to 500 for holes at 300 K)
thermal conductivity: ~460 W/(m·K) at room temperature | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Niacin test**
Niacin test:
The niacin test has been widely used since the 1960s to identify mycobacteria at the species level in the clinical laboratory. The niacin test detects niacin (nicotinic acid) in aqueous extracts of a culture. M. tuberculosis strains that test negative for the niacin test are very rare. Redox reactions occurring in Mycobacterium species produce niacin as a part of energy metabolism. Even though all mycobacteria produce niacin, M. tuberculosis accumulates an excess of niacin because of its inability to process it, excreting the excess niacin into the culture media and thus allowing it to be detected using the niacin test. The niacin test is typically only conducted on slow-growing, granular, tan-colored colonies, as these are the morphological characteristics of M. tuberculosis on an agar plate. The test remains in routine use because of its affordability compared to identification methods such as pyrosequencing or MALDI-TOF MS, which require expensive machines and reagents.
Biochemical processes:
Biochemical pathway
The nicotinamide adenine dinucleotide (NAD+) degradation pathway in M. tuberculosis has a blockade in the scavenging pathway, resulting in an excess of niacin that cannot be processed by the organism. Because of this abundance, M. tuberculosis, along with Mycobacterium canetti and some types of Mycobacterium africanum, releases the excess niacin into its outside environment, in this case the agar plate or another medium. Niacin is water-soluble, so the culture media can be tested for the presence of niacin to determine whether a Mycobacterium isolate is one of the three mentioned species. Those three species are members of the Mycobacterium tuberculosis complex.
Biochemical processes:
Test strip process
The niacin test strip is typically composed of potassium thiocyanate, chloramine-T, citric acid, and 4-aminosalicylic acid. In the presence of citric acid, chloramine-T and potassium thiocyanate react to form cyanogen chloride. This chemical breaks apart the pyridine ring of niacin to produce γ-carboxyglutaconic aldehyde, which then couples with an aromatic amine to form a yellow color.
Testing procedure:
A niacin test strip is similar in appearance to a pH test strip. It is small, thin, rectangular, and white in color. Water is placed onto the culture plate to extract the niacin, and the extract is then held against a test strip for 15–20 minutes inside a small, sterile tube. If excess amounts of niacin are detected, the liquid inside the tube turns yellow, a positive test. If the liquid in the tube is clear, there are no excess amounts of niacin and the test is negative. A positive niacin test does not necessarily indicate the presence of M. tuberculosis, because other Mycobacterium species can test positive for excess niacin. Along with each batch of specimens being tested, a positive control of M. tuberculosis and a negative control with no organism are included. If the positive control tests negative, there was probably an error with the batch, and likewise for a negative control testing positive. Because lab samples determined to be acid-fast bacilli are possibly M. tuberculosis, a biosafety level 3 organism, all niacin tests must be conducted in a biosafety cabinet with a full gown, respirator, gloves, and a sealed laboratory to ensure the safety of the laboratory technician performing the test. All tests must also be conducted with sterile technique. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Extra special group**
Extra special group:
In group theory, a branch of abstract algebra, extraspecial groups are analogues of the Heisenberg group over finite fields whose size is a prime. For each prime p and positive integer n there are exactly two (up to isomorphism) extraspecial groups of order p^(1+2n). Extraspecial groups often occur in centralizers of involutions. The ordinary character theory of extraspecial groups is well understood.
Definition:
Recall that a finite group is called a p-group if its order is a power of a prime p.
A p-group G is called extraspecial if its center Z is cyclic of order p, and the quotient G/Z is a non-trivial elementary abelian p-group.
Extraspecial groups of order p^(1+2n) are often denoted by the symbol p^(1+2n). For example, 2^(1+24) stands for an extraspecial group of order 2^25.
Classification:
Every extraspecial p-group has order p^(1+2n) for some positive integer n, and conversely for each such number there are exactly two extraspecial groups up to isomorphism. A central product of two extraspecial p-groups is extraspecial, and every extraspecial group can be written as a central product of extraspecial groups of order p^3. This reduces the classification of extraspecial groups to that of extraspecial groups of order p^3. The classification is often presented differently in the two cases p odd and p = 2, but a uniform presentation is also possible.
Classification:
p odd
There are two extraspecial groups of order p^3, which for p odd are given by:
The group of triangular 3×3 matrices over the field with p elements, with 1's on the diagonal. This group has exponent p for p odd (but exponent 4 if p = 2); a short computational check of this construction appears after the p odd case below.
Classification:
The semidirect product of a cyclic group of order p^2 by a cyclic group of order p acting non-trivially on it. This group has exponent p^2.
If n is a positive integer there are two extraspecial groups of order p^(1+2n), which for p odd are given by:
The central product of n extraspecial groups of order p^3, all of exponent p. This extraspecial group also has exponent p.
Classification:
The central product of n extraspecial groups of order p^3, at least one of exponent p^2. This extraspecial group has exponent p^2.
The two extraspecial groups of order p^(1+2n) are most easily distinguished by the fact that one has all elements of order at most p and the other has elements of order p^2.
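As a quick computational check of the matrix construction above (an illustrative sketch, not part of the source article), the following verifies for p = 3 that the unitriangular group has order p^3, a center of order p, and exponent p:

```python
from itertools import product
import numpy as np

p = 3  # any odd prime works here

def mat(a, b, c):
    """Upper unitriangular 3x3 matrix over F_p, as a hashable tuple."""
    return ((1, a, c), (0, 1, b), (0, 0, 1))

def mul(x, y):
    """Multiply two such matrices modulo p."""
    z = np.mod(np.array(x) @ np.array(y), p)
    return tuple(map(tuple, z))

identity = mat(0, 0, 0)
group = [mat(a, b, c) for a, b, c in product(range(p), repeat=3)]
assert len(set(group)) == p**3                      # order p^3

center = [g for g in group if all(mul(g, h) == mul(h, g) for h in group)]
assert len(center) == p                             # center of order p (hence cyclic)

def power(g, n):
    r = identity
    for _ in range(n):
        r = mul(r, g)
    return r

assert all(power(g, p) == identity for g in group)  # exponent p, as claimed for p odd
print("extraspecial of order", p**3, "with exponent", p)
```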
Classification:
p = 2
There are two extraspecial groups of order 8 = 2^3, which are given by:
The dihedral group D8 of order 8, which can also be given by either of the two constructions in the section above for p = 2 (for p odd they give different groups, but for p = 2 they give the same group). This group has 2 elements of order 4.
Classification:
The quaternion group Q8 of order 8, which has 6 elements of order 4.
If n is a positive integer there are two extraspecial groups of order 2^(1+2n), which are given by:
The central product of n extraspecial groups of order 8, an odd number of which are quaternion groups. The corresponding quadratic form (see below) has Arf invariant 1.
Classification:
The central product of n extraspecial groups of order 8, an even number of which are quaternion groups. The corresponding quadratic form (see below) has Arf invariant 0.
The two extraspecial groups G of order 2^(1+2n) are most easily distinguished as follows. If Z is the center, then G/Z is a vector space over the field with 2 elements. It has a quadratic form q, where q is 1 if the lift of an element has order 4 in G, and 0 otherwise. Then the Arf invariant of this quadratic form can be used to distinguish the two extraspecial groups. Equivalently, one can distinguish the groups by counting the number of elements of order 4.
Classification:
All p
A uniform presentation of the extraspecial groups of order p^(1+2n) can be given as follows. Define the two groups:
M(p) = ⟨a, b, c : a^p = b^p = 1, c^p = 1, ba = abc, ca = ac, cb = bc⟩
N(p) = ⟨a, b, c : a^p = b^p = c, c^p = 1, ba = abc, ca = ac, cb = bc⟩
M(p) and N(p) are non-isomorphic extraspecial groups of order p^3 with center of order p generated by c. The two non-isomorphic extraspecial groups of order p^(1+2n) are the central products of either n copies of M(p), or n − 1 copies of M(p) and 1 copy of N(p). This is a special case of a classification of p-groups with cyclic centers and simple derived subgroups given in (Newman 1960).
Character theory:
If G is an extraspecial group of order p^(1+2n), then its irreducible complex representations are given as follows:
There are exactly p^(2n) irreducible representations of dimension 1. The center Z acts trivially, and the representations just correspond to the representations of the abelian group G/Z.
There are exactly p − 1 irreducible representations of dimension p^n. There is one of these for each non-trivial character χ of the center, on which the center acts as multiplication by χ. The character values are given by p^n·χ on Z, and 0 for elements not in Z.
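As a consistency check (a standard counting argument, not stated in the source), the squared dimensions of these irreducibles must sum to the order of the group, and they do:

```latex
p^{2n} \cdot 1^2 + (p-1) \cdot (p^n)^2
  = p^{2n} + (p-1)\,p^{2n}
  = p \cdot p^{2n}
  = p^{1+2n}
  = |G|.
```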
If a nonabelian p-group G has less than p^2 − p nonlinear irreducible characters of minimal degree, it is extraspecial.
Examples:
It is quite common for the centralizer of an involution in a finite simple group to contain a normal extraspecial subgroup. For example, the centralizer of an involution of type 2B in the monster group has structure 2^(1+24).Co1, which means that it has a normal extraspecial subgroup of order 2^(1+24), and the quotient is one of the Conway groups.
Generalizations:
Groups whose center, derived subgroup, and Frattini subgroup are all equal are called special groups. Infinite special groups whose derived subgroup has order p are also called extraspecial groups. The classification of countably infinite extraspecial groups is very similar to the finite case, (Newman 1960), but for larger cardinalities even basic properties of the groups depend on delicate issues of set theory, some of which are exposed in (Shelah & Steprāns 1987). The nilpotent groups whose center is cyclic and derived subgroup has order p and whose conjugacy classes are at most countably infinite are classified in (Newman 1960). Finite groups whose derived subgroup has order p are classified in (Blackburn 1999). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Paraguayan units of measurement**
Paraguayan units of measurement:
A number of units of measurement were used in Paraguay to measure quantities including length, mass, area, capacity, etc. The metric system was optional in Paraguay from 1890 and was adopted in 1899.
System before metric system:
The Spanish system was used before the metric system.
System before metric system:
Length
A number of units were used to measure length. According to legal equivalents, one vara (old) was 0.83856 m, while one vara was 0.866 m. One cuerda or one cordel was 83 1⁄3 vara, or 69.88 m, according to legal equivalents. Some other units and their legal equivalents are given below:
1 pulgada (inch) = 1⁄36 vara
1 linea (line) = 1⁄432 vara
1 piede (foot) = 1⁄3 vara
1 pouce = 1⁄36 vara
1 ligne = 1⁄432 vara
1 cuadra = 100 vara
1 lieue (league) = 5000 vara
Mass:
Several units were used to measure mass. According to legal equivalents, one libra was 0.459 kg (one libra (old) was 460.08 g). Some other units and their legal equivalents are given below:
1 onza (once) = 1⁄16 libra
1 arroba = 25 libra
1 quintal (hundredweight) = 100 libra
1 tonelada (tonne) = 2000 libra
Area:
Several units were used to measure area. Some units and their legal equivalents are given below:
1 lifio (old) = 4883.2 m²
1 lifio = 100 square vara = 75 m²
Capacity:
Two main systems, dry and liquid, were used to measure capacity.
Dry
One fanega was 288 L and one almude was 1⁄12 fanega.
Liquid
Several units were used to measure liquid capacity. One frasco was 3.029 litres. Some other units and their legal equivalents are given below:
1 cuarto = 1⁄4 frasco
1 baril = 32 frasco
1 pipe = 192 frasco. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
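For illustration, the legal equivalents above can be wired into a small converter. This sketch is hypothetical; the constant and function names are invented for the example, and the values are simply the legal equivalents quoted in this article:

```python
VARA_M = 0.866      # 1 vara in metres (legal equivalent)
LIBRA_KG = 0.459    # 1 libra in kilograms (legal equivalent)

LENGTH_IN_VARAS = {
    "pulgada": 1 / 36, "linea": 1 / 432, "piede": 1 / 3,
    "cuadra": 100, "lieue": 5000, "vara": 1,
}
MASS_IN_LIBRAS = {
    "onza": 1 / 16, "arroba": 25, "quintal": 100,
    "tonelada": 2000, "libra": 1,
}

def length_to_metres(value, unit):
    """Convert an old Paraguayan length to metres via the vara."""
    return value * LENGTH_IN_VARAS[unit] * VARA_M

def mass_to_kg(value, unit):
    """Convert an old Paraguayan mass to kilograms via the libra."""
    return value * MASS_IN_LIBRAS[unit] * LIBRA_KG

print(length_to_metres(1, "cuadra"))  # 86.6 m
print(mass_to_kg(1, "quintal"))       # 45.9 kg
```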
**Celestial pole**
Celestial pole:
The north and south celestial poles are the two points in the sky where Earth's axis of rotation, indefinitely extended, intersects the celestial sphere. The north and south celestial poles appear permanently directly overhead to observers at Earth's North Pole and South Pole, respectively. As Earth spins on its axis, the two celestial poles remain fixed in the sky, and all other celestial points appear to rotate around them, completing one circuit per day (strictly, per sidereal day).
Celestial pole:
The celestial poles are also the poles of the celestial equatorial coordinate system, meaning they have declinations of +90 degrees and −90 degrees (for the north and south celestial poles, respectively). Despite their apparently fixed positions, the celestial poles in the long term do not actually remain permanently fixed against the background of the stars. Because of a phenomenon known as the precession of the equinoxes, the poles trace out circles on the celestial sphere, with a period of about 25,700 years. The Earth's axis is also subject to other complex motions which cause the celestial poles to shift slightly over cycles of varying lengths (see nutation, polar motion and axial tilt). Finally, over very long periods the positions of the stars themselves change, because of the stars' proper motions. To take into account such movement, celestial pole definitions come with an epoch to specify the date of the rotation axis; J2000.0 is the current standard.
Celestial pole:
An analogous concept applies to other planets: a planet's celestial poles are the points in the sky where the projection of the planet's axis of rotation intersects the celestial sphere. These points vary because different planets' axes are oriented differently (the apparent positions of the stars also change slightly because of parallax effects).
Finding the north celestial pole:
The north celestial pole currently is within one degree of the bright star Polaris (named from the Latin stella polaris, meaning "pole star"). This makes Polaris, colloquially known as the "North Star", useful for navigation in the Northern Hemisphere: not only is it always above the north point of the horizon, but its altitude angle is always (nearly) equal to the observer's geographic latitude (though it can, of course, only be seen from locations in the Northern Hemisphere).
Finding the north celestial pole:
Polaris is near the north celestial pole for only a small fraction of the 25,700-year precession cycle. It will remain a good approximation for about 1,000 years, by which time the pole will have moved closer to Alrai (Gamma Cephei). In about 5,500 years, the pole will have moved near the position of the star Alderamin (Alpha Cephei), and in 12,000 years, Vega (Alpha Lyrae) will become the "North Star", though it will be about six degrees from the true north celestial pole.
Finding the north celestial pole:
To find Polaris, from a point in the Northern Hemisphere, face north and locate the Big Dipper (Plough) and Little Dipper asterisms. Looking at the "cup" part of the Big Dipper, imagine that the two stars at the outside edge of the cup form a line pointing upward out of the cup. This line points directly at the star at the tip of the Little Dipper's handle. That star is Polaris, the North Star.
Finding the south celestial pole:
The south celestial pole is visible only from the Southern Hemisphere. It lies in the dim constellation Octans, the Octant. Sigma Octantis is identified as the south pole star, more than one degree away from the pole, but with a magnitude of 5.5 it is barely visible on a clear night.
Finding the south celestial pole:
Method one: The Southern Cross
The south celestial pole can be located from the Southern Cross (Crux) and its two "pointer" stars α Centauri and β Centauri. Draw an imaginary line from γ Crucis to α Crucis—the two stars at the extreme ends of the long axis of the cross—and follow this line through the sky. Either go four-and-a-half times the distance of the long axis in the direction the narrow end of the cross points, or join the two pointer stars with a line, divide this line in half, then at right angles draw another imaginary line through the sky until it meets the line from the Southern Cross. This point is 5 or 6 degrees from the south celestial pole. Very few bright stars of importance lie between Crux and the pole itself, although the constellation Musca is fairly easily recognised immediately beneath Crux.
Finding the south celestial pole:
Method two: Canopus and Achernar
The second method uses Canopus (the second-brightest star in the sky) and Achernar. Make a large equilateral triangle using these stars for two of the corners. But where should the third corner go? It could be on either side of the line connecting Achernar and Canopus, and the wrong side will not lead to the pole. To find the correct side, imagine that Achernar and Canopus are both points on the circumference of a circle. The third corner of the equilateral triangle will also be on this circle. The corner should be placed clockwise from Achernar and anticlockwise from Canopus. The third imaginary corner will be the south celestial pole. If the opposite is done, the point will land in the middle of Eridanus, which is not at the pole. If Canopus has not yet risen, the second-magnitude Alpha Pavonis can also be used to form the triangle with Achernar and the pole. In this case, go anticlockwise from Achernar instead of clockwise, form the triangle with Canopus, and the third point, the pole, will reveal itself. The wrong way will lead to Aquarius, which is very far away from the celestial pole.
Finding the south celestial pole:
Method three: The Magellanic Clouds
The third method is best for moonless and clear nights, as it uses two faint "clouds" in the Southern Sky. These are marked in astronomy books as the Large and Small Magellanic Clouds (the LMC and the SMC). These "clouds" are actually dwarf galaxies near the Milky Way. Make an equilateral triangle, the third point of which is the south celestial pole. Like before, the SMC, LMC, and the pole will all be points on an equilateral triangle on an imaginary circle. The pole should be placed clockwise from the SMC and anticlockwise from the LMC. Going in the wrong direction will land you in the constellation of Horologium instead.
Finding the south celestial pole:
Method four: Sirius and Canopus
A line from Sirius, the brightest star in the sky, through Canopus, the second-brightest, continued for the same distance lands within a couple of degrees of the pole. In other words, Canopus is halfway between Sirius and the pole. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Magway Ltd**
Magway Ltd:
Magway is a UK startup noted for its e-commerce and freight delivery system that aims to transport goods in pods that fit in new and existing 90 cm (35 in)-diameter pipes, underground and overground, reducing road congestion and air pollution. It uses linear magnetic motors to shuttle pods, designed to accommodate a standard delivery crate (or tote), at approximately 31 miles per hour (50 km/h). Founded in 2017 by Rupert Cruise, an engineer on Elon Musk's Hyperloop project, and Phill Davies, a business expert, Magway secured a £0.65 million grant in 2018, through Innovate UK's 'Emerging and Enabling Technologies' competition, to develop an operational demonstrator. In 2019, £1.58 million was raised through crowdfunding to fund a pilot scheme, and in 2020, Magway was awarded £1.9 million from the UK Government's 'Driving the Electric Revolution Challenge', an initiative launched to coincide with the first meeting of a new Cabinet committee focused on climate change. In September 2020, Magway completed its first full loop of test track in a warehouse in Wembley. Primarily focused on two freight routes from large consolidation centres near London (Milton Keynes, Buckinghamshire and Hatfield, Hertfordshire) into Park Royal, a west London distribution centre, future plans involve installing 850 kilometres (530 mi) of track in decommissioned London gas pipelines, to deliver e-commerce goods from distribution centres direct to consumers in the capital. The design of the pipes is similar to the current underground pipe system in small tunnels that distribute water, gas, and electricity in the city. The pods are powered by electromagnetic waves from linear magnetic motors similar to those used in roller coasters. A proposed route from Milton Keynes to London would have the capacity to transport more than 600 million parcels annually. Outside of urban areas, Magway plans to build its pipe system alongside motorways. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Abbreviated Injury Scale**
Abbreviated Injury Scale:
The Abbreviated Injury Scale (AIS) is an anatomically based coding system created by the Association for the Advancement of Automotive Medicine to classify and describe the severity of injuries. It represents the threat to life associated with the injury rather than a comprehensive assessment of injury severity. AIS is one of the most common anatomic scales for traumatic injuries. The first version of the scale was published in 1969, with major updates in 1976, 1980, 1985, 1990, 1998, 2005, 2008 and 2015.
Scale:
The score describes three aspects of the injury (type, location and severity) using seven numbers written as 12(34)(56).7. Each number signifies:
1 - body region
2 - type of anatomical structure
3, 4 - specific anatomical structure
5, 6 - level
7 - severity of the score
The AIS severity code is on a scale of one to six, one being a minor injury and six being maximal (currently untreatable). An AIS-Code of 6 is not the arbitrary code for a deceased patient or fatal injury, but the code for injuries specifically assigned an AIS 6 severity. An AIS-Code of 9 is used to describe injuries for which not enough information is available for more detailed coding, e.g. crush injury to the head.
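A minimal sketch of how such a code could be split into its documented fields follows. The regular expression, field names, and the example code are illustrative assumptions, and the intermediate severity labels (moderate, serious, severe, critical) follow the standard AIS severity ladder rather than this article:

```python
import re

# Field layout per the description above: 12(34)(56).7
AIS_PATTERN = re.compile(r"^(\d)(\d)(\d{2})(\d{2})\.(\d)$")

SEVERITY_LABELS = {1: "minor", 2: "moderate", 3: "serious",
                   4: "severe", 5: "critical", 6: "maximal"}

def parse_ais(code: str) -> dict:
    """Split an AIS code (e.g. the hypothetical '450212.3') into its fields."""
    m = AIS_PATTERN.match(code)
    if m is None:
        raise ValueError(f"not a valid AIS code: {code!r}")
    region, structure, specific, level, severity = m.groups()
    sev = int(severity)
    return {
        "body_region": region,
        "structure_type": structure,
        "specific_structure": specific,
        "level": level,
        "severity": None if sev == 9 else sev,  # 9 = not enough information
        "severity_label": SEVERITY_LABELS.get(sev, "unspecified"),
    }

print(parse_ais("450212.3")["severity_label"])  # "serious"
```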
Scale:
The AIS scale is a measurement tool for single injuries. A universally accepted injury aggregation function has not yet been proposed, though the injury severity score and its derivatives are better aggregators for use in clinical settings. In other settings such as automotive design and occupant protection, MAIS is a useful tool for the comparison of specific injuries and their relative severity and the changes in those frequencies that may result from evolving motor vehicle design. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mycobacteria growth indicator tube**
Mycobacteria growth indicator tube:
Mycobacteria Growth Indicator Tube (MGIT) is intended for the detection and recovery of mycobacteria. The MGIT tube contains 7 mL of modified Middlebrook 7H9 Broth base. The complete medium, with OADC enrichment and PANTA antibiotic mixture, is one of the most commonly used liquid media for the cultivation of mycobacteria.
All types of clinical specimens, pulmonary as well as extra-pulmonary (except blood and urine), can be processed for primary isolation in the MGIT tube using conventional methods. After the processed specimen is inoculated, the MGIT tube must be continuously monitored, either manually or by automated instruments, until it turns positive or the testing protocol ends.
Principles of the Procedure:
A fluorescent compound is embedded in silicone on the bottom of 16 × 100 mm round-bottom tubes. The fluorescent compound is sensitive to the presence of oxygen dissolved in the broth. Initially, the large amount of dissolved oxygen quenches emissions from the compound and little fluorescence can be detected. Later, actively respiring microorganisms consume the oxygen and allow the fluorescence to be detected. Tubes are filled with samples in the broth and continuously incubated at 37 °C. The tubes are monitored for increasing fluorescence to determine if the tube is instrument-positive; i.e., the test sample contains viable organisms. Fluorescence can be recorded by automated instruments such as Becton Dickinson's BACTEC MGIT 960 System, or manually using the BD BACTEC microMGIT fluorescence reader or a Wood's lamp or other long-wave UV light source.
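The monitoring logic can be illustrated with a short sketch. This is not Becton Dickinson's algorithm; the threshold value, the hourly reading cadence, and the function name are illustrative assumptions based only on the description above:

```python
SCAN_INTERVAL_H = 1.0        # automated readers scan each tube roughly hourly (assumed)
PROTOCOL_MIN_H = 42 * 24     # tubes held at least 42 days before being called negative

def classify_tube(readings, threshold=100.0):
    """Classify a tube from a series of hourly fluorescence readings.

    Returns (status, hours_elapsed); status is 'positive', 'negative'
    or 'pending'. The threshold is an illustrative placeholder.
    """
    for i, value in enumerate(readings):
        # growth consumes dissolved oxygen, unquenching the fluorophore
        if value >= threshold:
            return "positive", (i + 1) * SCAN_INTERVAL_H
    elapsed = len(readings) * SCAN_INTERVAL_H
    if elapsed >= PROTOCOL_MIN_H:
        return "negative", elapsed
    return "pending", elapsed
```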
Instruments:
BACTEC MGIT 960 System
This instrument is produced by Becton Dickinson (BD). It is specially designed to accommodate Mycobacteria Growth Indicator Tubes (MGIT) and incubate them at 37 °C. The instrument scans the MGIT every 60 minutes for increased fluorescence. Analysis of the fluorescence is used to determine if the tube is instrument-positive; i.e., the test sample contains viable organisms. An instrument-positive tube contains approximately 10^5 to 10^6 colony-forming units per milliliter (CFU/mL). Culture tubes which remain negative for a minimum of 42 days (up to 56 days) and which show no visible signs of positivity are removed from the instrument as negatives and discarded. Its usefulness has been evaluated in the scientific literature. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Music manuscript**
Music manuscript:
Music manuscripts are handwritten sources of music. Generally speaking, they can be written on paper or parchment. If the manuscript contains the composer's handwriting it is called an autograph. Music manuscripts can contain musical notation as well as texts and images. There exists a wide variety of types from sketches and fragments, to compositional scores and presentation copies of musical works.
Earliest music manuscripts:
The earliest written sources of Western European music can be found in medieval manuscripts of the late 9th century that contain liturgical texts. Above these texts small symbols (neumes) indicated the shape of the melodic lines. Only a few manuscripts survive from that time. New systems were devised over the centuries.
Preparation of manuscripts and copying of music:
In the past, each composer was required to draw their own staff lines (staves) onto blank paper. Eventually, staff paper was manufactured pre-printed with staves as a labor-saving technique. The composer could then compose music directly onto the lines in pencil or ink.
Until the arrival of more modern methods, music (unless printed) had to be written by hand. For larger scale works, a copyist was often employed to hand-copy individual parts (for each musician) from a composer's musical score.
With the advent of the personal computer in the late 1980s and beyond, music typesetting could now be accomplished with graphics software made for this purpose, such as Dorico, Finale, MuseScore, or Sibelius, which has reduced the necessity of music manuscripts. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Exonumia**
Exonumia:
Exonumia are numismatic items (such as tokens, medals, or scrip) other than coins and paper money. This includes "Good For" tokens, badges, counterstamped coins, elongated coins, encased coins, souvenir medallions, tags, wooden nickels and other similar items. It is an aspect of numismatics and many coin collectors are also exonumists.
Besides the above strict definition, others extend the term to include non-coins which may or may not be legal tender, such as cheques, credit cards and similar paper. These can also be considered notaphily or scripophily.
Etymology:
The noun exonumia is derived from two classical roots: exo, meaning "out-of" in Greek, and nummus, meaning "coin" in Latin (from Greek νοῦμμος – noummos, "coin"); thus, "out[side]-of-[the category]coins". The equivalent British term, paranumismatica, may also be used.
The words exonumist and exonumia were coined in July 1960 by Russell Rulau, a recognized authority and author on the subject, and accepted by Webster's dictionary in 1965.
Token Coins:
Many tokens were produced and used as currency in the United States and elsewhere when there was a shortage of government-issued money. Tokens have been used both to advertise and to facilitate commerce, and may or may not have a value.
Token Coins:
Token authority Russell Rulau offers a broad definition of exonumia in his 1040-page tome, United States Tokens: 1700–1900, but the lines between categories can be fuzzy. For example, an advertising token may also be considered a medal. Good For tokens may also advertise. Counter-stamped coins have been called "little billboards." One way of parsing tokens is into these three general categories:
Has a "value," facilitating commerce, such as Good for (something).
Token Coins:
Commemoration, remembrance, dedication, or the like, for some person, place, idea or event.
Token Coins:
Of a personal nature.
Typically, catalogs of tokens are organized by location, time period, and/or type of item. Historically, the need for tokens grew out of the need for currency. In America, some tokens legally circulated alongside or instead of currency. Hard Times Tokens and Civil War Tokens each were the size of the contemporary cent. Afterwards, value-based items, such as Good for (amount of money), Good for One Quart of Milk, Good for One Beer, Good for One Ride… and others were specifically linked to commerce of the store or place of issue.
Medals:
Medals are coin-like artistic objects, typically with a commemorative purpose. They may be awarded for recognition of achievement or created for sale to commemorate individuals or events. They may be souvenirs, devotional, or purely artistic. Medals are generally not used as currency or for exchange.
Exonumia Collecting:
Exonumia collectors, like coin collectors, are attentive to condition and rarity, as well as to history, form and type. Exonumists may collect items by region, topic, type, shape or material and this affects the ways tokens are documented.
The following categories are typical. This is not all-inclusive but is a sampling of the wide variety of exonumia.
By type
Modified/Augmented:
Love Token: A coin with hand engraving, generally on one side, or deliberately bent.
Carved Potty coins: usually United States Seated Liberty coinage carved to show Lady Liberty sitting on a chamber pot.
Hobo nickels: Initially, hand-engraved Buffalo nickels mostly in the era 1913–38. Now, applied more generally to hand-engraved coins of different denominations.
Counterstamped/countermarked or chopped coins (done by merchants or governments)
Cut Coins: artistically carved creations made from genuine coins, both new and old, often for jewelry.
Exonumia Collecting:
Elongated coins: rolled out with advertising, commemorative, or souvenir designs on one side
Encased coin: generally in a ring with advertising
Colored or painted circulation or bullion issues
Short snorter: paper money signed by people sharing a common experience
Play money / Fantasy / Counterfeit / Art
Play money
Fantasy issue or novelty money (e.g. promotional fake United States currency, prop money)
Mardi Gras doubloons
Counterfeit coins
Money art
Government services & non-national tools to facilitate commerce
Jetons: used as counters when verifying totals or weights of coins for commerce and exchange
Telephone tokens
Evasion tokens: 18th-century semi-counterfeits made to look similar to, but not exactly like, actual currency
Local currency, e.g. Ithaca Hours
Sales tax tokens: issued by states and merchants
Dog license tags
Post office tags
Food stamps
Slave tags: see Slave codes
Transportation tokens
Ferries and watercraft
Buses
Subway
Trains
Trams/trolleys
Closed community / Membership
Communion tokens: given to congregation members to permit them to participate in Holy Communion
Company scrip
Ingle Credit System scrip
Lumber
Mining
Civilian Conservation Corps (CCC)
College currency
Challenge coins
Fraternal: Masonic, Elks, Moose, Woodmen of the World
Geocoins used in geocaching
Leper colony money
Military store and entertainment
Plantation
Picker tokens for crops
Prison and correctional/asylums
Sobriety coin
By material / shapes
Wooden nickels
Cardboard or paper
Hard rubber or ebonite
Bullion, e.g. non-legal tender silver rounds
Movements and ideals
Temperance
Anti-slavery
Religious (including temple tokens)
Political campaign tokens
World's Fair or other expositions
Locations
City or state anniversary
Of a personal nature – Personals
Key tags (e.g. In case lost return to …)
Badges
Company
Occupation
Hand-engraved or uniquely counterstamped coins, as pocket pieces
Watch fobs
By Issuer
Arcade/amusement tokens
Apothecary tokens
Bakery tokens
Beer
Pub/bar/saloon
Billiards/pool
Brothel tokens
Car wash tokens
Casino/slot tokens/casino chips
Cigar/smoke shops
Coat check
Disney Dollars
Fisherman tokens
Milk/dairy
Parking tokens: for meters or gates
Pay toilet tokens
Peep show
Railway cheque tokens
Medals
George Washington medals
Presidents, governors, other politicians
Inventors and other important persons
Modern items under the exonumia umbrella include:
Credit cards
Gift cards: gift cards have been replacing the giving of cash for events
Telephone cards
Music cards
By Region
China
There are many types of Chinese exonumia, including alternative currencies:
Bamboo tally
Token
and numismatic charms:
Buddhist coin charm
Burial money
Confucian coin charm
Horse coin
Hell money
Lei Ting curse charm
Marriage coin charm
Open-work charm
Vault protector coin
Taoist coin charm
Zhengde Tongbao
Germany
Notgeld, primarily in the form of paper banknotes, was issued in Germany and Austria during World War I and the interwar period by towns, banks and other institutions due to a shortage of money.
Exonumia Collecting:
Latin America
Latin American coffee or plantation tokens were an important part of commerce. Many plantation owners had their own commissaries, and workers used plantation tokens to pay for provisions. Many tokens were made in the United States or Europe. Plantation tokens had an array of denominations and names. The name can be that of the owner, the owner's relatives, or the farm (or finca). Tokens had allegorical symbols to identify the owner. Tokens were used as currency when there was not enough official currency available. Workers could convert the tokens to official currency on Saturdays. Tokens were made in all types of base metals and alloys plus plastic, celluloid and bakelite. Unique to Costa Rica were tokens made of paper (paper chits). The word "boleto" is used in Costa Rica for the word token, whereas "ficha" is used in the rest of Latin America.
Exonumia Collecting:
United Kingdom
Conder tokens were privately minted tokens from the later part of the 18th century and the early part of the 19th century in England, Anglesey and Wales, Scotland, and Ireland.
Exonumia Collecting:
United States
Rulau breaks down American tokens into these general time periods:
Early American
Hard times tokens were made during the "hard times" after President Andrew Jackson shut down the Second Bank of the United States. These tokens were issued privately to circulate in the local economy as a one-cent coin. They had a wide variety of subject matter, including advertising and political/satirical themes (anti-slavery, anti-Jackson).
Exonumia Collecting:
Civil War tokens were made between 1861 and 1864 due to the scarcity of government-issued cents during the American Civil War. Encased postage stamps were also used for this purpose.
Merchant (including modern gas tokens, ex: Shell tokens)
Trade tokens
Gay 90s | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**3-Aminophthalic acid**
3-Aminophthalic acid:
3-Aminophthalic acid is a product of the oxidation of luminol. The reaction requires the presence of a catalyst. A mixture of luminol and hydrogen peroxide is used in forensics. When the mixture is sprayed on an area that contains blood, the iron in the hemoglobin of the blood catalyzes the reaction between luminol and hydrogen peroxide, resulting in 3-aminophthalate, which gives out light by chemiluminescence. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
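Schematically, the light-producing step described above can be summarized as follows. This is a standard textbook outline, not stated in the article, and it is not stoichiometrically balanced; the excited-state product (starred) relaxes by emitting a photon:

```latex
\text{luminol} + \mathrm{H_2O_2}
  \xrightarrow{\ \text{Fe (heme) catalyst}\ }
  \text{3-aminophthalate}^{*} + \mathrm{N_2}
  \longrightarrow \text{3-aminophthalate} + h\nu
```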
**Kinoshita–Terasaka knot**
Kinoshita–Terasaka knot:
In knot theory, the Kinoshita–Terasaka knot is a particular prime knot. It has 11 crossings. The Kinoshita–Terasaka knot has a variety of interesting mathematical properties. It is related by mutation to the Conway knot, with which it shares a Jones polynomial. It has the same Alexander polynomial as the unknot. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rete tubular ectasia**
Rete tubular ectasia:
Rete tubular ectasia, also known as cystic transformation of the rete testis, is a benign condition, usually found in older men, involving numerous small, tubular cystic structures within the rete testis.
Presentation:
It is usually found in men older than 55 years and is frequently bilateral, though often asymmetrical.
Mechanism:
The formation of cysts in the rete testis is associated with the obstruction of the efferent ducts, which connect the rete testis with the head of the epididymis. They are often bilateral.
Diagnosis:
The condition can be detected with ultrasonography. Cystic lesions are usually found at the mediastinum testis, with elongated lesions displacing the mediastinum. The condition is commonly associated with epididymal abnormalities, such as spermatocele, epididymal cyst, and epididymitis. It shares a common location with cystic dysplasia of the testis and intratesticular cysts. Unlike cystic neoplasms, it does not present specific tumor markers. Another distinguishing feature is that tubular ectasia of the testis is confined to the mediastinum, unlike testicular cancers such as cystic teratoma of the testis, which spread throughout the testis.
Treatment:
Typically no treatment is required, but the condition can be treated surgically if symptomatic. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |