In parapatric speciation, two subpopulations of a species evolve reproductive isolation from one another while continuing to exchange genes. This mode of speciation has three distinguishing characteristics: 1) mating occurs non-randomly, 2) gene flow occurs unequally, and 3) populations exist in either continuous or discontinuous geographic ranges. This distribution pattern may result from unequal dispersal, incomplete geographical barriers, or divergent expressions of behavior, among other things. Parapatric speciation predicts that hybrid zones will often exist at the junction between the two populations. In biogeography, the terms parapatric and parapatry are often used to describe the relationship between organisms whose ranges do not significantly overlap but are immediately adjacent to each other; they do not occur together except in a narrow contact zone. Parapatry is a geographical distribution opposed to sympatry (same area) and allopatry or peripatry (two similar cases of distinct areas). Various "forms" of parapatry have been proposed and are discussed below. Coyne and Orr in Speciation categorise these forms into three groups: clinal (environmental gradients), "stepping-stone" (discrete populations), and stasipatric speciation, in concordance with most of the parapatric speciation literature.[1]:111 Henceforth, the models are subdivided following a similar format. Charles Darwin was the first to propose this mode of speciation, but it was not until 1930 that Ronald Fisher, in The Genetical Theory of Natural Selection, outlined a verbal theoretical model of clinal speciation. In 1981, Joseph Felsenstein proposed an alternative, "discrete population" model (the "stepping-stone" model). Since Darwin, a great deal of research has been conducted on parapatric speciation, concluding that its mechanisms are theoretically plausible and that parapatric speciation "has most certainly occurred in nature".[1]:124 Mathematical models, laboratory studies, and observational evidence support the occurrence of parapatric speciation in nature. The qualities of parapatry imply a partial extrinsic barrier during divergence,[2] making it difficult to determine whether this mode of speciation actually occurred or whether an alternative mode (notably, allopatric speciation) can explain the data. This problem leaves unanswered the question of its overall frequency in nature.[1]:124 Parapatric speciation can be understood in terms of the level of gene flow m between populations, where m = 0 in allopatry (and peripatry), m = 0.5 in sympatry, and intermediate values hold in parapatry.[3] In this view, parapatry covers the entire continuum 0 < m < 0.5. Some biologists reject this delineation, advocating the disuse of the term "parapatric" outright, "because many different spatial distributions can result in intermediate levels of gene flow".[4] Others champion this position, suggesting the abandonment of geographic classification schemes (geographic modes of speciation) altogether.[5] Natural selection has been shown to be the primary driver of parapatric speciation (among other modes),[6] and the strength of selection during divergence is often an important factor.[7] Parapatric speciation may also result from reproductive isolation caused by social selection: individuals interacting altruistically.[8]
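The migration-rate framing lends itself to a compact numerical illustration. The sketch below is illustrative only: the classification thresholds follow the continuum just described, but the two-deme setup, parameter values, and function names are assumptions for demonstration, not part of the cited literature. It shows how quickly migration alone erodes allele-frequency divergence between two populations.

```python
# Illustrative sketch: where a scenario falls on the gene-flow continuum,
# and how migration alone homogenizes allele frequencies in two demes.
# Thresholds, parameters, and names are assumptions for illustration.

def geographic_mode(m: float) -> str:
    """Classify a scenario by per-generation migration rate m (0 to 0.5)."""
    if m == 0.0:
        return "allopatry/peripatry (no gene flow)"
    if m == 0.5:
        return "sympatry (panmixia)"
    return "parapatry (intermediate gene flow)"

def divergence_after(m: float, generations: int) -> float:
    """Allele-frequency difference between two demes that start fully
    diverged (p1=1, p2=0) and exchange a fraction m each generation."""
    p1, p2 = 1.0, 0.0
    for _ in range(generations):
        p1, p2 = (1 - m) * p1 + m * p2, (1 - m) * p2 + m * p1
    return abs(p1 - p2)

for m in (0.0, 0.01, 0.1, 0.5):
    print(f"m={m:4.2f}  {geographic_mode(m):42s}"
          f"  divergence after 50 generations: {divergence_after(m, 50):.4f}")
```

Without selection, any m > 0 eventually homogenizes the two demes; the models discussed below therefore ask when divergent selection is strong enough to outpace this decay.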
Due to the continuous nature of a parapatric population distribution, population niches will often overlap, producing a continuum in the species' ecological role across an environmental gradient.[9] Whereas in allopatric and peripatric speciation geographically isolated populations may evolve reproductive isolation without gene flow, the reduced but nonzero gene flow of parapatric speciation will often produce a cline, in which variation in evolutionary pressures causes allele frequencies to change gradually between populations. This environmental gradient ultimately results in genetically distinct sister species. Fisher's original conception of clinal speciation relied (unlike most modern speciation research) on the morphological species concept.[1]:113 With this interpretation, his verbal theoretical model can effectively produce a new species, a result that was subsequently confirmed mathematically.[10][1]:113 Further mathematical models have been developed to demonstrate the possibility of clinal speciation, with most relying on what Coyne and Orr assert are "assumptions that are either restrictive or biologically unrealistic".[1]:113 A mathematical model developed by Caisse and Antonovics found evidence that "both genetic divergence and reproductive isolation may therefore occur between populations connected by gene flow".[11] This research supports clinal isolation comparable to a ring species (discussed below), except that the terminal geographic ends do not meet to form a ring. Doebeli and Dieckmann developed a mathematical model suggesting that ecological contact is an important factor in parapatric speciation and that, despite gene flow acting as a barrier to divergence in the local population, disruptive selection drives assortative mating, eventually leading to a complete reduction in gene flow. This model resembles reinforcement, with the exception that there is never a secondary contact event. The authors conclude that "spatially localized interactions along environmental gradients can facilitate speciation through frequency-dependent selection and result in patterns of geographical segregation between the emerging species."[9] However, one study by Polechová and Barton disputes these conclusions.[12] The concept of a ring species is associated with allopatric speciation as a special case;[13] however, Coyne and Orr argue that Mayr's original conception of a ring species describes not allopatric speciation "but speciation occurring through the attenuation of gene flow with distance". They contend that ring species provide evidence of parapatric speciation in a non-conventional sense.[1]:102–103 They go on to conclude: "Nevertheless, ring species are more convincing than cases of clinal isolation for showing that gene flow hampers the evolution of reproductive isolation. In clinal isolation, one can argue that reproductive isolation was caused by environmental differences that increase with distance between populations. One cannot make a similar argument for ring species because the most reproductively isolated populations occur in the same habitat."[1]:102 Referred to as a "stepping-stone" model by Coyne and Orr, the discrete-population form differs by virtue of the species' population distribution pattern. Populations in discrete groups undoubtedly speciate more easily than those in a cline due to more limited gene flow.[1]:115
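The clinal models above share a common skeleton: demes arranged along an environmental gradient, local selection pulling allele frequencies toward different optima, and migration between neighbours opposing that divergence. The sketch below is a toy single-locus version of that skeleton; the selection and migration parameters are assumptions for illustration, not values from Caisse and Antonovics or Doebeli and Dieckmann.

```python
# Toy single-locus cline: demes on a gradient, haploid selection plus
# nearest-neighbour ("stepping-stone") migration. Values are illustrative.
N_DEMES, GENERATIONS = 20, 500
S_MAX, M = 0.05, 0.05   # max selection coefficient, migration rate

# Selection coefficient varies linearly along the gradient:
# allele A is disfavoured at one end and favoured at the other.
s = [S_MAX * (2 * i / (N_DEMES - 1) - 1) for i in range(N_DEMES)]
p = [0.5] * N_DEMES      # initial frequency of allele A everywhere

for _ in range(GENERATIONS):
    # Haploid selection within each deme (fitness 1+s for A, 1 for a).
    p = [pi * (1 + si) / (pi * (1 + si) + (1 - pi)) for pi, si in zip(p, s)]
    # Migration: each deme swaps a fraction M with its neighbours
    # (edge demes partly "migrate" with themselves, fine for a toy).
    p = [(1 - M) * p[i]
         + M * (p[max(i - 1, 0)] + p[min(i + 1, N_DEMES - 1)]) / 2
         for i in range(N_DEMES)]

print(" ".join(f"{pi:.2f}" for pi in p))  # low at one end, high at the other
```

With selection at the ends stronger than migration, the frequencies settle into a sigmoid cline; in models such as Doebeli and Dieckmann's, assortative mating acting along such a gradient is what converts clinal variation into reproductive isolation.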
This allows a population to evolve reproductive isolation as either selection or drift overpowers gene flow between the populations.[1]:115 The smaller the discrete population, the higher the rate of parapatric speciation the species is likely to undergo.[14] Several mathematical models have been developed to test whether this form of parapatric speciation can occur, establishing theoretical possibility and supporting biological plausibility (dependent on the models' parameters and their concordance with nature).[1]:115 Joseph Felsenstein was the first to develop a working model.[1]:115 Later, Sergey Gavrilets and colleagues developed numerous analytical and dynamical models of parapatric speciation that have contributed significantly to the quantitative study of speciation. (See the "Further reading" section.) Barton and Hewitt, in studying 170 hybrid zones, further suggested that parapatric speciation can result from the same components that cause allopatric speciation. In this so-called para-allopatric speciation, populations begin diverging parapatrically, fully speciating only after a period of allopatry.[15] One variation of parapatric speciation involves chromosomal differences between populations. Michael J. D. White developed the stasipatric speciation model while studying Australian morabine grasshoppers (Vandiemenella). In this model, the chromosomal structure of sub-populations of a widespread species becomes underdominant, leading to fixation. Subsequently, the sub-populations expand within the species' larger range, hybridizing (with sterile offspring) in narrow hybrid zones.[16] Futuyma and Mayer contend that this form of parapatric speciation is untenable and that chromosomal rearrangements are unlikely to cause speciation.[17] Nevertheless, the data do support that chromosomal rearrangements can lead to reproductive isolation, though this does not mean that speciation results as a consequence.[1]:259 Very few laboratory studies have been conducted that explicitly test for parapatric speciation. However, research concerning sympatric speciation often lends support to the occurrence of parapatry. This is because, in sympatric speciation, gene flow within a population is unrestricted, whereas in parapatric speciation gene flow is limited, allowing reproductive isolation to evolve more easily.[1]:117 Ödeen and Florin compiled 63 laboratory experiments conducted between 1950 and 2000 (many of which were discussed previously by Rice and Hostert[18]) concerning sympatric and parapatric speciation. They contend that the laboratory evidence is more robust than often suggested, citing laboratory population sizes as the primary shortcoming.[19] Parapatric speciation is very difficult to observe in nature, for one primary reason: patterns of parapatry can easily be explained by an alternative mode of speciation. In particular, documenting closely related species sharing common boundaries does not imply that parapatric speciation created this geographic distribution pattern.[1]:118 Coyne and Orr assert that the most convincing evidence of parapatric speciation comes in two forms, meeting criteria they describe. One case is exemplified by the grass Agrostis tenuis, which grows on soil contaminated with high levels of copper leached from an unused mine, immediately adjacent to non-contaminated soil. The populations are evolving reproductive isolation due to differences in flowering time.
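A minimal way to see why small discrete demes diverge more easily is to add genetic drift to a two-deme migration model and watch when drift overpowers gene flow. The sketch below is a Wright-Fisher toy, not any of Felsenstein's or Gavrilets' published models; deme sizes, rates, and replicate counts are invented for illustration.

```python
# Toy Wright-Fisher model of two demes ("stepping stones"): drift pushes
# allele frequencies apart while migration pulls them together.
import random

def mean_divergence(n: int, m: float, reps: int = 10,
                    generations: int = 500) -> float:
    total = 0.0
    for seed in range(reps):
        rng = random.Random(seed)
        p1 = p2 = 0.5
        for _ in range(generations):
            # Migration: each deme receives a fraction m of migrants.
            p1, p2 = (1 - m) * p1 + m * p2, (1 - m) * p2 + m * p1
            # Binomial drift in each deme of n haploid individuals.
            p1 = sum(rng.random() < p1 for _ in range(n)) / n
            p2 = sum(rng.random() < p2 for _ in range(n)) / n
        total += abs(p1 - p2)
    return total / reps

for n in (50, 200):
    for m in (0.0, 0.001, 0.05):
        print(f"N={n:3d}  m={m:5.3f}  Nm={n * m:5.1f}  "
              f"mean divergence: {mean_divergence(n, m):.2f}")
```

The classic population-genetics rule of thumb that roughly one migrant per generation (Nm near 1) prevents strong differentiation shows up directly: divergence stays high when Nm is well below 1 and collapses when it is well above.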
The same phenomenon has been found in Anthoxanthum odoratum on lead- and zinc-contaminated soils.[20][21] Speciation may also be caused by allochrony.[22] Clines are often cited as evidence of parapatric speciation, and numerous examples have been documented in nature, many of which contain hybrid zones. These clinal patterns, however, can often also be explained by allopatric speciation followed by a period of secondary contact, causing difficulty for researchers attempting to determine their origin.[1]:118[23] Thomas B. Smith and colleagues posit that large ecotones are "centers for speciation" (implying parapatric speciation) and are involved in the production of biodiversity in tropical rainforests. They cite patterns of morphological and genetic divergence in the passerine species Andropadus virens.[24] Jiggins and Mallet surveyed a range of literature documenting every phase of parapatric speciation in nature, positing that it is both possible and likely (in the species discussed).[25] A study of the tropical cave snail Georissa saulae found that the cave-dwelling population descended from the above-ground population, likely speciating in parapatry.[26] Partula snails on the island of Mo'orea have speciated parapatrically in situ after one or a few colonization events, with some species expressing patterns of ring species.[27] In the Tennessee cave salamander, timing of migration was used to infer differences in gene flow between continuous cave-dwelling and surface-dwelling populations; the inferred gene flow and mean migration times indicated a heterogeneous distribution and ongoing parapatric speciation between the populations.[28] Researchers studying Ephedra, a genus of gymnosperms in North America, found evidence of parapatric niche divergence for the sister species pair E. californica and E. trifurca.[29] One study of Caucasian rock lizards suggested that habitat differences may be more important than isolation time in the development of reproductive isolation. Darevskia rudis, D. valentini, and D. portschinskii all hybridize with each other in their hybrid zone; however, hybridization is stronger between D. portschinskii and D. rudis, which separated earlier but live in similar habitats, than between D. valentini and the other two species, which separated later but live in climatically different habitats.[30] Parapatric speciation is widely thought to be far more common in oceanic species due to the low probability of full geographic barriers (required in allopatry).[31] Numerous studies have documented parapatric speciation in marine organisms. Bernd Kramer and colleagues found evidence of parapatric speciation in the mormyrid fish Pollimyrus castelnaui,[32] whereas Rocha and Bowen contend that parapatric speciation is the primary mode among coral-reef fish.[33] Evidence for a clinal model of parapatric speciation was found in Salpidae.[31] Nancy Knowlton found numerous examples of parapatry in a large survey of marine organisms.[34]
https://en.wikipedia.org/wiki/Parapatric_speciation
Paraphyly is a taxonomic term describing a grouping that consists of the grouping's last common ancestor and some but not all of its descendant lineages. The grouping is said to be paraphyletic with respect to the excluded subgroups. In contrast, a monophyletic grouping (a clade) includes a common ancestor and all of its descendants. The terms are commonly used in phylogenetics (a subfield of biology) and in the tree model of historical linguistics. Paraphyletic groups are identified by a combination of synapomorphies and symplesiomorphies. If many subgroups are missing from the named group, it is said to be polyparaphyletic. The term received currency during the debates of the 1960s and 1970s accompanying the rise of cladistics, having been coined by zoologist Willi Hennig to apply to well-known taxa like Reptilia (reptiles), which is paraphyletic with respect to birds. Reptilia contains the last common ancestor of reptiles and all descendants of that ancestor except for birds. Other commonly recognized paraphyletic groups include fish,[1] monkeys,[2] and lizards.[3] The term paraphyly, or paraphyletic, derives from the two Ancient Greek words παρά (pará), meaning "beside, near", and φῦλον (phûlon), meaning "genus, species",[4][5] and refers to the situation in which one or several monophyletic subgroups of organisms (e.g., genera, species) are left apart from all other descendants of a unique common ancestor. Conversely, the term monophyly, or monophyletic, builds on the Ancient Greek prefix μόνος (mónos), meaning "alone, only, unique",[4][5] and refers to the fact that a monophyletic group includes all the descendants of a unique common ancestor. By comparison, the term polyphyly, or polyphyletic, uses the Ancient Greek prefix πολύς (polús), meaning "many, a lot of",[4][5] and refers to the fact that a polyphyletic group includes organisms arising from multiple ancestral sources. Groups that include all the descendants of a common ancestor are said to be monophyletic. A paraphyletic group is a monophyletic group from which one or more subsidiary clades (monophyletic groups) are excluded to form a separate group. Philosopher of science Marc Ereshefsky has argued that paraphyletic taxa are the result of anagenesis in the excluded group or groups.[6] A cladistic approach normally does not grant paraphyletic assemblages the status of "groups", nor does it reify them with explanations, as in cladistics they are not seen as the actual products of evolutionary events.[7] A group whose identifying features evolved convergently in two or more lineages is polyphyletic (Greek πολύς [polys], "many"). More broadly, any taxon that is not paraphyletic or monophyletic can be called polyphyletic. Empirically, the distinction between polyphyletic groups and paraphyletic groups is rather arbitrary, since the character states of common ancestors are inferences, not observations. These terms were developed during the cladistics debates of the 1960s and 1970s. Paraphyletic groupings are considered problematic by many taxonomists, as it is not possible to talk precisely about their phylogenetic relationships, their characteristic traits, and their literal extinction.[8][9] Related terms are stem group, chronospecies, budding cladogenesis, anagenesis, and "grade" groupings.
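These definitions translate directly into a tree computation: a group is monophyletic if it equals the full leaf set of the smallest clade containing it, and otherwise it is paraphyletic or polyphyletic depending on how the excluded descendants are arranged. The sketch below uses a minimal rooted-tree representation invented for illustration, a deliberately simplified amniote tree, and one common operational convention for the paraphyletic/polyphyletic split (conventions vary).

```python
# A rooted tree as (name, [children]); a leaf has no children.
def node(name, *children):
    return (name, list(children))

# Simplified amniote topology for illustration only.
TREE = node("Amniota",
            node("Mammalia"),
            node("Sauropsida",
                 node("Lepidosauria"),
                 node("Archosauria",
                      node("Crocodylia"),
                      node("Aves"))))

def leaves(t):
    name, children = t
    return {name} if not children else set().union(*(leaves(c) for c in children))

def mrca(t, targets):
    """Smallest subtree whose leaf set contains all targets."""
    _, children = t
    for c in children:
        if targets <= leaves(c):
            return mrca(c, targets)
    return t

def is_clade(t, group):
    """True if `group` is exactly the leaf set of some subtree."""
    return leaves(mrca(t, group)) == group

def classify(t, group):
    clade_leaves = leaves(mrca(t, group))
    if clade_leaves == group:
        return "monophyletic"
    excluded = clade_leaves - group
    # One common convention: paraphyletic when the excluded descendants
    # themselves form a single clade; polyphyletic otherwise.
    return "paraphyletic" if is_clade(t, excluded) else "polyphyletic"

print(classify(TREE, {"Lepidosauria", "Crocodylia"}))  # paraphyletic ("reptiles" minus birds)
print(classify(TREE, {"Crocodylia", "Aves"}))          # monophyletic (Archosauria)
print(classify(TREE, {"Mammalia", "Aves"}))            # polyphyletic (convergent "warm-blooded" grouping)
```

The first call mirrors the Reptilia example: the smallest clade containing the "reptiles" also contains Aves, so excluding birds leaves a paraphyletic group.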
Paraphyletic groups are often relics of outdated hypotheses of phylogenetic relationships from before the rise of cladistics.[10] The prokaryotes (single-celled life forms without cell nuclei) are a paraphyletic grouping, because they exclude the eukaryotes, a descendant group. Bacteria and Archaea are prokaryotes, but the archaea and eukaryotes share a common ancestor that is not ancestral to the bacteria. The prokaryote/eukaryote distinction was proposed by Edouard Chatton in 1937[11] and was generally accepted after being adopted by Roger Stanier and C. B. van Niel in 1962. The botanical code (the ICBN, now the ICN) abandoned consideration of bacterial nomenclature in 1975; currently, prokaryotic nomenclature is regulated under the ICNB with a starting date of 1 January 1980 (in contrast to a 1753 start date under the ICBN/ICN).[12] Among plants, dicotyledons (in the traditional sense) are paraphyletic because the group excludes the monocotyledons. "Dicotyledon" has not been used as a botanical classification for decades, but is allowed as a synonym of Magnoliopsida. Phylogenetic analysis indicates that the monocots are a development from a dicot ancestor; excluding the monocots from the dicots therefore makes the latter a paraphyletic group.[13] Among animals, several familiar groups are not, in fact, clades. The order Artiodactyla (even-toed ungulates) as traditionally defined is paraphyletic because it excludes the cetaceans (whales, dolphins, etc.). Under the ranks of the ICZN Code, the two taxa are separate orders. Molecular studies, however, have shown that the Cetacea descend from artiodactyl ancestors, although the precise phylogeny within the order remains uncertain; without the cetaceans, the artiodactyls are paraphyletic.[14] The class Reptilia is paraphyletic because it excludes birds (class Aves). Under a traditional classification, these two taxa are separate classes; however, birds are the sister taxon to a group of dinosaurs (part of Diapsida), both of which are "reptiles".[15] Osteichthyes, the bony fish, are paraphyletic when circumscribed to include only Actinopterygii (ray-finned fish) and Sarcopterygii (lungfish, etc.) and to exclude the tetrapods; more recently, Osteichthyes is treated as a clade that includes the tetrapods.[16][17] The "wasps" are paraphyletic, consisting of the narrow-waisted Apocrita without the ants and bees.[18] The sawflies (Symphyta) are similarly paraphyletic, forming all of the Hymenoptera except for the Apocrita, a clade deep within the sawfly tree.[16] Crustaceans are not a clade because the Hexapoda (insects) are excluded; the modern clade that spans all of them is the Pancrustacea.[19][20][21] One of the goals of modern taxonomy over the past fifty years has been to eliminate paraphyletic taxa such as these from formal classifications.[22][23] Species have a special status in systematics as an observable feature of nature itself and as the basic unit of classification.[49] Some articulations of the phylogenetic species concept require species to be monophyletic, but paraphyletic species are common in nature, to the extent that they do not have a single common ancestor; indeed, for sexually reproducing taxa, no species has a "single common ancestor" organism. Paraphyly is common in speciation, whereby a mother species (a paraspecies) gives rise to a daughter species without itself becoming extinct.[50]
Research indicates that as many as 20 percent of all animal species and between 20 and 50 percent of plant species are paraphyletic.[51][52] Accounting for these facts, some taxonomists argue that paraphyly is a trait of nature that should be acknowledged at higher taxonomic levels.[53][54] Cladists advocate a phylogenetic species concept[55] that does not consider species to exhibit the properties of monophyly or paraphyly, concepts that under this perspective apply only to groups of species.[56] They consider Zander's extension of the "paraphyletic species" argument to higher taxa to represent a category error.[57] When the appearance of significant traits has led a subclade onto an evolutionary path very divergent from that of a more inclusive clade, it often makes sense to study the paraphyletic group that remains without considering the larger clade. For example, the Neogene evolution of the Artiodactyla (even-toed ungulates such as deer, cows, pigs, and hippopotamuses; Cervidae, Bovidae, Suidae, and Hippopotamidae, the families that contain these various artiodactyls, are all monophyletic groups) has taken place in environments so different from those of the Cetacea (whales, dolphins, and porpoises) that the Artiodactyla are often studied in isolation even though the cetaceans are a descendant group. The prokaryote group is another example: it is paraphyletic because it is composed of two domains (Eubacteria and Archaea) and excludes the eukaryotes. It is very useful because it has a clearly defined and significant distinction (absence of a cell nucleus, a plesiomorphy) from its excluded descendants. Also, some systematists recognize paraphyletic groups as being involved in evolutionary transitions, such as the development of the first tetrapods from their ancestors. Any name given to these hypothetical ancestors to distinguish them from tetrapods (for example, "fish") necessarily picks out a paraphyletic group, because the descendant tetrapods are not included.[58] Other systematists consider reification of paraphyletic groups to obscure inferred patterns of evolutionary history.[59] The term "evolutionary grade" is sometimes used for paraphyletic groups.[60] Moreover, the concepts of monophyly, paraphyly, and polyphyly have been used in deducing key genes for barcoding of diverse groups of species.[61] The concept of paraphyly has also been applied to historical linguistics, where the methods of cladistics have found some utility in comparing languages. For instance, the Formosan languages form a paraphyletic group of the Austronesian languages because they consist of the nine branches of the Austronesian family that are not Malayo-Polynesian and are restricted to the island of Taiwan.[62]
https://en.wikipedia.org/wiki/Paraphyly
In mathematics, a paraproduct is a non-commutative bilinear operator acting on functions that in some sense is like the product of the two functions it acts on. According to Svante Janson and Jaak Peetre, in an article from 1988,[1] "the name 'paraproduct' denotes an idea rather than a unique definition; several versions exist and can be used for the same purposes." The concept emerged in J.-M. Bony's theory of paradifferential operators.[2] That said, for a given operator Λ to be defined as a paraproduct, it is normally required to satisfy several standard properties, and it may also be required to satisfy some form of Hölder's inequality.
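As a concrete instance, one common construction (a sketch of Bony's paraproduct, not the unique definition; index conventions vary by author) decomposes a product using a Littlewood–Paley partition, with Δ_j projecting onto frequencies of size about 2^j and S_j summing the blocks up to j:

```latex
% Bony's decomposition of a product into two paraproducts and a remainder.
% \Delta_j : Littlewood--Paley projection onto frequencies |\xi| \sim 2^j,
% S_j = \sum_{k \le j} \Delta_k  (conventions vary by author).
fg = \underbrace{\sum_j S_{j-2} f \, \Delta_j g}_{\Lambda(f,g)}
   + \underbrace{\sum_j \Delta_j f \, S_{j-2} g}_{\Lambda(g,f)}
   + \underbrace{\sum_{|j-k| \le 1} \Delta_j f \, \Delta_k g}_{\text{remainder}}
```

Each summand S_{j-2}f Δ_j g stays frequency-localized in an annulus of size about 2^j; this localization is what makes the paraproduct Λ(f,g) better behaved than the full product and is the source of the Hölder-type boundedness estimates mentioned above.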
https://en.wikipedia.org/wiki/Paraproduct
Paraquat (trivial name; /ˈpærəkwɒt/), or N,N′-dimethyl-4,4′-bipyridinium dichloride (systematic name), also known as methyl viologen, is a toxic organic compound with the chemical formula [(C6H7N)2]Cl2. It is classified as a viologen, a family of redox-active heterocycles of similar structure.[5] This salt is one of the most widely used herbicides worldwide.[6] It is quick-acting and non-selective, killing green plant tissue on contact. Paraquat is highly toxic to humans and other animals; the toxicity and lethality depend on the dose and how the herbicide is absorbed by the body. In humans, paraquat damages the mouth, stomach, and intestines if it is ingested orally.[7] Once absorbed in the body, paraquat causes particular damage to the lungs, kidneys, and liver.[7] Paraquat's lethality is attributed to its enhancement of superoxide anion production, and human lung cells can accumulate paraquat. Paraquat exposure has been strongly linked to the development of Parkinson's disease.[8] Paraquat may be in the form of a salt with chloride or other anions; quantities of the substance are sometimes expressed by cation mass alone (paraquat cation, paraquat ion). The name is derived from the para positions of the quaternary nitrogens. In the synthesis, pyridine is coupled by treatment with sodium in ammonia followed by oxidation to give 4,4′-bipyridine. This chemical is then dimethylated with chloromethane to give the final product as the dichloride salt.[9] Use of other methylating agents gives the bispyridinium salt with alternate counterions; the original synthesis by Austrian chemist Hugo Weidel and his student M. Russo used methyl iodide (iodomethane), producing the diiodide.[10] Although first synthesized by Weidel and Russo in 1882,[10] paraquat's herbicidal properties were not recognized until 1955, in the Imperial Chemical Industries (ICI) laboratories at Jealott's Hill, Berkshire, England.[11][12] Paraquat was first manufactured and sold by ICI in early 1962 under the trade name Gramoxone, and is today among the most commonly used herbicides. Paraquat is classified as a non-selective contact herbicide. Several key characteristics distinguish it from other agents used in plant protection products; these properties led to paraquat being used in the development of no-till farming.[15][16][17] The European Union approved the use of paraquat in 2004, but Sweden, supported by Denmark, Austria, and Finland, appealed this decision. In 2007, the court annulled the directive authorizing paraquat as an active plant protection substance, stating that the 2004 decision was wrong in finding that there were no indications of neurotoxicity associated with paraquat and that the studies on the link between paraquat and Parkinson's disease should have been considered.[18] Thus, paraquat has been banned in the European Union since 2007.[18] China banned the domestic use of paraquat in 2017, followed by India,[19] Thailand in 2019, and Brazil, Chile, Malaysia, Peru, and Taiwan between 2020 and 2022.[20] In the United States, paraquat is available primarily as a solution in various strengths. It is classified as a restricted use pesticide, which means that it can be used by licensed applicators only.
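Since quantities are sometimes reported as cation mass and sometimes as salt mass, conversion between the two comes up in practice. A short worked check using standard atomic weights (the arithmetic below is our illustration, not a figure from the text):

```latex
% Paraquat dichloride [(C6H7N)2]Cl2 = C12H14N2Cl2 versus the C12H14N2^{2+} cation.
M(\text{cation}) = 12(12.011) + 14(1.008) + 2(14.007) \approx 186.3~\text{g/mol}
M(\text{dichloride salt}) = 186.3 + 2(35.45) \approx 257.2~\text{g/mol}
% So a quantity quoted as paraquat ion is about 186.3 / 257.2 \approx 72\%
% of the same quantity quoted as the dichloride salt.
```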
According to an October 2021 estimate, the use of paraquat in US agriculture as mapped by the US Geological Survey doubled from 2013 to 2018, reaching 10,000,000 pounds (4,500,000 kg) annually,[21] up from 1,054,000 pounds (478,000 kg) in 1974.[22] There is an ongoing international campaign for a global ban, but the cheap and popular paraquat continues to be unrestricted in most developing countries.[23] The Chemical Review Committee (CRC) of the Rotterdam Convention recommended paraquat dichloride formulations to the Conference of the Parties (COP) for inclusion in Annex III to the Convention in 2011.[24] A small group of countries, including India and Guatemala and supported by manufacturers, have since blocked the listing of paraquat as a hazardous chemical for the purposes of the Rotterdam Convention.[25] In Australia, paraquat is used as a herbicide to control annual grasses, broadleaf weeds, and ryegrass in crops of chickpeas, faba beans, field peas, lupins, lentils, and vetch. Aerial spraying is forbidden, as is harvesting within two weeks of application in some crops.[26] In India, paraquat dichloride 24% SL is widely used for broad-spectrum control of weeds on potato, cotton, rubber, wheat, tea, maize, rice, grapes, and apple, and for aquatic weeds.[27] Paraquat is an oxidant that interferes with electron transfer, a process that is common to all life. Addition of one electron gives the radical cation PQ•+, which is itself susceptible to further reduction to the neutral species PQ0.[29] As an herbicide, paraquat acts by inhibiting photosynthesis. In light-exposed plants, it accepts electrons from photosystem I (more specifically from ferredoxin, which is presented with electrons from PS I) and transfers them to molecular oxygen. In this manner, destructive reactive oxygen species (ROS) are produced. In forming these reactive oxygen species, the oxidized form of paraquat is regenerated and is again available to shunt electrons from photosystem I to restart the cycle.[30] This induces necrosis and, unlike some mechanisms of necrosis, does not produce double-strand breaks.[31] Target weeds die within four days; symptoms can show after as little as a few hours.[26] Paraquat is often used in science to catalyze the formation of ROS, more specifically the superoxide free radical. Paraquat undergoes redox cycling in vivo, being reduced by an electron donor such as NADPH before being oxidized by an electron acceptor such as dioxygen to produce superoxide, a major ROS.[32] Problems with herbicide-resistant weeds may be addressed by applying herbicides with different modes of action, along with cultural methods such as crop rotation, in integrated weed management (IWM) systems. Paraquat, with its distinctive mode of action, is one of the few chemical options that can be used to prevent and mitigate problems with weeds that have become resistant to the very widely used non-selective herbicide glyphosate.[33][34] Paraquat is a Group L (Australia), Group D (legacy global), Group 22 (numeric) resistance-class herbicide, a class it shares with diquat and cyperquat.[35] One example is the "double knock" system used in Australia.[36] Before planting a crop, weeds are sprayed first with glyphosate, followed seven to ten days later by a paraquat herbicide. Although twice as expensive as a single glyphosate spray, the double-knock system is widely relied upon by farmers as a resistance management strategy.[37]
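The herbicidal mechanism in plants and the ROS generation exploited in the laboratory are the same catalytic loop, which can be written compactly (standard viologen redox chemistry; PQ denotes the paraquat dication):

```latex
% One-electron reduction (by photosystem I / ferredoxin in plants,
% or by NADPH-dependent reductases in vivo):
\mathrm{PQ^{2+} + e^{-} \longrightarrow PQ^{\bullet+}}
% Reoxidation by molecular oxygen, yielding superoxide:
\mathrm{PQ^{\bullet+} + O_{2} \longrightarrow PQ^{2+} + O_{2}^{\bullet-}}
% Net effect per turn: one electron is shunted to O2 and PQ^{2+} is
% regenerated, so a small amount of paraquat produces ROS catalytically.
```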
Nevertheless, resistance to both herbicides has been seen in a vineyard in Western Australia,[38] though this single report gives no indication of what regimen was being followed, particularly whether the two herbicides were being used in double-knock tandem. A computer simulation reported in the scientific journal Weed Research showed that with alternating annual use of glyphosate and paraquat, only one field in five would be expected to have glyphosate-resistant annual ryegrass (Lolium rigidum) after 30 years, compared to nearly 90% of fields sprayed only with glyphosate.[39] A double-knock regime with paraquat cleaning up after glyphosate was predicted to keep all fields free of glyphosate-resistant ryegrass for at least 30 years. Paraquat is toxic to humans (Category II) by the oral route and moderately toxic (Category III) through the skin.[40] Pure paraquat, when ingested, is highly toxic to mammals, including humans, causing severe inflammation and potentially leading to severe lung damage (e.g., irreversible pulmonary fibrosis, also known as "paraquat lung"), acute respiratory distress syndrome (ARDS), and death.[41][42] The mortality rate is estimated at between 60% and 90%.[41] Paraquat is also toxic when inhaled and is in Toxicity Category I (the highest of four levels) for acute inhalation effects.[40] For agricultural uses, the United States Environmental Protection Agency (EPA) determined that particles used in agricultural practices (400–800 μm) are not in the respirable range.[40] Paraquat also causes moderate to severe irritation of the eyes and skin.[40] Diluted paraquat used for spraying is less toxic; thus, the greatest risk of accidental poisoning is during mixing and loading of paraquat for use.[12] The standard treatment for paraquat poisoning is first to remove as much as possible by pumping the stomach.[43] Fuller's earth or activated charcoal may also improve outcomes, depending on the timing. Haemodialysis, haemofiltration, haemoperfusion, or antioxidant therapy may also be suggested.[41] Immunosuppressive therapy to reduce the inflammation is an approach suggested by some; however, only low-certainty evidence supports using medications such as glucocorticoids with cyclophosphamide in addition to standard care to reduce mortality.[41] It is also unknown whether adding glucocorticoids with cyclophosphamide to standard care has unwanted side effects such as increasing the risk of infection.[41] Oxygen should not be administered unless SpO2 levels are below 92%, as high concentrations of oxygen intensify the toxic effects.[44][45] Lung injury is a main feature of poisoning; liver, heart, lung, and kidney failure can occur within several days to weeks, and death may occur up to 30 days after ingestion. Those who suffer large exposures are unlikely to survive. Chronic exposure can lead to lung damage, kidney failure, heart failure, and oesophageal strictures.[46] The mechanism underlying paraquat's toxic damage to humans is still unknown. The severe inflammation is thought to be caused by the generation of highly reactive oxygen and nitrite species, resulting in oxidative stress. The oxidative stress may result in mitochondrial toxicity and the induction of apoptosis and lipid peroxidation, which may be responsible for the organ damage.[41]
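The Weed Research result above comes from population simulation, and a stripped-down version conveys the logic. The sketch below is a toy model with invented parameters: the initial resistance frequency, kill rates, fecundity, and carrying capacity are assumptions for illustration, not values from the published study.

```python
# Toy model of glyphosate-resistance evolution in an annual weed under
# three spray strategies. The published simulation is far more detailed.
GLY_KILL_SUSC, GLY_KILL_RES = 0.99, 0.05   # glyphosate efficacy by genotype
PQ_KILL = 0.995                            # paraquat kills both genotypes
FECUNDITY = 20.0                           # seeds per surviving plant
CAP = 1e6                                  # field carrying capacity (plants)

def season(susc, res, herbicides):
    """One year: sprays, then density-limited reproduction."""
    for h in herbicides:
        if h == "glyphosate":
            susc, res = susc * (1 - GLY_KILL_SUSC), res * (1 - GLY_KILL_RES)
        elif h == "paraquat":
            susc, res = susc * (1 - PQ_KILL), res * (1 - PQ_KILL)
    total = (susc + res) * FECUNDITY
    scale = min(1.0, CAP / total) if total > 0 else 0.0
    return susc * FECUNDITY * scale, res * FECUNDITY * scale

def run(strategy, years=30):
    susc, res = CAP - 1.0, 1.0             # one resistant plant initially
    for year in range(years):
        susc, res = season(susc, res, strategy(year))
    return res

print(f"glyphosate only: {run(lambda y: ['glyphosate']):12.0f} resistant plants")
print(f"annual rotation: {run(lambda y: ['glyphosate'] if y % 2 == 0 else ['paraquat']):12.0f} resistant plants")
print(f"double knock   : {run(lambda y: ['glyphosate', 'paraquat']):12.0f} resistant plants")
```

In this toy, glyphosate alone hands the field to the resistant genotype, rotation slows resistance substantially, and the double knock (paraquat catching glyphosate survivors in the same season) drives the resistant line extinct, matching the qualitative pattern the published simulation reported.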
It is known that the alveolar epithelial cells of the lung selectively concentrate paraquat.[47] It has been reported that a small dose, even if removed from the stomach or spat out, can still cause death from fibrous tissue developing in the lungs, leading to asphyxiation.[48] Accidental deaths and suicides from paraquat ingestion are relatively common; for example, paraquat poisoning causes more than 5,000 deaths in China every year,[49] which in part led to China's ban in 2017.[20] Long-term exposure to paraquat would most likely cause lung and eye damage, but reproductive/fertility damage was not found by the EPA in its review. During the late 1970s, a controversial program sponsored by the US government sprayed paraquat on cannabis fields in Mexico.[50] Following Mexican efforts to eradicate marijuana and poppy fields in 1975, the United States government helped by sending helicopters and other technological assistance. Helicopters were used to spray the herbicides paraquat and 2,4-D on the fields; marijuana contaminated with these substances began to show up in US markets, leading to debate about the program.[51] Whether any injury came about due to the inhalation of paraquat-contaminated marijuana is uncertain. A 1995 study found that "no lung or other injury in cannabis users has ever been attributed to paraquat contamination".[52] A United States Environmental Protection Agency manual likewise states: "... toxic effects caused by this mechanism have been either very rare or nonexistent. Most paraquat that contaminates cannabis is pyrolyzed during smoking to dipyridyl, which is a product of combustion of the leaf material itself (including cannabis) and presents little toxic hazard."[53] In a study by Imperial Chemical Industries, rats that inhaled paraquat developed squamous metaplasia in their respiratory tracts after a couple of weeks. This study was included in a report given to the State Department by the Mitre Corporation. The U.S. Public Health Service stated that "this study should not be used to calculate the safe inhalation dose of paraquat in humans."[54] A large majority (93 percent) of fatalities from paraquat poisoning are suicides, which occur mostly in developing countries.[55] For instance, in Samoa from 1979 to 2001, 70 percent of suicides were by paraquat poisoning. Trinidad and Tobago is particularly well known for its incidence of suicides involving the use of Gramoxone (the commercial name of paraquat). In southern Trinidad, particularly in Penal and Debe from 1996 to 1997, 76 percent of suicides were by paraquat, 96 percent of which involved the over-consumption of alcohol such as rum.[56] Fashion celebrity Isabella Blow died by suicide using paraquat in 2007. Paraquat is widely used as a suicide agent in developing countries because it is widely available at low cost; further, the toxic dose is low (10 mL, or 2 teaspoons, is enough to kill). Campaigns exist to control or even ban paraquat, and there are moves to restrict its availability by requiring user education and the locking up of paraquat stores. When a 2011 South Korean law completely banned paraquat in the country, deaths by pesticide poisoning plummeted 46%, contributing to a decrease in the overall suicide rate.[57] The indiscriminate paraquat murders, which occurred in Japan in 1985, were carried out using paraquat as a poison. Paraquat was also used in the UK in 1981 by a woman who poisoned her husband.[58]
American serial killer Steven David Catlin killed two of his wives and his adoptive mother with paraquat between 1976 and 1984. In 2022, a 22-year-old woman, Greeshma Raj, was found guilty of using paraquat to murder her boyfriend, Sharon Raj, in Kerala, India.[59] According to the WHO (2022), measures to prevent Parkinson's disease include "banning of pesticides (e.g., paraquat and chlorpyrifos) and chemicals (e.g., trichloroethylene) which have been linked to PD and develop safer alternatives as per WHO guidance" and "accelerate action to reduce levels of and exposure to air pollution, an important risk factor for PD".[60] A 2011 study showed a link between paraquat use and Parkinson's disease in farm workers.[61] A co-author of the paper said that paraquat increases production of certain oxygen derivatives that may harm cellular structures, and that people who used paraquat, or other pesticides with a similar mechanism of action, were more likely to develop Parkinson's.[62] A 2013 meta-analysis published in Neurology found that "exposure to paraquat ... was associated with about a 2-fold increase in risk" of Parkinson's disease.[63] A review in 2021 concluded that the available evidence does not support a causal conclusion.[64] In 2022 and 2023, two reviews from India "decisively demonstrated that paraquat is a substantial stimulant of oxidative stress … and is associated with Parkinson's disease (PD)" and stated that "From the studies we can consider that PQ and MB with its combined effects has tremendous contribution towards neurodegeneration in PD."[65][66] In the UK, the use of paraquat was banned in 2007, but the manufacture and export of the herbicide are still permitted. In April 2022, the BBC reported that some UK farmers had called for a ban on British production of paraquat; the report stated that "There is no scientific consensus and many conflicting studies on any possible association between Paraquat and Parkinson's". In the US, a class action lawsuit against Syngenta is ongoing; the company rejects the claims but has paid £187.5 million into a settlement fund.[67] As of August 2024, more than 5,700 cases against Syngenta (the manufacturer of Gramoxone) and Chevron (the former distributor) were pending in the paraquat multidistrict litigation in the US, with the first of 10 bellwether trials scheduled to start in 2024.[68][69][70] On April 15, 2025, attorneys representing plaintiffs in the multidistrict litigation entered into a settlement agreement,[71][72] which came as the first bellwether trial was six months away.[71] In August 2024, the British Columbia Supreme Court certified a class-action lawsuit against Syngenta on behalf of at least two plaintiffs who were diagnosed with Parkinson's after exposure to paraquat.[73] According to the NIEHS, pesticide exposure has consistently been associated with the onset of Parkinson's disease.[74] Some people are more vulnerable to the harmful effects of pesticides because of their age or genetic makeup.[74] Further research into links between preventable exposures and Parkinson's disease, as well as into preventive therapies, could help reduce the incidence of the disease. For example, using protective gloves and other hygiene practices reduced the risk of Parkinson's disease among farmers using paraquat, permethrin, and trifluralin.[74]
https://en.wikipedia.org/wiki/Paraquat
The parasexual cycle, a process restricted to fungi and single-celled organisms, is a nonsexual mechanism of parasexuality for transferring genetic material without meiosis or the development of sexual structures.[1] It was first described by Italian geneticist Guido Pontecorvo in 1956 during studies on Aspergillus nidulans (also called Emericella nidulans when referring to its sexual form, or teleomorph). A parasexual cycle is initiated by the fusion of hyphae (anastomosis), during which nuclei and other cytoplasmic components come to occupy the same cell (heterokaryosis and plasmogamy). Fusion of the unlike nuclei in the cell of the heterokaryon results in the formation of a diploid nucleus (karyogamy), which is believed to be unstable and can produce segregants by recombination involving mitotic crossing-over and haploidization. Mitotic crossing-over can lead to the exchange of genes on chromosomes, while haploidization probably involves mitotic nondisjunctions that randomly reassort the chromosomes, producing aneuploid and haploid cells. Like a sexual cycle, parasexuality gives a species the opportunity to recombine its genome and produce new genotypes in its offspring; unlike a sexual cycle, the process lacks coordination and is exclusively mitotic. The parasexual cycle resembles sexual reproduction: in both cases, unlike hyphae (or modifications thereof) may fuse (plasmogamy) and their nuclei come to occupy the same cell, and the unlike nuclei fuse (karyogamy) to form a diploid (zygote) nucleus. In contrast to the sexual cycle, recombination in the parasexual cycle takes place during mitosis, followed by haploidization (but without meiosis). The recombined haploid nuclei appear among vegetative cells, which differ genetically from those of the parent mycelium. Both heterokaryosis and the parasexual cycle are very important for those fungi that have no sexual reproduction, as these processes provide for somatic variation in the vegetative phase of their life cycles. This is also true for fungi in which a sexual phase is present, although in that case additional and significant variation is introduced through sexual reproduction. Occasionally, two haploid nuclei fuse to form a diploid nucleus with two homologous copies of each chromosome. The mechanism is largely unknown, and it seems to be a relatively rare event, but once a diploid nucleus has formed it can be very stable, dividing to form further diploid nuclei alongside the normal haploid nuclei. Thus the heterokaryon consists of a mixture of the two original haploid nuclear types as well as diploid fusion nuclei.[2] Chiasma formation is common in meiosis, where two homologous chromosomes break and rejoin, leading to chromosomes that are hybrids of the parental types. It can also occur during mitosis, but at a much lower frequency, because the chromosomes do not pair in a regular arrangement; nevertheless, when it does occur, the result is the same: the recombination of genes.[2] Occasionally, nondisjunction occurs during division of a diploid nucleus, so that one daughter nucleus has one chromosome too many (2n+1) and the other has one chromosome too few (2n–1). Such nuclei, with incomplete multiples of the haploid number, are termed aneuploid, as they do not have even chromosome sets such as n or 2n. They tend to be unstable and to lose further chromosomes during subsequent mitotic divisions, until the 2n+1 and 2n–1 nuclei progressively revert to n.
Consistent with this, in E. nidulans (where normally n = 8), nuclei have been found with 17 (2n+1), 16 (2n), 15 (2n–1), 12, 11, 10, and 9 chromosomes.[2] Each of these events is relatively rare, and together they do not constitute a regular cycle like the sexual cycle, but the outcome is similar: once a diploid nucleus has formed by fusion of two haploid nuclei from different parents, the parental genes can potentially recombine, and the chromosomes lost from an aneuploid nucleus during its reversion to a euploid state can be a mixture of those of the two parental strains.[2] The potential to undergo a parasexual cycle under laboratory conditions has been demonstrated in many species of filamentous fungi, including Fusarium moniliforme,[3] Penicillium roqueforti[4] (used in making blue cheeses[5]), Verticillium dahliae,[6][7] Verticillium albo-atrum,[8] Pseudocercosporella herpotrichoides,[9] Ustilago scabiosae,[10] Magnaporthe grisea,[11] Cladosporium fulvum,[12][13] and the human pathogens Candida albicans[14] and Candida tropicalis.[15] A study of the evolution of sexual reproduction in six Candida species concluded that there had been recent losses of components of the major meiotic crossover-formation pathway, but retention of a minor pathway.[16] It was suggested that if Candida species undergo meiosis, it is with reduced or different machinery, and that unrecognized meiotic cycles may exist in many species.[16] Parasexuality has become a useful tool for industrial mycologists for producing strains with desired combinations of properties. Its significance in nature is largely unknown and will depend on the frequency of heterokaryosis, which is determined by cytoplasmic incompatibility barriers; parasexuality is also useful in rDNA technology.[2]
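The stepwise reversion from 2n to n via aneuploidy lends itself to a toy simulation. The sketch below is an invented illustration, not a model from the parasexual-cycle literature; the loss probability is an assumption. It follows a diploid E. nidulans-like nucleus (2n = 16) losing random chromosomes at mitosis until a haploid complement remains, showing why intermediate counts such as 15, 12, 11, 10, and 9 are observed.

```python
# Toy haploidization: a diploid nucleus (8 chromosome types x 2 parental
# homologs, so 2n = 16) loses chromosomes one at a time through mitotic
# nondisjunction. Nuclei that lose the last copy of a type are taken to
# be inviable, so only duplicated types can be lost. Probabilities are
# invented for illustration.
import random

rng = random.Random(0)
N_TYPES, LOSS_PROB = 8, 0.3

# Each chromosome is (type, parental origin "A" or "B").
nucleus = [(t, p) for t in range(N_TYPES) for p in "AB"]
counts = [len(nucleus)]

while any(sum(c[0] == t for c in nucleus) == 2 for t in range(N_TYPES)):
    if rng.random() < LOSS_PROB:        # a nondisjunction event this division
        dup = [c for c in nucleus
               if sum(x[0] == c[0] for x in nucleus) == 2]
        nucleus.remove(rng.choice(dup)) # drop one homolog of a duplicated type
    counts.append(len(nucleus))

print("chromosome counts over divisions:", counts)
print("final haploid set (type, parent):", sorted(nucleus))
```

The final haploid set typically mixes "A" and "B" homologs, which is how haploidization reassorts whole chromosomes between the parental genomes even without mitotic crossing-over.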
https://en.wikipedia.org/wiki/Parasexual_cycle
Parasite Rex: Inside the Bizarre World of Nature's Most Dangerous Creatures is a nonfiction book by Carl Zimmer, published by Free Press in 2000. The book discusses the history of parasites on Earth and how the field of parasitology formed, along with a look at the most dangerous parasites ever found in nature. A special paperback edition was released in March 2011 for the tenth anniversary of the book's publication, including a new epilogue written by Zimmer; signed bookplates were also given to fans who sent in a photo of themselves with a copy of the special edition.[1] The cover of Parasite Rex features a scanning electron microscope image of a tick, along with illustrations in the centerfold of parasites and topics discussed in the book.[2] The book begins by discussing the history of parasites in human knowledge, from the earliest writings about them in ancient cultures up through modern times. The focus comes to rest extensively on the views and experiments of scientists in the 17th, 18th, and 19th centuries, such as Antonie van Leeuwenhoek, Japetus Steenstrup, Friedrich Küchenmeister, and Ray Lankester. Among them, Leeuwenhoek was the first to physically view cells through a microscope; Steenstrup was the first to explain and confirm the multi-stage life cycles of parasites, which differ from those of most other living organisms; and Küchenmeister, guided by his religious belief that every creature has a place in the natural order, rejected the prevailing ideas of his time and, through morally ambiguous experiments on prisoners, showed that parasites occupy active evolutionary niches rather than being biological dead ends. Lankester is given specific focus and repeated discussion throughout the book because of his belief that parasites are examples of degenerative evolution, especially in regard to Sacculina, and because of Zimmer's repeated refutation of this idea.[3] Several chapters discuss various types of parasites and how they infect and control their hosts, along with the biochemistry involved in their takeover or evasion of their host's immune system, eventually leading to their dispersal into their next form and life cycle. Extended attention is also given to the workings of immunology, how the immune systems of living beings respond to parasite infection, and the methods that bodily functions use to counteract and potentially kill invading microorganisms. Woven into this discussion are several specific sites that Zimmer visited while writing Parasite Rex and the scientists he worked with to understand different biosystems and the parasites that live within them, including human sleeping sickness infections in Sudan transmitted by the tsetse fly, the parasites of frogs in Costa Rica, filarial worms that infect humans and a variety of other species, and the USDA National Parasite Collection based in Maryland.[2][3] The final chapters focus on the overall effect parasites have had on the evolution of life, and on the theory that sexual reproduction evolved to become dominant over earlier asexual methods because of parasitic infection, owing to the increased genetic variety, and thus potential parasite resistance, that sex confers. This research was showcased by the work of W. D.
Hamilton and his theories on the evolution of sex, along with the Red Queen hypothesis and the idea of an evolutionary arms race between parasites and their hosts.[4] Zimmer then discusses a final time the wide variety of parasites that evolved to have humans as their primary hosts, and our attempts through scientific advancement to eradicate them.[2] The closing chapter considers the positive benefits of parasites and how humans have used them to improve agriculture and medical technology, but also how ill-considered use of parasites could destroy various habitats by having them act as invasive species.[5] In the end, Zimmer ponders whether humanity counts as a parasite on the planet and what the effects of this relationship could be.[2] In a review for Science, Albert O. Bush noted that Zimmer writes with "clarity, conviction, and seemingly without prejudice" and that while the "purist will find the odd mistakes, oversights, and minor errors of fact", these are "insignificant" and do not detract from Parasite Rex's "overall quality or, more importantly, its focus and take-home message."[2] The New York Times' Kevin Padian praised the book and Zimmer's writing, saying that it shows him to be as "fine a science essayist as we have" and that the importance of the book rests "not only in its accessible presentation of the new science of evolutionary parasitology but in its thoughtful treatment of the global strategies and policies that scientists, health workers and governments will have to consider in order to manage parasites in the future".[5] Publishers Weekly called the book an "exemplary work of popular science" and one of the "most fascinating works" of its kind, while also being "its most disgusting".[6] Margaret Henderson, writing for the Library Journal, recommended the book for all libraries, saying that it "makes parasitology interesting and accessible to anyone".[7] Writing in the Quarterly Review of Biology, May Berenbaum described Parasite Rex as a "remarkable book" that is "unique in its focus and is extremely readable", earning the reviewer's "respect and recommendation" for managing to discuss the life cycles of lancet flukes and the Red Queen hypothesis properly in a single book.[3] Joe Eaton, in the Whole Earth Review, categorized Parasite Rex as "one of those books that change the way you see the world" for showing that ecosystems are largely made up of the parasites that individual organisms carry.[8] A review in The American Biology Teacher by Donald A. Lawrence called the book a "splendid overview of current knowledge about parasites" and praised its extensive Notes, Literature Cited, and Index sections.[9] The newsletter editor for the American Society of Parasitologists, Scott Lyell Gardner, congratulated the book for bringing the field of parasitology into public view, saying that the way Zimmer "presents parasites in the “ugh” and “oooh” mode, in addition to trying to show how parasitologists actually ply our trade" helps generate interest in the subject.[10] BlueSci writer Harriet Allison summed up the book as one in which Zimmer "manages to weave just enough easily understandable science into each chapter in order to create an engrossing and squirm-inducing story that will have you hooked until the end".[11]
Kirkus Reviews praised the "vivid detail" given to the lifestyles of parasites, calling the book an "eye-opening perspective on biology, ecology, and medicine" that is "well worth reading".[12]
https://en.wikipedia.org/wiki/Parasite_Rex
Parasite load is a measure of the number and virulence of the parasites that a host organism harbours. Quantitative parasitology deals with measures to quantify parasite loads in samples of hosts and to make statistical comparisons of parasitism across host samples. In evolutionary biology, parasite load has important implications for sexual selection and the evolution of sex, as well as for openness to experience.[1] A single parasite species usually has an aggregated distribution across host individuals: most hosts harbor few parasites, while a few hosts carry the vast majority of parasite individuals. This poses considerable problems for students of parasite ecology, since it makes the use of parametric statistics inadvisable. Log-transformation of the data before applying a parametric test, or the use of non-parametric statistics, is often recommended, but this can give rise to further problems, so modern-day quantitative parasitology is based on more advanced biostatistical methods. In vertebrates, males frequently carry higher parasite loads than females.[2] Differences in movement patterns, habitat choice, diet, body size, and ornamentation are all thought to contribute to this sex bias in parasite loads. Males often have larger habitat ranges and are thus likely to encounter more parasite-dense areas than female conspecifics. Whenever a species exhibits sexual dimorphism, the larger sex is thought to tolerate higher parasite loads. In insects, susceptibility to parasite load has been linked to genetic variation within the colony:[3] in colonies of Hymenoptera (ants, bees, and wasps), colonies with high genetic variation that were exposed to parasites experienced lower parasite loads than more genetically uniform colonies. Depending on the parasitic species in question and where it resides in the host body, various methods of quantification allow scientists to measure the number of parasites present and determine the parasite load of an organism. For example, intracellular parasites such as those of the protozoan genus Plasmodium, which causes malaria in humans, are quantified by performing a blood smear and counting parasites under the microscope relative to the blood cells examined.[4] Other parasites residing in the blood of a host can be similarly counted on a blood smear using specific staining methods to better visualize the cells. As technology advances, more modern methods of parasite quantification are emerging, such as handheld automated cell counters for efficiently counting parasites such as Plasmodium in blood smears. Quantifying intestinal parasites, such as the nematodes present in an individual, often requires dissection of the animal and extraction and counting of the parasites. Other techniques for detecting intestinal parasites exist that do not require dissection, such as fecal examination. This is common practice in veterinary medicine and is used to calculate parasite load in domestic animals such as cats and dogs. Methods of fecal examination include fecal smears and flotation methods. Fecal flotation can detect the reproductive stages of endoparasites (eggs, larvae, oocysts, and cysts) that pass through the digestive system and are therefore present in the feces.[5]
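Aggregated parasite distributions of the kind described above are conventionally modelled with a negative binomial distribution, and host samples are summarized by prevalence, mean intensity, and mean abundance, the standard measures of quantitative parasitology. The sketch below is illustrative: the parameter values and the gamma-Poisson sampler are assumptions for demonstration, not data from any study.

```python
# Simulate an aggregated parasite distribution (negative binomial via a
# gamma-Poisson mixture) and compute standard summary measures.
import math, random

rng = random.Random(42)
N_HOSTS, MEAN_LOAD, K = 200, 10.0, 0.5   # small k = strong aggregation

def poisson(lam):
    """Knuth's algorithm; adequate for the modest rates used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

loads = [poisson(rng.gammavariate(K, MEAN_LOAD / K)) for _ in range(N_HOSTS)]

infected = [x for x in loads if x > 0]
prevalence     = len(infected) / N_HOSTS           # fraction of hosts infected
mean_abundance = sum(loads) / N_HOSTS              # parasites per host (all hosts)
mean_intensity = sum(infected) / len(infected)     # parasites per infected host
variance = sum((x - mean_abundance) ** 2 for x in loads) / (N_HOSTS - 1)

print(f"prevalence {prevalence:.0%}, mean abundance {mean_abundance:.1f}, "
      f"mean intensity {mean_intensity:.1f}")
print(f"variance/mean ratio {variance / mean_abundance:.1f}  (>1 indicates aggregation)")
```

A small aggregation parameter k reproduces exactly the pattern described above: most hosts carry few or no parasites while a few carry very many, which is why the variance far exceeds the mean and why naive parametric statistics mislead.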
For analytical statistical methods used to study the extent and intensity of parasitic infection, see Quantitative parasitology. Parasite load has been known to affect sexual selection in various species. Hamilton and Zuk (1982) suggested that females could base their choice of mates on heritable resistance to parasites. [ 6 ] This hypothesis proposes that the expression of secondary sex characteristics depends on the host's overall health. Hosts coevolve with parasites and thus generate heritable resistance to parasites, which have a net negative effect on host viability. Therefore, females will select males with few or no parasites by basing their choice on whether or not the male has fully expressed secondary sexual, or 'healthy', characteristics. One study found that parasite load predicts mate choice in guppies. [ 7 ] When controlling for other variables, females were shown to prefer males with relatively few parasites, a preference associated with the higher display rates of less parasitized males. This phenomenon has also been observed in other species. Parasite load has also been shown to affect the behaviour of the infected individual. Numerous studies have examined how the number of parasites present in a host correlates with behaviours such as foraging, migration, and competition. In a study performed at the University of Georgia, it was found that beetles with higher parasite loads won more fights than those with lower parasite loads. [ 8 ] When put up against beetles with no parasites present, however, the parasite-laden beetles lost the fights. Bird species have also exhibited behavioural effects in relation to parasite load. In passerine songbirds, high parasite load results in reduced song output, affecting the expression of secondary sexual characteristics that influence mate selection. [ 9 ] Similar effects have been observed in other bird species. Parasite load has also been shown to affect the spread of infectious diseases. For example, parasitologists at the Universidade de São Paulo researched the effect of Chagas disease on the immune system. They found that individuals who survived the acute phase of infection develop a parasite-specific immune response that reduces parasite levels in tissues and blood. [ 10 ] This research aims to discover whether the parasite load during the acute stage of infection affects whether the host will eventually mount a protective immune response. The research was conducted on mice, with the intention of eventually using the information gleaned from the experiments to assist humans who have contracted Chagas disease. Marinho et al. found that parasite load in the acute phase of infection correlates, at the late chronic stage of the disease, with the intensity of the activation and response of the host's immune system. This research could lead to new discoveries in parasitology, and could potentially help prevent the spread of parasites, and therefore of diseases linked to parasite infection, within a given population. Host stress creates conditions within the host that are less than ideal for parasites, yet stressed hosts often carry higher parasite loads. Malnutrition has been shown to suppress the immune system, leading to higher parasite loads within a population and increased transmission rates throughout the population. [ 11 ] It has been shown that malnutrition and putrefaction can lead to illness within a population and therefore increase the number of parasites within a population.
Individuals that are malnourished and stressed exhibit the highest parasite loads. This implies that these individuals have a higher likelihood of dying from environmental factors as well as from parasite infection, which would likely kill the population of parasites within that specific host and so limit the propagation of the parasites within the population. In an experiment conducted by Pulkkinen et al., [ 12 ] it was found that when food was limited in a population of parasite-infected Daphnia, there were mortalities among the infected hosts. This was due to stress within the environment, as well as stress within the host's body from parasite infection. Pulkkinen et al. also found that after a period of time there was a corresponding reduction in the average size of the hosts, and as the mortality rate due to malnutrition and environmental stress fell, the parasite load within the population rose again. Parasite load is a complex ecological phenomenon, often exhibiting a negative feedback loop, as it is in the interest of the parasite population for the host to survive infection.
https://en.wikipedia.org/wiki/Parasite_load
Parasitic chromosomes are "selfish" chromosomes that propagate through cell divisions even if they confer no benefit to the organism's survival. Parasitic chromosomes can persist even if slightly detrimental to survival, as is characteristic of some selfish genetic elements. Parasitic chromosomes are often B chromosomes, meaning that they are not necessarily present in the majority of the species' population and are not needed for basic life functions, in contrast to A chromosomes. Parasitic chromosomes are classified as selfish genetic elements. [ 1 ] Parasitic chromosomes, if detrimental to an organism's survival, are often selected against by natural selection over time, but if the chromosome is able to act like a selfish DNA element, it can spread throughout a population. An example of a parasitic chromosome is the b24 chromosome in grasshoppers. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Parasitic_chromosome
Parasitic computing is a technique whereby a program, in normal authorized interactions with another program, manages to get the other program to perform computations of a complex nature without exploiting vulnerabilities to execute attacker-supplied code on the latter. It is, in a sense, a security exploit in that the program implementing the parasitic computing has no authority to consume resources made available to the other program. It was first proposed by Albert-Laszlo Barabasi, Vincent W. Freeh, Hawoong Jeong & Jay B. Brockman from University of Notre Dame, Indiana, USA, in 2001. [ 1 ] The example given by the original paper was two computers communicating over the Internet, under the disguise of a standard communications session. The first computer is attempting to solve a large and extremely difficult 3-SAT problem; it has decomposed the original 3-SAT problem into a considerable number of smaller problems. Each of these smaller problems is then encoded as a relation between a checksum and a packet such that whether the checksum is accurate or not is also the answer to that smaller problem. The packet/checksum is then sent to another computer. This computer will, as part of receiving the packet and deciding whether it is valid and well-formed, create a checksum of the packet and see whether it is identical to the provided checksum. If the checksum is invalid, it will then request a new packet from the original computer. The original computer now knows the answer to that smaller problem based on the second computer's response, and can transmit a fresh packet embodying a different sub-problem. Eventually, all the sub-problems will be answered and the final answer easily calculated. The example is based on an exploit of the Transmission Control Protocol (TCP), used for internet connections, so in the end the target computer is unaware that it has performed computation for the benefit of the other computer, or even done anything besides having a normal TCP/IP session. The proof-of-concept is obviously extremely inefficient, as the amount of computation necessary merely to send the packets in the first place easily exceeds the computation leeched from the other program; the 3-SAT problem would be solved much more quickly if just analyzed locally. In addition, in practice packets would probably have to be retransmitted occasionally when real checksum errors and network problems occur. However, parasitic computing on the level of checksums is a demonstration of the concept. The authors suggest that as one moves up the application stack, there might come a point where there is a net computational gain to the parasite - perhaps one could break down interesting problems into queries of complex cryptographic protocols using public keys. If there were a net gain, one could in theory use a number of control nodes for which many hosts on the Internet form a distributed computing network completely unaware. Students of the University of Applied Sciences, Bern, Switzerland, extended this concept into a programmable virtual machine in 2002. [ 2 ]
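The checksum trick can be simulated in a few lines. The sketch below is a local simulation of the protocol logic, not an actual network exploit: in the real scheme the packet bytes are constructed so that the host's ordinary TCP checksum arithmetic itself evaluates the candidate solution, whereas here build_packet() simulates that property directly (the function names and the tiny example formula are ours; only the RFC 1071 checksum is standard).

```python
from itertools import product

def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement sum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# A tiny 3-SAT instance: each clause is a triple of (variable, negated?) literals.
CLAUSES = [((0, False), (1, True), (2, False)),
           ((0, True), (1, False), (2, True))]

def satisfies(assign) -> bool:
    return all(any(assign[v] != neg for v, neg in clause) for clause in CLAUSES)

def build_packet(assign) -> bytes:
    # Stand-in for the paper's bit-level construction: the checksum field is
    # consistent with the payload only for satisfying assignments.
    payload = bytes(assign)
    chk = internet_checksum(payload) if satisfies(assign) else 0xDEAD
    return payload + chk.to_bytes(2, "big")

def host_receives(packet: bytes) -> bool:
    # The unwitting "host" does nothing but routine validity checking.
    payload, chk = packet[:-2], int.from_bytes(packet[-2:], "big")
    return internet_checksum(payload) == chk

# The "parasite" learns the answers from which packets would be accepted
# rather than discarded as corrupt.
for candidate in product((0, 1), repeat=3):
    if host_receives(build_packet(candidate)):
        print("satisfying assignment:", candidate)
```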
https://en.wikipedia.org/wiki/Parasitic_computing
Parasitic oscillation is an undesirable electronic oscillation (cyclic variation in output voltage or current) in an electronic or digital device. It is often caused by feedback in an amplifying device. The problem occurs notably in RF, [ 1 ] audio, and other electronic amplifiers, [ 2 ] as well as in digital signal processing. [ 3 ] It is one of the fundamental issues addressed by control theory. [ 4 ] [ 5 ] [ 6 ] Parasitic oscillation is undesirable for several reasons. The oscillations may be coupled into other circuits or radiate as radio waves, causing electromagnetic interference (EMI) to other devices. In audio systems, parasitic oscillations can sometimes be heard as annoying sounds in the speakers or earphones. The oscillations waste power and may cause undesirable heating. For example, an audio power amplifier that goes into parasitic oscillation may generate enough power to damage connected speakers. A circuit that is oscillating will not amplify linearly, so desired signals passing through the stage will be distorted. In digital circuits, parasitic oscillations may only occur on particular logic transitions and may result in erratic operation of subsequent stages; for example, a counter stage may see many spurious pulses and count erratically. Parasitic oscillation in an amplifier stage occurs when part of the output energy is coupled into the input, with the correct phase and amplitude to provide positive feedback at some frequency. The coupling can occur directly between input and output wiring with stray capacitance or mutual inductance between input and output. In some solid-state or vacuum electron devices there is sufficient internal capacitance to provide a feedback path. Since the ground is common to both input and output, output current flowing through the impedance of the ground connection can also couple signals back to the input. Similarly, impedance in the power supply can couple input to output and cause oscillation. When a common power supply is used for several stages of amplification, the supply voltage may vary with the changing current in the output stage. The power supply voltage changes will appear in the input stage as positive feedback. An example is a transistor radio which plays well with a fresh battery, but squeals or "motorboats" when the battery is old. In audio systems, if a microphone is placed close to a loudspeaker, parasitic oscillations may occur. This is caused by positive feedback, from the amplifier's output to loudspeaker to sound waves, and back via the microphone to the amplifier input. See Audio feedback. Feedback control theory was developed to address the problem of parasitic oscillation in servo control systems – the systems oscillated rather than performing their intended function, for example velocity control in engines. The Barkhausen stability criterion gives the necessary condition for oscillation: the loop gain around the feedback loop, which is equal to the amplifier gain multiplied by the transfer function of the inadvertent feedback path, must be equal to one, and the phase shift around the loop must be zero or a multiple of 360° (2π radians). In practice, feedback may occur over a range of frequencies (for example the operating range of an amplifier); at various frequencies, the phase of the amplifier may be different. If there is one frequency where the feedback is positive and the amplitude condition is also fulfilled, the system will oscillate at that frequency.
These conditions can be expressed in mathematical terms using the Nyquist plot. Another method used in control loop theory uses Bode plots of gain and phase versus frequency. Using Bode plots, a design engineer checks whether there is a frequency where both conditions for oscillation are met: the phase is zero (positive feedback) and the loop gain is 1 or greater. When parasitic oscillations occur, the designer can use the various tools of control loop engineering to correct the situation – to reduce the gain or to change the phase at problematic frequencies. Several measures are used to prevent parasitic oscillation. Amplifier circuits are laid out so that input and output wiring are not adjacent, preventing capacitive or inductive coupling. A metal shield may be placed over sensitive portions of the circuit. Bypass capacitors may be put at power supply connections to provide a low-impedance path for AC signals and prevent interstage coupling through the power supply. Where printed circuit boards are used, high- and low-power stages are separated and ground return traces are arranged so that heavy currents don't flow in mutually shared portions of the ground trace. In some cases the problem may only be solved by the introduction of a neutralization network, calculated and adjusted to cancel the unwanted feedback within the passband of the amplifying device. A classic example is the Neutrodyne circuit used in tuned radio frequency receivers.
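As a concrete illustration of the Bode-style check described above, the short sketch below evaluates the loop gain of a hypothetical loop (an inverting amplifier of gain k followed by three buffered RC low-pass sections) and flags frequencies where the Barkhausen conditions are met. The circuit and all parameter values are our own illustrative assumptions, not taken from the cited sources.

```python
import numpy as np

# Hypothetical loop: inverting amplifier of gain k driving three buffered
# RC low-pass sections, each with corner frequency f0.
def loop_gain(f, k=10.0, f0=1e3):
    pole = 1.0 / (1 + 1j * f / f0)   # one RC section
    return -k * pole**3              # minus sign: inverting amplifier

f = np.logspace(0, 6, 200_000)       # 1 Hz .. 1 MHz sweep
g = loop_gain(f)
phase = np.degrees(np.angle(g))      # 0 deg here means positive feedback

# Barkhausen conditions: zero loop phase and loop gain of 1 or more.
risky = (np.abs(phase) < 0.5) & (np.abs(g) >= 1.0)
if risky.any():
    f_osc = f[risky][0]
    print(f"oscillation risk near {f_osc:.0f} Hz, "
          f"loop gain {abs(loop_gain(f_osc)):.2f}")

# Each RC section contributes 60 deg of lag at f = sqrt(3)*f0, so the loop
# phase reaches zero there; with k > 8 the gain condition is met as well.
```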
https://en.wikipedia.org/wiki/Parasitic_oscillation
Parasitology is the study of parasites, their hosts, and the relationship between them. As a biological discipline, the scope of parasitology is not determined by the organism or environment in question but by their way of life. This means it forms a synthesis of other disciplines, and draws on techniques from fields such as cell biology, bioinformatics, biochemistry, molecular biology, immunology, genetics, evolution and ecology. The study of these diverse organisms means that the subject is often broken up into simpler, more focused units, which use common techniques, even if they are not studying the same organisms or diseases. Much research in parasitology falls somewhere between two or more of these definitions. In general, the study of prokaryotes falls under the field of bacteriology rather than parasitology. [ 1 ] The parasitologist F. E. G. Cox noted that "Humans are hosts to nearly 300 species of parasitic worms and over 70 species of protozoa, some derived from our primate ancestors and some acquired from the animals we have domesticated or come in contact with during our relatively short history on Earth". [ 3 ] One of the largest fields in parasitology, medical parasitology deals with the parasites that infect humans, the diseases they cause, the clinical picture, and the response generated by humans against them. It is also concerned with the various methods of their diagnosis, treatment, and finally their prevention and control. A parasite is an organism that lives on or within another organism, called the host; such organisms include parasitic protozoa, helminths, and arthropods. [ 4 ] Medical parasitology can involve drug development, epidemiological studies and the study of zoonoses. Veterinary parasitology is the study of parasites that cause economic losses in agriculture or aquaculture operations, or which infect companion animals. Structural parasitology is the study of the structures of proteins from parasites. Determination of parasitic protein structures may help to better understand how these proteins function differently from homologous proteins in humans. In addition, protein structures may inform the process of drug discovery. Parasites exhibit an aggregated distribution among host individuals, thus the majority of parasites live in the minority of hosts. This feature forces parasitologists to use advanced biostatistical methodologies. [ 5 ] Parasites can provide information about host population ecology. In fisheries biology, for example, parasite communities can be used to distinguish distinct populations of the same fish species co-inhabiting a region. Additionally, parasites possess a variety of specialized traits and life-history strategies that enable them to colonize hosts. Understanding these aspects of parasite ecology, of interest in their own right, can illuminate parasite-avoidance strategies employed by hosts. Conservation biology is concerned with the protection and preservation of vulnerable species, including parasites. A large proportion of parasite species are threatened by extinction, partly due to efforts to eradicate parasites which infect humans or domestic animals, or damage the human economy, but also by the decline or fragmentation of host populations and the extinction of host species. The huge diversity between parasitic organisms creates a challenge for biologists who wish to describe and catalogue them.
Recent developments in using DNA to identify separate species and to investigate the relationships between groups at various taxonomic scales have been enormously useful to parasitologists, as many parasites are highly degenerate, disguising relationships between species. Antonie van Leeuwenhoek observed and illustrated Giardia lamblia in 1681, and linked it to "his own loose stools". This was the first protozoan parasite of humans that he recorded, and the first to be seen under a microscope. [ 6 ] A few years later, in 1687, the Italian biologists Giovanni Cosimo Bonomo and Diacinto Cestoni published that scabies is caused by the parasitic mite Sarcoptes scabiei, marking scabies as the first disease of humans with a known microscopic causative agent. [ 7 ] In the same publication, Esperienze Intorno alla Generazione degl'Insetti (Experiences of the Generation of Insects), Francesco Redi also described ecto- and endoparasites, illustrating ticks, the larvae of nasal flies of deer, and sheep liver fluke. His earlier (1684) book Osservazioni intorno agli animali viventi che si trovano negli animali viventi (Observations on Living Animals found in Living Animals) described and illustrated over 100 parasites, including the human roundworm. [ 8 ] He noted that parasites develop from eggs, contradicting the theory of spontaneous generation. [ 9 ] Modern parasitology developed in the 19th century with accurate observations by several researchers and clinicians. In 1828, James Annersley described amoebiasis, protozoal infections of the intestines and the liver, though the pathogen, Entamoeba histolytica, was not discovered until 1873 by Friedrich Lösch. James Paget discovered the intestinal nematode Trichinella spiralis in humans in 1835. James McConnell described the human liver fluke in 1875. A physician at the French naval hospital at Toulon, Louis Alexis Normand, researching in 1876 the ailments of French soldiers returning from what is now Vietnam, discovered the only known helminth that, without treatment, is capable of reproducing indefinitely within a host; it causes the disease strongyloidiasis. [ 3 ] Patrick Manson discovered the life cycle of elephantiasis, caused by nematode worms transmitted by mosquitoes, in 1877. Manson further predicted that the malaria parasite, Plasmodium, had a mosquito vector, and persuaded Ronald Ross to investigate. Ross confirmed that the prediction was correct in 1897–1898. At the same time, Giovanni Battista Grassi and others described the malaria parasite's life cycle stages in Anopheles mosquitoes. Ross was controversially awarded the 1902 Nobel prize for his work, while Grassi was not. [ 6 ]
https://en.wikipedia.org/wiki/Parasitology
A paraspecies (a paraphyletic species) is a species , living or fossil, that gave rise to one or more daughter species without itself becoming extinct . [ 1 ] Geographically widespread species that have given rise to one or more daughter species as peripheral isolates without themselves becoming extinct (i.e. through peripatric speciation ) are examples of paraspecies. [ 2 ] Paraspecies are expected from evolutionary theory (Crisp and Chandler, 1996), and are empirical realities in many terrestrial and aquatic taxa. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ]
https://en.wikipedia.org/wiki/Paraspecies
The parasternal line is a vertical line on the front of the thorax. It is midway between the lateral sternal line and the mid-clavicular line.
https://en.wikipedia.org/wiki/Parasternal_line
A parasympatholytic agent is a substance or activity that reduces the activity of the parasympathetic nervous system. [ 1 ] [ 2 ] The term parasympatholytic typically refers to the effect of a drug, although some poisons act to block the parasympathetic nervous system as well. Most drugs with parasympatholytic properties are anticholinergics. Parasympatholytic agents and sympathomimetic agents have similar effects to each other, although some differences between the two groups can be observed. For example, both cause mydriasis, but parasympatholytics reduce accommodation (cycloplegia), whereas sympathomimetics do not. Parasympatholytic drugs are sometimes used to treat slow heart rhythms (bradycardias or bradydysrhythmias) caused by myocardial infarctions or other pathologies, as well as to treat conditions that cause bronchioles in the lung to constrict, such as asthma. By blocking the parasympathetic nervous system, parasympatholytic drugs can increase heart rate in patients with bradycardic heart rhythms, and open up airways and reduce mucus production in patients with asthma.
https://en.wikipedia.org/wiki/Parasympatholytic
Parataxonomy is a system of labor division for use in biodiversity research, in which the rough sorting tasks of specimen collection, field identification, documentation and preservation are conducted by primarily local, less specialized individuals, thereby alleviating the workload for the "alpha" or "master" taxonomist. [ 1 ] Parataxonomy may be used to improve taxonomic efficiency by enabling expert taxonomists to restrict their activity to the tasks that require their specialist knowledge and skills, which has the potential to expedite the rate at which new taxa may be described and existing taxa may be sorted and discussed. [ 1 ] Parataxonomists generally work in the field, sorting collected samples into recognizable taxonomic units (RTUs) based on easily recognized features. The process can be used alone for rapid assessment of biodiversity. [ 2 ] Some researchers consider reliance on parataxonomist-generated data to be prone to error, depending on the sample, the sorter and the group of organisms in question. Quantitative studies based on parataxonomic processes may therefore be unreliable, [ 3 ] and the approach remains controversial. [ 4 ] Today, [ when? ] the concepts of citizen science and parataxonomy somewhat overlap, with unclear distinctions between those employed to provide supplemental services to taxonomists and those who do so voluntarily, whether for personal enrichment or out of an altruistic desire to make substantive scientific contributions. [ citation needed ] These terms are occasionally used interchangeably, but some taxonomists maintain that each carries distinct connotations. [ citation needed ] The term "parataxonomist" was coined by Dr. Daniel Janzen and Dr. Winnie Hallwachs in the late 1980s, [ 5 ] [ 1 ] who used it to describe the role of assistants working at INBio in Costa Rica. [ 6 ] It describes a person who collects specimens for ecological studies, together with the basic information for each specimen as it is collected in the field. The information collected includes the date, the location (latitude/longitude), the collector's name, and the species of plant and caterpillar if known, and each specimen is assigned a unique voucher code. [ 5 ] The term was a play on the word "paramedic": someone who can operate independently and may not have a specialized university degree, but has some taxonomic training. [ 1 ] Hallwachs and Janzen created and implemented an intensive six-month course that taught everything from taxonomy to how to operate a chainsaw. [ 1 ] Dr. Janzen trained the first cohort in January 1989, with additional cohorts receiving training up until 1992. From 1992 onward, all other training was conducted by parataxonomists. [ 5 ] As of 2017, some 10,000 new species in the Area de Conservacion Guanacaste have been identified thanks to the efforts of parataxonomists. [ 5 ] During the period that Janzen's parataxonomic model was in place, INBio became the second largest biological collection in Latin America, with over 3.5 million specimens, all of which were digitized. As of 2015, the institute had produced over 2,500 scientific articles, 250 books and 316 conventions. Its website logged an average of 25,000 unique visitors daily from 125 countries, and its park had received upwards of 15 million visitors. [ 7 ]
https://en.wikipedia.org/wiki/Parataxonomy
In mathematics, the paratingent cone and contingent cone were introduced by Bouligand ( 1932 ), and are closely related to tangent cones. Let S {\displaystyle S} be a nonempty subset of a real normed vector space ( X , ‖ ⋅ ‖ ) {\displaystyle (X,\|\cdot \|)} . The contingent cone to S {\displaystyle S} at a point x ∈ cl ⁡ ( S ) {\displaystyle x\in \operatorname {cl} (S)} consists of all vectors v ∈ X {\displaystyle v\in X} for which there exist a sequence t n ↓ 0 {\displaystyle t_{n}\downarrow 0} and a sequence v n → v {\displaystyle v_{n}\to v} with x + t n v n ∈ S {\displaystyle x+t_{n}v_{n}\in S} for all n {\displaystyle n} ; the paratingent cone is defined analogously, except that the base point is also allowed to vary along a sequence x n → x {\displaystyle x_{n}\to x} in S {\displaystyle S} . An equivalent definition is given in terms of a distance function and the limit infimum. As before, let ( X , ‖ ⋅ ‖ ) {\displaystyle (X,\|\cdot \|)} be a normed vector space and take some nonempty set S ⊂ X {\displaystyle S\subset X} . For each x ∈ X {\displaystyle x\in X} , let the distance function to S {\displaystyle S} be d S ( x ) = inf { ‖ x − s ‖ : s ∈ S } {\displaystyle d_{S}(x)=\inf\{\|x-s\|:s\in S\}} . Then, the contingent cone to S ⊂ X {\displaystyle S\subset X} at x ∈ cl ⁡ ( S ) {\displaystyle x\in \operatorname {cl} (S)} is defined by [ 2 ] T S ( x ) = { v ∈ X : lim inf t ↓ 0 d S ( x + t v ) / t = 0 } {\displaystyle T_{S}(x)=\{v\in X:\liminf _{t\downarrow 0}d_{S}(x+tv)/t=0\}} .
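As a worked example (a standard computation, not taken from the cited sources), consider the contingent cone to a parabola in the plane at the origin:

```latex
\[
  S=\{(x,x^{2}):x\in\mathbb{R}\}\subset\mathbb{R}^{2},\qquad
  d_{S}\bigl(t(v_{1},v_{2})\bigr)=
  \begin{cases}
    O(t^{2}) & \text{if } v_{2}=0,\\
    |v_{2}|\,t+O(t^{2}) & \text{if } v_{2}\neq 0,
  \end{cases}
\]
\[
  \text{so}\quad
  \liminf_{t\downarrow 0}\frac{d_{S}(tv)}{t}=0
  \iff v_{2}=0,
  \qquad\text{hence}\quad
  T_{S}(0)=\mathbb{R}\times\{0\},
\]
```

that is, the tangent line to the parabola at the origin, as expected.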
https://en.wikipedia.org/wiki/Paratingent_cone
Paratomy is a form of asexual reproduction in animals in which the organism splits in a plane perpendicular to the antero-posterior axis, the split being preceded by the "pregeneration" of anterior structures in the posterior portion. The developing organisms have their body axes aligned, i.e., they develop in a head-to-tail fashion. Budding can be considered similar to paratomy, except that the body axes need not be aligned: the new head may grow toward the side or even point backward (e.g. Convolutriloba retrogemma, an acoel flatworm). [ 1 ] [ 2 ] In animals that undergo rapid paratomy, a chain of zooids packed in head-to-tail formation may develop. Many oligochaete annelids, [ 3 ] acoelous turbellarians, [ 1 ] echinoderm larvae [ 4 ] and coelenterates [ 5 ] reproduce by this method. A detailed description of the changes during paratomy is given in reference [ 3 ].
https://en.wikipedia.org/wiki/Paratomy
Paratransgenesis is a technique that attempts to eliminate a pathogen from vector populations through transgenesis of a symbiont of the vector. The goal of this technique is to control vector-borne diseases. The first step is to identify proteins that prevent the vector species from transmitting the pathogen. The genes coding for these proteins are then introduced into the symbiont, so that they can be expressed in the vector. The final step in the strategy is to introduce these transgenic symbionts into vector populations in the wild. One use of this technique is to prevent human mortality from insect-borne diseases. Preventive methods and current controls against vector-borne diseases depend on insecticides, [ 1 ] even though some mosquito strains are resistant to them. There are other ways to fully eliminate them. [ 2 ] "Paratransgenesis focuses on utilizing genetically modified insect symbionts to express molecules within the vector that are deleterious to pathogens they transmit." [ 1 ] The acetic acid bacteria of the genus Asaia are symbionts beneficial to the normal development of mosquito larvae; however, it is unknown what Asaia symbionts do for adult mosquitoes. [ 1 ] The first example of this technique used Rhodnius prolixus, which is associated with the symbiont Rhodococcus rhodnii. R. prolixus is an important insect vector of Chagas disease, which is caused by Trypanosoma cruzi. The strategy was to engineer R. rhodnii to express proteins such as Cecropin A that are toxic to T. cruzi or that block the transmission of T. cruzi. [ 3 ] Attempts have also been made in tsetse flies using bacteria [ 4 ] [ 5 ] and in malaria mosquitoes using fungi, [ 6 ] viruses, [ 7 ] or bacteria. [ 8 ] Although the use of paratransgenesis can serve many different purposes, one of the main purposes is "breaking the disease cycle". One study focused on experiments with tsetse flies and trypanosomes, which cause sleeping sickness in sub-Saharan Africa. The tsetse fly's transmission biology was studied to learn how it transmits the disease, in order to find the best way to use paratransgenesis to interrupt transmission. In this case, paratransgenesis was used to produce trypanocides, which stop the transmission of trypanosomes in the tsetse fly vector. [ 4 ] Another disease transmitted by mosquitoes to humans is malaria. This has been an ongoing health issue, as there is no fully effective vaccine and malaria is deadly. "The development of innovative control measures is an imperative to reduce malaria transmission." [ 9 ] In one study, paratransgenesis using a GFP-marked Asaia strain in these mosquitoes was associated with a lower chance of disease, the approach relying on anti-pathogen effector molecules. [ 9 ] Another example is in honey bees. A study done in 2012 found that lactic acid bacteria could improve or help with honey bees' health and digestion. [ 10 ] This is a different use of paratransgenesis, suggested because Lactobacillus is an easy target for genetic modification. The scientists wanted to see whether maintaining the microbiome in the insects' guts would keep the bees and the entire colony healthy. [ 10 ] There has been a major decrease in honey bee populations and colonies in recent years; by using paratransgenesis, scientists and beekeepers hope to increase the population of honey bees. Experiments have also shown that resistance to parasites can be spread through mosquito populations using the engineered symbiotic bacterium Serratia AS1.
A major concern of regulators regarding the release of such engineered bacteria into the field has been the absence of any option for "recall". However, "Serratia AS1 loses plasmids as it replicates in mosquitoes and in culture, reverting to wild type and that horizontal transfer of the plasmid from Serratia AS1 to other bacteria is difficult to detect." [ 11 ] This means that initial field trials could use such a reversible system, since released recombinant bacteria expressing antiplasmodial compounds from a plasmid revert to wild type at a certain rate. [ 11 ] Paratransgenesis thus relies on genetically modified symbiotic organisms that block pathogen development or transmission by the vector through the molecules they express. Symbiotic viruses and bacteria have been studied for this purpose in An. gambiae, [ 7 ] Ae. aegypti, [ 12 ] blood-sucking bugs, [ 3 ] tsetse flies [ 13 ] and mosquitoes. [ 14 ] Symbionts expressing molecules that target pathogen development could reduce transmission in endemic regions. [ 13 ] As with transgenesis, the spread of transformed symbionts benefits from the availability of a gene drive system to replace the non-transformed symbionts present in natural vector populations. [ 13 ] Paratransgenesis has been used to reduce the transmission of African trypanosomes by tsetse flies. Sodalis, a symbiont of tsetse flies found in the midgut and hemolymph of Glossina m. morsitans, Glossina p. palpalis, Glossina austeni, and Glossina brevipalpis, and in the salivary glands of Gl. p. palpalis, is transmitted vertically via the female milk glands. [ 13 ] GFP-transformed Sodalis (recSodalis) was detected in 9 of 12 F1 offspring and 8 of 12 F2 descendants, showing that the transformed symbiont can spread through tsetse populations by vertical transmission. [ 13 ] Sodalis isolated from Gl. m. morsitans and Gl. fuscipes and transformed with GFP colonized non-native tsetse host species at a density similar to native colonization. [ 14 ] [ 15 ] A future direction for vector paratransgenesis concerns natural insect populations, where it has not yet been determined whether transformed symbionts can replace non-transformed symbionts. Suitable symbionts have no adverse effects on their insect hosts and are capable of being transmitted vertically (via trans-ovarian transmission) or laterally (through feeding habits). Wolbachia endosymbionts provide a gene drive system that can also support paratransgenesis. [ 4 ] Wolbachia are intracellular, maternally transmitted bacteria that manipulate the reproduction of insects via cytoplasmic incompatibility (CI). [ 16 ] "Wolbachia-uninfected females will not breed with infected males, which reduces the frequency of uninfected individuals and increases the frequency of Wolbachia-infected insects in a population." [ 16 ] This effect could be used to spread maternally transmitted transformed symbionts within an insect population, increasing their frequency. [ 4 ] Insects studied in this context include Ae. aegypti, Aedes albopictus, and Culex quinquefasciatus. [ 16 ] Densoviruses, which occur in natural populations of mosquitoes, are another example of symbionts through which transgenes might be spread. Besides acting as a gene drive mechanism, certain Wolbachia strains reduce the mosquito's lifespan relative to the time needed for pathogen development inside the mosquito (known as the extrinsic incubation period or EIP). Because the pathogen needs this incubation time, shortening the vector's lifespan can curtail transmission without eliminating the vector.
[ 18 ] This approach targets older mosquitoes rather than younger ones, which also suggests that such mosquitocidal biocontrol agents could be "evolution-proof". [ 19 ] Substantial selective pressure on pathogen development already exists in Plasmodium-infected Anopheles (marsh mosquitoes), whose adult mortality runs from 20% to 40% per gonotrophic cycle, [ 20 ] [ 19 ] favouring a shortening of the parasite's life cycle within the vector. "One approach is to reduce vector competence (linear parameter), and vector survivorship (exponential parameter). Both effects together should reduce vectorial capacity and disease burden in endemic areas and prevent transmission." [ 14 ] Vector-borne diseases are common; therefore, working to understand how these diseases are transmitted can lead to better prevention of, or treatment for, these illnesses. Vector-borne diseases such as malaria are passed from mosquitoes to humans. [ 21 ] Trypanosoma cruzi causes Chagas disease, and there are efforts to use paratransgenesis to prevent the spread of this disease. The strategy is to genetically modify a microbe so that it interferes with the pathogen, and then reintroduce it into the insect. The article "Paratransgenic Control of Vector Borne Diseases" discusses this approach to understanding these diseases. [ 22 ] Human African trypanosomiasis (sleeping sickness) is an illness that affects many individuals in sub-Saharan Africa. The disease is transmitted by tsetse flies, and in the last decade control efforts have brought it close to elimination, with fewer than 10,000 cases per year. [ 23 ] There are many diseases to which paratransgenesis can be applied, the most prominent being malaria. The paper "Evaluating the usefulness of paratransgenesis for malaria control" describes the global problem of malaria, a cause of significant health issues. [ 2 ] It is carried by mosquitoes, and although the most common way to eliminate them is to use insecticides, some mosquito species are resistant to insecticide. To combat insecticide-resistant mosquitoes, symbionts have been genetically engineered to destroy Plasmodium in the mosquito gut. [ 2 ] Another study, "Using infection to fight infection: paratransgenic fungi can block malaria transmission in mosquitoes", introduced anti-malaria effector genes into the entomopathogenic fungus Metarhizium anisopliae. [ 24 ] The fungus was then applied to mosquitoes, and the effector molecules were expressed in the hemolymph; notably, when several molecules were co-expressed, parasite numbers in the salivary glands were reduced by up to 98%. [ 24 ] In order to perform paratransgenesis, several general requirements must be met: the symbiont should be culturable in vitro and amenable to genetic modification; the modification should be stable and should not impose fitness costs on the symbiont or its host vector; the transformed symbiont must be reintroducible into, and able to spread through, wild vector populations; and the expressed effector molecules must reliably block pathogen development or transmission.
https://en.wikipedia.org/wiki/Paratransgenesis
In zoology and botany , a paratype is a specimen of an organism that helps define what the scientific name of a species and other taxon actually represents, but it is not the holotype (and in botany is also neither an isotype nor a syntype ). Often there is more than one paratype. Paratypes are usually held in museum research collections. The exact meaning of the term paratype when it is used in zoology is not the same as the meaning when it is used in botany. In both cases however, this term is used in conjunction with holotype . In zoological nomenclature , a paratype is officially defined as "Each specimen of a type series other than the holotype ." [ 1 ] In turn, this definition relies on the definition of a "type series". A type series is the material (specimens of organisms) that was cited in the original publication of the new species or subspecies, and was not excluded from being type material by the author (this exclusion can be implicit, e.g., if an author mentions "paratypes" and then subsequently mentions "other material examined", the latter are not included in the type series), nor referred to as a variant, or only dubiously included in the taxon (e.g., a statement such as "I have before me a specimen which agrees in most respects with the remainder of the type series, though it may yet prove to be distinct" would exclude this specimen from the type series). Thus, in a type series of five specimens, if one is the holotype , the other four will be paratypes. A paratype may originate from a different locality than the holotype. A paratype cannot become a lectotype , though it is eligible (and often desirable) for designation as a neotype . The International Code of Zoological Nomenclature (ICZN) has not always required a type specimen, but any species or subspecies newly described after the end of 1999 must have a designated holotype or syntypes . A related term is allotype , a term that indicates a specimen that exemplifies the opposite sex of the holotype, [ 1 ] and is almost without exception designated in the original description, and, accordingly, part of the type series, and thus a paratype; in such cases, it is functionally no different from any other paratype. It has no nomenclatural standing whatsoever, and although the practice of designating an allotype is recognized by the ICZN, it is not a " name-bearing type " and there are no formal rules controlling how one is designated. Apart from species exhibiting strong sexual dimorphism , relatively few authors take the trouble to designate such a specimen. It is not uncommon for an allotype to be a member of an entirely different species from the holotype, because of an incorrect association by the original author. In botanical nomenclature, a paratype is a specimen cited in the original description that may not have been said to be a type. It is not the holotype nor an isotype (duplicate of the holotype). Like other types, a paratype may be specified for taxa at the rank of family or below (Article 7). [ 2 ] A paratype may be designated as a lectotype if no holotype, isotype, syntype, or isosyntype (duplicate of a syntype) is extant (Article 9.12). [ 2 ]
https://en.wikipedia.org/wiki/Paratype
See the list of Ediacaran genera for more. Parazoa (gr. Παρα-, para, "next to", and ζωα, zoa, "animals") is an obsolete subkingdom placed at the base of the phylogenetic tree of the animal kingdom, in opposition to the subkingdom Eumetazoa; it groups together the most primitive forms, characterized by lacking proper tissues, or in which tissues are only partially differentiated. It generally includes a single phylum, Porifera, the sponges, which lack muscles, nerves and internal organs and in many cases resemble a cell colony rather than a true multicellular organism. All other animals are eumetazoans or agnotozoans (the latter possibly paraphyletic, or even nonexistent according to some studies), which do have differentiated tissues. On occasion, Parazoa unites Porifera with Archaeocyatha, a group of extinct sponges sometimes considered a separate phylum; in other cases Placozoa is included, depending on the author. Porifera and Archaeocyatha show similarities such as a benthic, sessile habit and the presence of pores, with differences such as the presence of internal walls and septa in Archaeocyatha. They have been considered separate phyla, [ 1 ] but the consensus is growing that Archaeocyatha was in fact a type of sponge that can be classified within Porifera. [ 2 ] Some authors include in Parazoa the sponge phyla and Placozoa on the basis of shared primitive characteristics: both are simple, lack true tissues and organs, have both asexual and sexual reproduction, and are invariably aquatic. As animals, they form a group that in various studies lies at the base of the phylogenetic tree, albeit in paraphyletic form. Of this group, the only survivors are the sponges, which belong to the phylum Porifera, and Trichoplax, in the phylum Placozoa. Parazoa do not show any body symmetry (they are asymmetric); all other groups of animals show some kind of symmetry. There are currently 5000 species of sponges, 150 of which live in fresh water. Their larvae are planktonic and the adults are sessile. The Parazoa–Eumetazoa divergence has been estimated at 940 million years ago. [ 3 ] The Parazoa group is now considered paraphyletic. [ citation needed ] When referenced, it is sometimes considered an equivalent of Porifera. [ citation needed ] Some authors include the Placozoa, [ 4 ] a phylum long thought to consist of a single species, Trichoplax adhaerens, but sometimes it is instead placed in the subkingdom Agnotozoa. According to the most up-to-date phylogenies, Porifera has no direct relationship with Placozoa; in any case, placozoans are likely simplified "coelenterates" without characteristics in common with sponges. [ 5 ] [ 6 ] [ 7 ] (A cladogram at this point in the original article showed the relationships among Porifera, Ctenophora, Bilateria, Cnidaria and Placozoa.)
https://en.wikipedia.org/wiki/Parazoa
Parbuckle salvage, or parbuckling, is the righting of a sunken vessel using rotational leverage. A common operation with smaller watercraft, parbuckling is also employed to right large vessels. In 1943, the USS Oklahoma was rotated nearly 180 degrees to upright after being sunk in the attack on Pearl Harbor, and the Italian cruise ship Costa Concordia was successfully parbuckled off the west coast of Italy in September 2013, the largest salvage operation of that kind to date. While the mechanical advantage used by a laborer to parbuckle a cask up an incline is 2:1, parbuckling salvage is not so limited. Each of the 21 winches used to roll the Oklahoma used cables that passed through two 17-part tackle assemblies (a 17:1 advantage). Eight 28-inch-diameter (710 mm) sheaves, eight 24-inch-diameter (610 mm) sheaves, and one 20-inch-diameter (510 mm) sheave comprised just half the mechanical effort. [ 1 ] A major concern during salvage is preventing rotational torque from becoming a transverse force moving the ship sideways. USS Utah, lost like the Oklahoma in the Pearl Harbor attack, was meant to be recovered by a similar rotation after the Oklahoma. As the Utah was rotated, however, its hull did not catch on the harbor bottom, and the vessel slid toward Ford Island. The Utah recovery effort was abandoned. [ 2 ] Oklahoma weighed about 35,000 short tons (32,000 metric tons). Twenty-one electric winches were installed on Ford Island, anchored in concrete foundations and operated in unison. Each winch pulled about 20 short tons (18 metric tons) by a wire operated through a block system which gave an advantage of seventeen, for a total pull of 21×20×17, or 7,140 short tons (6,480 metric tons). In order to increase the leverage, each wire passed over a wooden strut arrangement (a bent) which stood on the bottom of the ship, about 40 feet (12 meters) high. Oil had been removed from the ship through the bottom, and the ship was lightened by air inside the hull. There was a large amount of weight in the ship which might have been removed prior to righting, but not all of it could be accessed. About one-third of the ammunition was taken off, together with some of the machinery. The blades of the two propellers were also removed, but more to avoid damage to them than to reduce weight. Tests were made to check whether restraining forces should be used to prevent sliding toward Ford Island. They indicated that the soil under the aft part of the ship prevented sliding, whereas the bow section rested in soupy mud which permitted it. To prevent sliding, about 2,200 tons of coral soil were deposited near the bow section. During righting, excess soil under the starboard side was washed away by high-pressure jets operated by divers. The ship rolled as it should have and was right-side up by 16 June 1943, the work having started on 8 March 1943. The mean draft of the ship after righting was c. 50 feet (15 meters). [ 3 ] Following its capsizing and sinking in January 2012, the hull of Costa Concordia lay starboard side to the seaward face of a small outcropping very near the mouth of the harbor of Giglio, Italy, resting precariously on the incline to deeper water. To right the vessel, four key pieces of apparatus were required: a cable and strand-jack pulling system; a holdback system of chains to keep the hull from sliding; an artificial ledge of underwater platforms and grout bags for the hull to rest on; and buoyancy sponsons attached to the hull (all described below). Tensioning the cables started the roll of the ship. At about the halfway-to-vertical position the sponsons were filled with seawater, and Costa Concordia completed its roll to upright upon the ledge. [ 4 ] The hull was rotated 65 degrees to become vertical. [ 5 ]
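As a quick check of the Oklahoma winching arithmetic quoted above, the snippet below recomputes the total pull (the figures come from the text; the variable names and the script itself are ours):

```python
# Righting-pull arithmetic for the Oklahoma operation described above.
winches = 21
pull_per_winch_tons = 20   # short tons exerted by each electric winch
tackle_advantage = 17      # each cable ran through a 17-part tackle

line_pull = pull_per_winch_tons * tackle_advantage  # 340 short tons per cable
total_pull = winches * line_pull                    # 21 x 20 x 17
print(line_pull, total_pull)                        # 340, 7140 short tons
```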
Parbuckling was accomplished in three phases, described below. At the completion of parbuckling, Costa Concordia rested on the ledge at a depth of 30 meters (98 feet). [ 5 ] The holdback system consisted of 56 chains in total, of which 22 chains were attached to the port side to pass under the hull to the island. Each chain was 58 meters (190 ft) long and weighed about 26 metric tons (29 short tons); [ 5 ] each link weighed 205 kilograms (452 pounds). The ledge was part steel and part grout. There were six steel platforms: the three larger platforms measured 35 by 40 meters (115 by 131 feet) each, and the three smaller platforms measured 15 by 5 meters (49 by 16 feet) each. The six platforms were supported by 21 pillars, each 1.6 meters (5.2 feet) in diameter and sunk an average of 9 meters (30 feet) into the granite sea face of Giglio. The grout filled the space between the landward side of the platforms and the sea bed. It totaled 1,180 individual bags with a volume of over 12,000 cubic meters (16,000 cubic yards) and over 16,000 metric tons (18,000 short tons) in weight. [ 5 ] The grout bags contained an "ecofriendly cement," and were built with eyelets to aid post-recovery cleanup. [ 6 ] Eleven steel sponsons were installed on the port side of the hull: two long horizontal sponsons, two long vertical sponsons and seven short vertical sponsons. Two steel "blister" tanks were connected together at the hull's bow. They measured 23 meters (75 feet) in length and 20 meters (66 feet) in height each, with a total breadth of about 36 meters (118 feet). The whole blister structure (the two blister tanks, the tubular frame and the three anchor pipes) weighed about 1,700 metric tons (1,900 short tons) and provided a net buoyancy of 4,500 metric tons (5,000 short tons) to the bow section. [ 5 ] The cable system provided a force of about 23,800 metric tons (26,200 short tons) to start the Costa Concordia's rotation. [ 5 ] In the first phase, the hull of Costa Concordia rested on two spurs of rock, severely deformed by the weight of the ship pressing down on the spurs; the phase began when the strand jacks exerted force and the ship started to return to an upright position. This was "without doubt one of the most delicate phases of the entire recovery plan." [ 5 ] The second phase began when the hull lifted from the seabed. Rotation continued by tensioning the cables operated by the strand jacks, and continued until the sponson water intakes reached sea level. [ 5 ] In the third phase, the hull continued to rotate, pulled down by the weight of seawater added to the sponsons; the strand jacks and cables went slack. Redundant systems were designed as a guard against failure. For example, two seawater inlet valves were provided for each sponson. [ 5 ]
https://en.wikipedia.org/wiki/Parbuckle_salvage
In mathematics education, a parent function is the core representation of a function type, without manipulations such as translation and dilation. [ 1 ] For example, for the family of quadratic functions having the general form f ( x ) = ax 2 + bx + c (with a ≠ 0), the simplest function is f ( x ) = x 2 , and every quadratic may be converted to that form by translations and dilations, as may be seen by completing the square. This is therefore the parent function of the family of quadratic functions. For linear and quadratic functions, the graph of any function can be obtained from the graph of the parent function by simple translations and stretches parallel to the axes. For example, the graph of y = x 2 − 4 x + 7 can be obtained from the graph of y = x 2 by translating +2 units along the X axis and +3 units along the Y axis, because the equation can also be written as y − 3 = ( x − 2) 2 . For many trigonometric functions, the parent function is usually a basic sin( x ), cos( x ), or tan( x ). For example, the graph of y = A sin( x ) + B cos( x ) can be obtained from the graph of y = sin( x ) by translating it through an angle α along the negative X axis (where tan(α) = B ⁄ A ), then stretching it parallel to the Y axis using a stretch factor R , where R 2 = A 2 + B 2 . This is because A sin( x ) + B cos( x ) can be written as R sin( x + α) (see List of trigonometric identities). Alternatively, the parent function may be taken as cos( x ). The concept of parent function is less clear, or inapplicable, for polynomials of higher degree because of the extra turning points, but for the family of n -degree polynomial functions for any given n , the parent function is sometimes taken as x n or, to simplify further, x 2 when n is even and x 3 for odd n . Turning points may be established by differentiation to provide more detail of the graph.
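Both conversions used above follow from standard identities; a brief derivation sketch (ours, using only the relations stated in the text):

```latex
% Completing the square for the quadratic example:
y = x^{2} - 4x + 7 = (x^{2} - 4x + 4) + 3 = (x-2)^{2} + 3
  \;\Longrightarrow\; y - 3 = (x-2)^{2}.

% Collapsing the sinusoid sum into a single shifted sine:
A\sin x + B\cos x
  = R\Bigl(\tfrac{A}{R}\sin x + \tfrac{B}{R}\cos x\Bigr)
  = R(\cos\alpha\,\sin x + \sin\alpha\,\cos x)
  = R\sin(x+\alpha),
\qquad R=\sqrt{A^{2}+B^{2}},\quad \tan\alpha=\tfrac{B}{A}.
```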
https://en.wikipedia.org/wiki/Parent_function
In chemistry, a parent hydride in IUPAC nomenclature refers to a main group compound with the formula AH n , where A is a main group element. The names of parent hydrides end with -ane, analogous with the nomenclature for alkanes. Derivatives of parent hydrides are named by appending prefixes or suffixes to the name of the parent hydride to indicate the substituents that replace the hydrogen atoms. Parent hydrides are used in both the organic and the inorganic nomenclature systems. [ 1 ] Parent hydrides are useful reference compounds, but many are nonexistent or unstable. Group III parent hydrides exist only under extraordinary conditions: borane dimerizes irreversibly, while gallane and heavier congeners polymerize, sometimes with loss of hydrogen. Plumbane, bismuthane, stibane, indane and thallane are unstable.
https://en.wikipedia.org/wiki/Parent_hydride
In chemistry, a parent structure is the structure of an unadorned ion or molecule from which derivatives can be visualized. [ 1 ] Parent structures underpin systematic nomenclature and facilitate classification. Fundamental parent structures have one or no functional groups and often have various types of symmetry. Benzene ( C 6 H 6 ) is a chemical itself, consisting of a hexagonal ring of carbon atoms with a hydrogen atom attached to each, and is the parent of many derivatives that have substituent atoms or groups replacing one or more of the hydrogens. Some parents are rare or nonexistent themselves, as in the case of porphine, though many simple and complex derivatives are known. According to the International Union of Pure and Applied Chemistry, the concept of parent structure is closely related or identical to parent compound, parent name, or simply parent. These species consist of an unbranched chain of skeletal atoms, or of an unsubstituted monocyclic or polycyclic ring system. [ 2 ] Parent structures bearing one or more functional groups that are not specifically denoted by a suffix are called functional parents. [ 3 ] Names of parent structures are used in IUPAC nomenclature as the basis for systematic names. A parent hydride is a parent structure together with one or more hydrogen atoms; parent hydrides have a defined standard population of hydrogen atoms attached to a skeletal structure. Parent hydrides are used extensively in organic nomenclature, but are also used in inorganic chemistry. [ 4 ]
https://en.wikipedia.org/wiki/Parent_structure
Parental care is a behavioural and evolutionary strategy adopted by some animals, involving a parental investment in the evolutionary fitness of offspring. Patterns of parental care are widespread and highly diverse across the animal kingdom. [ 1 ] There is great variation in different animal groups in terms of how parents care for offspring and the amount of resources invested by parents. For example, there may be considerable variation in the amount of care invested by each sex: females may invest more in some species, males in others, or investment may be shared equally. Numerous hypotheses have been proposed to describe this variation and the patterns of parental care that exist between the sexes, as well as among species. [ 2 ] Parental care is any behaviour that contributes to offspring survival, such as building a nest, provisioning offspring with food, or defending offspring from predators. Reptiles may produce self-sufficient young needing no parental care, while some hatchling birds are helpless upon hatching, relying on their parents for survival. Parental care is beneficial if it increases the parent's inclusive fitness, such as by improving offspring survival, quality, or reproductive success. [ 3 ] Since parental care is costly and often affects the parent's own future survival and reproductive success, parents must ensure that any investment is well spent; parental care thus only evolves where it is adaptive. Types of parental care include maternal or paternal care, biparental care and alloparental care. [ 1 ] Sexual conflict is known to occur over mating, and further familial conflicts may continue after mating when there is parental care of the eggs or young. For example, conflict may arise between male and female parents over how much care each should provide, conflict may arise between siblings over how much care each should demand, and conflicts may arise between parents and offspring over the supply and demand of care. [ 4 ] Although parental care increases the evolutionary fitness of the offspring receiving the care, it produces a cost for the parent organism, as energy is expended on caring for the offspring and mating opportunities may be lost. [ 5 ] [ 6 ] As it is costly, parental care only evolves when the costs are outweighed by the benefits. [ 7 ] Parental care is seen in many insects, notably the social insects such as ants, bees and wasps; in certain fishes, such as the mouthbrooders; widely in birds; in amphibians; rarely in reptiles; and especially widely in mammals, which share two major adaptations for care of the young, namely gestation (development of the embryo inside the mother's body) and the production of milk. Care of offspring by males may evolve when natural selection favouring parental care is stronger than sexual selection against paternal care. [ 8 ] In approximately 1% of bird species, males exclusively provide care after eggs are laid. [ 9 ] Male-only care is prevalent in a variety of organisms, including fish and amphibians. [ citation needed ] The occurrence of paternal care is mostly associated with biparental care in socially monogamous mating systems. [ citation needed ] The rise of paternal care in primates may be explained by the Mating Effort and Maternal Relief hypotheses. The Mating Effort hypothesis suggests that males may provide care for offspring in an attempt to increase their own mating opportunities and thus enhance their future reproductive success.
[ 10 ] [ 11 ] The Maternal Relief hypothesis proposes that males provide care to reduce the burdens associated with reproduction for the female, which ultimately generates shorter inter-birth intervals and produces more successful offspring. [ 11 ] The type of mating system may influence paternity certainty, and therefore the likelihood that a male is caring for his own true offspring. Paternity certainty is relatively high in monogamous pair-bonded species. Males are less likely to be caring for unrelated offspring, so a greater prevalence of paternal care tends to exist in association with this mating system. [ 7 ] By contrast, paternity certainty is reduced in polygamous species. Males are at greater risk of providing care for unrelated offspring, which therefore compromises their own fitness. [ 12 ] In polygynous species, where a single male mates with more than one female, the male's role as a caregiver therefore tends to be reduced. Conversely, males may be exclusively responsible for caring for their offspring in polyandrous species, where a single female mates with more than one male. [ 8 ] The evolution of male parental care is particularly rare in non-monogamous species because, predominantly, investing effort into mating is more evolutionarily effective for males than providing parental care. [ 13 ] [ 14 ] One hypothesis regarding the evolution of male parental care in non-monogamous species suggests that parental behaviour is correlated with increased siring of offspring. [ 13 ] For instance, in mountain gorillas ( Gorilla beringei ), males in the top tertile of frequency of interaction with young gorillas, regardless of the young's parentage, fathered five times more offspring than males in the lower two affiliative tertiles. [ 13 ] Further, male burying beetles ( Nicrophorus vespilloides ) attracted three times more females when given the opportunity to breed and provide parental care, compared to males that were not presented with a breeding opportunity. [ 14 ] Species such as Gorilla beringei and Nicrophorus vespilloides indicate that selection may promote male parental care in non-monogamous species. [ 13 ] [ 14 ] In mammalian species, female parents possess adaptations that may predispose them to care more for offspring. These adaptations include gestation and the production of milk. In invertebrates, maternal care is known to be a prerequisite for the evolution of permanent family grouping and eusociality. In spiders, permanent sociality is dependent on extended maternal care following hatching. [ 15 ] Females of some species of reptiles may remain with their clutch to provide care, curling around their eggs for the duration of the incubation period. The most intricate example of maternal care in this group can be seen in crocodilian species, as mothers may stay with their young for multiple months. [ 16 ] The general mammalian tendency for female parents to invest more in offspring was the focus of early hypotheses developed to describe sex differences in parental care. It was initially suggested that different levels of investment by each sex in terms of gamete size and number may have led to the evolution of female-only care. This early hypothesis suggested that because females invest more in the production of fewer and larger gametes, compared with males, who produce many smaller gametes, maternal care would be favoured.
This is because females have initially invested more, and would thus stand to lose more if they did not continue to invest in the offspring. [ 17 ] Biparental care tends to be favoured when sexual selection is not intense, and when the adult sex ratio of males to females is not strongly skewed. [ 18 ] For two parents to cooperate in caring for young, the mates must be coordinated with each other as well as with the requirements of the developing young and the demands of the environment. [ 19 ] The selection of biparental care as a behavioural strategy is considered to be an important factor driving the evolution of monogamy, if the value of exclusive cooperation in care for mutual offspring by two parents outweighs the potential benefits of polygamy for either sex. [ 20 ] Biparental care may increase offspring survival as well as allow parents to gain further mating opportunities with the pair mate. [ 21 ] There is conflicting evidence for whether offspring fare equally, better or worse when receiving care from two parents rather than a single parent. On one hand, it has been suggested that due to sexual conflict, parents should withhold care and shift as much of the workload as possible to their partner. In this case, offspring may be worse off. Other experimental evidence contrasts with this, and suggests that when both parents care for their mutual offspring, their individual contributions may have synergistic effects on the fitness of their young. In this case, offspring would benefit from biparental care. [ 22 ] Biparental care is particularly prevalent in mammals and birds. [ 23 ] Around 90% of bird species are monogamous, and among these biparental care patterns are predominant. [ 19 ] In birds, this parental care system is generally attributed to the ability of male birds to engage in most parental behaviours, with the exception of egg-laying. Due to their endothermy and small size at hatching, infant birds are under strong pressure to grow quickly to prevent energy loss. Since both sexes are able to forage and provision offspring, it is therefore beneficial for parents to cooperate in care to meet the requirements of infant birds. Offspring survival will ultimately increase the fitness of both parents. [ 21 ] In insects, biparental care occurs only rarely. It has been documented in several beetle families, e.g. burying beetles, and is also known in cockroaches, e.g. the genus Cryptocercus . Among the Hymenoptera it has been documented in Trypoxylon wasps and the bee Ceratina nigrolabiata. [ 24 ] Alloparental care, caring for non-descendant offspring, is a seemingly altruistic and reproductively costly behaviour; it has both adaptive benefits and evident costs. It has been observed in over 120 mammal and 150 bird species. [ 25 ] It is a defining feature of eusociality , which is found in insects, including various ants , bees , and termites . [ 26 ] For mammalian mothers, alloparenting may be beneficial in promoting earlier weaning of infants (as long as earlier weaning does not compromise infant survival). This strategy results in shorter inter-birth intervals and increased reproductive success. Frequent alloparenting may provide mothers more opportunities to feed without their young, which may ultimately increase their net energy gains and permit them to invest more energy in milk synthesis. However, potential costs of alloparenting may include the expenditure of time and resources in caring for non-descendant offspring with no apparent direct benefits to alloparents.
[ 27 ] The offspring that experience alloparental care may benefit from increased protection from predators and the learning of group dynamics through social interactions. [ 28 ] In the eusocial insects, the evolution of a caste system has driven workers to sacrifice their own personal reproductive fitness to assist in the reproductive success of the colony. Indirect fitness benefits are gained instead through assisting related members of the colony. [ 26 ] It may be in the best interest of a worker to forgo her own personal reproduction and participate in alloparenting, or rearing drones, so that there is an enhanced likelihood that males from her colony will ultimately mate with a queen. This would provide a greater chance for her colony's genes to be represented in the future colony. [ 29 ] Similarly, worker ants tend to raise their sisters rather than their daughters, due to their greater relatedness. The survival of the colony is believed to be the main reward that drives the altruism of the workers. [ 30 ] Parental care is not frequently observed in invertebrate species. In dipterans, simple oviposition is commonly observed instead: adults lay their eggs before leaving them to hatch and develop into larvae, then pupae, then adults. For example, Phormia regina adults lay their eggs preferentially on carrion and corpses. [ 31 ] Though biparental and male-only care are rarely observed, female-only care does exist in some invertebrates. [ 32 ] [ 33 ] Some insects , including the Hymenoptera ( ants , bees and wasps ), invest substantial effort in caring for their young. The type and amount of care invested varies widely. Solitary wasps such as the potter wasps (Eumeninae) build nests for their young, provisioning them with food, often caterpillars, caught by the mother. The nests are then sealed, and the young live on the food until they leave the nest as adults. [ 34 ] In contrast, social wasps and honeybees raise young in substantial colonies, with eggs laid mainly by queens (mothers), and the young cared for mainly by workers (sisters of the young). [ 35 ] Outside the Hymenoptera, parental care is found among the burying beetles and the magnificent salt beetle . [ 36 ] Many species of Hemiptera take care of their young, for instance in the Belostomatidae genus Abedus . [ citation needed ] Among arachnids , several groups exhibit parental care. Wolf spiders are known for carrying their young on their abdomens for several weeks after they hatch. Nursery web spiders and some jumping spiders guard their young in silk nests after they hatch. The jumping spider species Toxeus magnus is notable for nursing its young through a form of lactation . [ 37 ] Some crustaceans also show parental care. Mothers of the crab species Metopaulias depressus raise their young in water-filled bromeliads, cleaning them of debris, defending them against predators and feeding them with captured prey. [ 38 ] In the desert isopod Hemilepistus reaumuri , juveniles share their parents' burrow for the first 10–20 days of their lives, and are supplied with food by their parents. [ 39 ] Finally, some species of Synalpheus shrimps are eusocial, living in colonies with one or a few breeders of each sex together with non-breeders that defend the colony. [ 40 ] Several groups of fish have evolved parental care. The ratio of fish genera exhibiting male-only, biparental, and female-only care is 9:3:1.
[ 41 ] Some fish such as pipefish, sea dragons and seahorses ( Syngnathidae ) have a form of male pregnancy, where the female takes no part in caring for the young once she has laid her eggs. [ 42 ] [ 43 ] Males in other species may take a role in guarding the eggs before they hatch. Mouthbrooding is the care given by some groups of fish (and a few other animals such as Darwin's frog ) to their offspring by holding them in their mouth for extended periods of time. Mouthbrooding has evolved independently in several different families of fish including the cardinalfish , sea catfish , bagrid catfish , cichlids , snakeheads , jawfishes , gouramis , and arowanas . [ 44 ] There is an equal prevalence of female-only and male-only care in amphibians; biparental care, however, is uncommon. [ 45 ] Provisioning in this animal group tends to be rare, and offspring guarding is more prevalent. For example, in Bibron's toadlet , male frogs are left to care for the nest. Parental care after the laying of eggs has been observed in 5% of caecilian species, 18% of salamander species and 6% of frog species, [ 46 ] though these numbers are likely underestimates due to taxonomic bias in research [ 47 ] and the cryptic nature of many species. [ 48 ] Six modes of parental care are recognized among the Amphibia , in different species: egg attendance, egg transport, tadpole attendance, tadpole transport, tadpole feeding, and internal gestation in the oviduct (viviparity and ovoviviparity). [ 46 ] Many species also care for offspring (either eggs or tadpoles) in specially adapted structures of their body. For example, the male pouched frog of eastern Australia protects tadpoles in pouches on the lateral surface of its skin, [ 49 ] the gastric-brooding frog raised tadpoles (and potentially eggs) in its stomach, [ 50 ] and the common Suriname toad raises eggs embedded in the skin on its back. Reptiles provide less parental care than other tetrapods. When it does occur, it is usually female-only or biparental care. [ 52 ] Many species within this group produce offspring that are self-sufficient, able to regulate their body temperatures and forage for themselves immediately after birth, thereby eliminating the need for parental care. Maternal care exists in crocodilians , where the mother assists hatchlings by transporting them in her mouth from the nest to the water. She may stay with the young for up to several months. [ 53 ] Parental behaviour has also been observed in Cunningham's skink , a viviparous lizard that protects its offspring against predators. [ 54 ] Birds are distinctive in the way they care for their young. Some 90% of bird species display biparental care, including 9% of species with alloparental care, or helpers at the nest. [ 9 ] Biparental care may have originated in the stem reptiles ( archosaurs ) that gave rise to the birds, before they developed flight . [ 55 ] In the remainder of bird species, female-only care is prevalent, and male-only care is rare. [ 9 ] [ 23 ] Most birds, including passerines (perching birds), have their young born blind, naked and helpless (altricial), totally dependent for their survival on parental care. The young are typically raised in a nest; the parents catch food and regurgitate it for the young. Some birds such as pigeons create a " crop milk " which they similarly regurgitate. [ 56 ] David Lack developed the hypothesis, known as Lack's principle , that clutch size has evolved in response to the costs of parental care.
It has since seen modifications but is still used as a general model. Maternal care is present in all species of mammals : while 95% of species exhibit female-only care, biparental care is present in only 5%. [ citation needed ] Thus, there are no known cases of male-only care in mammals. [ 57 ] The major adaptation shared by all live-bearing mammals for care of their young after birth is lactation (the feeding of milk from the mammary glands). [ citation needed ] Further, many mammals exhibit other parental care behaviours to increase the fitness of their offspring, for example, building a den, feeding, guarding, carrying, huddling, grooming and teaching their young. [ 58 ] [ 59 ] Others also consider males provisioning pregnant females to be a type of care. [ 60 ] In humans, parenting or child rearing is the process of promoting and supporting the physical , emotional , social , financial, and intellectual development of a child from infancy to adulthood . [ 61 ] This goes far beyond anything found in other animals, including not only the provision of food, shelter, and protection from threats such as predators , but a prolonged period of support during which the child learns whatever is needed to live successfully in human society . [ 62 ] In evolutionary biology, parental investment is the expenditure of time and effort towards rearing offspring that benefits the offspring's evolutionary fitness at a cost to the parents' ability to invest in other components of their own fitness. Parental care requires resources from one or both parents that increase the fitness of their offspring and of themselves. [ 63 ] [ 60 ] These resources thus cannot be invested in the parents' own survival, growth or future reproduction. Therefore, parental care will only evolve in species whose offspring require care. Some animal groups produce self-sufficient young, and thus no parental care is required. For species that do require care, trade-offs exist with regard to where parental investment should be directed and how much care should be provided, since resources and time are limited. [ 64 ] For example, if the strategy of parental care involves parents choosing to give each of a relatively small number of offspring an increased chance of surviving to reproduce themselves, they may accordingly have evolved to produce a small number of zygotes at a time, possibly only one. [ 65 ] [ 66 ] The ideal amount of parental investment would guarantee the survival and quality of both current and future broods. [ 23 ] Parents need to trade off investment into current and future reproductive events, since parental care increases offspring survival at the expense of the parent's ability to invest in future broods. Nonetheless, there is some evidence suggesting that in mammals male care actually leads to more fecund females, and thus caring for offspring can lead to more litters. [ 60 ] Predation on offspring and a species' habitat type are two potential proximate causes for the evolution of parental care. [ 2 ] Generally, parental care is expected to evolve from a previous state of no care when the costs of providing care are outweighed by the benefits to a caring parent. For example, if the benefits of increased offspring survival or quality exceed the decreased chance of survival and future reproductive success of the parent, then parental care may evolve. Therefore, parental care is favoured when it is required by offspring, and the benefits of care are high.
[ 3 ] Types of parental care and the amount of resources invested by parents vary considerably across the animal kingdom. The evolution of male-only, female-only, biparental or alloparental care in different groups of animals may be driven by multiple factors. Firstly, different groups may have diverse physiological or evolutionary constraints that may predispose one sex to care more than the other. [ 64 ] For example, mammary glands may make female mammals preadapted to exclusively provide nutritional care to young. [ 67 ] Secondly, the costs and benefits of care by each sex may be influenced by ecological conditions and mating opportunities. Thirdly, operational and adult sex ratios may influence which sex has more mating opportunities, and thus predispose one sex to care more. Furthermore, parenting decisions may be influenced by the confidence of either sex in being the genetic parent of the offspring, or paternity certainty. [ 67 ] The type of mating system may influence which sex provides care. In monogamous species that establish long-term pair-bonds, parents are likely to cooperate in caring for their offspring. In polyandrous mating systems, paternal or male-only care tends to evolve. Conversely, polygynous mating systems are associated with little or no male contribution. Males rarely provide care for offspring in promiscuous mating systems, since there is high paternity uncertainty. [ 68 ] [ 69 ] Male care is most prevalent in species with external fertilisation, while female care is more common with internal fertilisation. [ 70 ] Explanations include the suggestion by Trivers (1972) that this depends on paternity certainty, [ 63 ] which may be lower with internal fertilisation unless the male undertakes "mate guarding" until the female lays eggs or gives birth. [ 71 ] A second explanation is Richard Dawkins and T. R. Carlisle's (1976) theory that the order of gamete release, and therefore the opportunity for each parent to desert, may influence which sex provides care. [ 72 ] Internal fertilisation may provide the male parent with an opportunity to desert first, as is seen in some bird and mammal species; the roles may be reversed with external fertilisation. In fish, a male often waits until a female lays her eggs before fertilising them, to prevent his small gametes from floating away. This allows the female to desert first, leaving the male parent to care for the eggs. [ 64 ] Thirdly, George C. Williams 's (1975) hypothesis indicates that an association with the embryos may predispose one sex to care for the offspring. With internal fertilisation occurring in the mother, the female parent is most closely associated with the embryo, and may be preadapted to care for the young. With external fertilisation, eggs are often laid by the female in a male's territory. [ 73 ] [ page needed ] Male territoriality is particularly common with external fertilisation; in that case the male is most closely associated with the embryos. Males may defend their territories and thereby incidentally defend their eggs and young. This may preadapt males to provide care. Male care consequently involves lower opportunity costs in this case, since males can still attract mates while simultaneously guarding territory and eggs. Females may even be more attracted to, and preferentially select to mate with, males that already have eggs in their nest. [ 74 ] Male territoriality with internal fertilisation exists in some bird species.
Nest size and nest building behaviour are two sexually selected traits that may attract a female to a male's territory for mating. Since the female lays her eggs in the nest within the male's territory, paternal care may evolve even though fertilisation is internal. [ 75 ] Increasing parental investment in any one young benefits that particular offspring but decreases resources for other offspring, possibly decreasing parental fitness. [ 64 ] Hence, a trade-off exists between offspring quantity and quality within a brood. [ 23 ] If a parent disperses its limited resources too thinly among too many offspring, then few will survive. Alternatively, if the parent uses its resources too generously on one small brood, this reduces its ability to invest in future broods. [ 76 ] Therefore, there is a theoretical optimal brood size that maximises productivity for each brood. [ 64 ] In groups with biparental care, there is sexual conflict over how much care each parent should provide. If either parent is temporarily removed, the other parent may increase its work rate. [ 77 ] This demonstrates that both parents have the capacity to work harder and provide greater levels of care. One parent may thus be tempted to cheat, relying on the other parent. In biparental care, the key theoretical prediction is that parents should respond to reduced partner effort with incomplete compensation: a parent who does not put in their fair share of work then suffers reduced fitness, because their offspring receive fewer resources overall. This has been experimentally demonstrated with birds. [ 78 ] When one parent is not sufficient, both parents may need to care for offspring. Each parent would like to minimise the level of care it must invest at the expense of the other parent. If one parent were to die or cease providing care, the remaining partner may be obliged to desert the eggs or young. The extent of parental care provided to a current brood may also be influenced by prospects of future reproduction. Field experiments on a passerine bird species indicated that in areas where broods were fed extra carotenoids, their mouths became redder. This consequently enhanced their begging displays and led parents to increase their provisioning, likely because the redder mouths indicated that offspring were healthier and thus worth investing in. In other territories, the adults were also provided with carotenoid-rich sugar diets, which increased the likelihood of them having a second brood in that season. Since parents that had second broods did not respond to the increased begging signals of their current brood, this indicates that parents strategically vary their sensitivity to their current brood's demands in relation to their future prospects of reproducing in that season. [ 79 ] The act of eating one's own offspring, or filial cannibalism, may be an adaptive behaviour for a parent to use as an extra source of food. Parents may eat part of a brood to enhance the parental care of the current brood. Alternatively, parents may eat the whole brood to cut their losses and improve their future reproductive success. [ 80 ] In theory, a parent should invest more when paired with a mate of high phenotypic or genetic quality. This is explained by the differential allocation hypothesis. [ 81 ] This was shown through experimentation on zebra finches. Males were made more attractive to females by experimentally giving them red leg bands.
Females increased their provisioning and raised more young when paired with these attractive males compared to when they were paired with less attractive males that had blue or green leg bands. [ 82 ] Further experimentation on mallard ducks has shown that females lay larger eggs and increase their provisioning when paired with more attractive males. [ 83 ] Female peafowl have also been shown to lay more eggs after mating with males that possess more elaborate tails. [ 84 ] Furthermore, female birds are generally more likely to care for the offspring of males that spend more time nest building, and that build more elaborate nests. As a consequence, the reproductive success of males tends to increase with nest size and building behaviour. [ 85 ] Differential allocation is therefore expected because the offspring of these pairings would likely inherit the quality of the attractive parent, if attractiveness signifies genetic quality. Differential allocation may also work the other way around: parents may invest less in their offspring if paired with unattractive mates. By reducing the amount of care invested in these offspring, individuals may save resources for future reproductive attempts with a more attractive mate. [ 86 ] Differential allocation is mostly expected from females, since in many animal groups females are more choosy when assessing potential mates. However, in many bird species males are known to be involved in caring for young, which may lead to differential allocation by males as well as females. [ 82 ]
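The trade-off between offspring quantity and quality described above, which also underlies Lack's principle, can be illustrated with a short numerical sketch. The Python toy model below is purely illustrative: the linear survival function is invented for the example and is not taken from any study cited in this article.

# Toy model of the clutch-size trade-off: parental productivity is
# clutch size times per-offspring survival, with survival assumed
# (hypothetically) to decline as care is spread over more young.

def per_offspring_survival(clutch_size: int) -> float:
    """Assumed survival probability, declining linearly with clutch size."""
    return max(0.0, 0.9 - 0.1 * (clutch_size - 1))

def expected_fledglings(clutch_size: int) -> float:
    return clutch_size * per_offspring_survival(clutch_size)

best = max(range(1, 10), key=expected_fledglings)
print(best, expected_fledglings(best))  # optimum clutch of 5, 2.5 expected fledglings

Under these assumptions an intermediate clutch maximises the expected number of fledglings: larger clutches mean more eggs but lower per-offspring survival, which is the qualitative point of the optimal-brood-size argument.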
https://en.wikipedia.org/wiki/Parental_care
Parental investment , in evolutionary biology and evolutionary psychology , is any parental expenditure (e.g. time, energy, resources) that benefits offspring . [ 1 ] [ 2 ] Parental investment may be performed by both males and females (called biparental care ), by females alone ( exclusive maternal care ) or by males alone ( exclusive paternal care ). Care can be provided at any stage of the offspring's life, from pre-natal (e.g. egg guarding and incubation in birds, and placental nourishment in mammals) to post-natal (e.g. food provisioning and protection of offspring). Parental investment theory, a term coined by Robert Trivers in 1972, predicts that the sex that invests more in its offspring will be more selective when choosing a mate, and the less-investing sex will have intra-sexual competition for access to mates. This theory has been influential in explaining sex differences in sexual selection and mate preferences , throughout the animal kingdom and in humans. [ 2 ] Sexual selection is an evolutionary concept that has been used to explain why, in some species, male and female individuals behave differently in selecting mates. In 1930, Ronald Fisher wrote The Genetical Theory of Natural Selection , [ 3 ] in which he introduced the modern concept of parental investment, the sexy son hypothesis , and Fisher's principle . In 1948, Angus John Bateman published an influential study of fruit flies in which he concluded that because female gametes are more costly to produce than male gametes, the reproductive success of females was limited by their ability to produce ova, and the reproductive success of males was limited by access to females. [ 4 ] In 1972, Robert Trivers continued this line of thinking with his proposal of parental investment theory, which describes how parental investment affects sexual behavior. He concluded that whichever sex has higher parental investment will be more selective when choosing a mate, while the sex with lower investment will compete intra-sexually for mating opportunities. [ 2 ] In 1974, Trivers extended parental investment theory to explain parent–offspring conflict, the conflict between the amount of investment that is optimal from the parent's perspective versus from the offspring's perspective. [ 5 ] Parental investment theory is a branch of life history theory . The earliest consideration of parental investment was given by Ronald Fisher in his 1930 book The Genetical Theory of Natural Selection , [ 6 ] wherein Fisher argued that parental expenditure on both sexes of offspring should be equal. Clutton-Brock expanded the concept of parental investment to include costs to any other component of parental fitness. [ citation needed ] Male dunnocks tend not to discriminate between their own young and those of another male in polyandrous or polygynandrous systems. They increase their own reproductive success by feeding the offspring in relation to their own access to the female throughout the mating period, which is generally a good predictor of paternity . [ 7 ] This indiscriminative parental care by males is also observed in redlip blennies . [ 8 ] In some insects, male parental investment is given in the form of a nuptial gift. For instance, ornate moth females receive a spermatophore containing nutrients, sperm and defensive toxins from the male during copulation. This gift, which can account for up to 10% of the male's body mass, constitutes the total parental investment the male provides.
[ 9 ] In some species, such as humans and many birds, the offspring are altricial and unable to fend for themselves for an extended period of time after birth. In these species, males invest more in their offspring than do the male parents of precocial species, since reproductive success would otherwise suffer. The benefits of parental investment to the offspring are large and are associated with effects on condition, growth, survival, and ultimately on the reproductive success of the offspring. For example, in the cichlid fish Tropheus moorii , a female makes a very high parental investment in her young because she mouthbroods them, and while mouthbrooding all the nourishment she takes in goes to feed the young; she effectively starves herself. As a result, her young are larger, heavier, and faster than they would otherwise have been. These benefits are very advantageous, since they lower the young's risk of being eaten by predators, and size is usually the determining factor in conflicts over resources. [ 10 ] However, such benefits can come at the cost of the parent's ability to reproduce in the future, e.g. through increased risk of injury when defending offspring against predators, loss of mating opportunities whilst rearing offspring, and an increase in the time interval until the next reproduction. A special case of parental investment is when young need nourishment and protection, but the genetic parents do not actually contribute to the effort of raising their own offspring. For example, in Bombus terrestris , sterile female workers often do not reproduce on their own, but raise their mother's brood instead. This is common in social Hymenoptera due to haplodiploidy , whereby males are haploid and females are diploid. This ensures that sisters are more related to each other than they ever would be to their own offspring, incentivizing them to help raise their mother's young over their own. [ 11 ] Overall, parents are selected to maximize the difference between the benefits and the costs, and parental care will be likely to evolve when the benefits exceed the costs. Reproduction is costly. Individuals are limited in the degree to which they can devote time and resources to producing and raising their young, and such expenditure may also be detrimental to their future condition, survival, and further reproductive output. However, such expenditure is typically beneficial to the offspring, since it enhances their condition, survival, and reproductive success. These differences may lead to parent-offspring conflict . Parents are naturally selected to maximize the difference between the benefits and the costs, and parental care will tend to exist when the benefits are substantially greater than the costs. [ citation needed ] Parents are equally related to all of their offspring, and so in order to optimize their fitness and chance of reproducing their genes, they should distribute their investment equally among current and future offspring. However, any single offspring is more related to itself (sharing 100% of its DNA with itself) than to its siblings (full siblings usually share 50% of their DNA), so it is best for the offspring's fitness if the parent(s) invest more in it. To optimize fitness, a parent would want to invest in each offspring equally, but each offspring would want a larger share of parental investment.
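The zone of conflict implied by these different optima can be made explicit with Hamilton's rule. The following is a standard textbook formalization assuming full siblings and additive fitness effects; it is a sketch for clarity, not a calculation taken from the sources cited in this article. Let B be the benefit of additional investment to the current offspring and C the cost, measured as fitness lost through the parent's other (current or future) offspring. The parent, equally related to all of its offspring, favours continued investment while

B > C,

whereas a gene expressed in the offspring, related to its full siblings by r = 1/2, favours continued demand while

B > rC = C/2.

A zone of conflict therefore exists whenever

C/2 < B < C,

and this interval widens to C/4 < B < C among half siblings (r = 1/4), which is why conflict is predicted to be stronger when successive offspring have different fathers.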
The parent is selected to invest in the offspring up until the point at which investing in the current offspring is costlier than investing in future offspring. [ 12 ] In iteroparous species, where individuals may go through several reproductive bouts during their lifetime, a tradeoff may exist between investment in current offspring and future reproduction. Parents need to balance their offspring's demands against their own self-maintenance. This potential negative effect of parental care was explicitly formalized by Trivers in 1972, who originally defined the term parental investment to mean "any investment by the parent in an individual offspring that increases the offspring's chance of surviving (and hence reproductive success ) at the cost of the parent's ability to invest in other offspring". [ 2 ] Penguins are a prime example of animals that drastically sacrifice their own health and well-being in exchange for the survival of their offspring. This behavior, which does not necessarily benefit the individual but rather the genetic code from which the individual arises, can be seen in the king penguin. Although some animals do exhibit altruistic behaviors towards individuals that are not of direct relation, many of these behaviors appear mostly in parent-offspring relationships. While breeding, males remain in a fasting period at the breeding site for five weeks, waiting for the female to return for her own incubation shift. However, during this time period, males may decide to abandon their egg if the female is delayed in her return to the breeding grounds. [ 13 ] This shows that the penguins initially trade off their own health in the hope of increasing the survivorship of their egg, but there comes a point where the male penguin's costs become too high in comparison to the gain of a successful breeding season. Olof Olsson investigated the correlation between an individual's breeding experience and how long it will wait before abandoning its egg. He proposed that the more experienced the individual, the better it will be at replenishing its exhausted body reserves, allowing it to remain with the egg for a longer period of time. [ 13 ] The males' sacrifice of their body weight and possible survivorship in order to increase their offspring's chance of survival is a trade-off between current reproductive success and the parents' future survival. [ 13 ] This trade-off is consistent with other examples of kin-based altruism and is a clear example of altruism used in an attempt to increase the overall fitness of an individual's genetic material at the expense of the individual's future survival. Maternal-offspring conflict has also been studied in other animal species and in humans. One such case was documented in the mid-1970s by the ethologist Wulf Schiefenhövel . Eipo women of West New Guinea engage in a cultural practice in which they give birth just outside the village. Following the birth of her child, each woman weighs whether she should keep the child or leave it in the brush nearby, which inevitably ends in the child's death. [ 14 ] Likelihood of survival and availability of resources within the village were factors that played into the decision of whether or not to keep the baby.
During one documented birth, the mother felt the child was too ill and would not survive, so she wrapped the child up, preparing to leave it in the brush; however, upon seeing the child moving, the mother unwrapped the child and brought it into the village, demonstrating how quickly the decision could shift between life and death. [ 14 ] A comparable mother-child conflict has resulted in detachment behaviors in Brazil, seen in Scheper-Hughes's work, where "many Alto babies remain[ed] not only unchristened but unnamed until they begin to walk or talk", [ 15 ] or until a medical crisis arose and the baby needed an emergency baptism . This conflict between survival, both emotional and physical, prompted a shift in cultural practices, thus resulting in new forms of investment from the mother towards the child. Alloparental care, also referred to as allomothering, is when a member of a community apart from the biological parents of the infant partakes in offspring care provision. [ 16 ] A range of behaviors fall under the term alloparental care, including carrying, feeding, watching over, protecting, and grooming. Through alloparental care, stress on parents, especially the mother, can be reduced, thereby reducing the negative effects of the parent-offspring conflict on the mother. [ 17 ] The apparently altruistic nature of the behavior may seem at odds with Darwin's theory of natural selection, as taking care of offspring which are not one's own would not increase one's direct fitness, while taking time, energy and resources away from raising one's own offspring. However, the behavior can be explained evolutionarily as increasing indirect fitness, as the offspring is likely to be non-descendant kin and therefore carries some of the alloparent's genes. [ 16 ] Parental investment behavior enhances the chances of survival of offspring; it does not require underlying mechanisms compatible with empathy applicable to adults or to situations involving unrelated offspring, and it does not require the offspring to reciprocate the altruistic behavior in any way. [ 18 ] [ 19 ] Parentally investing individuals are not more vulnerable to being exploited by other adults. Parental investment, as defined by Robert Trivers in 1972, [ 20 ] is the investment in offspring by the parent that increases the offspring's chances of surviving, and hence reproductive success, at the expense of the parent's ability to invest in other offspring. A large parental investment greatly decreases the parents' chances of investing in other offspring. Parental investment can be split into two main categories: mating investment and rearing investment. Mating investment consists of the sexual act and the sex cells invested. Rearing investment is the time and energy expended to raise the offspring after conception. In most species, the female's parental investment in both mating and rearing efforts greatly surpasses that of the male. In terms of sex cells (egg and sperm cells), the female's investment per cell, in both material and energy, is typically far greater, while males typically produce thousands of sperm cells on a daily basis. Trivers believed that this theory explained sexual jealousy . [ 20 ] A criticism of the theory comes from Thornhill and Palmer's analysis of it in A Natural History of Rape: Biological Bases of Sexual Coercion , as it seems to rationalise rape and sexual coercion .
[ 21 ] Thornhill and Palmer claimed rape is an evolved technique for obtaining mates in an environment where women choose mates. As parental investment theory claims that males seek to copulate with as many fertile females as possible, female mate choice could negatively affect a male's reproductive success; if women did not choose their mates, Thornhill and Palmer claim, there would be no rape. This ignores a variety of sociocultural factors, such as the fact that not only fertile females are raped: 34% of underage rape victims are under 12, [ 22 ] meaning they are not of fertile age and there is thus no evolutionary advantage in raping them; and 14% of rapes in England are committed against males, [ 23 ] who cannot increase a man's reproductive success as there will be no conception. [ better source needed ] Trivers' theory also does not account for women having short-term relationships such as one-night stands, nor for the fact that not all men behave promiscuously. An alternative explanation to parental investment theory and mate preferences would be Buss and Schmitt's sexual strategies theory . [ 24 ] Human women have a fixed supply of around 400 ova , while sperm cells in men are supplied at a rate of twelve million per hour. [ 25 ] Also, fertilization and gestation occur in women, investments which outweigh the man's investment of a single effective sperm cell. Furthermore, for women, one act of sexual intercourse could result in a 38-week commitment of human gestation and subsequent commitments related to rearing, such as breastfeeding . From Trivers' theory of parental investment, several implications follow. The first implication is that women are often, but not always, the more investing sex. The fact that they are often the more investing sex leads to the second implication: that evolution favors females who are more selective of their mates, to ensure that intercourse does not result in unnecessary or wasteful costs. The third implication is that because women invest more and are essential for the reproductive success of their offspring, they are a valuable resource for men; as a result, males often compete for sexual access to females. For many species the only type of male investment received is that of sex cells. In those terms, the female investment greatly exceeds the male investment, as previously mentioned. However, there are other ways in which males invest in their offspring. For example, the male can find food, as in the example of balloon flies. [ 26 ] He may find a safe environment for the female to feed or lay her eggs, as exemplified in many birds. [ 27 ] [ 28 ] He may also protect the young and provide them with opportunities to learn, as is the case with many wolves. Overall, the main role that males take on is that of protection of the female and their young. This can often decrease the discrepancy in investment caused by the initial investment of sex cells. There are some species, such as the Mormon cricket , pipefish, seahorses and the Panamanian poison arrow frog, in which males invest more. Among the species where the male invests more, the male is also the pickier sex, placing higher demands on the females he selects. For example, the females such males choose usually contain 60% more eggs than rejected females. [ 29 ] Parental investment theory is not only used to explain evolutionary phenomena and human behavior, but has been used to describe recurrences in international politics as well.
Specifically, parental investment is referred to when describing competitive behaviors between states and determining the aggressive nature of foreign policies. The parental investment hypothesis states that the size of a coalition and the physical strength of its male members determine whether its activities with its foreign neighbors are aggressive or amiable. [ 30 ] According to Trivers, men have had relatively low parental investment, and were therefore forced into fiercer competition over limited reproductive resources. Sexual selection naturally took place, and men evolved to address its unique reproductive problems. Among other adaptations, men's psychology developed to directly aid men in such intra-sexual competition. [ 30 ] One essential psychological development involved deciding whether to take flight or actively engage in warfare with a rival group. The two main factors that men referred to in such situations were (1) whether the coalition they are a part of is larger than its opposition and (2) whether the men in their coalition have greater physical strength than those of the opposition. The male psychology formed in the ancient past has been passed down to modern times, causing men to think and behave in part as they did during ancestral wars. According to this theory, leaders in international politics are no exception. For example, the United States expected to win the Vietnam War due to its greater military capacity when compared to its enemies. Yet victory, according to the traditional rule of greater coalition size, did not come about, because the U.S. did not take sufficient account of other factors, such as the perseverance of the local population. [ 30 ] The parental investment hypothesis contends that the male physical strength of a coalition still determines the aggressiveness of modern conflicts between states. While this idea may seem unreasonable, considering that male physical strength is one of the least determining aspects of today's warfare, human psychology has nevertheless evolved to operate on this basis. Moreover, although it may seem that mate-seeking motivation is no longer a determinant, sexual violence such as rape is undeniably evident in conflicts even to this day. [ 30 ] In many species, males can produce a larger number of offspring over the course of their lives by minimizing parental investment in favor of investing time impregnating any fertile reproductive-age female. In contrast, a female can have a much smaller number of offspring during her reproductive life, partly due to higher obligate parental investment. Females will be more selective ("choosy") of mates than males, choosing males with good fitness indicators (e.g., genes, high status, resources, etc.), so as to help offset any lack of direct parental investment from the male, and therefore increase reproductive success. Robert Trivers ' theory of parental investment predicts that the sex making the largest investment in lactation , nurturing, and protecting offspring will be more discriminating in mating ; and that the sex that invests less in offspring will compete via intrasexual selection for access to the higher-investing sex (see Bateman's principle [ 31 ] ). In species where both sexes invest highly in parental care, mutual choosiness is expected to arise. An example of this is seen in crested auklets , where parents share equal responsibility in incubating their single egg and raising the chick.
In crested auklets, both sexes are ornamented. [ 32 ] Humans have evolved increasing levels of parental investment, both biologically and behaviorally. The fetus requires high investment from the mother, and the altricial newborn requires high investment from a community. Species whose newborn young are unable to move on their own and require parental care have a high degree of altriciality . Human children are born unable to care for themselves and require additional parental investment post-birth in order to survive. [ 33 ] Trivers (1972) [ 2 ] hypothesized that greater biologically obligated investment will predict greater voluntary investment. Mothers invest an impressive amount in their children before they are even born. The time and nutrients required to develop the fetus, and the risks associated with both giving these nutrients and undergoing childbirth, are a sizable investment. To ensure that this investment is not wasted, mothers are likely to invest in their children after they are born, to be sure that they survive and are successful. Relative to most other species, human mothers give more resources to their offspring, at a higher risk to their own health, even before the child is born. This is associated with the evolution of a slower life history, in which fewer, larger offspring are born after longer intervals, requiring increased parental investment. [ 34 ] [ 35 ] The developing human fetus, and especially the brain, requires nutrients to grow. In the later weeks of gestation, the fetus requires increasing nutrients as the growth of the brain accelerates. [ 36 ] Rodents and primates have the most invasive placental phenotype, the hemochorial placenta, in which the chorion erodes the uterine epithelium and has direct contact with maternal blood. The other placental phenotypes are separated from the maternal bloodstream by at least one layer of tissue. The more invasive placenta allows for a more efficient transfer of nutrients between the mother and fetus, but it comes with risks as well. The fetus is able to release hormones directly into the mother's bloodstream to "demand" increased resources. This can result in health problems for the mother, such as pre-eclampsia . During childbirth, the detachment of the placental chorion can cause excessive bleeding. [ 37 ] The obstetrical dilemma also makes birth more difficult and results in increased maternal investment. Humans have evolved both bipedalism and large brain size. The evolution of bipedalism altered the shape of the pelvis and shrank the birth canal at the same time as brains were evolving to be larger. The decreasing birth canal size meant that babies are born earlier in development, when they have smaller brains. Humans give birth to babies whose brains are 25% developed, while other primates give birth to offspring with brains 45–50% developed. [ 38 ] A second possible explanation for the early birth in humans is the energy required to grow and sustain a larger brain: supporting a larger brain gestationally requires energy the mother may be unable to invest. [ 39 ] The obstetrical dilemma makes birth challenging, and a distinguishing trait of humans is the need for assistance during childbirth. The altered shape of the bipedal pelvis requires that babies leave the birth canal facing away from the mother, unlike in all other primate species.
This makes it more difficult for the mother to clear the baby's breathing passageways, to make sure the umbilical cord is not wrapped around the neck, and to pull the baby free without bending its body the wrong way. [ 40 ] The human need for a birth attendant also requires sociality : in order to guarantee the presence of a birth attendant, humans must aggregate in groups. It has been controversially claimed that humans have eusociality , [ 41 ] like ants and bees, in which there is relatively high parental investment, cooperative care of young, and division of labor. It is unclear which evolved first: sociality, bipedalism, or birth attendance. Bonobos , our closest living relatives alongside chimpanzees , have high female sociality, and births among bonobos are also social events. [ 42 ] [ 43 ] Sociality may have been a prerequisite for birth attendance, and bipedalism and birth attendance could have evolved as long as five million years ago. [ 33 ] As female primates age, their ability to reproduce decreases. The grandmother hypothesis describes the evolution of menopause, which may or may not be unique to humans among primates. [ 44 ] As women age, the costs of investing in additional reproduction increase and the benefits decrease. At menopause, it becomes more beneficial to stop reproducing and begin investing in grandchildren. Grandmothers are certain of their genetic relation to their grandchildren, especially the children of their daughters, because maternity certainty is high both for their own children and for their daughters' children. It has also been theorized that grandmothers preferentially invest in the daughters of their daughters, because X chromosomes carry more DNA and these granddaughters are most closely related to them. [ 45 ] As altriciality increased, investment from individuals other than the mother became more necessary. High sociality meant that female relatives were present to help the mother, but paternal investment increased as well. Paternal investment increases as it becomes more difficult to have additional children, and as the effects of investment on offspring fitness increase. [ 46 ] Men are more likely than women to give no parental investment to their children, and the children of low-investing fathers are more likely to give less parental investment to their own children. Father absence is a risk factor for both early sexual activity and teenage pregnancy. [ 47 ] [ 48 ] [ 49 ] [ 50 ] Father absence raises children's stress levels, which are linked to earlier onset of sexual activity and increased short-term mating orientation. [ 51 ] [ 52 ] [ 53 ] [ 54 ] [ 55 ] Daughters of absent fathers are more likely to seek short-term partners; one theory explains this as a preference for outside (non-partner) social support, because of the perceived uncertain future and uncertain availability of committed partners in a high-stress environment. [ 56 ] Women can only get pregnant while ovulating. Human ovulation is concealed, that is, not signaled externally. Concealed ovulation decreases paternity certainty because men are unsure when women ovulate. [ 57 ] The evolution of concealed ovulation has been theorized to be a result of altriciality and the increased need for paternal investment: if men are unsure of the time of ovulation, the best way to reproduce successfully is to mate repeatedly with a woman throughout her cycle, which requires pair bonding, which in turn increases paternal investment.
[ 58 ] Sociosexuality was first described by Alfred Kinsey as a willingness to engage in casual and uncommitted sexual relationships. [ 59 ] Sociosexual orientation describes sociosexuality on a scale from unrestricted to restricted. Individuals with an unrestricted sociosexual orientation have higher openness to sex in less committed relationships, while individuals with a restricted sociosexual orientation have lower openness to casual sexual relationships. [ 60 ] [ 61 ] However, it is now acknowledged that sociosexuality does not in reality exist on a single one-dimensional scale: individuals who are less open to casual relationships are not always seeking committed relationships, and individuals who are less interested in committed relationships are not always interested in casual relationships. [ 62 ] Short- and long-term mating orientations are the modern descriptors of openness to uncommitted and committed relationships, respectively. [ 63 ] Parental investment theory, as proposed by Trivers, argues that the sex with higher obligatory investment will be more selective in choosing sex partners, and the sex with lower obligatory investment will be less selective and more interested in "casual" mating opportunities. The more investing sex cannot reproduce as frequently, causing the less investing sex to compete for mating opportunities. [ 20 ] [ 64 ] In humans, women have higher obligatory investment ( pregnancy and childbirth) than men ( sperm production ). [ 24 ] Women are more likely to have higher long-term mating orientations, and men are more likely to have higher short-term mating orientations. [ 62 ] Short- and long-term mating orientations influence women's preferences in men. Studies have found that women put great emphasis on career orientation, ambition and devotion only when considering a long-term partner. [ 65 ] When marriage is not involved, women put greater emphasis on physical attractiveness. [ 66 ] Generally, women prefer men who are likely to provide high parental investment and who have good genes. Women prefer men with good financial status, who are more committed, more athletic, and healthier. [ 67 ] Some inaccurate theories have been inspired by parental investment theory. The "structural powerlessness hypothesis" [ 68 ] proposes that women strive to find mates with access to high levels of resources because, as women, they are excluded from these resources directly. However, this hypothesis has been contradicted by studies which found that financially successful women place even greater importance on financial status, social status, and possession of professional degrees. [ 69 ] Decreased polygyny is associated with increased paternal investment. [ 70 ] [ 71 ] The demographic transition describes the modern decrease in both birth and death rates. From a Darwinian perspective, it does not make sense that families with more resources are having fewer children. One explanation for the demographic transition is the increased parental investment required to raise children who will be able to maintain the same level of resources as their parents. [ 72 ]
https://en.wikipedia.org/wiki/Parental_investment
Within the cells of some basidiomycete fungi are found microscopic structures called parenthesomes or septal pore caps . They are shaped like parentheses and are found on either side of the pores in the dolipore septum , which separates cells within a hypha . Their function has not been established, and their composition has not been fully elucidated. Variations in their appearance are useful in distinguishing individual species. Generally, they are barrel-shaped, with an endoplasmic reticulum covering.
https://en.wikipedia.org/wiki/Parenthesome
Parent–offspring conflict (POC) is an expression coined in 1974 by Robert Trivers. It is used to describe the evolutionary conflict arising from differences in optimal parental investment (PI) in an offspring from the standpoint of the parent and the offspring. PI is any investment by the parent in an individual offspring that decreases the parent's ability to invest in other offspring, while increasing the selected offspring's chance of surviving. POC occurs in sexually reproducing species and is based on a genetic conflict: parents are equally related to each of their offspring and are therefore expected to equalize their investment among them. Offspring are only half or less related to their siblings (and fully related to themselves), so they try to get more PI than the parents intend to provide, even to their siblings' disadvantage. However, POC is limited by the close genetic relationship between parent and offspring: if an offspring obtains additional PI at the expense of its siblings, it decreases the number of its surviving siblings. Therefore, any gene in an offspring that leads to additional PI decreases (to some extent) the number of surviving copies of itself that may be located in siblings. Thus, if the costs to siblings are too high, such a gene might be selected against despite the benefit to the offspring. The problem of specifying how an individual is expected to weigh a relative against itself was examined by W. D. Hamilton in 1964 in the context of kin selection. Hamilton's rule says that altruistic behavior will be positively selected if the benefit to the recipient, multiplied by the genetic relatedness of the recipient to the performer, is greater than the cost to the performer of the social act. Conversely, selfish behavior can only be favoured when Hamilton's inequality is not satisfied. This leads to the prediction that, other things being equal, POC will be stronger among half siblings (e.g., where unrelated males father a female's successive offspring) than among full siblings. [1] [2] In plants, POC over the allocation of resources to brood members may affect both brood size (the number of seeds matured within a single fruit) and seed size. [3] Concerning brood size, the most economical use of maternal resources is achieved by packing as many seeds as possible into one fruit, i.e., minimizing the cost of packing per seed. In contrast, the offspring benefit from low numbers of seeds per fruit, which reduces sibling competition before and after dispersal. Conflict over seed size arises because there usually exists an inverse exponential relationship between seed size and fitness: the fitness of a seed increases at a diminishing rate with resource investment, whereas the fitness of the maternal parent has an optimum, as demonstrated by Smith and Fretwell [4] (see also the marginal value theorem). However, the optimum resource investment from the offspring's point of view is the amount that optimizes its inclusive fitness (direct and indirect fitness), which is higher than the maternal parent's optimum. This conflict over resource allocation is most obviously manifested in the reduction of brood size (i.e. a decrease in the proportion of ovules matured into seeds). Such reduction can be assumed to be caused by the offspring: if the maternal parent's interest were to produce as few seeds as observed, selection would not favour the production of extra ovules that do not mature into seeds.
(Although other explanations for this phenomenon exist, such as genetic load, resource depletion or maternal regulation of offspring quality, they have not been supported by experiments.) There are several possible ways in which the offspring can affect maternal resource allocation to brood members. Evidence exists for siblicide by dominant embryos: embryos formed early kill the remaining embryos through an aborting chemical. In oaks, early fertilized ovules prevent the fertilization of other ovules by inhibiting pollen tube entry into the embryo sac. In some species, the maternal parent has evolved postfertilization abortion of few-seeded pods. Nevertheless, cheating by the offspring is also possible here, namely by late siblicide, after the postfertilization abortion has ceased. According to the general POC model, reduction of brood size – if caused by POC – should depend on the genetic relatedness between the offspring in a fruit. Indeed, abortion of embryos is more common in out-crossing than in self-pollinating plants (seeds in cross-pollinating plants are less related than in self-pollinating plants). Moreover, the level of solicitation of resources by the offspring is also increased in cross-pollinating plants: there are several reports that the average weight of crossed seeds is greater than that of seeds produced by self-fertilization. [5] Some of the earliest examples of parent–offspring conflict were seen in bird broods, especially in raptor species. While parent birds often lay two eggs and attempt to raise two or more young, the strongest fledgling takes a greater share of the food brought by the parents and will often kill the weaker sibling (siblicide). Such conflicts have been suggested as a driving force in the evolution of optimal clutch size in birds. [6] In the blue-footed booby, parent–offspring conflict arises in times of food scarcity. When there is less food available in a given year, the older, dominant chick will often kill the younger chick, either by attacking it directly or by driving it from the nest. Parents try to prevent siblicide by building nests with steeper sides [7] and by laying heavier second eggs. [8] Even before POC theory arose, debates took place over whether infants wean themselves or mothers actively wean their infants. Furthermore, it was discussed whether maternal rejections increase infant independence. It turned out that both mother and infant contribute to infant independence. Maternal rejections can be followed by a short-term increase in infant contact, but they eventually result in a long-term decrease in contact, as has been shown for several primates: in wild baboons, infants that are rejected early and frequently spend less time in contact, whereas those that are not rejected stay much longer in the proximity of their mother and suckle or ride even at advanced ages. In wild chimpanzees, an abrupt increase in maternal rejections and a decrease in mother–offspring contact is found when mothers resume estrus and consort with males. In rhesus macaques, a high probability of conception in the following mating season is associated with a high rate of maternal rejection. Rejection and behavioral conflicts can occur during the first months of an infant's life and when the mother resumes estrus. These findings suggest that the mother's reproduction is influenced by the interaction with her offspring, so there is a potential for conflict over PI.
It has also been observed in rhesus macaques that the number of contacts made by the offspring during a mating season is significantly higher than the number of contacts made by the mother, whereas the opposite holds for the number of broken contacts. This suggests that the mother resists the offspring's demands for contact, whereas the offspring is apparently more interested in spending time in contact. At three months of infant age, a shift in responsibility for maintaining contact takes place from mother to infant. So when the infant becomes more independent, its effort to maintain proximity to its mother increases. This might sound paradoxical but becomes clear when one takes into account that POC increases during the period of PI. In summary, all these findings are consistent with POC theory. One might object that time in contact is not a reasonable measure of PI and that, for example, time for milk transfer (lactation) would be a better one. Here one can argue that mother and infant have different thermoregulatory needs, because their different surface-to-volume ratios result in more rapid loss of heat in infants compared to adults. So infants may be more sensitive to low temperatures than their mothers. An infant might try to compensate by increased contact time with its mother, which could initiate a behavioral conflict over time. The consistency of this hypothesis has been shown for Japanese macaques, where decreasing temperatures result in higher maternal rejections and an increased number of contacts made by infants. [9] In eusocial species, the parent–offspring conflict takes on a unique role because of haplodiploidy and the prevalence of sterile workers. Sisters are more related to each other (0.75) than to their mothers (0.5) or brothers (0.25). In most cases, this drives female workers to try to obtain a sex ratio of 3:1 (females to males) in the colony. However, queens are equally related to both sons and daughters, so they prefer a sex ratio of 1:1. The conflict in social insects is over the level of investment the queen should provide for each sex in current and future offspring. It is generally thought that the workers will win this conflict and the sex ratio will be closer to 3:1; however, there are examples, such as Bombus terrestris, where the queen has considerable control in forcing a 1:1 ratio. [10] Many species of frogs and salamanders display complex social behavior with highly involved parental care that includes egg attendance, tadpole transport, and tadpole feeding. Both males and females of the strawberry poison-dart frog care for their offspring; however, females invest in more costly ways. [11] Females of certain poison frog species produce unfertilized, non-developing trophic eggs which provide nutrition to their tadpoles. The tadpoles vibrate vigorously against the mother frog to solicit nutritious eggs. These maternal trophic eggs are beneficial for offspring, positively influencing larval survival, size at metamorphosis, and post-metamorphic survival. [12] In the neotropical foam-nesting pointedbelly frog (Leptodactylus podicipinus), females providing parental care to tadpoles have reduced body condition and food ingestion. Females that are attending to their offspring have significantly lower body mass, ovary mass, and stomach volume. This indicates that the cost of parental care in the pointedbelly frog has the potential to affect the future reproduction of females due to the reduction in body condition and food intake.
[13] In the Puerto Rican common coqui, parental care is performed exclusively by males and consists of attending to the eggs and tadpoles at an oviposition site. When brooding, males have a higher frequency of empty stomachs and lose a significant portion of their initial body mass during parental care. Abdominal fat bodies of brooding males during the middle of parental care are significantly smaller than those of non-brooding males. Another major behavioral component of parental care is nest defense against conspecific egg cannibals. This defense behavior includes aggressive calling, sustained biting, wrestling, and blocking directed against the nest intruder. [14] Females of the Allegheny Mountain dusky salamander exhibit less activity and become associated with the nest site well in advance of oviposition, in preparation for the reproductive season. Females either stop or greatly reduce their foraging activities, eating only opportunistically following oviposition; since nutritional intake is reduced, their body weight decreases over the brooding period. [15] Females of the red-backed salamander make a substantial parental investment in terms of clutch size and brooding behavior. When brooding, females usually do not leave their eggs to forage but rather rely upon their fat reserves and any resources they encounter at the oviposition site. In addition, females may incur metabolic costs while safeguarding their offspring from desiccation, intruders, and predators. [16] The plasticity of tadpoles may play a role in the weaning conflict in egg-feeding frogs, in which the offspring prefer to devote resources to growth, while the mother prefers nutrients to help her young become independent. A similar conflict occurs in direct-developing frogs that care for clutches: protected tadpoles have the advantage of a slower, safer development, but they need to be ready to reach independence rapidly because of the risks of predation or desiccation. [12] In the neotropical Zimmerman's poison frog, the males provide a specific form of parental care, transportation. The tadpoles are cannibalistic, which is why the males typically separate them from their siblings after hatching by transporting them to small bodies of water. In some cases, however, parents do not transport their tadpoles but let them all hatch into the same pool. In order to escape their cannibalistic siblings, the tadpoles will actively solicit transport. When a male frog approaches the water body in which the tadpoles have been deposited, the tadpoles will almost "jump" onto the back of the adult, mimicking an attack, while the adult does not assist with this movement. While this is an obvious example of sibling conflict, the one-sided interaction between tadpoles and frogs can be seen as a form of parent–offspring conflict, in which the offspring attempt to extract more from the interaction than the parent is willing to provide. In this scenario, a tadpole climbing onto an unwilling frog (one that enters the pool for reasons other than tadpole transportation, such as egg deposition, cooling off, or sleeping) might be analogous to mammalian offspring seeking to nurse after weaning. In times of danger, the tadpoles of Zimmerman's poison frog do not passively await parental assistance but instead take an almost aggressive approach to mounting the adult frogs.
[17] Reproductive activity in the strawberry poison-dart frog, such as courtship, significantly decreases or ceases entirely in tadpole-rearing females compared to non-rearing females. [12] Most brooding males of the common coqui cease calling during parental care while gravid females are still available and known to mate, so non-calling males miss potential opportunities to reproduce. [18] Caring for tadpoles comes at the cost of other current reproductive opportunities for females, leading to the hypothesis that frequent reproduction is associated with reduced survival in frogs. [12] An important illustration of POC within humans is provided by David Haig's (1993) work on genetic conflicts in pregnancy. [19] Haig argued that fetal genes would be selected to draw more resources from the mother than would be optimal for the mother to give. The placenta, for example, secretes allocrine hormones that decrease the sensitivity of the mother to insulin and thus make a larger supply of blood sugar available to the fetus. The mother responds by increasing the level of insulin in her bloodstream, and to counteract this effect the placenta has insulin receptors that stimulate the production of insulin-degrading enzymes. [19] About 30 percent of human conceptions do not progress to full term (22 percent before becoming clinical pregnancies), [20] creating a second arena for conflict between the mother and the fetus. The fetus will have a lower quality cut-off point for spontaneous abortion than the mother. The mother's quality cut-off point also declines as she nears the end of her reproductive life, which becomes significant for older mothers: older mothers have a higher incidence of offspring with genetic defects, and with increasing parental age on both sides the mutational load increases as well. Initially, the maintenance of pregnancy is controlled by the maternal hormone progesterone, but in later stages it is controlled by fetal human chorionic gonadotrophin released into the maternal bloodstream, which causes the release of maternal progesterone. There is also conflict over the blood supply to the placenta, with the fetus being prepared to demand a larger blood supply than is optimal for the mother (or even for itself, since high birth weight is a risk factor). This results in hypertension and, significantly, high birth weight is positively correlated with maternal blood pressure. During pregnancy, there is a two-way traffic of immunologically active cell lines through the placenta. Fetal lymphocyte lines may survive in women even decades after giving birth.
https://en.wikipedia.org/wiki/Parent–offspring_conflict
The Parflange F37 system is a technology from the hydraulics business of Parker-Hannifin which allows non-welded flange connections of hydraulic tubes and pipes. Depending on the application, the connection is made using either flaring or retaining-ring technology. Flaring technology: After an F37 flange is slid onto a seamless hydraulic tube or pipe, the tube end is flared to 37°, which also explains the name of the technology. The flaring is done by a special orbital flaring process, which compresses the surface of the pipe end, producing an excellent sealing surface. An insert made of carbon or stainless steel is then placed into the flared pipe end. The insert is soft-sealed to the pipe side by an O-ring. To seal against a flat counterpart (e.g. a manifold or block), the insert has a groove on the front side for a so-called "F37-Seal" made of polyurethane, or optionally an O-ring or a bonded seal made of carbon steel or stainless steel with a nitrile rubber or FKM sealing lip. Alternatively, the front side of the insert can be flat. For a pipe-to-pipe connection, a special insert design is available with soft-sealed cones on both sides to fit between two flared pipe ends. Afterwards, the flange is positioned on the pipe end and connected to a hydraulic component or to another pipe fitted with a similar flange and corresponding insert. Retaining-ring technology: For the retaining-ring connection, a groove is machined into the pipe end to hold the retaining ring. The retaining ring consists of a segmented stainless steel ring held together by a stainless steel spring and is used to fix the flange. For assembly, the retaining-ring flange is first slid onto the machined pipe end. The retaining ring is then widened to pass over the pipe end and snaps into the previously machined groove. The inside contour of the retaining-ring flange covers the retaining ring from the outside. The sealing of the Parflange F37 retaining-ring connection is provided by a bonded seal on the face side of the pipe end or, alternatively, by a pipe seal carrier ("PSC"). The pipe seal carrier has soft seals (O-rings or F37-Seals) on both sides; on one side it has a centering aid to ease assembly. How the flared connection works: By flaring the pipe end, the flange gains a supporting surface and is able to build up the force needed for a stable connection. The insert serves first as a seal: with its O-ring on the pipe-end side, it seals against the pipe, while the sealing against the connecting part is done by the F37-Seal or a bonded seal. If the connecting part has a soft seal on its face side, an insert with a flat face has to be used. For the connection of two pipes, an insert with cones on both sides, soft-sealed by O-rings, can likewise be used. At the same time, the insert stabilises the connection: the force achieved by tightening the flange bolts is spread over the larger contact surface of the insert, increasing the strength of the connection. How the retaining-ring connection works: The special inside contour of the retaining-ring flange covers the retaining ring, which is installed in the machined groove of the pipe end. Tightening the flange produces a form-closed connection, which is sealed by a bonded seal or a pipe seal carrier on the face side. The Parflange F37 system is used to connect hydraulic tubes, pipes and components without welding. Depending on pipe and flange size, the F37 system is approved for pressure ratings up to 420 bar (6000 psi, or 42 MPa).
It is commonly used in the shipbuilding, offshore and heavy machinery industries for moving and controlling equipment such as cranes and elevators. Furthermore, the Parflange F37 technology can connect tubes and pipes from 16 to 273 millimetres outside diameter (1/2" to 10" flange size). The F37 system is approved by leading classification societies. The flange hole patterns are according to ISO 6162-1/SAE J518 Code 61 (3000 psi/210 bar), ISO 6162-2/SAE J518 Code 62 (6000 psi/420 bar) and ISO 6164 (400 bar). The advantages of Parflange F37 over welded flange connections lie mainly in savings of time and cost. No costly inspection of welds (e.g. by X-ray examination) and no post-weld acid cleaning are needed, which also makes the connection more environmentally friendly and safer than welding. Because no welding is involved, weld-related stress corrosion cannot occur, maximising the service life of the pipe connection and reducing service costs.
https://en.wikipedia.org/wiki/Parflange_F37
The Parikh–Doering oxidation is an oxidation reaction that transforms primary and secondary alcohols into aldehydes and ketones, respectively. [1] The procedure uses dimethyl sulfoxide (DMSO) as the oxidant and the solvent, activated by the sulfur trioxide–pyridine complex (SO3·C5H5N) in the presence of triethylamine or diisopropylethylamine as base. Dichloromethane is frequently used as a cosolvent for the reaction. Compared to other activated DMSO oxidations, the Parikh–Doering oxidation is operationally simple: the reaction can be run at non-cryogenic temperatures, often between 0 °C and room temperature, without the formation of significant amounts of methylthiomethyl ether side products. [2] However, the Parikh–Doering oxidation sometimes requires a large excess of DMSO, SO3·C5H5N and/or base, as well as prolonged reaction times, for high conversions and yields to be obtained. An example from the total synthesis of (–)-kumausallene by P. A. Evans and coworkers illustrates typical reaction conditions. [3] The first step of the Parikh–Doering oxidation is the reaction of dimethyl sulfoxide (DMSO), which exists as a hybrid of the resonance structures 1a and 1b, with sulfur trioxide (2), giving intermediate 3. Nucleophilic attack by alcohol 4 and deprotonation by pyridine (5) gives intermediate 6, an alkoxysulfonium ion associated with the anionic pyridinium sulfate complex. The addition of at least two equivalents of base deprotonates the alkoxysulfonium ion to give sulfur ylide 7 and removes the pyridinium sulfate counterion. In the last step, the ylide passes through a five-membered-ring transition state to give the desired ketone or aldehyde 8, together with an equivalent of dimethyl sulfide. The Parikh–Doering oxidation is widely applied in organic synthesis. One example is its application in Nicolaou's cortistatin total synthesis, [4] where the reaction transforms a hydroxyl group into an aldehyde. The resulting aldehyde undergoes an Ohira–Bestmann homologation, which is critical for the subsequent 1,4-addition/aldol condensation/dehydration cascade that forms cortistatin's seven-membered ring.
https://en.wikipedia.org/wiki/Parikh–Doering_oxidation
Paris' law (also known as the Paris–Erdogan equation) is a crack growth equation that gives the rate of growth of a fatigue crack. The stress intensity factor K {\displaystyle K} characterises the load around a crack tip, and the rate of crack growth is experimentally shown to be a function of the range of stress intensity Δ K {\displaystyle \Delta K} seen in a loading cycle. The Paris equation is [1] d a / d N = C ( Δ K ) m {\displaystyle \mathrm {d} a/\mathrm {d} N=C\left(\Delta K\right)^{m}} where a {\displaystyle a} is the crack length and d a / d N {\displaystyle {\rm {d}}a/{\rm {d}}N} is the fatigue crack growth per load cycle N {\displaystyle N} . The material coefficients C {\displaystyle C} and m {\displaystyle m} are obtained experimentally and also depend on environment, frequency, temperature and stress ratio. [2] The stress intensity factor range has been found to correlate the rate of crack growth under a variety of different conditions; it is the difference between the maximum and minimum stress intensity factors in a load cycle, defined as Δ K = K max − K min {\displaystyle \Delta K=K_{\text{max}}-K_{\text{min}}} . Being a power-law relationship between the crack growth rate during cyclic loading and the range of the stress intensity factor, the Paris–Erdogan equation can be visualized as a straight line on a log-log plot, where the x-axis is the range of the stress intensity factor and the y-axis is the crack growth rate. The ability of ΔK to correlate crack growth rate data depends to a large extent on the fact that the alternating stresses causing crack growth are small compared to the yield strength. Therefore, crack-tip plastic zones are small compared to the crack length, even in very ductile materials like stainless steels. [3] The equation gives the growth for a single cycle. Single cycles can be readily counted for constant-amplitude loading; additional cycle identification techniques such as the rainflow-counting algorithm need to be used to extract the equivalent constant-amplitude cycles from a variable-amplitude loading sequence. In a 1961 paper, P. C. Paris introduced the idea that the rate of crack growth may depend on the stress intensity factor. [4] Then in their 1963 paper, Paris and Erdogan indirectly suggested the equation with the aside remark "The authors are hesitant but cannot resist the temptation to draw the straight line slope 1/4 through the data" after reviewing data on a log-log plot of crack growth versus stress intensity range. [5] The Paris equation was then presented with the fixed exponent of 4. A higher mean stress is known to increase the rate of crack growth, a phenomenon known as the mean stress effect. The mean stress of a cycle is expressed in terms of the stress ratio R {\displaystyle R} , defined as R = K min / K max {\displaystyle R=K_{\text{min}}/K_{\text{max}}} , the ratio of minimum to maximum stress intensity factors. In the linear elastic fracture regime, R {\displaystyle R} is also equivalent to the load ratio R = P min / P max {\displaystyle R=P_{\text{min}}/P_{\text{max}}} . The Paris–Erdogan equation does not explicitly include the effect of stress ratio, although equation coefficients can be chosen for a specific stress ratio. Other crack growth equations, such as the Forman equation, do explicitly include the effect of stress ratio, as does the Elber equation by modelling the effect of crack closure. The Paris–Erdogan equation holds over the mid-range of the growth rate regime, but does not apply for very low values of Δ K {\displaystyle \Delta K} approaching the threshold value Δ K th {\displaystyle \Delta K_{\text{th}}} , or for very high values approaching the material's fracture toughness, K Ic {\displaystyle K_{\text{Ic}}} .
The alternating stress intensity at the critical limit is given by Δ K cr = ( 1 − R ) K Ic {\displaystyle {\begin{aligned}\Delta K_{\text{cr}}&=(1-R)K_{\text{Ic}}\end{aligned}}} . [ 6 ] The slope of the crack growth rate curve on log-log scale denotes the value of the exponent m {\displaystyle m} and is typically found to lie between 2 {\displaystyle 2} and 4 {\displaystyle 4} , although for materials with low static fracture toughness such as high-strength steels, the value of m {\displaystyle m} can be as high as 10 {\displaystyle 10} . Because the size of the plastic zone ( r p ≈ K I 2 / σ y 2 ) {\displaystyle (r_{\text{p}}\approx K_{I}^{2}/\sigma _{y}^{2})} is small in comparison to the crack length, a {\displaystyle a} (here, σ y {\displaystyle \sigma _{y}} is yield stress), the approximation of small-scale yielding applies, enabling the use of linear elastic fracture mechanics and the stress intensity factor . Thus, the Paris–Erdogan equation is only valid in the linear elastic fracture regime, under tensile loading and for long cracks. [ 7 ]
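Because the equation gives growth per cycle, an estimate of fatigue life follows from integrating it numerically between an initial and a critical crack length. The sketch below (Python; the material constants, stress range, and the centre-crack relation ΔK = Δσ√(πa) are illustrative assumptions, not values from this article) counts load cycles until the assumed critical length is reached:

import math

# Illustrative Paris-law life estimate for a centre crack in a wide plate,
# where the stress intensity range is dK = d_sigma * sqrt(pi * a).
C, m = 1e-11, 3.0    # assumed Paris coefficients (a in m, dK in MPa*sqrt(m))
d_sigma = 100.0      # assumed stress range, MPa
a = 0.001            # initial crack length, m
a_crit = 0.02        # assumed critical crack length, m

cycles = 0
while a < a_crit:
    dK = d_sigma * math.sqrt(math.pi * a)  # stress intensity range this cycle
    a += C * dK ** m                       # Paris law: da/dN = C * dK^m
    cycles += 1

print(f"estimated fatigue life: about {cycles} cycles")

For simple geometries the integral can also be evaluated in closed form, but a cycle-by-cycle sum like this extends naturally to variable-amplitude loading once equivalent cycles have been extracted, e.g. with rainflow counting.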
https://en.wikipedia.org/wiki/Paris'_law
Paris Métro Line 14 (French: Ligne 14 du métro de Paris) is one of the sixteen lines on the Paris Métro. It connects Saint-Denis–Pleyel and Aéroport d'Orly on a north-west to south-east diagonal via three major stations: Gare Saint-Lazare, the Châtelet–Les-Halles complex, and Gare de Lyon. The line passes through the centre of Paris and also serves the communes of Saint-Denis, Saint-Ouen-sur-Seine, Clichy, Le Kremlin-Bicêtre, Gentilly, Villejuif, Chevilly-Larue, L'Haÿ-les-Roses, Thiais and Paray-Vieille-Poste. The first Paris Métro line built from scratch since the 1930s, it has been operated completely automatically since its opening in 1998, and the very positive results of that experiment motivated the retrofitting of Line 1 for full automation. Before the start of its commercial service, Line 14 was known as project Météor, an acronym of MÉTro Est-Ouest Rapide. The line has been used as a showcase for the expertise of the RATP (the operator), Alstom, Systra and Siemens Transportation Systems (constructors of the rolling stock and of the automated equipment, respectively) when they bid internationally to build metro systems. A northward extension to Mairie de Saint-Ouen opened in December 2020. [1] The line was extended further north to Saint-Denis–Pleyel and south to Aéroport d'Orly, as part of the Grand Paris Express project, on 24 June 2024. [2] Those extensions made Line 14, at 27.8 km, the longest line in the Métro. The original Line 14 linked Invalides with Porte de Vanves until 1976, when it was merged into the southern section of the current Line 13. Paris's east–west axis has long been heavily travelled: Line 1 of the Métro began approaching saturation in the 1940s, necessitating the construction of Line A of the RER in the 1960s and '70s, which in turn became the busiest urban route in Europe (by 2010 it carried more than a million passengers each working day). To improve service, SACEM (Système d'aide à la conduite, à l'exploitation et à la maintenance, "assisted driving, operation and maintenance system") was installed on the central run of Line A in September 1989. This improved efficiency and reduced the interval between trains to just two minutes, though the improvement was ultimately insufficient to absorb the increasing demand. To cater permanently to demand on the busy artery between Auber and Gare de Lyon, new rail lines would have to be built. Two proposals were made by the transport companies. The SNCF suggested a new tunnel between Châtelet and Gare de Lyon for Line D of the RER, allowing traffic to circulate from the north and south-east of Île-de-France. More importantly, it proposed "project EOLE" (Est-Ouest Liaison Express), the creation of a new standard-gauge line, initially from Paris's eastern suburbs to Saint-Lazare, then an extension onwards to the western suburbs. In 1987, the RATP proposed "project Météor" (MÉTro Est-Ouest Rapide), the creation of a new Métro line from Porte Maillot on the edge of the 16th arrondissement to the Maison Blanche district in the 13th, an area poorly served by transport despite its large population. The project would fit well with the regeneration of the Tolbiac district on the left bank, around the new Bibliothèque Nationale de France in that arrondissement.
The plans to go to Porte Maillot were eventually abandoned in favour of a terminus at Saint-Lazare, with the later possibility of extending the line to Clichy and absorbing the Asnières branch of Line 13, thus simplifying that line's complicated operation. Given the pressing need, the Council of Ministers of Michel Rocard's government approved the project in October 1989. However, budgetary constraints forced the reduction of both projects. In the first stage, EOLE would be merely an extension of suburban trains to the new underground station at Saint-Lazare, and Météor would be limited to the central Madeleine–Bibliothèque run, thus leaving the main railway station of Saint-Lazare and the heart of the 13th arrondissement unserved. [N 1] From November 1989 until the end of 1992, exploratory shafts and galleries were dug; tunnelling proper lasted from July 1993 until early 1995. In September 1993, the tunnel boring machine Sandrine was christened near la Bastille; eighty metres (260 feet) long and eleven metres (36 feet) wide, it was capable of drilling a tunnel 8.6 metres (28 feet) across. Working twenty-four hours a day, five days a week, she bored twenty-five metres (82 feet) below the water table. The terrain, made mostly of loosely packed limestone and marl, was favourable to drilling, and the tunnel advanced at a respectable 350 metres (380 yards) a month. The tunnel passes underneath seven Métro lines, the sewers, Clichy-Capucines, and four underground carparks, and passes over two RER lines. Work at the site and the removal of excavated material via the bassin de l'Arsenal were delayed two weeks by a flood of the Seine; the waterway route had been chosen to minimise heavy traffic in the city. The tunnel reached the future Pyramides station on 17 January 1995, and Madeleine on 15 March; it stopped underneath boulevard Haussmann in August, and the machine was brought to the surface through shafts there the same month. [N 2] At the other end of the line, from Gare de Lyon to Tolbiac, the tunnel was excavated directly from the surface. It crossed the Seine upstream from pont de Tolbiac, supported by submerged beams, the traditional form of under-river support; the last beam was placed on 28 September 1994. As a cost-saving measure, the section from Gare de Lyon to the bassin de l'Arsenal was excavated at the same time as the Châtelet–Les Halles tunnels of Line D of the RER. The 816,000 m3 (1,067,000 cu yd) of debris excavated is about twice the volume of the Tour Montparnasse, Paris's largest building, and the 19,000 tonnes (18,700 long tons; 20,900 short tons) of steel needed for reinforced concrete and structural support is twice the mass of the Eiffel Tower. [7] Travellers have been largely satisfied with Line 14's speed and service. However, despite its automation, the line has not been free of incidents. While the platform doors prevent access to the rails, they are susceptible to electrical outages, which have halted service entirely. On 20 September 2004, two trains stopped in the tunnel after a signalling failure. [8] On 22 December 2006, passengers were trapped for one and a half hours after an electrical failure on the line that arose from a mechanical fault. [9] Technical failures have occurred twice: on 21 March 2007, traffic was interrupted between Gare de Lyon and Bibliothèque François Mitterrand; [10] and again on 21 August 2007, a technical failure stopped service.
[11] Traffic on the line grew quickly; after five years in service, it carried 240,000 daily passengers in October 2003. [12] That same year, service was interrupted several times to allow the installation of equipment for an extension north from Madeleine to Saint-Lazare. This section opened on 16 December 2003, and the line saw a 30% increase in traffic thereafter; this northern terminus of Line 14 is the most important node on the network after Gare du Nord. In 2007, the line was extended south to Olympiades, an area of high-rise towers in the 13th arrondissement poorly served by the Métro. [13] The construction of the extension was relatively simple, as the tunnel had been built at the same time as the rest of the line. Initially planned to open in 2006, the work was delayed by the collapse of a primary-school courtyard during the night of 14–15 October 2006. [14] [15] Since then traffic has grown again: at the end of 2007, an average of 450,000 passengers used the line on a working day. Because the tunnel had been used as a train maintenance area, a new maintenance facility had to be constructed. A second northern extension, to Mairie de Saint-Ouen, opened on 14 December 2020, helping to relieve the section of Line 13 between that station and Saint-Lazare. This extension was originally supposed to open in 2017, but construction was postponed several times during 2016 and 2017, and the COVID-19 pandemic further hampered opening efforts during 2020. The opening of this extension lengthened Line 14 from 9 km (5.6 mi) to just shy of 14 km (8.7 mi). [16] As part of the Grand Paris Express expansion plans, Line 14 was again expanded both north and south. The northern extension from Saint-Lazare has the principal aim of reducing overcrowding on Line 13. [17] The adopted solution crosses the two branches of Line 13, with stations at Porte de Clichy on the Asnières–Gennevilliers branch and Mairie de Saint-Ouen on the Saint-Denis branch. Other stations interconnect with the RER C at Saint-Ouen, with the Transilien Paris-Saint-Lazare lines at Pont Cardinet, and with the RER D at Saint-Denis–Pleyel. Construction on the extension began in 2014, and it opened on 14 December 2020, except for Saint-Denis–Pleyel, which opened on 24 June 2024. [18] Line 14 was also extended south-eastwards from Olympiades towards Orly Airport, with six intermediate stations. The two future ends of the line were joined with the completion of the final tunnel of the southern extension on 3 March 2021 and of the northern extension on 15 April 2021. [19] The southern extension to Orly, along with the northern extension to Saint-Denis–Pleyel, opened on 24 June 2024. [20] [21] [2] A fare of €10.30, almost five times the standard Métro fare, applies for journeys starting or ending at Orly Airport. [22] One station – Villejuif–Gustave Roussy – was not ready to open with the rest of the southern extension in June 2024 and only opened on 18 January 2025. It will provide a future connection to the orbital Line 15. [23] In February 2012, the STIF announced that, with the two extensions planned, a brand-new class of rolling stock, the MP 14, would replace the MP 89 CA (and the later MP 05) stock on Line 14 around 2020.
The new stock consists of eight-car trains, longer than any used to date on the Métro, with the MP 89 CA and MP 05 stock reassigned to other lines (including possibly Lines 4, 6, or 11, should they one day become automated). [24] The number of passengers on the line grew year by year. [25] [26] The experience with automated control and platform doors has inspired several new projects. In 1998, the RATP began planning to automate several existing lines, despite the heavy cost. Automation work on Line 1 began in 2007, along with the introduction of platform doors. [27] The upgrade was finished in 2012. [28] In 2022, Line 4 was likewise upgraded and automated following the successful Line 1 project. The widespread introduction of platform doors for passenger safety is planned, despite the project's cost. In January 2004, ground-level signalling to indicate the doorways was tested on Line 13 at Saint-Lazare station. Several different door models were tested during 2006, and Kaba was chosen to supply them. After testing, platform doors were to be rolled out across the network, first in certain stations on Line 13, then across all of Line 1 in preparation for its complete automation. [S 1] The new line, parallel to Line A, provided the opportunity to incorporate innovations not found on the rest of the network: the stations are larger and, at 120 metres (390 ft), long enough to accommodate eight carriages. The runs between stations are longer, allowing a rolling speed of close to 40 km/h (25 mph), nearly double that of the other Paris Métro lines and approaching that of the RER. Lastly, the line is completely automated and runs without any driver, the first large-scale metro line in a capital to do so (although driverless operation had already been used on the VAL system in Lille and with the MAGGALY technology of Lyon Metro Line D). [N 3] Some features of Line 14's train control system run under the OpenVMS operating system. The control system is noted in the field of software engineering of critical systems because safety properties of some safety-critical parts of the system were proved using the B-Method, a formal method. Line 14 has some unusual design features: unlike at other stations in Paris, its floor tiling is not bitumenised, and platform screen doors at stations prevent passengers from falling onto the track or from committing suicide. Météor's CBTC (communication-based train control) system was supplied by Siemens Transportation Systems, including monitoring from an operations control centre, equipment for 7 stations and equipment for 19 six-car trains, achieving a headway of 85 seconds. [29] It was the basis for Trainguard MT CBTC, which went on to equip other rapid transit lines throughout the world. Line 14 uses rubber-tyred rolling stock. Three types of trains have been used: MP 89 CA (21 trains as of 3 November 2013), MP 05 (11 trains as of 20 March 2016), and MP 14 (22 trains as of November 2022). The last MP 89 and MP 05 ran on the line in 2023 and were moved to the newly automated Line 4. The MP 89 and MP 05 contained six cars, while the MP 14 trains which displaced them have eight cars. All Line 14 stations were designed from the start to accommodate eight cars, and the introduction of the MP 14 greatly increased capacity on the line. [6] The conceptual design of the stations sought to evoke space and openness. The size of the stations, their corridors and their transfer halls brings the line architecturally closer to the RER than to the existing Métro lines.
The RATP opted for a distinctive style for the new line, for instance lightly coloured tiling rather than bitumen. The use of space was designed in a contemporary manner: voluminous spaces mix plenty of light with modern materials and ease the flow of passengers. According to the designers, the stations should be the reflection of a "noble public space, monumental in spirit, urban in its choice of forms and materials". Four architects designed the first seven stations on the line: Jean-Pierre Vaysse and Bernard Kohn designed six of them, and Antoine Grumbach and Pierre Schall designed Bibliothèque. [S 2] Saint-Lazare benefits from a well of natural light visible on the platforms, even though they are five levels below the surface. The station's exit is a glass bubble designed by Jean-Marie Charpentier, situated just in front of the Gare de Paris-Saint-Lazare and pointing towards the row of bus stops. Pyramides and Madeleine are endowed with particular lighting: bright sunshine from outside falls onto the platforms, a system which evidently does not work at night. Madeleine has several video projectors which allow cultural installations, for example one on the actress Marlène Dietrich during the autumn of 2003. Gare de Lyon offers travellers entering the station a view of a tropical garden on the right-hand side of trains heading towards Olympiades. This garden is situated underneath RATP House, at the foot of which the station was built; it occupies a space originally reserved for the Transport Museum. Moreover, it is the only station equipped with a central platform, the only possible layout given the density of underground construction in the area. Bibliothèque François Mitterrand has its own unique design: monumental fifteen-metre pillars and stairs forming a semi-circle seventy metres in diameter. Olympiades station was developed by the architects Ar.thème Associés following the line's guiding principles, defined by Bernard Kohn from 1991. The station is thus in keeping with the others in its choice of materials (polished concrete arches, wood on the ceilings, etc.) as much as in its lighting, the height of its ceilings, and platforms larger than the average on other lines. On the other hand, certain stations on the line are notable for the disagreeable odour of damp and sulfur that can sometimes be noticed as far away as the transfer halls. Because of the line's relative depth, it runs underneath the water table, creating a constant risk of seepage, similar to that found on Line E of the RER. [30]
https://en.wikipedia.org/wiki/Paris_Métro_Line_14
In molecular physics, the Pariser–Parr–Pople method applies semi-empirical quantum mechanical methods to the quantitative prediction of electronic structures and spectra of molecules of interest in the field of organic chemistry. Earlier methods existed, such as the Hückel method, which led to Hückel's rule, but they were limited in their scope, application and complexity, as is the extended Hückel method. The approach was developed in the 1950s by Rudolph Pariser with Robert Parr, and co-developed by John Pople. [1] [2] [3] It is essentially a more efficient method of finding reasonable approximations of molecular orbitals, useful in predicting the physical and chemical nature of the molecule under study, since molecular orbital characteristics have implications for both the basic structure and the reactivity of a molecule. The method uses the zero-differential-overlap (ZDO) approximation to reduce the problem to a reasonable size and complexity, but it still required modern solid-state computers (as opposed to punched-card or vacuum-tube systems) before becoming fully useful for molecules larger than benzene. Originally, Pariser's goal in using this method was to predict the characteristics of complex organic dyes, but this was never realized. The method has wide applicability in the precise prediction of electronic transitions, particularly lower singlet transitions, and found wide application in theoretical and applied quantum chemistry. The two basic papers on this subject were among the top five chemistry and physics citations reported in ISI Current Contents 1977 for the period 1961–1977, with a total of 2450 references. In contrast to its Hartree–Fock-based semiempirical counterparts (e.g., MOPAC), the pi-electron theories have a very strong ab initio basis. The PPP formulation is actually an approximate pi-electron effective operator, and the empirical parameters in fact include effective electron correlation effects. A rigorous, ab initio theory of the PPP method is provided by diagrammatic, multi-reference, high-order perturbation theory (Freed, Brandow, Lindgren, etc.); the exact formulation is non-trivial and requires some field theory. Large-scale ab initio calculations (Martin and Birge, Martin and Freed, Sheppard and Freed, etc.) have confirmed many of the approximations of the PPP model and explain why PPP-like models work so well despite their simple formulation.
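To convey the flavour of such a calculation, here is a minimal sketch, not of the full PPP scheme (it omits the electron-repulsion terms that PPP retains under the ZDO approximation) but of its Hückel-level core: build an effective pi-electron Hamiltonian from an assumed on-site energy alpha and nearest-neighbour coupling beta, then diagonalize it to obtain molecular orbital energies and coefficients. In Python:

import numpy as np

# Hückel-level pi-electron model of benzene: six 2p orbitals on a ring.
# alpha (on-site) and beta (nearest-neighbour) are assumed parameters,
# here in units where |beta| = 1.
alpha, beta = 0.0, -1.0
n = 6

H = np.zeros((n, n))
for i in range(n):
    H[i, i] = alpha
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = beta  # ring connectivity

energies, orbitals = np.linalg.eigh(H)  # MO energies and coefficients
print(energies)  # [-2., -1., -1., 1., 1., 2.] in units of |beta|

The doubly degenerate pairs in the output reproduce the familiar benzene pi-level pattern; a PPP calculation starts from the same kind of matrix but adds parameterised electron repulsion and iterates to self-consistency.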
https://en.wikipedia.org/wiki/Pariser–Parr–Pople_method
In mathematical logic, the Paris–Harrington theorem states that a certain claim in Ramsey theory, namely the strengthened finite Ramsey theorem, which is expressible in Peano arithmetic, is not provable in this system. That Ramsey-theoretic claim is, however, provable in slightly stronger systems. This result has been described by some (such as the editor of the Handbook of Mathematical Logic in the references below) as the first "natural" example of a true statement about the integers that could be stated in the language of arithmetic but not proved in Peano arithmetic; it was already known from Gödel's first incompleteness theorem that such statements exist. The strengthened finite Ramsey theorem is a statement about colorings and natural numbers. It states that for any positive integers n, k, m, one can find N with the following property: if we color each of the n-element subsets of S = {1, 2, ..., N} with one of k colors, then we can find a subset Y of S with at least m elements such that all n-element subsets of Y have the same color, and the number of elements of Y is at least the smallest element of Y. Without the condition that the number of elements of Y is at least the smallest element of Y, this is a corollary of the finite Ramsey theorem in K P n ( S ) {\displaystyle K_{{\mathcal {P}}_{n}(S)}} , with N given by the corresponding finite Ramsey number. Moreover, the strengthened finite Ramsey theorem can be deduced from the infinite Ramsey theorem in almost exactly the same way that the finite Ramsey theorem can be deduced from it, using a compactness argument (see the article on Ramsey's theorem for details). This proof can be carried out in second-order arithmetic. The Paris–Harrington theorem states that the strengthened finite Ramsey theorem is not provable in Peano arithmetic. Roughly speaking, Jeff Paris and Leo Harrington (1977) showed that the strengthened finite Ramsey theorem is unprovable in Peano arithmetic by showing, in Peano arithmetic, that it implies the consistency of Peano arithmetic itself. Since Peano arithmetic cannot prove its own consistency by Gödel's second incompleteness theorem (assuming Peano arithmetic really is consistent), Peano arithmetic cannot prove the strengthened finite Ramsey theorem. The strengthened finite Ramsey theorem can be proven assuming induction up to ε 0 {\displaystyle \varepsilon _{0}} for a relevant class of formulas. Alternatively, it can be proven assuming the reflection principle, for the arithmetic theory, for Σ 1 0 {\displaystyle \Sigma _{1}^{0}} -sentences; the reflection principle also implies the consistency of Peano arithmetic. It is provable in second-order arithmetic (or the far stronger Zermelo–Fraenkel set theory) and so is true in the standard model. The smallest number N that satisfies the strengthened finite Ramsey theorem is then a computable function of n, m, k, but it grows extremely fast. In particular, it is not primitive recursive, and it is far faster-growing than standard examples of non-primitive-recursive functions such as the Ackermann function. It dominates every computable function provably total in Peano arithmetic, [1] which includes functions such as the Ackermann function.
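Although the least such N grows far too quickly to compute in general, the statement itself is easy to test mechanically for tiny parameters. The sketch below (Python; the function names and the brute-force strategy are ours, purely for illustration) searches for the least N by enumerating every k-coloring of the n-element subsets of {1, ..., N}:

from itertools import combinations, product

def has_large_mono_subset(coloring, N, n, m):
    # Look for Y, a subset of {1..N} with |Y| >= m and |Y| >= min(Y),
    # all of whose n-element subsets share one color.
    for size in range(m, N + 1):
        for Y in combinations(range(1, N + 1), size):
            if len(Y) < Y[0]:  # "relatively large" condition: |Y| >= min(Y)
                continue
            colors = {coloring[s] for s in combinations(Y, n)}
            if len(colors) == 1:
                return True
    return False

def least_N(n, k, m):
    # Smallest N for which *every* k-coloring admits such a subset Y.
    N = m
    while True:
        subsets = list(combinations(range(1, N + 1), n))
        if all(has_large_mono_subset(dict(zip(subsets, c)), N, n, m)
               for c in product(range(k), repeat=len(subsets))):
            return N
        N += 1

print(least_N(1, 2, 2))  # tiny case; finishes quickly

Since there are k to the power C(N, n) colorings to check, the search is feasible only for the very smallest parameters.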
https://en.wikipedia.org/wiki/Paris–Harrington_theorem
In coding theory, a parity-check matrix of a linear block code C is a matrix which describes the linear relations that the components of a codeword must satisfy. It can be used to decide whether a particular vector is a codeword, and it is also used in decoding algorithms. Formally, a parity check matrix H of a linear code C is a generator matrix of the dual code, C ⊥ . This means that a codeword c is in C if and only if the matrix-vector product H c ⊤ = 0 (some authors [1] would write this in the equivalent form c H ⊤ = 0 ). The rows of a parity check matrix are the coefficients of the parity check equations. [2] That is, they show how linear combinations of certain digits (components) of each codeword equal zero. For example, the parity check matrix H = [ 0 0 1 1 ; 1 1 0 0 ] {\displaystyle H={\begin{bmatrix}0&0&1&1\\1&1&0&0\end{bmatrix}}} compactly represents the parity check equations c 3 + c 4 = 0 {\displaystyle c_{3}+c_{4}=0} and c 1 + c 2 = 0 {\displaystyle c_{1}+c_{2}=0} that must be satisfied for the vector ( c 1 , c 2 , c 3 , c 4 ) {\displaystyle (c_{1},c_{2},c_{3},c_{4})} to be a codeword of C. From the definition of the parity-check matrix it follows directly that the minimum distance of the code is the minimum number d such that every d − 1 columns of a parity-check matrix H are linearly independent while there exist d columns of H that are linearly dependent. The parity check matrix for a given code can be derived from its generator matrix (and vice versa). [3] If the generator matrix for an [ n , k ]-code is in standard form G = [ I k | P ] {\displaystyle G={\begin{bmatrix}I_{k}|P\end{bmatrix}}} , then the parity check matrix is given by H = [ − P ⊤ | I n − k ] {\displaystyle H={\begin{bmatrix}-P^{\top }|I_{n-k}\end{bmatrix}}} , because G H ⊤ = P − P = 0 {\displaystyle GH^{\top }=P-P=0} . Negation is performed in the finite field F q . Note that if the characteristic of the underlying field is 2 (i.e., 1 + 1 = 0 in that field), as in binary codes, then − P = P , so the negation is unnecessary. For example (an illustrative [4, 2] binary code), if a binary code has the generator matrix G = [ 1 0 1 1 ; 0 1 0 1 ] {\displaystyle G={\begin{bmatrix}1&0&1&1\\0&1&0&1\end{bmatrix}}} , then its parity check matrix is H = [ 1 0 1 0 ; 1 1 0 1 ] {\displaystyle H={\begin{bmatrix}1&0&1&0\\1&1&0&1\end{bmatrix}}} . It can be verified that G is a k × n {\displaystyle k\times n} matrix, while H is an ( n − k ) × n {\displaystyle (n-k)\times n} matrix. For any (row) vector x of the ambient vector space, s = H x ⊤ is called the syndrome of x. The vector x is a codeword if and only if s = 0 . The calculation of syndromes is the basis for the syndrome decoding algorithm. [4]
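As a concrete illustration, both the identity G H ⊤ = 0 and the syndrome computation can be carried out over GF(2) in a few lines. The sketch below (Python with NumPy; it reuses the small illustrative [4, 2] standard-form code from the example above, not a standard named code) shows a single flipped bit being flagged by a nonzero syndrome:

import numpy as np

G = np.array([[1, 0, 1, 1],
              [0, 1, 0, 1]])      # generator matrix [I_k | P]
H = np.array([[1, 0, 1, 0],
              [1, 1, 0, 1]])      # parity-check matrix [P^T | I_{n-k}]

assert not (G @ H.T % 2).any()    # G H^T = 0 over GF(2)

def syndrome(x):
    # s = H x^T (mod 2); x is a codeword iff s is the zero vector
    return H @ np.asarray(x) % 2

codeword = (np.array([1, 1]) @ G) % 2  # encode the message (1, 1)
print(syndrome(codeword))              # [0 0]: a valid codeword

received = codeword.copy()
received[2] ^= 1                       # a single-bit channel error
print(syndrome(received))              # [1 0]: nonzero, error detected

The nonzero syndrome [1 0] equals the third column of H, which is exactly what syndrome decoding exploits to locate a single-bit error.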
https://en.wikipedia.org/wiki/Parity-check_matrix
In mathematics, parity is the property of an integer of being even or odd. An integer is even if it is divisible by 2, and odd if it is not. [1] For example, −4, 0, and 82 are even numbers, while −3, 5, 23, and 69 are odd numbers. This definition of parity applies only to integers; it cannot be applied to numbers with decimals or fractions like 1/2 or 4.6978. See the section "Higher mathematics" below for some extensions of the notion of parity to a larger class of "numbers" or to other more general settings. Even and odd numbers have opposite parities; e.g., 22 (even) and 13 (odd) have opposite parities. In particular, the parity of zero is even. [2] Any two consecutive integers have opposite parity. A number (i.e., integer) expressed in the decimal numeral system is even or odd according to whether its last digit is even or odd. That is, if the last digit is 1, 3, 5, 7, or 9, then it is odd; otherwise it is even, as the last digit of any even number is 0, 2, 4, 6, or 8. The same idea works in any even base. In particular, a number expressed in the binary numeral system is odd if its last digit is 1 and even if its last digit is 0. In an odd base, parity is determined by the sum of the digits: the number is even if and only if the sum of its digits is even. [3] An even number is an integer of the form x = 2 k {\displaystyle x=2k} where k is an integer; [4] an odd number is an integer of the form x = 2 k + 1. {\displaystyle x=2k+1.} An equivalent definition is that an even number is divisible by 2: 2 ∣ x {\displaystyle 2\mid x} and an odd number is not: 2 ∤ x {\displaystyle 2\nmid x} The sets of even and odd numbers can be defined as follows: [5] { 2 k : k ∈ Z } {\displaystyle \{2k:k\in \mathbb {Z} \}} and { 2 k + 1 : k ∈ Z } {\displaystyle \{2k+1:k\in \mathbb {Z} \}} The set of even numbers is a prime ideal of Z {\displaystyle \mathbb {Z} } and the quotient ring Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } is the field with two elements. Parity can then be defined as the unique ring homomorphism from Z {\displaystyle \mathbb {Z} } to Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } under which odd numbers map to 1 and even numbers map to 0. The consequences of this homomorphism are covered below. The following laws can be verified using the properties of divisibility: even ± even = even; even ± odd = odd; odd ± odd = even; even × even = even; even × odd = even; odd × odd = odd. They are a special case of rules in modular arithmetic, and are commonly used to check whether an equality is likely to be correct by testing the parity of each side. As with ordinary arithmetic, multiplication and addition are commutative and associative in modulo-2 arithmetic, and multiplication is distributive over addition. However, subtraction modulo 2 is identical to addition, so subtraction also possesses these properties, which is not true for normal integer arithmetic. By the construction above, the structure ({even, odd}, +, ×) is in fact the field with two elements. The division of two whole numbers does not necessarily result in a whole number. For example, 1 divided by 4 equals 1/4, which is neither even nor odd, since the concepts of even and odd apply only to integers. But when the quotient is an integer, it is even if and only if the dividend has more factors of two than the divisor. [6] The ancient Greeks considered 1, the monad, to be neither fully odd nor fully even.
[7] Some of this sentiment survived into the 19th century: Friedrich Wilhelm August Fröbel's 1826 The Education of Man instructs the teacher to drill students with the claim that 1 is neither even nor odd, to which Fröbel attaches the philosophical afterthought, It is well to direct the pupil's attention here at once to a great far-reaching law of nature and of thought. It is this, that between two relatively different things or ideas there stands always a third, in a sort of balance, seeming to unite the two. Thus, there is here between odd and even numbers one number (one) which is neither of the two. Similarly, in form, the right angle stands between the acute and obtuse angles; and in language, the semi-vowels or aspirants between the mutes and vowels. A thoughtful teacher and a pupil taught to think for himself can scarcely help noticing this and other important laws. [8] Integer coordinates of points in Euclidean spaces of two or more dimensions also have a parity, usually defined as the parity of the sum of the coordinates. For instance, the face-centered cubic lattice and its higher-dimensional generalizations (the D n lattices) consist of all of the integer points whose coordinates have an even sum. [9] This feature also manifests itself in chess, where the parity of a square is indicated by its color: bishops are constrained to moving between squares of the same parity, whereas knights alternate parity between moves. [10] This form of parity was famously used to solve the mutilated chessboard problem: if two opposite corner squares are removed from a chessboard, then the remaining board cannot be covered by dominoes, because each domino covers one square of each parity and there are two more squares of one parity than of the other. [11] The parity of an ordinal number may be defined to be even if the number is a limit ordinal, or a limit ordinal plus a finite even number, and odd otherwise. [12] Let R be a commutative ring and let I be an ideal of R whose index is 2. Elements of the coset 0 + I {\displaystyle 0+I} may be called even, while elements of the coset 1 + I {\displaystyle 1+I} may be called odd. As an example, let R = Z (2) be the localization of Z at the prime ideal (2). Then an element of R is even or odd if and only if its numerator is so in Z. The even numbers form an ideal in the ring of integers, [13] but the odd numbers do not; this is clear from the fact that the identity element for addition, zero, is an element of the even numbers only. An integer is even if it is congruent to 0 modulo this ideal, in other words if it is congruent to 0 modulo 2, and odd if it is congruent to 1 modulo 2. All prime numbers are odd, with one exception: the prime number 2. [14] All known perfect numbers are even; it is unknown whether any odd perfect numbers exist. [15] Goldbach's conjecture states that every even integer greater than 2 can be represented as a sum of two prime numbers. Modern computer calculations have shown this conjecture to be true for integers up to at least 4 × 10¹⁸, but still no general proof has been found. [16] The parity of a permutation (as defined in abstract algebra) is the parity of the number of transpositions into which the permutation can be decomposed. [17] For example, (ABC) to (BCA) is even because it can be done by swapping A and B and then C and A (two transpositions). It can be shown that no permutation can be decomposed both into an even and into an odd number of transpositions. Hence the above is a suitable definition.
In Rubik's Cube , Megaminx , and other twisting puzzles, the moves of the puzzle allow only even permutations of the puzzle pieces, so parity is important in understanding the configuration space of these puzzles. [ 18 ] The Feit–Thompson theorem states that a finite group is always solvable if its order is an odd number. This is an example of odd numbers playing a role in an advanced mathematical theorem where the method of application of the simple hypothesis of "odd order" is far from obvious. [ 19 ] The parity of a function describes how its values change when its arguments are exchanged with their negations. An even function, such as an even power of a variable, gives the same result for any argument as for its negation. An odd function, such as an odd power of a variable, gives for any argument the negation of its result when given the negation of that argument. It is possible for a function to be neither odd nor even, and for the case f ( x ) = 0, to be both odd and even. [ 20 ] The Taylor series of an even function contains only terms whose exponent is an even number, and the Taylor series of an odd function contains only terms whose exponent is an odd number. [ 21 ] In combinatorial game theory , an evil number is a number that has an even number of 1's in its binary representation , and an odious number is a number that has an odd number of 1's in its binary representation; these numbers play an important role in the strategy for the game Kayles . [ 22 ] The parity function maps a number to the number of 1's in its binary representation, modulo 2 , so its value is zero for evil numbers and one for odious numbers. The Thue–Morse sequence , an infinite sequence of 0's and 1's, has a 0 in position i when i is evil, and a 1 in that position when i is odious. [ 23 ] In information theory , a parity bit appended to a binary number provides the simplest form of error detecting code . If a single bit in the resulting value is changed, then it will no longer have the correct parity: changing a bit in the original number gives it a different parity than the recorded one, and changing the parity bit while not changing the number it was derived from again produces an incorrect result. In this way, all single-bit transmission errors may be reliably detected. [ 24 ] Some more sophisticated error detecting codes are also based on the use of multiple parity bits for subsets of the bits of the original encoded value. [ 25 ] In wind instruments with a cylindrical bore and in effect closed at one end, such as the clarinet at the mouthpiece, the harmonics produced are odd multiples of the fundamental frequency . (With cylindrical pipes open at both ends, used for example in some organ stops such as the open diapason , the harmonics are even multiples of the same frequency for the given bore length, but this has the effect of the fundamental frequency being doubled and all multiples of this fundamental frequency being produced.) See harmonic series (music) . [ 26 ] In some countries, house numberings are chosen so that the houses on one side of a street have even numbers and the houses on the other side have odd numbers. [ 27 ] Similarly, among United States numbered highways , even numbers primarily indicate east–west highways while odd numbers primarily indicate north–south highways. [ 28 ] Among airline flight numbers , even numbers typically identify eastbound or northbound flights, and odd numbers typically identify westbound or southbound flights. [ 29 ]
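The evil/odious classification and the Thue–Morse sequence described above take only a few lines to compute. A minimal Python sketch (the function name is illustrative) reproduces the first sixteen Thue–Morse terms from the parity function:

def bit_parity(i: int) -> int:
    """Parity of the number of 1s in the binary expansion of i:
    0 for evil numbers, 1 for odious numbers."""
    return bin(i).count("1") % 2

# The Thue-Morse sequence has a 0 at evil positions and a 1 at odious ones.
print([bit_parity(i) for i in range(16)])
# [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]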
https://en.wikipedia.org/wiki/Parity_(mathematics)
In physics , a parity transformation (also called parity inversion ) is the flip in the sign of one spatial coordinate . In three dimensions, it can also refer to the simultaneous flip in the sign of all three spatial coordinates (a point reflection ): P : ( x y z ) ↦ ( − x − y − z ) . {\displaystyle \mathbf {P} :{\begin{pmatrix}x\\y\\z\end{pmatrix}}\mapsto {\begin{pmatrix}-x\\-y\\-z\end{pmatrix}}.} It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image. All fundamental interactions of elementary particles , with the exception of the weak interaction , are symmetric under parity transformation. As established by the Wu experiment conducted at the US National Bureau of Standards by Chinese-American scientist Chien-Shiung Wu , the weak interaction is chiral and thus provides a means for probing chirality in physics. In her experiment, Wu took advantage of the controlling role of weak interactions in radioactive decay of atomic isotopes to establish the chirality of the weak force. By contrast, in interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions. A matrix representation of P (in any number of dimensions) has determinant equal to −1, and hence is distinct from a rotation , which has a determinant equal to 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is not a parity transformation; it is the same as a 180° rotation . In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions. Under rotations , classical geometrical objects can be classified into scalars , vectors , and tensors of higher rank. In classical physics , physical configurations need to transform under representations of every symmetry group. Quantum theory predicts that states in a Hilbert space do not need to transform under representations of the group of rotations, but only under projective representations . The word projective refers to the fact that if one projects out the phase of each state, where we recall that the overall phase of a quantum state is not observable, then a projective representation reduces to an ordinary representation. All representations are also projective representations, but the converse is not true, therefore the projective representation condition on quantum states is weaker than the representation condition on classical states. The projective representations of any group are isomorphic to the ordinary representations of a central extension of the group. For example, projective representations of the 3-dimensional rotation group, which is the special orthogonal group SO(3), are ordinary representations of the special unitary group SU(2). Projective representations of the rotation group that are not representations are called spinors and so quantum states may transform not only as tensors but also as spinors. If one adds to this a classification by parity, these can be extended, for example, into notions of scalars and pseudoscalars , and vectors and pseudovectors , which are distinguished by their sign under spatial inversion. One can define reflections such as V x : ( x y z ) ↦ ( − x y z ) , {\displaystyle V_{x}:{\begin{pmatrix}x\\y\\z\end{pmatrix}}\mapsto {\begin{pmatrix}-x\\y\\z\end{pmatrix}},} which also have negative determinant and form a valid parity transformation.
Then, combining them with rotations (or successively performing x -, y -, and z -reflections) one can recover the particular parity transformation defined earlier. The first parity transformation given does not work in an even number of dimensions, though, because it results in a positive determinant. In even dimensions only the latter example of a parity transformation (or any reflection of an odd number of coordinates) can be used. Parity forms the abelian group Z 2 {\displaystyle \mathbb {Z} _{2}} due to the relation P ^ 2 = 1 ^ {\displaystyle {\hat {\mathcal {P}}}^{2}={\hat {1}}} . All abelian groups have only one-dimensional irreducible representations . For Z 2 {\displaystyle \mathbb {Z} _{2}} , there are two irreducible representations: one is even under parity, P ^ ϕ = + ϕ {\displaystyle {\hat {\mathcal {P}}}\phi =+\phi } , the other is odd, P ^ ϕ = − ϕ {\displaystyle {\hat {\mathcal {P}}}\phi =-\phi } . These are useful in quantum mechanics. However, as is elaborated below, in quantum mechanics states need not transform under actual representations of parity but only under projective representations and so in principle a parity transformation may rotate a state by any phase . An alternative way to write the above classification of scalars, pseudoscalars, vectors and pseudovectors is in terms of the representation space that each object transforms in. This can be given in terms of the group homomorphism ρ {\displaystyle \rho } which defines the representation. For a matrix R ∈ O ( 3 ) , {\displaystyle R\in {\text{O}}(3),} scalars transform with ρ ( R ) = 1 , pseudoscalars with ρ ( R ) = det R , vectors with ρ ( R ) = R itself, and pseudovectors with ρ ( R ) = (det R ) R . When the representation is restricted to SO ( 3 ) {\displaystyle {\text{SO}}(3)} , where det R = 1 , scalars and pseudoscalars transform identically, as do vectors and pseudovectors. Newton's equation of motion F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } (if the mass is constant) equates two vectors, and hence is invariant under parity. The law of gravity also involves only vectors and is also, therefore, invariant under parity. However, angular momentum L {\displaystyle \mathbf {L} } is an axial vector , L = r × p P ^ ( L ) = ( − r ) × ( − p ) = L . {\displaystyle {\begin{aligned}\mathbf {L} &=\mathbf {r} \times \mathbf {p} \\{\hat {P}}\left(\mathbf {L} \right)&=(-\mathbf {r} )\times (-\mathbf {p} )=\mathbf {L} .\end{aligned}}} In classical electrodynamics , the charge density ρ {\displaystyle \rho } is a scalar, the electric field, E {\displaystyle \mathbf {E} } , and current j {\displaystyle \mathbf {j} } are vectors, but the magnetic field, B {\displaystyle \mathbf {B} } is an axial vector. However, Maxwell's equations are invariant under parity because the curl of an axial vector is a vector. Classical physical variables divide into those with even parity and those with odd parity. The way in which particular variables and vectors sort into either category depends on whether the number of dimensions of space is odd or even; the classification into odd or even given here for the parity transformation is a different, but intimately related, issue. The answers given below are correct for 3 spatial dimensions. In a two-dimensional space, for example when constrained to remain on the surface of a planet, some of the variables switch category. Classical variables whose signs flip under space inversion are predominantly vectors. They include position, velocity, acceleration, force, linear momentum, and the electric field. Classical variables, predominantly scalar quantities, which do not change upon spatial inversion include time, mass, energy, power, and electric charge density. In quantum mechanics, spacetime transformations act on quantum states .
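Before following the text into the quantum case, the classical statements above are easy to verify numerically. A minimal numpy sketch (all matrix and vector choices are illustrative): it confirms that the point reflection has determinant −1 in three dimensions but is a rotation in two, that successive single-axis reflections recover it, and that the angular momentum L = r × p is unchanged when both r and p flip sign.

import numpy as np

P3 = -np.eye(3)                      # point reflection in 3 dimensions
print(np.linalg.det(P3))             # -1.0: not a rotation
print(np.linalg.det(-np.eye(2)))     # +1.0: in 2D this is a 180-degree rotation

Vx, Vy, Vz = np.diag([-1, 1, 1]), np.diag([1, -1, 1]), np.diag([1, 1, -1])
assert np.allclose(Vx @ Vy @ Vz, -np.eye(3))   # successive reflections give -I

rng = np.random.default_rng(0)
r, p = rng.normal(size=3), rng.normal(size=3)
L = np.cross(r, p)                   # axial vector: invariant under parity
assert np.allclose(np.cross(-r, -p), L)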
The parity transformation, P ^ {\displaystyle {\hat {\mathcal {P}}}} , is a unitary operator , in general acting on a state ψ {\displaystyle \psi } as follows: P ^ ψ ( r ) = e i ϕ / 2 ψ ( − r ) {\displaystyle {\hat {\mathcal {P}}}\,\psi {\left(r\right)}=e^{{i\phi }/{2}}\psi {\left(-r\right)}} . One must then have P ^ 2 ψ ( r ) = e i ϕ ψ ( r ) {\displaystyle {\hat {\mathcal {P}}}^{2}\,\psi {\left(r\right)}=e^{i\phi }\psi {\left(r\right)}} , since an overall phase is unobservable. The operator P ^ 2 {\displaystyle {\hat {\mathcal {P}}}^{2}} , which reverses the parity of a state twice, leaves the spacetime invariant, and so is an internal symmetry which rotates its eigenstates by phases e i ϕ {\displaystyle e^{i\phi }} . If P ^ 2 {\displaystyle {\hat {\mathcal {P}}}^{2}} is an element e i Q {\displaystyle e^{iQ}} of a continuous U(1) symmetry group of phase rotations, then e − i Q / 2 {\displaystyle e^{-{iQ}/{2}}} is part of this U(1) and so is also a symmetry. In particular, we can define P ^ ′ ≡ P ^ e − i Q / 2 {\displaystyle {\hat {\mathcal {P}}}'\equiv {\hat {\mathcal {P}}}\,e^{-{iQ}/{2}}} , which is also a symmetry, and so we can choose to call P ^ ′ {\displaystyle {\hat {\mathcal {P}}}'} our parity operator, instead of P ^ {\displaystyle {\hat {\mathcal {P}}}} . Note that P ^ ′ 2 = 1 {\displaystyle {{\hat {\mathcal {P}}}'}^{2}=1} and so P ^ ′ {\displaystyle {\hat {\mathcal {P}}}'} has eigenvalues ± 1 {\displaystyle \pm 1} . Wave functions with eigenvalue + 1 {\displaystyle +1} under a parity transformation are even functions , while eigenvalue − 1 {\displaystyle -1} corresponds to odd functions. [ 1 ] However, when no such symmetry group exists, it may be that all parity transformations have some eigenvalues which are phases other than ± 1 {\displaystyle \pm 1} . For electronic wavefunctions, even states are usually indicated by a subscript g for gerade (German: even) and odd states by a subscript u for ungerade (German: odd). For example, the lowest energy level of the hydrogen molecule ion (H 2 + ) is labelled 1 σ g {\displaystyle 1\sigma _{g}} and the next-closest (higher) energy level is labelled 1 σ u {\displaystyle 1\sigma _{u}} . [ 2 ] The wave functions of a particle moving in an external potential which is centrosymmetric (potential energy invariant with respect to a space inversion, symmetric about the origin) either remain invariant or change sign: these two possible states are called the even state or odd state of the wave functions. [ 3 ] The law of conservation of parity of particles states that, if an isolated ensemble of particles has a definite parity, then the parity remains invariant throughout the evolution of the ensemble. However, this is not true for the beta decay of nuclei, because the weak nuclear interaction violates parity. [ 4 ] The parity of the states of a particle moving in a spherically symmetric external field is determined by the angular momentum , and the particle state is defined by three quantum numbers: total energy, angular momentum and the projection of angular momentum. [ 3 ] When parity generates the abelian group Z 2 {\displaystyle \mathbb {Z} _{2}} , one can always take linear combinations of quantum states such that they are either even or odd under parity. Thus the parity of such states is ±1. The parity of a multiparticle state is the product of the parities of each state; in other words parity is a multiplicative quantum number.
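The even/odd decomposition of wave functions under parity is easy to demonstrate on a grid. In the following minimal sketch (the grid and trial state are arbitrary illustrative choices), parity acts by reversing a symmetric coordinate grid, and any state splits exactly into a +1 (even) and a −1 (odd) eigenvector of that operation:

import numpy as np

x = np.linspace(-5, 5, 1001)       # symmetric grid, so reversing it maps x -> -x
psi = np.exp(-(x - 1) ** 2)        # a state with no definite parity

P_psi = psi[::-1]                  # parity: psi(x) -> psi(-x)
even = 0.5 * (psi + P_psi)         # component with eigenvalue +1
odd = 0.5 * (psi - P_psi)          # component with eigenvalue -1

assert np.allclose(even[::-1], even)    # P(even) = +even
assert np.allclose(odd[::-1], -odd)     # P(odd)  = -odd
assert np.allclose(even + odd, psi)     # the decomposition is exact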
In quantum mechanics, Hamiltonians are invariant (symmetric) under a parity transformation if P ^ {\displaystyle {\hat {\mathcal {P}}}} commutes with the Hamiltonian. In non-relativistic quantum mechanics, this happens for any spherically symmetric scalar potential, V = V ( r ) {\displaystyle V=V{\left(r\right)}} . When the Hamiltonian operator and the parity operator commute, the following facts can be easily proven: the non-degenerate eigenfunctions of H ^ {\displaystyle {\hat {H}}} are either unaffected (invariant) by parity P ^ {\displaystyle {\hat {\mathcal {P}}}} or merely reversed in sign; that is, P ^ | ψ ⟩ = c | ψ ⟩ , {\displaystyle {\hat {\mathcal {P}}}|\psi \rangle =c\left|\psi \right\rangle ,} where c {\displaystyle c} is a constant, the eigenvalue of P ^ {\displaystyle {\hat {\mathcal {P}}}} , and P ^ 2 | ψ ⟩ = c P ^ | ψ ⟩ . {\displaystyle {\hat {\mathcal {P}}}^{2}\left|\psi \right\rangle =c\,{\hat {\mathcal {P}}}\left|\psi \right\rangle .} The overall parity of a many-particle system is the product of the parities of the one-particle states. It is −1 if an odd number of particles are in odd-parity states, and +1 otherwise. Different notations are in use to denote the parity of nuclei, atoms, and molecules. Atomic orbitals have parity (−1) ℓ , where the exponent ℓ is the azimuthal quantum number . The parity is odd for orbitals p, f, ... with ℓ = 1, 3, ..., and an atomic state has odd parity if an odd number of electrons occupy these orbitals. For example, the ground state of the nitrogen atom has the electron configuration 1s 2 2s 2 2p 3 , and is identified by the term symbol 4 S o , where the superscript o denotes odd parity. However, the third excited term, at about 83,300 cm −1 above the ground state, has the electron configuration 1s 2 2s 2 2p 2 3s and has even parity, since there are only two 2p electrons; its term symbol is 4 P (without an o superscript). [ 6 ] The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins [ 7 ] ) and its eigenvalues can be given the parity symmetry label + or − as they are even or odd, respectively. The parity operation involves the inversion of electronic and nuclear spatial coordinates at the molecular center of mass. Centrosymmetric molecules at equilibrium have a centre of symmetry at their midpoint (the nuclear center of mass). This includes all homonuclear diatomic molecules as well as certain symmetric molecules such as ethylene , benzene , xenon tetrafluoride and sulphur hexafluoride . For centrosymmetric molecules, the point group contains the operation i which is not to be confused with the parity operation. The operation i involves the inversion of the electronic and vibrational displacement coordinates at the nuclear centre of mass. For centrosymmetric molecules the operation i commutes with the rovibronic (rotation-vibration-electronic) Hamiltonian and can be used to label such states. Electronic and vibrational states of centrosymmetric molecules are either unchanged by the operation i , or they are changed in sign by i . The former are denoted by the subscript g and are called gerade, while the latter are denoted by the subscript u and are called ungerade. The complete electromagnetic Hamiltonian of a centrosymmetric molecule does not commute with the point group inversion operation i because of the effect of the nuclear hyperfine Hamiltonian.
The nuclear hyperfine Hamiltonian can mix the rotational levels of g and u vibronic states (called ortho-para mixing) and give rise to ortho – para transitions. [ 8 ] [ 9 ] In atomic nuclei, the state of each nucleon (proton or neutron) has even or odd parity, and nucleon configurations can be predicted using the nuclear shell model . As for electrons in atoms, the nucleon state has odd overall parity if and only if the number of nucleons in odd-parity states is odd. The parity is usually written as a + (even) or − (odd) following the nuclear spin value. For example, the isotopes of oxygen include 17 O(5/2+), meaning that the spin is 5/2 and the parity is even. The shell model explains this because the first 16 nucleons are paired so that each pair has spin zero and even parity, and the last nucleon is in the 1d 5/2 shell, which has even parity since ℓ = 2 for a d orbital. [ 10 ] If one can show that the vacuum state is invariant under parity, P ^ | 0 ⟩ = | 0 ⟩ {\displaystyle {\hat {\mathcal {P}}}\left|0\right\rangle =\left|0\right\rangle } , that the Hamiltonian is parity invariant, [ H ^ , P ^ ] = 0 {\displaystyle \left[{\hat {H}},{\hat {\mathcal {P}}}\right]=0} , and that the quantization conditions remain unchanged under parity, then it follows that every state has good parity, and this parity is conserved in any reaction. To show that quantum electrodynamics is invariant under parity, we have to prove that the action is invariant and the quantization is also invariant. For simplicity we will assume that canonical quantization is used; the vacuum state is then invariant under parity by construction. The invariance of the action follows from the classical invariance of Maxwell's equations. The invariance of the canonical quantization procedure can be worked out, and turns out to depend on the transformation of the annihilation operator: P a ( p , ± ) P + = a ( − p , ± ) {\displaystyle \mathbf {Pa} (\mathbf {p} ,\pm )\mathbf {P} ^{+}=\mathbf {a} (-\mathbf {p} ,\pm )} where p {\displaystyle \mathbf {p} } denotes the momentum of a photon and ± {\displaystyle \pm } refers to its polarization state. This is equivalent to the statement that the photon has odd intrinsic parity . Similarly all vector bosons can be shown to have odd intrinsic parity, and all axial-vectors to have even intrinsic parity. A straightforward extension of these arguments to scalar field theories shows that scalars have even parity. That is, P ϕ ( − x , t ) P − 1 = ϕ ( x , t ) {\displaystyle {\mathsf {P}}\phi (-\mathbf {x} ,t){\mathsf {P}}^{-1}=\phi (\mathbf {x} ,t)} , since P a ( p ) P + = a ( − p ) {\displaystyle \mathbf {Pa} (\mathbf {p} )\mathbf {P} ^{+}=\mathbf {a} (-\mathbf {p} )} . This is true even for a complex scalar field. (Details of spinors are dealt with in the article on the Dirac equation , where it is shown that fermions and antifermions have opposite intrinsic parity.) With fermions , there is a slight complication because there is more than one spin group . Applying the parity operator twice leaves the coordinates unchanged, meaning that P 2 must act as one of the internal symmetries of the theory, at most changing the phase of a state. [ 11 ] For example, the Standard Model has three global U(1) symmetries with charges equal to the baryon number B , the lepton number L , and the electric charge Q . Therefore, the parity operator satisfies P 2 = e iαB + iβL + iγQ for some choice of α , β , and γ .
This operator is also not unique in that a new parity operator P' can always be constructed by multiplying it by an internal symmetry such as P' = P e iαB for some α . To see if the parity operator can always be defined to satisfy P 2 = 1 , consider the general case when P 2 = Q for some internal symmetry Q present in the theory. The desired parity operator would be P' = P Q −1/2 . If Q is part of a continuous symmetry group then Q −1/2 exists, but if it is part of a discrete symmetry then this element need not exist and such a redefinition may not be possible. [ 12 ] The Standard Model exhibits a (−1) F symmetry, where F is the fermion number operator counting how many fermions are in a state. Since all particles in the Standard Model satisfy F = B + L , the discrete symmetry is also part of the e iα ( B + L ) continuous symmetry group. If the parity operator satisfied P 2 = (−1) F , then it can be redefined to give a new parity operator satisfying P 2 = 1 . But if the Standard Model is extended by incorporating Majorana neutrinos , which have F = 1 and B + L = 0 , then the discrete symmetry (−1) F is no longer part of the continuous symmetry group and the desired redefinition of the parity operator cannot be performed. Instead it satisfies P 4 = 1 so the Majorana neutrinos would have intrinsic parities of ± i . In 1954, a paper by William Chinowsky and Jack Steinberger demonstrated that the pion has negative parity. [ 13 ] They studied the decay of an "atom" made from a deuteron ( 2 1 H + ) and a negatively charged pion ( π − ) in a state with zero orbital angular momentum L = 0 {\displaystyle ~\mathbf {L} ={\boldsymbol {0}}~} into two neutrons ( n {\displaystyle n} ). Neutrons are fermions and so obey Fermi–Dirac statistics , which implies that the final state is antisymmetric. Using the fact that the deuteron has spin one and the pion spin zero together with the antisymmetry of the final state they concluded that the two neutrons must have orbital angular momentum L = 1 . {\displaystyle ~L=1~.} The total parity is the product of the intrinsic parities of the particles and the extrinsic parity of the spherical harmonic function ( − 1 ) L . {\displaystyle ~\left(-1\right)^{L}~.} Since the orbital momentum changes from zero to one in this process, if the process is to conserve the total parity then the products of the intrinsic parities of the initial and final particles must have opposite sign. A deuteron nucleus is made from a proton and a neutron, and so using the aforementioned convention that protons and neutrons have intrinsic parities equal to + 1 {\displaystyle ~+1~} they argued that the parity of the pion is equal to minus the product of the parities of the two neutrons divided by that of the proton and neutron in the deuteron, explicitly ( − 1 ) ( 1 ) 2 ( 1 ) 2 = − 1 , {\textstyle {\frac {(-1)(1)^{2}}{(1)^{2}}}=-1~,} from which they concluded that the pion is a pseudoscalar particle . Although parity is conserved in electromagnetism and gravity , it is violated in weak interactions, and perhaps, to some degree, in strong interactions . [ 14 ] [ 15 ] The Standard Model incorporates parity violation by expressing the weak interaction as a chiral gauge interaction. Only the left-handed components of particles and right-handed components of antiparticles participate in charged weak interactions in the Standard Model. This implies that parity is not a symmetry of our universe, unless a hidden mirror sector exists in which parity is violated in the opposite way. 
An obscure 1928 experiment, undertaken by R. T. Cox , G. C. McIlwraith, and B. Kurrelmeyer, had in effect reported parity violation in weak decays , but, since the appropriate concepts had not yet been developed, those results had no impact. [ 16 ] In 1929, Hermann Weyl explored, without any evidence, the existence of a two-component massless particle of spin one-half. This idea was rejected by Pauli , because it implied parity violation. [ 17 ] By the mid-20th century, it had been suggested by several scientists that parity might not be conserved (in different contexts), but without solid evidence these suggestions were not considered important. Then, in 1956, a careful review and analysis by theoretical physicists Tsung-Dao Lee and Chen-Ning Yang [ 18 ] went further, showing that while parity conservation had been verified in decays by the strong or electromagnetic interactions , it was untested in the weak interaction . They proposed several possible direct experimental tests. They were mostly ignored, but Lee was able to convince his Columbia colleague Chien-Shiung Wu to try it. She needed special cryogenic facilities and expertise, so the experiment was done at the National Bureau of Standards . Wu , Ambler , Hayward, Hoppes, and Hudson (1957) found a clear violation of parity conservation in the beta decay of cobalt-60 . [ 19 ] As the experiment was winding down, with double-checking in progress, Wu informed Lee and Yang of the positive results and, saying that the results needed further examination, asked them not to publicize them yet. However, Lee revealed the results to his Columbia colleagues on 4 January 1957 at a "Friday lunch" gathering of the Physics Department of Columbia. [ 20 ] Three of them, R. L. Garwin , L. M. Lederman , and R. M. Weinrich, modified an existing cyclotron experiment, and immediately verified the parity violation. [ 21 ] They delayed publication of their results until after Wu's group was ready, and the two papers appeared back-to-back in the same physics journal. The discovery of parity violation explained the outstanding τ–θ puzzle in the physics of kaons . In 2010, it was reported that physicists working with the Relativistic Heavy Ion Collider had created a short-lived parity symmetry-breaking bubble in quark–gluon plasmas . An experiment conducted by several physicists in the STAR collaboration suggested that parity may also be violated in the strong interaction. [ 15 ] It is predicted that this local parity violation manifests itself via the chiral magnetic effect . [ 22 ] [ 23 ] To every particle one can assign an intrinsic parity as long as nature preserves parity. Although weak interactions do not, one can still assign a parity to any hadron by examining the strong interaction reaction that produces it, or through decays not involving the weak interaction , such as rho meson decay to pions .
https://en.wikipedia.org/wiki/Parity_(physics)
A parity bit , or check bit , is a bit added to a string of binary code . Parity bits are a simple form of error detecting code . Parity bits are generally applied to the smallest units of a communication protocol, typically 8-bit octets (bytes), although they can also be applied separately to an entire message string of bits. The parity bit ensures that the total number of 1-bits in the string is even or odd . [ 1 ] Accordingly, there are two variants of parity bits: the even parity bit and the odd parity bit . In the case of even parity, for a given set of bits, the bits whose value is 1 are counted. If that count is odd, the parity bit value is set to 1, making the total count of occurrences of 1s in the whole set (including the parity bit) an even number. If the count of 1s in a given set of bits is already even, the parity bit's value is 0. In the case of odd parity, the coding is reversed. For a given set of bits, if the count of bits with a value of 1 is even, the parity bit value is set to 1, making the total count of 1s in the whole set (including the parity bit) an odd number. If the count of bits with a value of 1 is odd, the count is already odd, so the parity bit's value is 0. Even parity is a special case of a cyclic redundancy check (CRC), where the 1-bit CRC is generated by the polynomial x +1. In mathematics, parity can refer to the evenness or oddness of an integer, which, when written in its binary form , can be determined by examining only its least significant bit . In information technology, parity refers to the evenness or oddness, given any set of binary digits, of the number of those bits with value one. Because parity is determined by the state of every one of the bits, this property of parity—being dependent upon all the bits and changing its value from even to odd parity if any one bit changes—allows for its use in error detection and correction schemes. In telecommunications, the parity referred to by some protocols is for error detection . The transmission medium is preset, at both end points, to agree on either odd parity or even parity. For each string of bits ready to transmit (data packet) the sender calculates its parity bit, zero or one, to make it conform to the agreed parity, even or odd. The receiver of that packet first checks that the parity of the packet as a whole is in accordance with the preset agreement, then, if there was a parity error in that packet, requests a retransmission of that packet. In computer science, the parity stripe or parity disk in a RAID provides error correction . Parity bits are written at the rate of one parity bit per n bits, where n is the number of disks in the array. When a read error occurs, each bit in the error region is recalculated from its set of n bits. In this way, using one parity bit creates "redundancy" for a region from the size of one bit to the size of one disk. See § RAID array below. In electronics, transcoding data with parity can be very efficient, as XOR gates output what is equivalent to a check bit that creates an even parity, and XOR logic design easily scales to any number of inputs. XOR and AND structures comprise the bulk of most integrated circuitry. If an odd number of bits (including the parity bit) are transmitted incorrectly, the parity bit will be incorrect, thus indicating that a parity error occurred in the transmission. The parity bit is suitable only for detecting errors; it cannot correct any errors, as there is no way to determine the particular bit that is corrupted.
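The even and odd conventions just defined, and the detection guarantee discussed next, fit in a few lines of Python. This is a minimal sketch (function names are illustrative) using the 4-bit message from the worked example below:

def parity_bit(data: str, odd: bool = False) -> str:
    """Parity bit for a string of '0'/'1' characters (even parity by default)."""
    bit = data.count("1") % 2          # even-parity choice
    return str(bit ^ 1) if odd else str(bit)

def frame_ok(frame: str, odd: bool = False) -> bool:
    """Check a received frame consisting of data plus a trailing parity bit."""
    return frame.count("1") % 2 == (1 if odd else 0)

sent = "1001" + parity_bit("1001")     # '10010' under even parity
print(frame_ok(sent))                  # True : clean transmission
print(frame_ok("11010"))               # False: a single flipped bit is detected
print(frame_ok("11011"))               # True : two flips cancel and slip through

As the surrounding text notes, detection is all a single parity bit offers; recovery requires retransmission.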
The data must be discarded entirely, and retransmitted from scratch . On a noisy transmission medium, successful transmission can therefore take a long time or even never occur. However, parity has the advantage that it uses only a single bit and requires only a number of XOR gates to generate. See Hamming code for an example of an error-correcting code. Parity bit checking is used occasionally for transmitting ASCII characters, which have 7 bits, leaving the 8th bit as a parity bit. For example, the parity bit can be computed as follows. Assume Alice and Bob are communicating and Alice wants to send Bob the simple 4-bit messages 1001 and 1011. With even parity:
Alice wants to transmit: 1001 and 1011
Alice computes the parity bit values: 1+0+0+1 (mod 2) = 0 and 1+0+1+1 (mod 2) = 1
Alice adds the parity bits and sends: 10010 and 10111
Bob receives: 10010 and 10111
Bob computes the parity: 1+0+0+1+0 (mod 2) = 0 and 1+0+1+1+1 (mod 2) = 0
Bob reports correct transmission after observing the expected even results.
With odd parity:
Alice wants to transmit: 1001 and 1011
Alice computes the parity bit values: 1+0+0+1 (+ 1 mod 2) = 1 and 1+0+1+1 (+ 1 mod 2) = 0
Alice adds the parity bits and sends: 10011 and 10110
Bob receives: 10011 and 10110
Bob computes the overall parity: 1+0+0+1+1 (mod 2) = 1 and 1+0+1+1+0 (mod 2) = 1
Bob reports correct transmission after observing the expected odd results.
This mechanism enables the detection of single-bit errors, because if one bit gets flipped due to line noise, there will be an incorrect number of ones in the received data. In the two examples above, Bob's calculated parity value matches the parity bit in its received value, indicating there are no single-bit errors. Consider the following examples with a transmission error, using XOR:
Error in the second bit:
Alice computes the even parity value: 1^0^0^1 = 0
Alice adds the parity bit and sends: 10010
...TRANSMISSION ERROR...
Bob receives: 11010
Bob computes the overall parity: 1^1^0^1^0 = 1
Bob reports incorrect transmission after observing the unexpected odd result.
Error in the parity bit:
Alice computes the even parity value: 1^0^0^1 = 0
Alice sends: 10010
...TRANSMISSION ERROR...
Bob receives: 10011
Bob computes the overall parity: 1^0^0^1^1 = 1
Bob reports incorrect transmission after observing the unexpected odd result.
There is a limitation to parity schemes. A parity bit is guaranteed to detect only an odd number of bit errors. If an even number of bits have errors, the parity bit records the correct number of ones even though the data is corrupt. (See also error detection and correction .) Consider the same example as before but with an even number of corrupted bits:
Two corrupted bits:
Alice computes the even parity value: 1^0^0^1 = 0
Alice sends: 10010
...TRANSMISSION ERROR...
Bob receives: 11011
Bob computes the overall parity: 1^1^0^1^1 = 0
Bob reports correct transmission though it is actually incorrect.
Bob observes even parity, as expected, thereby failing to catch the two bit errors. Because of its simplicity, parity is used in many hardware applications in which an operation can be repeated in case of difficulty, or in which simply detecting the error is helpful. For example, the SCSI and PCI buses use parity to detect transmission errors, and many microprocessor instruction caches include parity protection. Because the instruction cache data is just a copy of the main memory , it can be disregarded and refetched if it is found to be corrupted. In serial data transmission , a common format is 7 data bits, an even parity bit, and one or two stop bits .
That format accommodates all the 7-bit ASCII characters in an 8-bit byte. Other formats are possible; 8 bits of data plus a parity bit can convey all 8-bit byte values. In serial communication contexts, parity is usually generated and checked by interface hardware (such as a UART ) and, on reception, the result made available to a processor such as the CPU (and so too, for instance, the operating system ) via a status bit in a hardware register in the interface hardware. Recovery from the error is usually done by retransmitting the data, the details of which are usually handled by software (such as the operating system I/O routines). When the total number of transmitted bits, including the parity bit, is even, odd parity has the advantage that both all-zeros and all-ones patterns are detected as errors. If the total number of bits is odd, only one of the patterns is detected as an error, and the choice can be made based on what the more common error is expected to be. Parity data is used by RAID arrays ( redundant array of independent/inexpensive disks ) to achieve redundancy . If a drive in the array fails, remaining data on the other drives can be combined with the parity data (using the Boolean XOR function) to reconstruct the missing data. For example, suppose two drives in a three-drive RAID 4 array contained data. To calculate parity data for the two drives, an XOR is performed on their data; the resulting parity data, 10111001 , is then stored on Drive 3. Should any of the three drives fail, the contents of the failed drive can be reconstructed on a replacement drive by subjecting the data from the remaining drives to the same XOR operation. If Drive 2 were to fail, its data could be rebuilt by XOR-ing the contents of the two remaining drives, Drive 1 and Drive 3: the result of that XOR calculation yields Drive 2's contents. 11010100 is then stored on Drive 2, fully repairing the array. XOR logic is also equivalent to even parity (because a XOR b XOR c XOR ... may be treated as XOR( a , b , c ,...), which is an n-ary operator that is true if and only if an odd number of arguments is true). So the same XOR concept above applies similarly to larger RAID arrays with parity, using any number of disks. In the case of a RAID 3 array of 12 drives, 11 drives participate in the XOR calculation shown above and yield a value that is then stored on the dedicated parity drive. Extensions and variations on the parity bit mechanism, such as "double," "dual," or "diagonal" parity, are used in RAID-DP . A parity track was present on the first magnetic-tape data storage in 1951. Parity in this form, applied across multiple parallel signals, is known as a transverse redundancy check . This can be combined with parity computed over multiple bits sent on a single signal, a longitudinal redundancy check . In a parallel bus, there is one longitudinal redundancy check bit per parallel signal. Parity was also used on at least some paper-tape ( punched tape ) data entry systems (which preceded magnetic-tape systems). On the systems sold by British company ICL (formerly ICT) the 1-inch-wide (25 mm) paper tape had 8 hole positions running across it, with the 8th being for parity. 7 positions were used for the data, e.g., 7-bit ASCII. The 8th position had a hole punched in it depending on the number of data holes punched.
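The RAID parity arithmetic above can be replayed directly with XOR on integers. In this minimal sketch, the parity byte and the rebuilt Drive 2 byte are the values quoted in the example; the Drive 1 byte is an assumed value chosen to be consistent with that quoted parity:

drive1 = 0b01101101                    # assumed, so that drive1 XOR drive2
drive2 = 0b11010100                    # reproduces the parity quoted above

parity = drive1 ^ drive2               # stored on the dedicated parity drive
print(f"{parity:08b}")                 # 10111001

rebuilt = drive1 ^ parity              # Drive 2 fails: XOR the survivors
assert rebuilt == drive2
print(f"{rebuilt:08b}")                # 11010100, the array is repaired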
https://en.wikipedia.org/wiki/Parity_bit
Parity measurement (also referred to as operator measurement ) is a procedure in quantum information science used for error detection in quantum qubits. A parity measurement checks the equality of two qubits to return a true or false answer, which can be used to determine whether a correction needs to occur. [ 1 ] Additional measurements can be made for a system greater than two qubits. Because parity measurement does not measure the state of individual qubits but rather gets information about the whole state, it is considered an example of a joint measurement. Joint measurements do not have the consequence of destroying the original state of a qubit as normal quantum measurements do. [ 2 ] Mathematically speaking, parity measurements are used to project a state into an eigenstate of an operator and to acquire its eigenvalue. Parity measurement is an essential concept of quantum error correction . From the parity measurement, an appropriate unitary operation can be applied to correct the error without knowing the beginning state of the qubit. [ 3 ] A qubit is a two-level system, and when we measure one qubit, we can have either 1 or 0 as a result. One corresponds to odd parity, and zero corresponds to even parity. This is what a parity check is. This idea can be generalized beyond single qubits, and it is useful in quantum error correction (QEC). The idea of parity checks in QEC is to have just the parity information of multiple data qubits over one (auxiliary) qubit without revealing any other information. Any unitary can be used for the parity check. If we want to have the parity information of a valid quantum observable U , we need to apply the controlled-U gates between the ancilla qubit and the data qubits sequentially. For example, to make a parity check measurement in the X basis, we apply CNOT gates between the ancilla qubit and the data qubits sequentially, since the controlled gate in this case is a CNOT (CX) gate. [ 4 ] The state of the ancillary qubit is then used to determine either even or odd parity of the qubits. When the qubits of the input states are equal, an even parity will be measured, indicating that no error has occurred. When the qubits are unequal, an odd parity will be measured, indicating a single bit-flip error. [ 5 ] With more than two qubits, additional parity measurements can be performed to determine if the qubits are the same value, and if not, to find which is the outlier. For example, in a system of three qubits, one can first perform a parity measurement on the first and second qubit, and then on the first and third qubit. Specifically, one is measuring Z ⊗ Z ⊗ I {\displaystyle Z\otimes Z\otimes I} to determine if an X {\displaystyle X} error has occurred on the first two qubits, and then Z ⊗ I ⊗ Z {\displaystyle Z\otimes I\otimes Z} to determine if an X {\displaystyle X} error has occurred on the first and third qubits. In a circuit, an ancillary qubit is prepared in the | 0 ⟩ {\displaystyle |0\rangle } state. During measurement, a CNOT gate is performed on the ancillary bit dependent on the first qubit being checked, followed by a second CNOT gate performed on the ancillary bit dependent on the second qubit being checked. If these qubits are the same, the double CNOT gates will revert the ancillary qubit to its initial | 0 ⟩ {\displaystyle |0\rangle } state, which indicates even parity.
If these qubits are not the same, the double CNOT gates will alter the ancillary qubit to the opposite | 1 ⟩ {\displaystyle |1\rangle } state, which indicates odd parity. [ 1 ] Looking at the ancillary qubits, a corresponding correction can be performed. Alternatively, the parity measurement can be thought of as a projection of a qubit state into an eigenstate of an operator, and as a means of acquiring its eigenvalue. For the Z ⊗ Z ⊗ I {\displaystyle Z\otimes Z\otimes I} measurement, checking the ancillary qubit in the basis | 0 ⟩ ± | 1 ⟩ {\displaystyle |0\rangle \pm \ |1\rangle } will return the eigenvalue of the measurement. If the eigenvalue here is measured to be +1, this indicates even parity of the bits without error. If the eigenvalue is measured to be −1, this indicates odd parity of the bits with a bit-flip error. Alice, a sender, wants to transmit a qubit to Bob, a receiver. The state of any qubit that Alice would wish to send can be written as a | 0 ⟩ + b | 1 ⟩ {\displaystyle a\ |0\rangle +b\ |1\rangle } where a {\displaystyle a\ } and b {\displaystyle b\ } are coefficients. Alice encodes this into three qubits, so that the initial state she transmits is a | 000 ⟩ + b | 111 ⟩ {\displaystyle a\ |000\rangle +b\ |111\rangle } . Following noise in the channel, the three-qubit state may be corrupted, with each possible single-qubit bit flip occurring with a corresponding probability. [ 1 ] A parity measurement can be performed on the altered state, with two ancillary qubits storing the measurement. First, the first and second qubits' parity is checked. If they are equal, a | 0 ⟩ {\displaystyle |0\rangle } is stored in the first ancillary qubit. If they are not equal, a | 1 ⟩ {\displaystyle |1\rangle } is stored in the first ancillary qubit. The same action is performed comparing the first and third qubits, with the check being stored in the second ancillary qubit. Importantly, we do not actually need to know the input qubit state, and we can perform the CNOT operations indicating the parity without this knowledge. The ancillary qubits are what indicate which bit has been altered, and the σ x {\displaystyle \sigma _{x}} correction operation can be performed as needed. [ 1 ] An easy way to visualize this is as a three-stage circuit. First, the input state | ψ ⟩ {\displaystyle |\psi \rangle } is encoded into 3 qubits; then parity checks are performed, with subsequent error correction based on the results stored in the ancilla qubits. Finally, decoding is performed to get back to the same basis as the input state. A parity check matrix for a quantum circuit can also be constructed using these principles. For some message x encoded as Gx , where G corresponds to the generator matrix , the parity-check matrix H (containing 0's and 1's) satisfies H ( Gx ) = 0 in a situation where there is no error. However, if an error occurs at one component, then the pattern in the errors can be used to find which bit is incorrect. [ 3 ] Two types of parity measurement are indirect and direct. Indirect parity measurements coincide with the typical way we think of parity measurement as described above, by measuring an ancilla qubit to determine the parity of the input bits. Direct parity measurements differ from the previous type in that a common mode with the parities coupled to the qubits is measured, without the need for an ancilla qubit. While indirect parity measurements can put a strain on experimental capacity, direct measurements may interfere with the fidelity of the initial states. [ 6 ]
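The two-CNOT parity check described above can be simulated with plain state vectors. A minimal numpy sketch (helper names are illustrative): two data qubits, one ancilla prepared in |0⟩, and a CNOT from each data qubit onto the ancilla, after which the ancilla reads the XOR of the data bits:

import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

def embed(ops):
    """Tensor a list of single-qubit operators into one register operator."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    ops0 = [I2] * n; ops0[control] = P0
    ops1 = [I2] * n; ops1[control] = P1; ops1[target] = X
    return embed(ops0) + embed(ops1)

circuit = cnot(1, 2) @ cnot(0, 2)      # data qubits 0 and 1; ancilla qubit 2

for b0 in "01":
    for b1 in "01":
        state = np.zeros(8); state[int(b0 + b1 + "0", 2)] = 1.0
        out = format(np.argmax(np.abs(circuit @ state)), "03b")
        print(b0, b1, "-> ancilla", out[2])   # 0 for equal bits, 1 otherwise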
More generally, given a Hermitian and unitary operator U {\displaystyle U} (whose eigenvalues are ± 1 {\displaystyle \pm 1} ) and a state | ψ ⟩ {\displaystyle |\psi \rangle } , a circuit consisting of a Hadamard gate on an ancilla, a controlled- U gate, and a second Hadamard gate performs a parity measurement of U {\displaystyle U} . After the first Hadamard gate, the state of the circuit is ( 1 / √ 2 ) ( | 0 ⟩ + | 1 ⟩ ) ⊗ | ψ ⟩ . After applying the controlled- U gate, the state of the circuit evolves to ( 1 / √ 2 ) ( | 0 ⟩ ⊗ | ψ ⟩ + | 1 ⟩ ⊗ U | ψ ⟩ ) . After applying the second Hadamard gate, the state of the circuit turns into ( 1 / 2 ) ( | 0 ⟩ ⊗ ( | ψ ⟩ + U | ψ ⟩ ) + | 1 ⟩ ⊗ ( | ψ ⟩ − U | ψ ⟩ ) ) . If the state of the top qubit after measurement is | 0 ⟩ {\displaystyle |0\rangle } , then | ϕ ⟩ = | ψ ⟩ + U | ψ ⟩ {\displaystyle |\phi \rangle =|\psi \rangle +U|\psi \rangle } ; which is the + 1 {\displaystyle +1} eigenstate of U {\displaystyle U} . If the state of the top qubit is | 1 ⟩ {\displaystyle |1\rangle } , then | ϕ ⟩ = | ψ ⟩ − U | ψ ⟩ {\displaystyle |\phi \rangle =|\psi \rangle -U|\psi \rangle } ; which is the − 1 {\displaystyle -1} eigenstate of U {\displaystyle U} . [ 5 ] In experiments, parity measurements are not only a mechanism for quantum error correction, but they can also help combat non-ideal conditions. Besides the possibility of bit-flip errors, there is an additional likelihood of errors as a result of leakage. This phenomenon is due to qubits becoming excited into unused higher-energy states. It has been demonstrated in superconducting transmon qubits that parity measurements can be applied repetitively during quantum error correction to remove leakage errors. [ 7 ] Repetitive parity measurements can be used to stabilize an entangled state and prevent leakage errors (which normally is not possible with typical quantum error correction), but the first group to accomplish this did so only in 2020, by performing interleaved XX and ZZ checks, which can ultimately tell whether an X (bit), Y (iXZ), or Z (phase) flip error occurred. The outcomes of these parity measurements of ancilla qubits are used with hidden Markov models to complete leakage detection and correction. [ 8 ]
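The Hadamard / controlled-U / Hadamard sequence just derived can also be checked numerically. In this minimal sketch (U and the input state are arbitrary illustrative choices, with U = Z on a single qubit), the probabilities of reading the ancilla as |0⟩ or |1⟩ come out as the weights of |ψ⟩ in the +1 and −1 eigenspaces of U:

import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])               # Hermitian, unitary, eigenvalues +/-1

psi = np.array([0.6, 0.8])             # normalized input state

state = np.kron(H @ np.array([1.0, 0.0]), psi)    # ancilla H|0> tensor psi
cU = np.block([[np.eye(2), np.zeros((2, 2))],     # controlled-U, with the
               [np.zeros((2, 2)), Z]])            # ancilla as control qubit
state = np.kron(H, np.eye(2)) @ (cU @ state)

p0 = np.linalg.norm(state[:2]) ** 2    # ancilla |0>: +1-eigenspace weight
p1 = np.linalg.norm(state[2:]) ** 2    # ancilla |1>: -1-eigenspace weight
print(round(p0, 6), round(p1, 6))      # 0.36 0.64 for this choice of psi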
https://en.wikipedia.org/wiki/Parity_measurement
In mathematics , when X is a finite set with at least two elements, the permutations of X (i.e. the bijective functions from X to X ) fall into two classes of equal size: the even permutations and the odd permutations . If any total ordering of X is fixed, the parity ( oddness or evenness ) of a permutation σ {\displaystyle \sigma } of X can be defined as the parity of the number of inversions for σ , i.e., of pairs of elements x , y of X such that x < y and σ ( x ) > σ ( y ) . The sign , signature , or signum of a permutation σ is denoted sgn( σ ) and defined as +1 if σ is even and −1 if σ is odd. The signature defines the alternating character of the symmetric group S n . Another notation for the sign of a permutation is given by the more general Levi-Civita symbol ( ε σ ), which is defined for all maps from X to X , and has value zero for non-bijective maps . The sign of a permutation can be explicitly expressed as sgn( σ ) = (−1)^ N ( σ ) , where N ( σ ) is the number of inversions in σ . Alternatively, the sign of a permutation σ can be defined from its decomposition into the product of transpositions as sgn( σ ) = (−1)^ m , where m is the number of transpositions in the decomposition. Although such a decomposition is not unique, the parity of the number of transpositions in all decompositions is the same, implying that the sign of a permutation is well-defined . [ 1 ] Consider the permutation σ of the set {1, 2, 3, 4, 5} defined by σ ( 1 ) = 3 , {\displaystyle \sigma (1)=3,} σ ( 2 ) = 4 , {\displaystyle \sigma (2)=4,} σ ( 3 ) = 5 , {\displaystyle \sigma (3)=5,} σ ( 4 ) = 2 , {\displaystyle \sigma (4)=2,} and σ ( 5 ) = 1. {\displaystyle \sigma (5)=1.} In one-line notation , this permutation is denoted 34521. It can be obtained from the identity permutation 12345 by three transpositions: first exchange the numbers 2 and 4, then exchange 3 and 5, and finally exchange 1 and 3. This shows that the given permutation σ is odd. Following the method of the cycle notation article, this could be written, composing from right to left, as σ = (1 3)(3 5)(2 4). There are many other ways of writing σ as a composition of transpositions, for instance σ = (1 3)(3 5)(2 4)(1 2)(1 2), with five transpositions, but it is impossible to write it as a product of an even number of transpositions. The identity permutation is an even permutation. [ 1 ] An even permutation can be obtained as the composition of an even number (and only an even number) of exchanges (called transpositions ) of two elements, while an odd permutation can be obtained by (only) an odd number of transpositions. The following rules follow directly from the corresponding rules about addition of integers: the composite of two even permutations is even, the composite of two odd permutations is even, and the composite of an even and an odd permutation (in either order) is odd. [ 1 ] From these it follows that the inverse of every even permutation is even, and the inverse of every odd permutation is odd. Considering the symmetric group S n of all permutations of the set {1, ..., n }, we can conclude that the map that assigns to every permutation its signature is a group homomorphism . [ 2 ] Furthermore, we see that the even permutations form a subgroup of S n . [ 1 ] This is the alternating group on n letters, denoted by A n . [ 3 ] It is the kernel of the homomorphism sgn. [ 4 ] The odd permutations cannot form a subgroup, since the composite of two odd permutations is even, but they form a coset of A n (in S n ). [ 5 ] If n > 1 , then there are just as many even permutations in S n as there are odd ones; [ 3 ] consequently, A n contains n ! /2 permutations. (The reason is that if σ is even then (1 2) σ is odd, and if σ is odd then (1 2) σ is even, and these two maps are inverse to each other.) [ 3 ]
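The worked example above is easy to confirm in code. This minimal Python sketch (function names are illustrative) counts the inversions of 34521 and cross-checks against the permutation-matrix determinant method mentioned below:

from itertools import combinations
import numpy as np

def sign_by_inversions(perm):
    """sgn(sigma) = (-1)**N(sigma), where N counts inversions."""
    n_inv = sum(perm[i] > perm[j]
                for i, j in combinations(range(len(perm)), 2))
    return -1 if n_inv % 2 else 1

def sign_by_determinant(perm):
    """The determinant of the permutation matrix equals the sign."""
    n = len(perm)
    M = np.zeros((n, n))
    for i, image in enumerate(perm):   # one-line notation, 1-based values
        M[image - 1, i] = 1.0
    return round(np.linalg.det(M))

sigma = (3, 4, 5, 2, 1)                # the permutation 34521
print(sign_by_inversions(sigma))       # -1: seven inversions, so sigma is odd
print(sign_by_determinant(sigma))      # -1: the two methods agree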
A cycle is even if and only if its length is odd. This follows from formulas like ( a b c d e ) = ( a e ) ( a d ) ( a c ) ( a b ) , in which a cycle of odd length 5 is written as a product of an even number (4) of transpositions. In practice, in order to determine whether a given permutation is even or odd, one writes the permutation as a product of disjoint cycles. The permutation is odd if and only if this factorization contains an odd number of even-length cycles. Another method for determining whether a given permutation is even or odd is to construct the corresponding permutation matrix and compute its determinant. The value of the determinant is the same as the sign of the permutation. Every permutation of odd order must be even. The permutation (1 2)(3 4) in A 4 shows that the converse is not true in general. This section presents proofs that the parity of a permutation σ can be defined in two equivalent ways: as the parity of the number of inversions in σ (under any ordering), or as the parity of the number of transpositions into which σ can be decomposed. Let σ be a permutation on a ranked domain S . Every permutation can be produced by a sequence of transpositions (2-element exchanges). Let σ = T 1 T 2 ... T k be one such decomposition. We want to show that the parity of k is equal to the parity of the number of inversions of σ . Every transposition can be written as a product of an odd number of transpositions of adjacent elements, e.g. ( 1 3 ) = ( 2 3 ) ( 1 2 ) ( 2 3 ) . Generally, we can write the transposition ( i i + d ) on the set {1, ..., i , ..., i + d , ...} as the composition of 2 d − 1 adjacent transpositions by recursion on d : the base case d = 1 is itself an adjacent transposition, and otherwise ( i i + d ) = ( i + d − 1 i + d ) ( i i + d − 1 ) ( i + d − 1 i + d ) . If we decompose in this way each of the transpositions T 1 ... T k above, we get the new decomposition σ = A 1 A 2 ... A m , where all of the A 1 ... A m are adjacent. Also, the parity of m is the same as that of k . This is a fact: for every permutation τ and adjacent transposition a , aτ has either one fewer or one more inversion than τ . In other words, the parity of the number of inversions of a permutation is switched when composed with an adjacent transposition. Therefore, the parity of the number of inversions of σ is precisely the parity of m , which is also the parity of k . This is what we set out to prove. An alternative proof uses the Vandermonde polynomial P ( x 1 , ..., x n ) = ∏ i < j ( x i − x j ) . So for instance in the case n = 3 , we have P ( x 1 , x 2 , x 3 ) = ( x 1 − x 2 ) ( x 1 − x 3 ) ( x 2 − x 3 ) . Now for a given permutation σ of the numbers {1, ..., n }, we define sgn( σ ) = P ( x σ (1) , ..., x σ ( n ) ) / P ( x 1 , ..., x n ) . Since the polynomial P ( x σ ( 1 ) , … , x σ ( n ) ) {\displaystyle P(x_{\sigma (1)},\dots ,x_{\sigma (n)})} has the same factors as P ( x 1 , … , x n ) {\displaystyle P(x_{1},\dots ,x_{n})} except for their signs, it follows that sgn( σ ) is either +1 or −1. Furthermore, if σ and τ are two permutations, we see that sgn( στ ) = sgn( σ ) sgn( τ ) . A third approach uses the presentation of the group S n in terms of generators τ 1 , ..., τ n −1 and relations τ i 2 = 1 for all i , τ i τ j = τ j τ i for all | i − j | ≥ 2 , and ( τ i τ i +1 ) 3 = 1 ; since every relation replaces a word by another of the same length parity, the parity of the length of any word representing a given permutation is well defined and gives its sign. Recall that a pair x , y such that x < y and σ ( x ) > σ ( y ) is called an inversion. We want to show that the count of inversions has the same parity as the count of 2-element swaps. To do that, we can show that every swap changes the parity of the count of inversions, no matter which two elements are being swapped and what permutation has already been applied. Suppose we want to swap the i th and the j th element. Clearly, inversions formed by i or j with an element outside of [ i , j ] will not be affected. For the n = j − i − 1 elements within the interval ( i , j ) , assume v i of them form inversions with i and v j of them form inversions with j . If i and j are swapped, those v i inversions with i are gone, but n − v i inversions are formed. The count of inversions i gained is thus n − 2 v i , which has the same parity as n . Similarly, the count of inversions j gained also has the same parity as n . Therefore, the count of inversions gained by both combined has the same parity as 2 n or 0.
Now if we count the inversions gained (or lost) by swapping the i th and the j th element, we can see that this swap changes the parity of the count of inversions, since we also add (or subtract) 1 to the number of inversions gained (or lost) for the pair ( i , j ) . Consider the elements that are sandwiched by the two elements of a transposition. Each one lies completely above, completely below, or in between the two transposition elements. An element that is either completely above or completely below contributes nothing to the inversion count when the transposition is applied. Elements in-between contribute 2 {\displaystyle 2} . Together with the change of 1 from the transposed pair itself, the total change in the inversion count is odd, so the parity flips. The parity of a permutation of n {\displaystyle n} points is also encoded in its cycle structure . Let σ = ( i 1 i 2 ... i r +1 )( j 1 j 2 ... j s +1 )...( ℓ 1 ℓ 2 ... ℓ u +1 ) be the unique decomposition of σ into disjoint cycles , which can be composed in any order because they commute. A cycle ( a b c ... x y z ) involving k + 1 points can always be obtained by composing k transpositions (2-cycles), ( a b c ... x y z ) = ( a b ) ( b c ) ... ( x y ) ( y z ) , so call k the size of the cycle, and observe that, under this definition, transpositions are cycles of size 1. From a decomposition into m disjoint cycles we can obtain a decomposition of σ into k 1 + k 2 + ... + k m transpositions, where k i is the size of the i th cycle. The number N ( σ ) = k 1 + k 2 + ... + k m is called the discriminant of σ , and can also be computed as n minus the number of disjoint cycles, if we take care to include the fixed points of σ as 1-cycles. Suppose a transposition ( a b ) is applied after a permutation σ . When a and b are in different cycles of σ , then the transposition merges those two cycles into one, and if a and b are in the same cycle of σ , then the transposition splits that cycle into two. In either case, it can be seen that N (( a b ) σ ) = N ( σ ) ± 1 , so the parity of N (( a b ) σ ) will be different from the parity of N ( σ ). If σ = t 1 t 2 ... t r is an arbitrary decomposition of a permutation σ into transpositions, by applying the r transpositions t 1 {\displaystyle t_{1}} after t 2 after ... after t r after the identity (whose N is zero), we observe that N ( σ ) and r have the same parity. By defining the parity of σ as the parity of N ( σ ), a permutation that has an even-length decomposition is an even permutation and a permutation that has an odd-length decomposition is an odd permutation. Parity can be generalized to Coxeter groups : one defines a length function ℓ( v ), which depends on a choice of generators (for the symmetric group, adjacent transpositions ), and then the function v ↦ (−1) ℓ( v ) gives a generalized sign map.
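The cycle-structure recipe above also translates directly into code. A minimal sketch (the name is illustrative) computing the discriminant N(σ) as n minus the number of disjoint cycles, with fixed points counted as 1-cycles:

def sign_by_cycles(perm):
    """sgn = (-1)**N with N = n - (number of disjoint cycles)."""
    n, seen, cycles = len(perm), set(), 0
    for start in range(n):
        if start in seen:
            continue
        cycles += 1
        i = start
        while i not in seen:           # walk one disjoint cycle
            seen.add(i)
            i = perm[i] - 1            # one-line notation, 1-based values
    return -1 if (n - cycles) % 2 else 1

# 34521 decomposes as (1 3 5)(2 4): N = 5 - 2 = 3, so the sign is -1.
print(sign_by_cycles((3, 4, 5, 2, 1)))   # -1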
https://en.wikipedia.org/wiki/Parity_of_a_permutation
The Park Grass Experiment is a biological study originally set up to test the effect of fertilizers and manures on hay yields . The scientific experiment is located at the Rothamsted Research in the English county of Hertfordshire , and is notable as one of the longest-running experiments of modern science, as it was initiated in 1856 and has been continually monitored ever since. The experiment was originally designed to answer agricultural questions but has since proved an invaluable resource for studying natural selection and biodiversity . The treatments under study were found to be affecting the botanical make-up of the plots and the ecology of the 28,000-square-metre (6.9-acre) field and it has been studied ever since. In spring, the field is a colourful tapestry of flowers and grasses, some plots still having the wide range of plants that most meadows probably contained hundreds of years ago. Over its history, Park Grass has:
https://en.wikipedia.org/wiki/Park_Grass_Experiment
Parkeol is a relatively uncommon sterol secondary metabolite found mostly in plants, particularly noted in Butyrospermum parkii (now called Vitellaria paradoxa, or the shea tree). [1] It can be synthesized as a minor product by several oxidosqualene cyclase enzymes, and is the sole product of the enzyme parkeol synthase. [2] Parkeol is the dominant sterol found in the planctomycete Gemmata obscuriglobus, a rare example of a sterol-synthesizing prokaryote. The only other sterol identified in this organism is lanosterol, a key component of the sterol biosynthetic pathway in animals and fungi; this relatively limited sterol repertoire may resemble the early evolution of sterol synthesis, which is ubiquitous in eukaryotes. [3]
https://en.wikipedia.org/wiki/Parkeol
The Parker variable wing is a wing configuration for biplane or triplane aircraft designed by H.F. Parker in 1920. [1] The design provides supplementary lift while landing or taking off. In the biplane configuration, the lower airfoil is rigid and the upper airfoil is flexible. At a high angle of attack, the flow over the lower airfoil is bent upward and creates an upward force on the lower surface of the upper airfoil. This upward force pushes the flexible section upward. The flexible wing section is held at points A and B; the trailing edge is rigid and can rotate about point B. Due to this effect the camber of the airfoil is increased, and hence the lift it creates is increased.
https://en.wikipedia.org/wiki/Parker_variable_wing
In mathematics, especially the field of group theory, the Parker vector is an integer vector that describes a permutation group in terms of the cycle structure of its elements, defined by Richard A. Parker. The Parker vector P of a permutation group G acting on a set of size n is the vector whose kth component, for k = 1, ..., n, is given by P_k = (k/|G|) Σ_{g ∈ G} c_k(g), where c_k(g) is the number of k-cycles in the disjoint-cycle decomposition of g. For the group of even permutations on three elements, the Parker vector is (1,0,2). The group of all permutations on three elements has Parker vector (1,1,1). For any of the subgroups of the above with just two elements, the Parker vector is (2,1,0). The trivial subgroup has Parker vector (3,0,0).
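The components P_k = (k/|G|) Σ_{g∈G} c_k(g) can be computed by brute force for small permutation groups. A minimal Python sketch (function names are illustrative) that reproduces the examples above:

```python
from itertools import permutations

def cycle_counts(perm):
    """counts[k] = number of k-cycles of perm, where perm[i] is the image of i."""
    n = len(perm)
    seen = [False] * n
    counts = [0] * (n + 1)
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            counts[length] += 1
    return counts

def parker_vector(group):
    """P_k = (k / |G|) * sum over g in G of the number of k-cycles of g.
    Integer division is exact when `group` really is a group."""
    n = len(next(iter(group)))
    return [k * sum(cycle_counts(g)[k] for g in group) // len(group)
            for k in range(1, n + 1)]

S3 = list(permutations(range(3)))
# Even permutations: n minus the number of cycles is even.
A3 = [g for g in S3 if (len(g) - sum(cycle_counts(g)[1:])) % 2 == 0]
print(parker_vector(S3))   # [1, 1, 1]
print(parker_vector(A3))   # [1, 0, 2]
```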
https://en.wikipedia.org/wiki/Parker_vector
Phosphate conversion coating is a chemical treatment applied to steel parts that creates a thin adhering layer of iron, zinc, or manganese phosphates to improve corrosion resistance or lubrication, or as a foundation for subsequent coatings or painting. [1][2][3] It is one of the most common types of conversion coating. The process is also called phosphate coating, phosphatization, [4] phosphatizing, or phosphating. It is also known by the trade name Parkerizing, especially when applied to firearms and other military equipment. [5]:393 A phosphate coating is usually obtained by applying to the steel part a dilute solution of phosphoric acid, possibly with soluble iron, zinc, and/or manganese salts. The solution may be applied by sponging, spraying, or immersion. [6] Phosphate conversion coatings can also be used on aluminium, zinc, cadmium, silver and tin. [7][8] The phosphatizing of firearms was discovered around 1910, when it was found that converting the surface of steel to a phosphate gives it significant corrosion resistance. [5]:393 It was very popular in the USA until the 1940s, when more modern but similar methods of metal finishing were introduced. [5]:393 The main types of phosphate coatings are manganese, iron, and zinc. [9] The process takes advantage of the low solubility of phosphates at medium or high pH. The bath is a solution of phosphoric acid (H3PO4), containing the desired iron, zinc or manganese cations and other additives. [10] The acid reacts with the iron metal, producing hydrogen and iron cations: Fe + 2 H3PO4 → Fe2+ + 2 H2PO4− + H2. Because this reaction consumes protons, it raises the pH of the solution in the immediate vicinity of the surface, until eventually the phosphates become insoluble and are deposited over it. The acid–metal reaction also creates iron phosphate locally, which may also be deposited. When depositing zinc phosphate or manganese phosphate, the additional iron phosphate may be an undesired impurity. The bath often includes an oxidizer, such as sodium nitrite (NaNO2), to consume the hydrogen gas (H2), which would otherwise form a layer of tiny bubbles over the surface and slow down the reaction. [10] The main phosphating step can be preceded by an "activation" bath that creates tiny particles of titanium compounds on the surface. [10] The performance of a phosphate coating depends on its crystal structure as well as its thickness. A dense microcrystalline structure with low porosity is usually best for corrosion resistance or subsequent painting. A coarse grain structure impregnated with oil may be best for wear resistance. These factors can be controlled by varying the bath concentration, composition, temperature, and time. [6] Parkerizing is a method of protecting a steel surface from corrosion and increasing its resistance to wear through the application of a chemical phosphate conversion coating. It was usually applied to firearms. [5]:393 Parkerizing is usually considered to be an improved zinc or manganese phosphating process, and not an improved iron phosphating process, although some use the term Parkerizing generically for all phosphating (or phosphatizing) coatings, including the iron phosphating process. Bonderizing, phosphating, and phosphatizing are other terms associated with the Parkerizing process; they were often used for finishes of car parts, as these gave a finer grain on the surface. [5]:394 It has also been known as pickling in the context of wrought iron and steel.
[11] Parkerizing is commonly used on firearms as a more effective alternative to bluing, an earlier-developed chemical conversion coating. It is also used extensively on automobiles to protect unfinished metal parts from corrosion. The Parkerizing process cannot be used to protect non-ferrous metals such as aluminium, brass, or copper, though these can be chemically polished or etched instead. It similarly cannot be applied to steels containing a large amount of nickel, or to stainless steel. Passivation can be used for protecting other metals. Development of the process was started in England and continued by the Parker family in the United States. The terms Parkerizing, Parkerize, and Parkerized are all registered U.S. trademarks of Henkel Adhesives Technologies, although the terminology has largely passed into generic use. The process was first used on a large scale in the manufacture of firearms for the United States military during World War II. [12] The earliest work on phosphating processes was done by British inventors William Alexander Ross, British patent 3119, in 1869, and by Thomas Watts Coslett, British patent 8667, in 1906. Coslett, of Birmingham, England, subsequently filed a patent based on this same process in America in 1907, which was granted U.S. patent 870,937 in 1907. It essentially provided an iron phosphating process, using phosphoric acid. An improved patent application for manganese phosphating, based in large part on this early British iron phosphating process, was filed in the US in 1912 and issued in 1913 to Frank Rupert Granville Richards as U.S. patent 1,069,903. Clark W. Parker acquired the rights to Coslett's and Richards' U.S. patents, and experimented in the family kitchen with these and other rust-resisting formulations. The ultimate result was that Parker, working together with his son Wyman C. Parker, set up the Parker Rust-Proof Phosphating Company of America in 1915. R. D. Colquhoun of the Parker Rust-Proof Phosphating Company of America then filed another improved phosphating patent application in 1919. This patent was issued in 1919 as U.S. patent 1,311,319, for an improved manganese phosphating (Parkerizing) technique. Similarly, Baker and Dingman of the Parker Rust-Proof Company filed an improved manganese phosphating (Parkerizing) process patent in 1928 that reduced the processing time to 1⁄3 of the original time by heating the solution to a temperature in the precisely controlled range of 500 to 550 °F (260 to 288 °C). This patent was issued as U.S. patent 1,761,186 in 1930. Manganese phosphating, even with these process improvements, still required the use of expensive and difficult-to-obtain manganese compounds. Subsequently, an alternative technique was developed by the Parker Company that used easier-to-obtain, less expensive compounds: zinc phosphating in place of manganese phosphating. The patent for this zinc phosphating process (using strategic compounds that would remain available in America during a war) was granted to inventor Romig of the American Chemical Paint Company in 1938 as U.S. patent 2,132,883, just prior to the loss of easy access to manganese compounds that occurred during World War II. Somewhat analogous to the manganese phosphating process improvements discovered by Baker and Dingman, a similarly improved method was found for the zinc phosphating process as well.
This improvement was discovered by Darsey of the Parker Rust Proof Company, who filed a patent in February 1941 that was granted in August 1942 as U.S. patent 2,293,716, improving the zinc phosphatizing (Parkerizing) process further. He discovered that adding copper reduced the acidity requirement, and that adding a chlorate to the nitrates already in use would additionally permit running the process at a much lower temperature, in the range of 115 to 130 °F (46 to 54 °C), further reducing the cost of running the process. With these process improvements, the end result was that a low-temperature (energy-efficient) zinc phosphating (Parkerizing) process, using strategic materials to which the United States had ready access, became the most common phosphating process used during World War II to protect American war materials such as firearms and planes from rust and corrosion. Glock Ges.m.b.H., an Austrian firearms manufacturer, uses a black Parkerizing process as a topcoat to a Tenifer process to protect the slides of the pistols it manufactures. After the Tenifer process, a black Parkerized finish is applied, and the slide remains protected even if the Parkerized finish were to wear off. Used this way, Parkerizing serves as a protective and decorative finish applied over other, underlying techniques of metal protection. Various similar recipes for stovetop kitchen Parkerizing circulate in gun publications at times, and Parkerizing kits are sold by major gun-parts distributors such as Brownells. Phosphate coatings are also commonly used as an effective surface preparation for further coating and/or painting, providing excellent adhesion and electric isolation. [6] Phosphate coatings are often used to protect steel parts against rusting and other types of corrosion. However, they are somewhat porous, so this use requires impregnating the coating with oil, paint, or some other sealing substance. The result is a tightly adhering dielectric (electrically insulating) layer that can protect the part from electrochemical and under-paint corrosion. [6] Zinc and manganese coatings are used to help break in components subject to wear [1] and help prevent galling. [6] While a zinc phosphate coating by itself is somewhat abrasive, it can be turned into a lubricating layer for cold forming operations by treatment with sodium stearate (soap). The soap reacts with the phosphate crystals, forming a very thin, insoluble and hydrophobic zinc stearate layer that helps to hold the unreacted sodium stearate even under extreme deformation of the part, such as in wire drawing. [1][13]
https://en.wikipedia.org/wiki/Parkerizing
In mathematics, the Parker–Sochacki method is an algorithm for solving systems of ordinary differential equations (ODEs), developed by G. Edgar Parker and James Sochacki of the James Madison University Mathematics Department. The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form. The Parker–Sochacki method rests on two simple observations: most systems of ODEs can be rewritten as equivalent systems of polynomial ODEs, and for a polynomial system the Maclaurin coefficients of the solution can be computed one at a time from the earlier coefficients using only arithmetic. Several coefficients of the power series are calculated in turn, a time step is chosen, the series is evaluated at that time, and the process repeats. The end result is a high-order piecewise solution to the original ODE problem. The order of the solution desired is an adjustable variable in the program that can change between steps. The order of the solution is limited only by the floating-point representation on the machine running the program; in some cases it can be extended by using arbitrary-precision floating-point numbers or, for special cases, by finding solutions with only integer or rational coefficients. The method requires only addition, subtraction, and multiplication, making it very convenient for high-speed computation. (The only divisions are inverses of small integers, which can be precomputed.) Use of a high order (calculating many coefficients of the power series) is convenient: typically a higher order permits a longer time step without loss of accuracy, which improves efficiency. The order and step size can be easily changed from one step to the next. It is possible to calculate a guaranteed error bound on the solution. Arbitrary-precision floating-point libraries allow this method to compute arbitrarily accurate solutions. With the Parker–Sochacki method, information between integration steps is developed at high order. As the Parker–Sochacki method integrates, the program can be designed to save the power series coefficients that provide a smooth solution between points in time. The coefficients can be saved and used so that polynomial evaluation provides the high-order solution between steps. With most other classical integration methods, one would have to resort to interpolation to get information between integration steps, leading to an increase of error. There is an a priori error bound for a single step with the Parker–Sochacki method. [1] This allows a Parker–Sochacki program to calculate the step size that guarantees that the error is below any non-zero given tolerance. Using this calculated step size with an error tolerance of less than half of the machine epsilon yields a symplectic integration. Most methods for numerically solving ODEs require only the evaluation of derivatives for chosen values of the variables, so systems like MATLAB include implementations of several methods all sharing the same calling sequence. Users can try different methods by simply changing the name of the function called. The Parker–Sochacki method requires more work to put the equations into the proper form, and cannot use the same calling sequence.
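As a minimal illustration of the idea (a sketch of my own, not code from Parker and Sochacki), consider the scalar polynomial ODE y′ = y², y(0) = 1, whose exact solution is 1/(1 − t). Each new Maclaurin coefficient is a Cauchy product of earlier coefficients, so a step uses only addition, multiplication, and division by small integers:

```python
def ps_step(y0, order, dt):
    """One Parker–Sochacki step for y' = y**2 about the current time:
    build Maclaurin coefficients a[0..order], then evaluate at t = dt."""
    a = [y0]
    for k in range(order):
        # Coefficient of t**k in y**2 is the Cauchy product of a with itself.
        cauchy = sum(a[i] * a[k - i] for i in range(k + 1))
        a.append(cauchy / (k + 1))  # term-by-term integration of y' = y**2
    y = 0.0
    for c in reversed(a):           # Horner evaluation of the series at dt
        y = y * dt + c
    return y

# March y' = y**2, y(0) = 1 (exact solution 1/(1 - t)) out to t = 0.5.
y, t, dt, order = 1.0, 0.0, 0.05, 12
for _ in range(10):
    y = ps_step(y, order, dt)
    t += dt
print(y, 1.0 / (1.0 - t))   # both ≈ 2.0
```

Re-expanding the series about each new point is what makes a smooth piecewise-polynomial solution available between steps, and both `order` and `dt` can be changed freely from one step to the next.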
https://en.wikipedia.org/wiki/Parker–Sochacki_method
The Parkes process is a pyrometallurgical industrial process for removing silver from lead during the production of bullion. It is an example of liquid–liquid extraction. The process takes advantage of two liquid-state properties of zinc: zinc is immiscible with lead, and silver is 3000 times more soluble in zinc than it is in lead. When zinc is added to liquid lead that contains silver as a contaminant, the silver preferentially migrates into the zinc. Because the zinc is immiscible in the lead, it remains in a separate layer and is easily removed. The zinc–silver solution is then heated until the zinc vaporizes, leaving nearly pure silver. If gold is present in the liquid lead, it can also be removed and isolated by the same process. [1] The process [2] was patented by Alexander Parkes in 1850. [3][4][5][6] Parkes received two additional patents in 1852. [7] The Parkes process was not at first adopted in the United States, due to the low native production of lead. [8] The problems were overcome during the 1880s, and by 1923 the Parkes process was the only one in use. [9]
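The effect of the 3000:1 solubility ratio can be illustrated with a simple mass-balance sketch. Treating the ratio as an equilibrium partition coefficient K, the fraction of silver remaining in the lead after one contact with a zinc volume fraction f is 1/(1 + K·f). The zinc fractions and staging below are illustrative assumptions, not figures from the source:

```python
K = 3000.0  # assumed partition coefficient: [Ag] in zinc / [Ag] in lead

def silver_left_in_lead(zinc_fraction, stages=1):
    """Fraction of silver still in the lead after `stages` successive
    zinc additions, each of volume `zinc_fraction` relative to the lead."""
    remaining = 1.0
    for _ in range(stages):
        remaining /= 1.0 + K * zinc_fraction
    return remaining

print(silver_left_in_lead(0.01))      # one 1% addition   -> ~3.2% remains
print(silver_left_in_lead(0.01, 3))   # three additions   -> ~0.003% remains
```

Under these assumptions, successive small zinc additions remove far more silver than a single addition of the same total zinc, which is consistent with the process being operated in stages.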
https://en.wikipedia.org/wiki/Parkes_process
Parking guidance and information (PGI) systems, or car park guidance systems, present drivers with dynamic information on parking within controlled areas. The systems combine traffic monitoring, communication, processing and variable-message-sign technologies to provide the service. Modern parking lots utilize a variety of technologies to help motorists find unoccupied parking spaces, locate their vehicle when they return, and improve their experience. These include adaptive lighting, sensors, parking-space indicator lights above every space (red for occupied, green for available, and blue for spaces reserved for the disabled), and indoor positioning systems (IPS). PGI systems are a product of the worldwide initiative for the development of intelligent transportation systems in urban areas. PGI systems can assist in the development of a safe, efficient and environmentally friendly transportation network. [1] PGI systems are designed to aid in the search for vacant parking spaces by directing drivers to car parks where occupancy levels are low. The objective is to reduce search time, which in turn reduces congestion on the surrounding roads for other traffic, with related benefits to air pollution and the ultimate aim of enhancing the urban area. Parking guidance systems have evolved considerably in recent years. Ultrasound and laser technologies provide information on the availability of parking spaces throughout the parking facility. At the same time, new camera-based technologies now make it possible to read the license plate of the vehicle in each parking space. This adds value, since it allows the identification of a specific vehicle in a specific parking space and, in addition, can record incidents occurring in that space. These new methods increase security and revenue for the parking owners. Parking guidance systems (PGS) have different elements: [citation needed]
https://en.wikipedia.org/wiki/Parking_guidance_and_information
The METU Parlar Foundation Science Award (Turkish: ODTÜ Mustafa Parlar Ödülü) is a science award issued by the Middle East Technical University Prof. Dr. Mustafa N. Parlar Foundation, established in 1981 in Ankara, Turkey. It commemorates Professor Mustafa N. Parlar, who served as the Dean of Engineering at the Middle East Technical University. The Foundation's mission is to promote advancements in science and technology and their applications in industry. Annually, it confers several prestigious awards to recognize significant contributions to science and service. These include the Service to Science and Honour Award, the Science Award, and the Service Award. Notable recipients of the Science Award include İoanna Kuçuradi in philosophy, Cahit Arf, Feza Gürsey and Erol Gelenbe in the sciences, as well as Halil İnalcık in history. [1] The METU Prof. Dr. Mustafa N. Parlar Education and Research Foundation was established on January 12, 1981, in memory of Prof. Dr. Mustafa N. Parlar, a scientist and early faculty member of Middle East Technical University. [2] The foundation aims to foster relationships between the university and industry, support research and researchers, provide technical hardware and tools, offer scholarships to METU students and financially assist lecturers. Prof. Dr. Mustafa N. Parlar was a Turkish scientist and educator, born in 1925 in Çamlıhemşin, Rize. He studied at the Illinois Institute of Technology, Northwestern University, and Brooklyn Polytechnic Institute, earning degrees from each institution. Parlar worked as a research engineer and assistant professor in the United States before returning to Türkiye. He held various academic and administrative roles at Middle East Technical University, including Director of Education, Chair of the Department of Electrical Engineering, and Dean of the Faculty of Engineering. He also served as Rector of METU and held positions within the Scientific and Technological Research Council of Turkey, contributing significantly to the fields of energy and telecommunications in Turkey. Parlar was a member of the Grand National Assembly of Turkey from 1973 to 1977 and was instrumental in fostering collaboration between universities and industries. He authored numerous scientific papers, books, and articles, advocating for the application of scientific thought to improve society and quality of life. In his honor, the METU Prof. Dr. Mustafa N. Parlar Education and Research Foundation was established in 1981 to support education and research. [3] Annually, the foundation awards several distinctions to recognize excellence within METU. These include the Honor, Science, Service, Research, and Technology Encouragement Awards. Additionally, it presents the Thesis of the Year Award to outstanding graduate and postgraduate students, the METU Lecturer of the Year Award to distinguished lecturers, the Thesis Advisor Award initiated in the 1996–1997 academic year, and the METU Excellence in Teaching Award, which is given to lecturers who have been recognized as the Year's Lecturer three times. [2] These awards aim to evaluate the contributions of exceptional scientists and practitioners, certify their competencies, and inspire future generations. [7]
https://en.wikipedia.org/wiki/Parlar_Foundation_Science_Award
The Parliamentary Office of Science and Technology (POST) is an impartial research and knowledge exchange service based in the Parliament of the United Kingdom. [1] POST serves both Houses of Parliament (the House of Commons and the House of Lords). It produces concise briefings focusing on topical issues where the research evidence is emerging or particularly complex. POST briefings provide an overview of the latest research evidence and are produced in consultation with experts and stakeholders. Reports are designed to be used by MPs and Peers and are publicly available on its website. [2] POST also helps parliament to draw on the expertise of academics and other experts, and helps academics to understand parliament and contribute to its work. POST is advised by a Board of parliamentarians, external research experts and senior parliamentary staff. [3] Since 1939, the UK Parliamentary and Scientific Committee [4] (P&S), the first parliamentary "All Party Group" of MPs and peers interested in science and technology, had encouraged UK Parliamentarians to explore the implications of scientific developments for society and public policy. As the UK economy became more dependent on technological progress, and the varied effects of technology (especially on the environment) became more apparent, it was felt that UK Parliament needed its own resources on such issues. Parliamentarians not only required access to knowledge and insights into the implications of technology for their constituents and society, but also needed to exercise their scrutiny functions over UK government legislation and administration. This thinking was also influenced by the fact that specialised parliamentary science and technology organisations already existed overseas. P&S members (Sir Ian Lloyd MP, Sir Trevor Skeet MP, Sir Gerry Vaughan MP, Lords Kennet, Gregson and Flowers among others) visited already-established organisations in the US, Germany and France, and this reinforced their view that modern Parliaments needed their own 'intelligence' on science and technology-related issues. Initially, they asked the then Thatcher government to fund such services at Westminster but were asked first to demonstrate a real need. This led to the P&S creating a charitable foundation to raise funds from P&S members. The parliamentary reaction was positive and led to the appointment of a first Director, Dr Michael Norton. In 1989, POST was formally established as a charitable foundation, though not an internal part of Parliament. POST had attracted more resources by 1992, when it recruited three specialist science advisers and began its fellowship programme with the UK research councils. In 1992, the House of Commons Information Committee, supported by the House of Lords, recommended that Parliament should itself fund POST for three years, and a subsequent review in 1995 extended this for a further five years. This was the result of POST demonstrating real interest and demand from MPs and peers. POST's financial reliance on donations from bodies external to Parliament, even those as prestigious as the Royal Society, had always slightly compromised the perceived independence of the office. In 2000, both Houses decided that POST should be established as a permanent bicameral institution, funded exclusively by Parliament. In 2009, POST celebrated its 20th anniversary with, among other events, a conference on "Images of the Future". The keynote participants were the Hon.
Bart Gordon, Chair of the US House of Representatives' Committee on Science and Technology, and Dr Jim Dator of the University of Hawaii Futures Research Centre, along with many other Members and staff of Parliaments across the world. Most parliamentarians do not have a scientific or technological background, but science and technology issues are increasingly integral to public policy. Parliamentarians are bombarded daily with lobbying, public enquiries and media stories about science and technology. These cover diverse areas such as medical advances, environmental issues and global communications. POST helps parliamentarians examine such issues effectively by providing information resources, in-depth analysis and impartial advice. POST works closely with a wide range of organisations involved in science and technology, including select committees, all-party parliamentary groups, government departments, scientific societies, policy think tanks, business, academia and research funders. POST informs parliamentary debate through: POST authors and the Head of POST are responsible for deciding what topics to produce briefings on, sometimes in consultation with parliamentary colleagues and the POST Board. Decisions on what to publish are informed by the resources available to the team, as well as the following factors: Briefings are produced following a process involving a literature review, consultation with experts and stakeholders, and peer review. POST liaises with science and technology organisations across the world. [8] POST is a member of the European Parliamentary Technology Assessment network, which brings together parliamentary organisations throughout Europe, sharing information and working on joint projects. [9] The POST Board provides advice on POST's objectives, outputs and future work programme. It meets quarterly. The Board comprises parliamentarians, external research experts and senior parliamentary officials, including Grant Hill-Cawthorne, the House of Commons Librarian and Managing Director of Research & Information, House of Commons. [11] POST has eight science advisers, covering the fields of biology and health; physical sciences and digital; environment and energy; and social sciences. [12] Science advisers generally have a postgraduate qualification and science policy experience. POST has four knowledge exchange professionals, grouped as the Knowledge Exchange Unit. [13] POST runs fellowship schemes with scientific societies and research councils, whereby PhD students and academics can spend three months or more working in parliament. [14]
https://en.wikipedia.org/wiki/Parliamentary_Office_of_Science_and_Technology
Parrondo's paradox, a paradox in game theory, has been described as: A combination of losing strategies becomes a winning strategy. [1] It is named after its creator, Juan Parrondo, who discovered the paradox in 1996. A more explanatory description is: there exist pairs of games, each with a higher probability of losing than winning, for which it is possible to construct a winning strategy by playing the games alternately. Parrondo devised the paradox in connection with his analysis of the Brownian ratchet, a thought experiment about a machine that can purportedly extract energy from random heat motions, popularized by physicist Richard Feynman. However, the paradox disappears when rigorously analyzed. [2] Winning strategies consisting of various combinations of losing strategies were explored in biology before Parrondo's paradox was published. [3] Consider an example in which there are two points A and B having the same altitude, as shown in Figure 1. In the first case, we have a flat profile connecting them. Here, if we leave some round marbles in the middle that move back and forth in a random fashion, they will roll around randomly but towards both ends with an equal probability. Now consider the second case where we have a saw-tooth-like profile between the two points. Here also, the marbles will roll towards either end depending on the local slope. Now if we tilt the whole profile towards the right, as shown in Figure 2, it is quite clear that both these cases will become biased towards B. Now consider the game in which we alternate the two profiles while judiciously choosing the time between alternating from one profile to the other. When we leave a few marbles on the first profile at point E, they distribute themselves on the plane showing preferential movements towards point B. However, if we apply the second profile when some of the marbles have crossed the point C, but none have crossed point D, we will end up having most marbles back at point E (where we started from initially) but some also in the valley towards point A, given sufficient time for the marbles to roll to the valley. Then we again apply the first profile and repeat the steps (points C, D and E now shifted one step to refer to the final valley closest to A). If no marbles cross point C before the first marble crosses point D, we must apply the second profile shortly before the first marble crosses point D, to start over. It easily follows that eventually we will have marbles at point A, but none at point B. Hence if we define having marbles at point A as a win and having marbles at point B as a loss, we clearly win by alternating (at correctly chosen times) between playing two losing games. A third example of Parrondo's paradox is drawn from the field of gambling. Consider playing two games, Game A and Game B, with the following rules. For convenience, define C_t to be our capital at time t, immediately before we play a game. In Game A, we toss a biased coin (coin 1) and win one unit with probability 1/2 − ε, losing one unit otherwise, where ε is a small positive constant. In Game B, we first check whether our capital C_t is a multiple of some integer M: if it is, we toss a biased coin (coin 2) that wins with probability 1/10 − ε; if it is not, we toss another biased coin (coin 3) that wins with probability 3/4 − ε. It is clear that by playing Game A, we will almost surely lose in the long run. Harmer and Abbott [1] show via simulation that if M = 3 and ε = 0.005, Game B is an almost surely losing game as well. In fact, Game B is a Markov chain, and an analysis of its state transition matrix (again with M = 3) shows that the steady-state probability of using coin 2 is 0.3836, and that of using coin 3 is 0.6164. [4] As coin 2 is selected nearly 40% of the time, it has a disproportionate influence on the payoff from Game B, and results in it being a losing game. However, when these two losing games are played in some alternating sequence - e.g.
two games of A followed by two games of B (AABBAABB...), the combination of the two games is, paradoxically, a winning game. Not all alternating sequences of A and B result in winning games. For example, one game of A followed by one game of B (ABABAB...) is a losing game, while one game of A followed by two games of B (ABBABB...) is a winning game. This coin-tossing example has become the canonical illustration of Parrondo's paradox – two games, both losing when played individually, become a winning game when played in a particular alternating sequence. The apparent paradox has been explained using a number of sophisticated approaches, including Markov chains, [5] flashing ratchets, [6] simulated annealing, [7] and information theory. [8] One way to explain the apparent paradox is as follows: while Game B is a losing game under the probability distribution that results for C_t modulo M when it is played individually, it can be a winning game under other distributions, and interleaving plays of Game A shifts the distribution of C_t modulo M in Game B's favour. The role of M now comes into sharp focus. It serves solely to induce a dependence between Games A and B, so that a player is more likely to enter states in which Game B has a positive expectation, allowing it to overcome the losses from Game A. With this understanding, the paradox resolves itself: the individual games are losing only under a distribution that differs from that which is actually encountered when playing the compound game. In summary, Parrondo's paradox is an example of how dependence can wreak havoc with probabilistic computations made under a naive assumption of independence. A more detailed exposition of this point, along with several related examples, can be found in Philips and Feldman. [9] Parrondo's paradox is used extensively in game theory, and its applications to engineering, population dynamics, [3] financial risk, etc., are areas of active research. Parrondo's games are of little direct practical use, for example for investing in stock markets, [10] as the original games require the payoff from at least one of the interacting games to depend on the player's capital. However, the games need not be restricted to their original form, and work continues on generalizing the phenomenon. Similarities to volatility pumping and the two envelopes problem [11] have been pointed out. Simple finance-textbook models of security returns have been used to prove that individual investments with negative median long-term returns may be easily combined into diversified portfolios with positive median long-term returns. [12] Similarly, a model that is often used to illustrate optimal betting rules has been used to prove that splitting bets between multiple games can turn a negative median long-term return into a positive one. [13] In evolutionary biology, both bacterial random phase variation [14] and the evolution of less accurate sensors [15] have been modelled and explained in terms of the paradox. In ecology, the periodic alternation of certain organisms between nomadic and colonial behaviors has been suggested as a manifestation of the paradox. [16] There has been an interesting application in modelling multicellular survival as a consequence of the paradox, [17] and some interesting discussion of its feasibility. [18][19] Applications of Parrondo's paradox can also be found in reliability theory. [20] In the early literature on Parrondo's paradox, it was debated whether the word 'paradox' is an appropriate description, given that the Parrondo effect can be understood in mathematical terms. The 'paradoxical' effect can be mathematically explained in terms of a convex linear combination.
However, Derek Abbott, a leading researcher on the topic, provides the following answer regarding the use of the word 'paradox' in this context: Is Parrondo's paradox really a "paradox"? This question is sometimes asked by mathematicians, whereas physicists usually don't worry about such things. The first thing to point out is that "Parrondo's paradox" is just a name, just like "Braess's paradox" or "Simpson's paradox." Secondly, as is the case with most of these named paradoxes they are all really apparent paradoxes. People drop the word "apparent" in these cases as it is a mouthful, and it is obvious anyway. So no one claims these are paradoxes in the strict sense. In the wide sense, a paradox is simply something that is counterintuitive. Parrondo's games certainly are counterintuitive—at least until you have intensively studied them for a few months. The truth is we still keep finding new surprising things to delight us, as we research these games. I have had one mathematician complain that the games always were obvious to him and hence we should not use the word "paradox." He is either a genius or never really understood it in the first place. In either case, it is not worth arguing with people like that. [21]
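The coin-tossing games described above are easy to simulate. The following minimal Monte Carlo sketch (the parameter values M = 3 and ε = 0.005 are those given by Harmer and Abbott above; the code itself is only illustrative) shows Games A and B losing individually while the periodic sequence AABB wins on average:

```python
import random

def average_final_capital(pattern, eps=0.005, M=3, rounds=100_000, trials=20):
    """Mean capital after `rounds` plays of a repeating A/B pattern."""
    total = 0
    for seed in range(trials):
        rng = random.Random(seed)
        capital = 0
        for i in range(rounds):
            if pattern[i % len(pattern)] == "A":
                p = 0.5 - eps                 # coin 1
            elif capital % M == 0:
                p = 0.10 - eps                # coin 2, the "bad" coin
            else:
                p = 0.75 - eps                # coin 3, the "good" coin
            capital += 1 if rng.random() < p else -1
        total += capital
    return total / trials

print(average_final_capital("A"))     # negative: Game A loses
print(average_final_capital("B"))     # negative: Game B loses
print(average_final_capital("AABB"))  # positive: the combination wins
```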
https://en.wikipedia.org/wiki/Parrondo's_paradox
In mathematics, the Parry–Daniels map is a function studied in the context of dynamical systems. Typical questions concern the existence of an invariant or ergodic measure for the map. [1] It is named after the English mathematician Bill Parry [2] and the British statistician Henry Daniels, [3] who independently studied the map in papers published in 1962. Given an integer n ≥ 1, let Σ denote the n-dimensional simplex in R^{n+1}, consisting of the points (x_0, x_1, ..., x_n) with every x_i ≥ 0 and x_0 + x_1 + ... + x_n = 1. Let π be a permutation such that x_{π(0)} ≤ x_{π(1)} ≤ ... ≤ x_{π(n)}. Then the Parry–Daniels map is defined by
https://en.wikipedia.org/wiki/Parry–Daniels_map
In mathematics, the Parry–Sullivan invariant (or Parry–Sullivan number) is a numerical quantity of interest in the study of incidence matrices in graph theory, and of certain one-dimensional dynamical systems. It provides a partial classification of non-trivial irreducible incidence matrices. It is named after the English mathematician Bill Parry and the American mathematician Dennis Sullivan, who introduced the invariant in a joint paper published in the journal Topology in 1975. [1][2] Let A be an n × n incidence matrix. Then the Parry–Sullivan number of A is defined to be PS(A) = det(I − A), where I denotes the n × n identity matrix. It can be shown that, for nontrivial irreducible incidence matrices, flow equivalence is completely determined by the Parry–Sullivan number and the Bowen–Franks group.
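Given the matrix, the invariant is a one-line computation. A small Python sketch (the 2 × 2 matrix below is the standard incidence matrix of the golden-mean shift, used purely as an illustration):

```python
import numpy as np

def parry_sullivan(A):
    """PS(A) = det(I - A) for an n-by-n integer incidence matrix A."""
    A = np.asarray(A)
    return int(round(np.linalg.det(np.eye(A.shape[0]) - A)))

# Golden-mean shift: state 1 can go to {1, 2}, state 2 can go to {1}.
print(parry_sullivan([[1, 1], [1, 0]]))   # det([[0, -1], [-1, 1]]) = -1
```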
https://en.wikipedia.org/wiki/Parry–Sullivan_invariant
Pars destruens and pars construens (Latin) are complementary parts of argumentation: the pars destruens is the negative part, which criticizes opposing views, and the pars construens is the positive part, which states one's own position and arguments. The distinction goes back to Francis Bacon and his work Novum Organum (1620), in which he puts forth an inductive method with two parts: a negative part, the pars destruens, that removes prejudices and errors, and a positive part, the pars construens, that is concerned with gaining knowledge and truth.
https://en.wikipedia.org/wiki/Pars_destruens_and_pars_construens
The parsec (symbol: pc) is a unit of length used to measure the large distances to astronomical objects outside the Solar System, approximately equal to 3.26 light-years or 206,265 astronomical units (AU), i.e. 30.9 trillion kilometres (19.2 trillion miles). [a] The parsec unit is obtained by the use of parallax and trigonometry, and is defined as the distance at which 1 AU subtends an angle of one arcsecond [1] (1/3600 of a degree). The nearest star, Proxima Centauri, is about 1.3 parsecs (4.2 light-years) from the Sun: from that distance, the gap between the Earth and the Sun spans slightly less than one arcsecond. [2] Most stars visible to the naked eye are within a few hundred parsecs of the Sun, with the most distant at a few thousand parsecs, and the Andromeda Galaxy at over 700,000 parsecs. [3] The word parsec is a shortened form of "a distance corresponding to a parallax of one second", coined by the British astronomer Herbert Hall Turner in 1913. [4] The unit was introduced to simplify the calculation of astronomical distances from raw observational data. Partly for this reason, it is the unit preferred in astronomy and astrophysics, though in popular science texts and common usage the light-year remains prominent. Although parsecs are used for the shorter distances within the Milky Way, multiples of parsecs are required for the larger scales in the universe, including kiloparsecs (kpc) for the more distant objects within and around the Milky Way, megaparsecs (Mpc) for mid-distance galaxies, and gigaparsecs (Gpc) for many quasars and the most distant galaxies. In August 2015, the International Astronomical Union (IAU) passed Resolution B2 which, as part of the definition of a standardized absolute and apparent bolometric magnitude scale, mentioned an existing explicit definition of the parsec as exactly 648 000/π au, or approximately 30 856 775 814 913 673 metres, given the IAU 2012 exact definition of the astronomical unit in metres. This corresponds to the small-angle definition of the parsec found in many astronomical references. [5][6] Imagining an elongated right triangle in space, where the shorter leg measures one au (astronomical unit, the average Earth–Sun distance) and the subtended angle of the vertex opposite that leg measures one arcsecond (1/3600 of a degree), the parsec is defined as the length of the adjacent leg. The value of a parsec can be derived through the rules of trigonometry: it is the distance from Earth at which the radius of Earth's solar orbit subtends one arcsecond. One of the oldest methods used by astronomers to calculate the distance to a star is to record the difference in angle between two measurements of the position of the star in the sky. The first measurement is taken from the Earth on one side of the Sun, and the second is taken approximately half a year later, when the Earth is on the opposite side of the Sun. [b] The distance between the two positions of the Earth when the two measurements were taken is twice the distance between the Earth and the Sun. The difference in angle between the two measurements is twice the parallax angle, which is formed by lines from the Sun and Earth to the star at the distant vertex. Then the distance to the star could be calculated using trigonometry.
[7] The first successful published direct measurements of an object at interstellar distances were undertaken by German astronomer Friedrich Wilhelm Bessel in 1838, who used this approach to calculate the 3.5-parsec distance of 61 Cygni. [8] The parallax of a star is defined as half of the angular distance that a star appears to move relative to the celestial sphere as Earth orbits the Sun. Equivalently, it is the subtended angle, from that star's perspective, of the semimajor axis of the Earth's orbit. Substituting the star's parallax for the one-arcsecond angle in the imaginary right triangle, the long leg of the triangle will measure the distance from the Sun to the star. A parsec can be defined as the length of the right-triangle side adjacent to the vertex occupied by a star whose parallax angle is one arcsecond. The use of the parsec as a unit of distance follows naturally from Bessel's method, because the distance in parsecs can be computed simply as the reciprocal of the parallax angle in arcseconds (i.e.: if the parallax angle is 1 arcsecond, the object is 1 pc from the Sun; if the parallax angle is 0.5 arcseconds, the object is 2 pc away; etc.). No trigonometric functions are required in this relationship because the very small angles involved mean that the approximate solution of the skinny triangle can be applied. Though it may have been used before, the term parsec was first mentioned in an astronomical publication in 1913. Astronomer Royal Frank Watson Dyson expressed his concern for the need of a name for that unit of distance. He proposed the name astron, but mentioned that Carl Charlier had suggested siriometer and Herbert Hall Turner had proposed parsec. [4] It was Turner's proposal that stuck. By the 2015 definition, 1 au of arc length subtends an angle of 1″ at the centre of a circle of radius 1 pc. That is, 1 pc = 1 au/tan(1″) ≈ 206,264.8 au by definition. [9] Converting from degree/minute/second units to radians, 1″ = (1/(60 × 60)) × (π/180) rad, so π pc = 180 × 60 × 60 au = 180 × 60 × 60 × 149 597 870 700 m = 96 939 420 213 600 000 m (exact by the 2015 definition), and therefore 1 pc = 96 939 420 213 600 000/π m = 30 856 775 814 913 673 m (to the nearest metre). Approximately, 1 pc ≈ 3.085678 × 10¹⁶ m ≈ 3.261564 light-years ≈ 206,264.8 au. In the diagram above (not to scale), S represents the Sun, and E the Earth at one point in its orbit (such as to form a right angle at S [b]). Thus the distance ES is one astronomical unit (au). The angle SDE is one arcsecond (1/3600 of a degree), so by definition D is a point in space at a distance of one parsec from the Sun. Through trigonometry, the distance SD is calculated as follows: SD = ES/tan(1″) ≈ ES/((1/(60 × 60)) × (π/180)) = (648 000/π) au ≈ 206 264.81 au.
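The exact definition and the reciprocal-parallax rule fit in a few lines of Python (a minimal sketch; the Proxima Centauri parallax of about 0.7685″ used below is for illustration):

```python
import math

AU_M = 149_597_870_700            # astronomical unit in metres (IAU 2012)

PARSEC_AU = 648_000 / math.pi     # exact small-angle definition, in au
PARSEC_M = PARSEC_AU * AU_M       # parsec in metres

def distance_pc(parallax_arcsec):
    """Distance in parsecs is the reciprocal of the parallax in arcseconds."""
    return 1.0 / parallax_arcsec

print(f"1 pc = {PARSEC_M:.6e} m")   # ≈ 3.085678e+16 m
print(distance_pc(0.7685))          # Proxima Centauri: ≈ 1.30 pc
```

Within double-precision rounding, the computed value matches the exact figure of 30 856 775 814 913 673 m quoted above.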
Because the astronomical unit is defined to be 149 597 870 700 m, [10] the following can be calculated: 1 pc = (648 000/π) × 149 597 870 700 m ≈ 3.085678 × 10¹⁶ m. Therefore, if 1 ly ≈ 9.46 × 10¹⁵ m, then 1 pc ≈ 3.261 ly. A corollary states that a parsec is also the distance from which a disc that is one au in diameter must be viewed for it to have an angular diameter of one arcsecond (by placing the observer at D and a disc spanning ES). Mathematically, to calculate distance given angular measurements from instruments in arcseconds, the formula would be: Distance_star = Distance_earth-sun / tan(θ/3600), where θ is the measured angle in arcseconds and Distance_earth-sun is a constant (1 au or 1.5813 × 10⁻⁵ ly). The calculated stellar distance will be in the same measurement unit as used in Distance_earth-sun (e.g. if Distance_earth-sun = 1 au, the unit for Distance_star is the astronomical unit; if Distance_earth-sun = 1.5813 × 10⁻⁵ ly, the unit for Distance_star is the light-year). The length of the parsec used in IAU 2015 Resolution B2 [11] (exactly 648 000/π astronomical units) corresponds exactly to that derived using the small-angle calculation. This differs from the classic inverse-tangent definition by about 200 km, i.e. only after the 11th significant figure. As the astronomical unit was defined by the IAU (2012) as an exact length in metres, so now the parsec corresponds to an exact length in metres. To the nearest metre, the small-angle parsec corresponds to 30 856 775 814 913 673 m. The parallax method is the fundamental calibration step for distance determination in astrophysics; however, the accuracy of ground-based telescope measurements of parallax angle is limited to about 0.01″, and thus to stars no more than 100 pc distant. [12] This is because the Earth's atmosphere limits the sharpness of a star's image. [citation needed] Space-based telescopes are not limited by this effect and can accurately measure distances to objects beyond the limit of ground-based observations. Between 1989 and 1993, the Hipparcos satellite, launched by the European Space Agency (ESA), measured parallaxes for about 100,000 stars with an astrometric precision of about 0.97 mas, and obtained accurate measurements for stellar distances of stars up to 1000 pc away. [13][14] ESA's Gaia satellite, which launched on 19 December 2013, is intended to measure one billion stellar distances to within 20 microarcseconds, producing errors of 10% in measurements as far as the Galactic Centre, about 8000 pc away in the constellation of Sagittarius. [15] Distances expressed in fractions of a parsec usually involve objects within a single star system. So, for example: Distances expressed in parsecs (pc) include distances between nearby stars, such as those in the same spiral arm or globular cluster. A distance of 1,000 parsecs (3,262 ly) is denoted by the kiloparsec (kpc). Astronomers typically use kiloparsecs to express distances between parts of a galaxy or within groups of galaxies.
[16] So, for example: Astronomers typically express the distances between neighbouring galaxies and galaxy clusters in megaparsecs (Mpc). A megaparsec is one million parsecs, or about 3,260,000 light-years. [22] Sometimes, galactic distances are given in units of Mpc/h (as in "50/h Mpc", also written "50 Mpc h⁻¹"). h is a constant (the "dimensionless Hubble constant") in the range 0.5 < h < 0.75, reflecting the uncertainty in the value of the Hubble constant H for the rate of expansion of the universe: h = H / (100 (km/s)/Mpc). The Hubble constant becomes relevant when converting an observed redshift z into a distance d using the formula d ≈ (c/H) × z. [23] One gigaparsec (Gpc) is one billion parsecs, one of the largest units of length commonly used. One gigaparsec is about 3.26 billion ly, or roughly 1/14 of the distance to the horizon of the observable universe (dictated by the cosmic microwave background radiation). Astronomers typically use gigaparsecs to express the sizes of large-scale structures such as the size of, and distance to, the CfA2 Great Wall; the distances between galaxy clusters; and the distance to quasars. For example: To determine the number of stars in the Milky Way, volumes in cubic kiloparsecs [c] (kpc³) are selected in various directions. All the stars in these volumes are counted and the total number of stars statistically determined. The number of globular clusters, dust clouds, and interstellar gas is determined in a similar fashion. To determine the number of galaxies in superclusters, volumes in cubic megaparsecs [c] (Mpc³) are selected. All the galaxies in these volumes are classified and tallied. The total number of galaxies can then be determined statistically. The huge Boötes void is measured in cubic megaparsecs. [26] In physical cosmology, volumes of cubic gigaparsecs [c] (Gpc³) are selected to determine the distribution of matter in the visible universe and to determine the number of galaxies and quasars. The Sun is currently the only star in its cubic parsec [c] (pc³), but in globular clusters the stellar density could be from 100–1000 pc⁻³. The observational volume of gravitational wave interferometers (e.g., LIGO, Virgo) is stated in terms of cubic megaparsecs [c] (Mpc³) and is essentially the value of the effective distance cubed. The parsec was used incorrectly as a measurement of time by Han Solo in the first Star Wars film, when he claimed his ship, the Millennium Falcon, "made the Kessel Run in less than 12 parsecs", originally with the intention of presenting Solo as "something of a bull artist who didn't always know precisely what he was talking about". The claim was repeated in The Force Awakens, but this was retconned in Solo: A Star Wars Story, by stating the Millennium Falcon traveled a shorter distance (as opposed to a quicker time) due to a more dangerous route through the Kessel Run, enabled by its speed and maneuverability. [27] It is also used incorrectly in The Mandalorian. [28]
https://en.wikipedia.org/wiki/Parsec
The Parshall flume is an open-channel flow-metering device that was developed to measure the flow of surface water and irrigation flow. The Parshall flume is a modified version of the Venturi flume. Named after its creator, Dr. Ralph L. Parshall of the U.S. Soil Conservation Service, the Parshall flume is a fixed hydraulic structure used in measuring volumetric flow rate in surface water, industrial discharges, municipal sewer lines, and influent/effluent flows in wastewater treatment plants. The Parshall flume accelerates the flow through a contraction of the parallel sidewalls and a drop in the floor at the flume throat. Under free-flow conditions, the depth of water at a specified location upstream of the flume throat can be converted to a rate of flow. Some states specify the use of Parshall flumes, by law, for certain situations (commonly water rights). [1] Differences between the Venturi and Parshall flume include reduction of the inlet converging angle, lengthening of the throat section, reduction of the discharge divergence angle, and introduction of a drop through the throat (with a subsequent partial recovery in the discharge section). [2] Beginning in 1915, Dr. Ralph Parshall of the U.S. Soil Conservation Service altered the subcritical Venturi flume to include a drop in elevation through the throat of the flume. This created a transition from subcritical flow conditions to supercritical flow conditions through the throat of the flume. Modifications to the Venturi flume that Parshall made include: [3] In 1930, the improved flume was named the Parshall Measuring Flume by the Irrigation Committee of the American Society of Civil Engineers (ASCE) in recognition of Parshall's accomplishments. Parshall was additionally honored as a Life Member of the ASCE. [4] Dr. Parshall's initial focus was on the use of his namesake flume to measure flows in irrigation channels and other surface waters. Over time, however, the Parshall flume has proven to be applicable to a wide variety of open-channel flows, including: A wide variety of materials are used to make Parshall flumes, including: [5] Smaller Parshall flumes tend to be fabricated from fiberglass and galvanized steel (depending upon the application), while larger Parshall flumes tend to be fabricated from fiberglass (sizes up to 144 in) or concrete (160–600 in). [10] By the 1960s, several different companies began to commercially offer Parshall flumes. These manufacturers have typically produced flumes from one type of material only (typically glass-reinforced plastic or steel), although currently a few offer Parshall flumes in a variety of materials. When used for stream gauging, aluminium is the typical material of construction, primarily due to its light weight. An example can be found via Google Earth: 50°58'41.34"N, 5°51'36.81"E, eye altitude 200 m; this is in the Geleenbeek, near Geleen in the Netherlands. The design of the Parshall flume is standardized under ASTM D1941, ISO 9826:1992, and JIS B7553-1993. The flumes are not patented, and the discharge tables are not copyright protected. Parshall flumes come in twenty-two standard sizes, spanning flow ranges from 0.005 to 3,280 cubic feet per second (0.142 to 92,900 litres per second). [11] Submergence transitions for Parshall flumes range from 50% (1–3-inch sizes) to 80% (10–50-foot sizes), [12] beyond which point level measurements must be taken at both the primary and secondary points of measurement and a submergence correction must be applied to the flow equations.
The secondary point of measurement (Hb) for a Parshall flume is located in the throat. Measuring Hb can be difficult, as flow in the throat of the flume is turbulent and prone to fluctuations in the water level. Typically, 90% is viewed as the upper limit for which corrections for submerged flow are practical. [13] Under laboratory conditions, Parshall flumes can be expected to exhibit accuracies to within ±2%, although field conditions make accuracies better than 5% doubtful. The free-flow discharge can be summarized in this equation: Q = C Haⁿ, where Q is the flow rate, C is the free-flow coefficient for the flume, Ha is the head at the primary point of measurement, and n is an exponent that varies with flume size. When the downstream depth is high enough that the transition to subcritical flow advances upstream into the throat and the hydraulic jump disappears, the flume is operating in a "submerged flow" regime, and the discharge is instead given by Q = C Haⁿ − Q_E, where Q_E is the "submergence correction", found using pre-determined tables for a particular flume geometry. The Parshall flume acts essentially as a constriction, a downward step, and then an expansion: the upstream section is uniformly convergent and flat, the throat is a short parallel section that slopes downward, and the downstream section is uniformly diverging and slopes upward to an ending elevation that is less than the upstream starting elevation. The width of the throat determines the flume size; 22 standardized sizes have been developed, ranging from 1 in to 50 ft (0.005 ft³/s to 3,280 ft³/s). A Venturi flume is similar to the Parshall flume, without the contoured base, but the cross section is usually rectangular, the inlet shorter, and there is a general taper on the outlet similar to the Venturi meter. [14] Because of their size, it is usual for these meters to be open to their surroundings just like a river or stream, and therefore this type of measurement is referred to as open-channel flow measurement. Parshall flumes are much more efficient than standard flumes and generate a standing wave to effect a measurement. There are two conditions of flow that can occur in a Parshall flume: free flow and submerged flow. When free-flow conditions exist, the user only needs to collect one head measurement (Ha, the primary point of measurement) to determine the discharge. For submerged flow, a secondary head measurement (Hb) is required to determine whether the flume is submerged and the degree of submergence. The primary point of measurement (Ha) is located in the inlet of the flume, two-thirds of the length of the converging section upstream of the flume crest. The secondary point of measurement (Hb) is located in the throat of the flume. A hydraulic jump occurs downstream of the flume under free-flow conditions. As the flume becomes submerged, the hydraulic jump diminishes and ultimately disappears as the downstream conditions increasingly restrict the flow out of the flume. Not all Parshall flumes have the energy-recovering divergence section. These flumes, called Montana flumes, or short-section Parshall flumes, must instead have a free-spilling discharge at all expected flow rates, which increases the drop along the whole flume system. The measurement calculations are the same as for free flow in a standard Parshall flume, but submerged flow cannot be adjusted for. [15]
The energy balance is {\displaystyle E_{1}=E_{2}=E_{3},} where E 1 is the energy at H a , E 2 at the flume crest, and E 3 at H b respectively, and the specific energy at a section of depth y and velocity v is {\displaystyle E=y+{\frac {v^{2}}{2g}}.} Since E 2 is located at the flume crest, where there is a steep drop, critical flow conditions occur there. Rearranging and substituting in the above equations, we get {\displaystyle y_{1}+{\frac {v_{1}^{2}}{2g}}=y_{c}+{\frac {v_{c}^{2}}{2g}},} or {\displaystyle E_{1}={\tfrac {3}{2}}y_{c}.} Since Q = v⋅y⋅b and v = √(g⋅y c ) at critical depth, these relationships can be used to solve for the discharge. Broken down further, this gives {\displaystyle Q=b{\sqrt {g}}\,y_{c}^{3/2}} and {\displaystyle y_{c}={\tfrac {2}{3}}E_{1}.} Since the measurement is taken upstream, where flow is sub-critical, it can be stated that y 1 ≫ v 1 ²/2g; therefore, for a rough approximation, E 1 ≈ y 1 = H a . This simplifies to {\displaystyle Q\approx b{\sqrt {g}}\left({\tfrac {2}{3}}H_{a}\right)^{3/2}\approx 3.088\,b\,H_{a}^{1.5}} (in US customary units). These final two equations are very similar to the Q = C·Ha^n equations that are used for Parshall flumes. In fact, when looking at the flume tables, n has a value equal to or slightly greater than 1.5, while the value of C is larger than 3.088·b, but of the same rough magnitude. The derived equations above will always underestimate actual flow, since both the derived C and n values are lower than their respective chart values. For the Parshall flume equation used to calculate the flow rate, both empirical values C and n are known constants (with different values for each Parshall flume size), leaving Ha (the depth upstream) as the only variable needing to be measured. Likewise, in the energy conservation equation, y 1 (the depth of flow) is needed. Free flow occurs when there is no "backwater" to restrict flow through the flume; only the upstream depth needs to be measured to calculate the flow rate, and free flow induces a hydraulic jump downstream of the flume. Submerged flow occurs when the water surface downstream of the flume is high enough to restrict flow through it; a backwater buildup effect occurs in a submerged flume, and for the flow calculation a depth measurement both upstream and downstream is needed. Although commonly thought of as occurring at higher flow rates, submerged flow can exist at any flow level, as it is a function of downstream conditions. In natural stream applications, submerged flow is frequently the result of vegetative growth on the downstream channel banks, sedimentation, or subsidence of the flume. For free flow, the equation to determine the flow rate is simply Q = C·Ha^n, where Q is the flow rate, C and n are the size-specific constants, and Ha is the upstream depth (see Figure 1 above). A Parshall flume discharge table gives C and n for each flume size under free flow conditions. [ 16 ] For submerged flow, a depth of flow needs to be taken both upstream (H a ) and downstream (H b ); see the locations of H a and H b in Figure 1. [ 16 ] If H b /H a is greater than or equal to the submergence transition S t , then the flow is submerged. If there is submerged flow, adjustments need to be made in order for the Parshall flume readings to remain valid. The discharge Q is then found by subtracting a correction from the free-flow value using pre-determined tables: the correction is read from a chart as a function of Ha and the submergence ratio and scaled by a size-dependent multiplier M (note: all Q values are in ft³/s, Ha is in feet, and M varies in units). Parshall flume free flow example problem: using the Parshall flume free flow equation, determine the discharge of a 72-inch flume with a depth Ha of 3 feet. From Table 1: throat width = 72 in = 6 ft, C = 24, and n = 1.59. So, for a depth of 3 feet, the flow rate is ≈ 140 ft³/s. The discharge can also be approximated using the derived discharge equation shown above. That equation was derived using the principles of specific energy and only serves as an estimate of the actual discharge of the Parshall flume; again, the derived equations will always underestimate the actual flow, since both the derived C and n values are lower than their respective empirically derived chart values.
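The free-flow relation and the rough critical-flow estimate above can be checked numerically. The following is a minimal sketch (not from the source article): the function names are mine, and the C and n values are simply the ones quoted in the 72-inch example.

```python
import math

# Free-flow discharge through a Parshall flume: Q = C * Ha**n.
# C and n are empirical, size-specific constants; the values used below
# are the ones quoted for the 72-inch example above, not a full table.

def parshall_free_flow(C, n, Ha):
    """Free-flow discharge Q (ft^3/s) for upstream head Ha (ft)."""
    return C * Ha ** n

def derived_approximation(b, Ha, g=32.2):
    """Rough critical-flow estimate Q ~ 3.088*b*Ha**1.5 derived in the text.

    b is the throat width in feet; the constant 3.088 = sqrt(g)*(2/3)**1.5
    in US customary units. This always underestimates the chart values.
    """
    return math.sqrt(g) * (2.0 / 3.0) ** 1.5 * b * Ha ** 1.5

# 72-inch flume example: C = 24, n = 1.59, Ha = 3 ft.
print(parshall_free_flow(24, 1.59, 3.0))  # ~138 ft^3/s (the text rounds to ~140)
print(derived_approximation(6.0, 3.0))    # ~96 ft^3/s, an underestimate as expected
```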
Parshall flume submerged flow example problem: using the Parshall flume flow equations and Tables 1–3, determine the flow type (free flow or submerged flow) and discharge for a 36-inch flume with an upstream depth Ha of 1.5 ft and a downstream depth H b of 1.4 ft. For the locations of H a and H b , refer to Figure 1. From Table 2, the Parshall flume submergence transition (St) for a 36-inch (3 ft) flume is 0.7. The submergence ratio is S = H b /H a = 1.4 ft / 1.5 ft = 0.93; since H b /H a is greater than or equal to 0.7, the flow is submerged. From Table 1: throat width = 36 in = 3 ft, C = 12, and n = 1.57. From Table 3, M = 2.4 for a flume size of 3 ft. An illustration exists of a unitless E–Y diagram showing how energy and depth of flow change throughout a Parshall flume. The two blue lines represent the q values, q 1 for the flow before the constriction and q 2 for the value at the constriction (q = Q/b, in ft²/s, or flow per unit width in a rectangular channel). When a constriction (decrease in width) occurs between E 1 and E 2 , the q value changes (and the flow passes through the new critical depth) while the energy remains the same. The flume then has a downward step, which results in a gain in energy equal to the size of the step (Δz). From this, the principles of conservation of energy are used to develop a set of calculations to predict the flow rate. Two variations of the Parshall flume have been developed over time: the Montana flume and the short-section (USGS/portable) Parshall flume. [ 22 ] The Montana flume omits the throat and discharge sections of the Parshall. [ 23 ] By omitting these sections, the flume is shortened by more than half while retaining the free-flow characteristics of the same-size Parshall. With the deletion of the throat and discharge sections, the Montana flume has little resistance to submersion and, like the H flume, should be used where free-spilling discharge is present under all flow conditions. The Montana flume is described in the US Bureau of Reclamation's Water Measurement Manual [ 24 ] and in two technical standards, MT199127AG [ 25 ] and MT199128AG, [ 26 ] by Montana State University (note that Montana State University has currently withdrawn both standards for updating/review). The short-section Parshall (sometimes referred to as a USGS or portable Parshall) omits the discharge section of the flume. Originally designed by Troxell and Taylor in 1931 and published under "Venturi Flume" as a memorandum from the office of the Ground Water Branch, USGS, the design was again brought to the attention of potential users in Taylor's 1954 paper "Portable Venturi Flume for Measuring Small Flows". [ 27 ] This modification, supplied by the USGS Hydrologic Instrumentation Facility, is available in two sizes: the original 3" and the recently added 6". [ 28 ] Kilpatrick notes that the discharge for this modification of the Parshall flume is slightly greater than for a standard Parshall flume of the same size. [ 29 ] This has been attributed to potential manufacturing tolerance variations rather than the actual operation of the flume itself, and users are cautioned to verify the flume's dimensions before proceeding with data collection. As with any Parshall flume, flumes that vary from the standard dimensions should be individually rated.
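A sketch of the submergence check in the worked example above, under the same assumptions as before (St, C, n, and M are taken from the example; the chart-based correction Q_E itself comes from published charts and is not reproduced here):

```python
def flow_regime(Ha, Hb, St):
    """Classify flume operation from the submergence ratio S = Hb/Ha."""
    S = Hb / Ha
    return S, ("submerged" if S >= St else "free")

# 36-inch flume example: St = 0.7, Ha = 1.5 ft, Hb = 1.4 ft.
S, regime = flow_regime(1.5, 1.4, 0.7)
print(S, regime)            # 0.933..., 'submerged'

# The free-flow discharge is an upper bound; the submerged discharge is
# obtained by subtracting a chart-based correction scaled by M:
#   Q = C*Ha**n - M*Q_E(Ha, S)
# Q_E is read from published charts and is not reproduced in this sketch.
Q_free = 12 * 1.5 ** 1.57   # C = 12, n = 1.57 from Table 1
print(Q_free)               # ~22.7 ft^3/s before the submergence correction
```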
https://en.wikipedia.org/wiki/Parshall_flume
A part number (often abbreviated PN , P/N , part no. , or part # ) is an identifier of a particular part design or material used in a particular industry. Its purpose is to simplify reference to that item. A part number unambiguously identifies a part design within a single corporation, sometimes across several corporations. [ 1 ] For example, when specifying a screw , it is easier to refer to "HSC0424PP" than to say "Hardware, screw, machine, 4-40, 3/4" long , pan head, Phillips". In this example, "HSC0424PP" is the part number. It may be prefixed in database fields as "PN HSC0424PP" or "P/N HSC0424PP". The term "part number" is often used loosely to refer to items or components (assemblies or parts); it is equivalent to "item number" and overlaps with other terms like SKU (stock keeping unit). Whereas a part number is an identifier of a part design (independent of its instantiations ), a serial number is a unique identifier of a particular instantiation of that part design. In other words, a part number identifies any particular (physical) part as being made to that one unique design; a serial number, when used, identifies a particular (physical) part (one physical instance), as differentiated from the next unit that was stamped, machined, or extruded right after it. This distinction is not always clear, as natural language blurs it by typically referring to both part designs , and particular instantiations of those designs, by the same word, "part(s)". Thus if you buy a muffler of P/N 12345 today, and another muffler of P/N 12345 next Tuesday, you have bought "two copies of the same part", or "two parts", depending on the sense implied. A business using a part will often use a different part number than the various manufacturers of that part do. This is especially common for catalog hardware, because the same or similar part design (say, a screw with a certain standard thread, of a certain length) might be made by many corporations (as opposed to unique part designs, made by only one or a few). For example, each manufacturer of a "Hardware, screw, machine, 4-40, 3/4" long, Phillips" assigns its own catalog number to it. The business using such a screw may buy screws from any of those manufacturers, because each supplier manufactures the parts to the same specification. To identify such screws, the user does not want to use any one manufacturer's part number, which would arbitrarily tie the item record to that supplier. Therefore, the user devises its own part numbering system. In such a system, the user may use the part number "HSC0424PP" for that screw. There are also some national and industry-association initiatives which help producers and consumers codify products based on a unified scheme, to establish a common language between industrial and commercial sectors. In general, there are two types of part numbering systems: significant (a.k.a. "intelligent") and non-significant (a.k.a. "non-intelligent"). In a company, significant numbering systems help identify an item from its code rather than from a long description. However, variations can arise when the codes are used by other companies, such as distributors, and can cause confusion. Non-significant part numbers are easier to assign and manage. They can still have some structure, such as a numeric category followed by a sequential number, e.g. 231-1002 (2 = hardware, 3 = screw, 1 = Phillips; 1002 = sequential number). This enables more efficient data entry using a keypad , which normally includes digits and dashes and can be operated one-handed, leaving the other hand free.
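A minimal sketch of decoding a structured part number such as 231-1002, following the example just given (2 = hardware, 3 = screw, 1 = Phillips; 1002 = sequential number). The lookup tables here are hypothetical; a real company would maintain its own coding scheme.

```python
# Hypothetical code tables for the "231-1002" example; these mappings are
# illustrative assumptions, not any real company's scheme.
CATEGORY = {"2": "hardware"}
ITEM = {"3": "screw"}
DRIVE = {"1": "Phillips"}

def decode(part_number: str) -> dict:
    """Split a category-coded part number into its encoded fields."""
    code, serial = part_number.split("-")
    return {
        "category": CATEGORY.get(code[0], "unknown"),
        "item": ITEM.get(code[1], "unknown"),
        "drive": DRIVE.get(code[2], "unknown"),
        "sequence": int(serial),
    }

print(decode("231-1002"))
# {'category': 'hardware', 'item': 'screw', 'drive': 'Phillips', 'sequence': 1002}
```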
Other benefits: people find numbers easier [ citation needed ] ; in a warehouse, one can store products in numeric order (for example, in an aisle, numbers can increase from one end to the other). There is a strong tradition in part numbering practice, in use across many corporations, of using suffixes consisting of a "dash" followed by a number of one or two digits (occasionally more). These suffixes are called dash numbers , and they are a common way of logically associating a set of detail parts or subassemblies that belong to a common assembly or part family. For example, the part numbers 12345-1, 12345-2, and 12345-3 are three different dash numbers of the same part family. In precise typographical and character encoding terms, it is actually a hyphen , not a dash , that is usually used; but the word "dash" is firmly established in the spoken and written usage of the engineering and manufacturing professions; "dash number", not "hyphen number", is the standard term. This comes from the era before computers, when most typographical laypeople did not need to differentiate the characters or glyphs precisely. Some companies follow a convention of circling the dash numbers on a drawing, such as in view designators and subpart callouts. Another widespread tradition is using the drawing number as the root (or stem) of the part number; in this tradition, the various dash-number parts usually appear as views on the self-same drawing. For example, drawing number 12345 may show an assembly, P/N 12345-1, which comprises detail parts -2 ("dash two"), -3, -4, -8, and -11. Even a drawing for which only one part definition currently exists will often designate that part with a part number comprising the drawing number plus -1 ("dash one"). This provides extensibility of the part numbering system, in anticipation of a day when it might be desired to add another part definition to the family, which can then become -2 ("dash two"), followed by -3 ("dash three"), and so on. Some corporations make no attempt to encode part numbers and drawing numbers with common encoding; they are paired arbitrarily. In other numbering schemes there is no separate drawing number; the drawing simply reuses the part number. Often more than one version of a part design will be specified on one drawing. This allows for easy updating of one drawing that covers a family of parts, and it keeps the specifications for similar parts on one drawing. A common application of such tabulated part families is covering multiple dimensions within a general design, e.g. a bushing offered in a range of sizes. It is a common concept in many corporations to add certain suffixes beyond, or in place of, the regular dash numbers, in order to designate a part that is mostly in conformance with the part design (that is, mostly "to print") but intentionally lacks certain features. The suffixes are usually "intelligent", that is, they use an encoding system , although the encoding systems are usually corporation-specific (and thus cryptic, and of little use, to outsiders). An example of such a design modification suffix is adding "V" or "Z" to the end of the part number to designate the variant of the part that is purchased "less paint", "less plating ", "with the holes not yet drilled", "intentionally oversize by 0.01 mm (0.00039 in)", or any of countless other modifications.
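A hypothetical parser for the dash-number convention described above: a root (often the drawing number), a dash number identifying the family member, and an optional modification suffix such as "V" or "Z". The pattern is an illustrative assumption, since the convention varies between corporations.

```python
import re

# Root, dash number, optional single-letter-or-more suffix, e.g. "12345-2V".
PART_RE = re.compile(r"^(?P<root>\d+)-(?P<dash>\d+)(?P<suffix>[A-Z]*)$")

def parse(part_number: str):
    """Split a dash-numbered part number into (root, dash, suffix-or-None)."""
    m = PART_RE.match(part_number)
    if not m:
        raise ValueError(f"not a dash-numbered part: {part_number!r}")
    return m.group("root"), int(m.group("dash")), m.group("suffix") or None

print(parse("12345-1"))   # ('12345', 1, None)  -> the assembly on drawing 12345
print(parse("12345-2V"))  # ('12345', 2, 'V')   -> e.g. the "less paint" variant
```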
The intent is usually that the feature in question (such as holes not yet drilled, or paint not yet sprayed) will be added at a higher assembly level; or that maintenance workers in the field will choose from a kit of undersize and oversize parts (such as bushings) in order to achieve a certain fit (sliding fit, light press fit, etc.). Sometimes the terms "engineering part number" and "manufacturing part number" are used to differentiate the "normal" or "basic" part number (engineering PN) from the modification-suffixed part number (manufacturing PN). Many assemblies with reflection symmetry , such as the fuselages and wings of aircraft, the hulls of ships and boats, and the bodies of cars and trucks, require matched pairs of parts that are identical, or nearly identical, except for being mirror images of each other. (For example, the left and right wings of an airplane, or the left and right fenders or doors of a car.) Often these related parts are designated left-hand ( LH ) and right-hand ( RH ) parts. It is a common practice to give them sequential dash numbers , or -LH and -RH part number suffixes. It is also not uncommon to show only one of them on the drawing, and to define the symmetrical counterpart simply by stating that it is "opposite". Common notations include "left-hand shown, right-hand opposite" or "-1, LH (shown); -2, RH (opposite)". The term phantom part is sometimes used to describe a series of parts that collectively make up an assembly or subassembly. This concept is helpful in the database management of engineering and production (such as in product data management applications) when it is useful to think of a certain combination of subparts as "one part" (and thus one database record ) for ordering, production, or billing purposes. It is common in the engineering of parts, subassemblies, and higher assemblies to treat the definition of a certain part as a very well-defined concept, with every last detail controlled by the engineering drawing or its accompanying technical product documentation (TPD). This is necessary because of the separation of concerns that often exists in production, in which the maker of each part (whether an in-house department or a vendor) does not have all the information needed to decide whether any particular small variation is acceptable (that is, whether the part will still work, or whether it will still fit interchangeably into the assembly). The sizes of fillets and edge breaks are common examples of such details, where production staff must say, "it may easily be trivial, but it could possibly matter, and we're not the ones who can tell which is true in this case". However, a challenge to this paradigm (of perfectly frozen part definition) is that sometimes it is necessary to obtain a part that is "mostly like" part A but that also incorporates some of the features of parts B and C. For example, a new variant or model of the next-higher assembly may require this. Although this "blending" of part designs could happen very informally in a non-mass-production environment (such as an engineering lab, home business, or prototyping toolroom), it requires more forethought when the concerns are more thoroughly separated (such as when some production is outsourced to vendors). In the latter case, a new part definition, termed a synthetic part (because its definition synthesizes features from various other parts), is created.
Ideally it is then formally defined with a new drawing; but often in the imperfect reality of the business world, to save time and expense, an improvised TPD will be prepared for it consisting of several existing drawings and some notes about which features to synthesize. It is common today for part numbers (as well as serial numbers or other information) to be marked on the part in ways that facilitate machine-readability , such as barcodes or QR codes . Today's advanced state of optical character recognition (OCR) technology also means that machines can often read the human-readable format of Arabic numerals and Latin script . Current revisions of major part marking standards (such as the U.S. military's MIL-STD-130 ) take pains to codify the most advantageous combinations of machine-readable information (MRI) and human-readable information (HRI).
https://en.wikipedia.org/wiki/Part_number
Parthanatos (derived from the Greek Θάνατος, " Death ") is a form of programmed cell death that is distinct from other cell death processes such as necrosis and apoptosis . While necrosis is caused by acute cell injury resulting in traumatic cell death and apoptosis is a highly controlled process signalled by apoptotic intracellular signals , parthanatos is caused by the accumulation of poly(ADP-ribose) (PAR) and the nuclear translocation of apoptosis-inducing factor (AIF) from mitochondria . [ 1 ] Parthanatos is also known as PARP-1 dependent cell death. PARP-1 mediates parthanatos when it is over-activated in response to extreme genomic stress and synthesizes PAR, which causes nuclear translocation of AIF. [ 2 ] Parthanatos is involved in diseases that afflict hundreds of millions of people worldwide. Well known diseases involving parthanatos include Parkinson's disease , stroke , heart attack , and diabetes . [ citation needed ] It also has potential use as a treatment for ameliorating disease and various medical conditions such as diabetes and obesity . [ citation needed ] The term parthanatos was not coined until a review in 2009. [ 1 ] The word parthanatos is derived from Thanatos , the personification of death in Greek mythology. Parthanatos was first described in a 2006 paper by Yu et al. studying the increased production of mitochondrial reactive oxygen species (ROS) under hyperglycemia . [ 3 ] This phenomenon is linked with negative effects arising from clinical complications of diabetes and obesity . Researchers noticed that high glucose concentrations led to overproduction of reactive oxygen species and rapid fragmentation of mitochondria . Inhibition of mitochondrial pyruvate uptake blocked the increase of ROS but did not prevent mitochondrial fragmentation. After incubating cells with the non-metabolizable stereoisomer L-glucose, neither an increase in reactive oxygen species nor mitochondrial fragmentation was observed. Ultimately, the researchers found that mitochondrial fragmentation mediated by the fission process is a necessary component of the high glucose-induced respiration increase and ROS overproduction. [ citation needed ] Extended exposure to high glucose conditions is similar to untreated diabetic conditions, and so the effects mirror each other. In this condition, the exposure creates a periodic and prolonged increase in ROS production along with mitochondrial morphology change. If mitochondrial fission was inhibited, the periodic fluctuation of ROS production in a high glucose environment was prevented. This research shows that when ROS-mediated cell damage is too great, PARP-1 will initiate cell death. [ citation needed ] Poly(ADP-ribose) polymerase-1 ( PARP-1 ) is a nuclear enzyme that is found universally in all eukaryotes and is encoded by the PARP-1 gene. It belongs to the PARP family, a group of catalysts that transfer ADP-ribose units from NAD+ (nicotinamide adenine dinucleotide) to protein targets, thus creating branched or linear polymers. [ 4 ] The major domains of PARP-1 impart the ability to fulfill its functions. These protein sections include the DNA-binding domain on the N-terminus (which allows PARP-1 to detect DNA breaks), the automodification domain (which has a BRCA1 C-terminus motif that is key for protein-protein interactions), and a catalytic site with the NAD+-fold (characteristic of mono-ADP-ribosylating toxins). [ 1 ] Normally, PARP-1 is involved in a variety of functions that are important for cell homeostasis, such as mitosis.
Another of these roles is DNA repair , including the repair of base lesions and single-strand breaks. [ 5 ] PARP-1 interacts with a wide variety of substrates including histones , DNA helicases , high mobility group proteins, topoisomerases I and II, single-strand break repair factors, base-excision repair factors , and several transcription factors . [ 1 ] PARP-1 accomplishes many of its roles through regulating poly(ADP-ribose) (PAR). PAR is a polymer that varies in length and can be either linear or branched. [ 6 ] It is negatively charged, which allows it to alter the function of the proteins it binds to either covalently or non-covalently. [ 1 ] PAR binding affinity is strongest for branched polymers, weaker for long linear polymers, and weakest for short linear polymers. [ 7 ] PAR also binds selectively, with differing strengths, to the different histones. [ 7 ] It is suspected that PARP-1 modulates processes (such as DNA repair , DNA transcription , and mitosis ) through the binding of PAR to its target proteins. The parthanatos pathway is activated by DNA damage caused by genotoxic stress or excitotoxicity . [ 8 ] This damage is recognized by the PARP-1 enzyme, which causes an upregulation of PAR. PAR causes translocation of apoptosis-inducing factor (AIF) from the mitochondria to the nucleus, where it induces DNA fragmentation and ultimately cell death . [ 9 ] This general pathway has now been outlined for almost a decade. While considerable success has been made in understanding the molecular events in parthanatos, efforts are still ongoing to completely identify all of the major players within the pathway, as well as how spatial and temporal relationships between mediators affect them. Extreme damage to DNA causing breaks and changes in chromatin structure has been shown to induce the parthanatos pathway. [ 8 ] The stimuli that cause the DNA damage can come from a variety of different sources. Methylnitronitrosoguanidine , an alkylating agent , has been widely used in several studies to induce the parthanatos pathway. [ 10 ] [ 11 ] [ 12 ] A number of other stimuli or toxic conditions have also been used to cause DNA damage, such as H2O2, NO, and ONOO− generation (oxygen-glucose deprivation). [ 10 ] [ 13 ] [ 14 ] The magnitude and length of exposure, the type of cell used, and the purity of the culture are all factors that can influence the activation of the pathway. [ 15 ] The damage must be extreme enough for the chromatin structure to be altered. This change in structure is recognized by the N-terminal zinc-finger domain of the PARP-1 protein. [ 16 ] The protein can recognize both single- and double-strand DNA breaks. Once the PARP-1 protein recognizes the DNA damage, it catalyzes the synthesis of PAR, a post-translational modification of its target proteins. [ 9 ] PAR is formed either as a branched or a linear molecule. Branched and long-chain polymers are more toxic to the cell than simple short polymers. [ 17 ] The more extreme the DNA damage, the more PAR accumulates in the nucleus. Once enough PAR has accumulated, it translocates from the nucleus into the cytosol . One study has suggested that PAR can translocate as a free polymer; [ 17 ] however, translocation of protein-conjugated PAR cannot be ruled out and is in fact a topic of active research. [ 8 ] PAR moves through the cytosol to the mitochondria, which undergo depolarization. [ 9 ] At the mitochondria, PAR binds directly to AIF, which has a PAR polymer binding site, causing AIF to dissociate from the mitochondria.
[ 18 ] AIF is then translocated to the nucleus, where it induces chromatin condensation and large-scale (50 kb) DNA fragmentation. [ 9 ] How AIF induces these effects is still unknown. It is thought that a currently unidentified AIF-associated nuclease (PAAN) may be involved. [ 8 ] Human AIF has a DNA-binding site, [ 10 ] which would indicate that AIF binds directly to DNA in the nucleus and directly causes the changes. However, as mouse AIF does not have this binding domain and mouse cells are still able to undergo parthanatos, [ 19 ] it is evident that there must be another mechanism involved. PAR, which is responsible for the activation of AIF, is regulated in the cell by the enzyme poly(ADP-ribose) glycohydrolase ( PARG ). After PAR is synthesized by PARP-1, it is degraded through a process catalyzed by PARG. [ 20 ] PARG has been found to protect against PAR-mediated cell death, [ 9 ] while its deletion has increased toxicity through the accumulation of PAR. [ 9 ] Before the discovery of the PAR and AIF pathway, it was thought that the overactivation of PARP-1 led to overconsumption of NAD+ . [ 21 ] As a result of NAD+ depletion, a decrease in ATP production would occur, and the resulting loss of energy would kill the cell. [ 22 ] [ 23 ] However, it is now known that this loss of energy would not be enough to account for cell death: in cells lacking PARG , activation of PARP-1 leads to cell death in the presence of ample NAD+. [ 24 ] Parthanatos is defined as a cell death pathway distinct from apoptosis for a few key reasons. Primarily, apoptosis depends on the caspase pathway activated by cytochrome c release, while the parthanatos pathway is able to act independently of caspases. [ 8 ] Furthermore, unlike apoptosis, parthanatos causes large-scale DNA fragmentation (apoptosis produces only small-scale fragmentation) and does not form apoptotic bodies . [ 25 ] While parthanatos shares similarities with necrosis , it also has several differences. Necrosis is not a regulated pathway and does not involve controlled nuclear fragmentation. While parthanatos does involve loss of cell membrane integrity like necrosis , it is not accompanied by cell swelling. [ 26 ] PARP was originally connected to neural degradation pathways in 1993. Elevated levels of nitric oxide (NO) had been shown to cause neurotoxicity in samples of rat hippocampal neurons . [ 27 ] A deeper look into the effects of NO on neurons showed that nitric oxide damages DNA strands; the damage in turn elicits PARP activity that leads to further degradation and neuronal death. PARP blockers halted the cell death mechanisms in the presence of elevated NO levels. [ 27 ] PARP activity has also been linked to the neurodegenerative properties of toxin-induced parkinsonism . 1-Methyl-4-phenyl-1,2,3,6-tetrahydropyridine ( MPTP ) is a neurotoxin that has been linked to neurodegeneration and the development of Parkinson's disease-like symptoms in patients since 1983. The MPTP toxin's effects were discovered when four people intravenously injected a drug containing the toxin, which they had inadvertently produced while attempting a street synthesis of MPPP , a meperidine analogue. [ 28 ] The link between MPTP and PARP was found later, when research showed that the MPTP effects on neurons were reduced in mutated cells lacking the PARP gene. [ 29 ] The same research also showed highly increased PARP activation in dopamine-producing cells in the presence of MPTP. Alpha-synuclein is a protein that binds to DNA and modulates DNA repair .
[ 30 ] A key feature of Parkinson's disease is the pathologic accumulation and aggregation of alpha-synuclein. In the neurons of individuals with Parkinson's disease, alpha-synuclein is deposited as fibrils in intracytoplasmic structures referred to as Lewy bodies . Formation of pathologic alpha-synuclein is associated with activation of PARP1 , increased poly(ADP-ribose) generation, and further acceleration of pathologic alpha-synuclein formation. [ 31 ] This process can lead to cell death by parthanatos. [ 31 ] Parthanatos, as a cell death pathway, is being increasingly linked to several syndromes connected with specific tissue damage outside of the nervous system . This is highlighted in the mechanism of streptozotocin (STZ)-induced diabetes . STZ is a naturally occurring chemical that, in high doses, has been shown to produce diabetic symptoms by damaging the insulin-producing pancreatic β cells. [ 32 ] The degradation of β cells by STZ was linked to PARP in 1980, when studies showed that a PAR synthesis inhibitor reduced STZ's effects on insulin synthesis. Inhibition of PARP allows pancreatic tissue to sustain insulin synthesis levels and reduces β-cell degradation, even at elevated STZ levels. [ 33 ] PARP activation has also been preliminarily connected with arthritis , [ 34 ] colitis , [ 35 ] and liver toxicity . [ 36 ] The multi-step nature of the parthanatos pathway allows for chemical manipulation of its activation and inhibition for use in therapy. This rapidly developing field currently seems to be focused on the use of PARP blockers as treatments for chronically degenerative illnesses, culminating in third-generation inhibitors, such as imidazoquinolinone and isoquinolinedione derivatives, currently going to clinical trials. [ 8 ] Another path for treatments is to recruit the parthanatos pathway to induce cell death in cancer cells; however, no such treatments have passed the theoretical stage. [ 8 ]
https://en.wikipedia.org/wiki/Parthanatos
In mathematics – and in particular the study of games on the unit square – Parthasarathy's theorem is a generalization of Von Neumann's minimax theorem . It states that a particular class of games has a mixed value, provided that at least one of the players has a strategy that is restricted to absolutely continuous distributions with respect to the Lebesgue measure (in other words, one of the players is forbidden to use a pure strategy ). The theorem is attributed to Thiruvenkatachari Parthasarathy . Let X {\displaystyle X} and Y {\displaystyle Y} stand for the unit interval [ 0 , 1 ] {\displaystyle [0,1]} ; M X {\displaystyle {\mathcal {M}}_{X}} denote the set of probability distributions on X {\displaystyle X} (with M Y {\displaystyle {\mathcal {M}}_{Y}} defined similarly); and A X {\displaystyle A_{X}} denote the set of absolutely continuous distributions on X {\displaystyle X} (with A Y {\displaystyle A_{Y}} defined similarly). Suppose that k ( x , y ) {\displaystyle k(x,y)} is bounded on the unit square X × Y = { ( x , y ) : 0 ≤ x , y ≤ 1 } {\displaystyle X\times Y=\{(x,y):0\leq x,y\leq 1\}} and that k ( x , y ) {\displaystyle k(x,y)} is continuous except possibly on a finite number of curves of the form y = ϕ k ( x ) {\displaystyle y=\phi _{k}(x)} (with k = 1 , 2 , … , n {\displaystyle k=1,2,\ldots ,n} ) where the ϕ k ( x ) {\displaystyle \phi _{k}(x)} are continuous functions. For μ ∈ M X , λ ∈ M Y {\displaystyle \mu \in {\mathcal {M}}_{X},\lambda \in {\mathcal {M}}_{Y}} , define the expected payoff {\displaystyle K(\mu ,\lambda )=\int _{0}^{1}\int _{0}^{1}k(x,y)\,d\mu (x)\,d\lambda (y).} Then {\displaystyle \max _{\mu \in {\mathcal {M}}_{X}}\,\inf _{\lambda \in A_{Y}}K(\mu ,\lambda )=\inf _{\lambda \in A_{Y}}\,\max _{\mu \in {\mathcal {M}}_{X}}K(\mu ,\lambda ).} This is equivalent to the statement that the game induced by k ( ⋅ , ⋅ ) {\displaystyle k(\cdot ,\cdot )} has a value. Note that one player ( WLOG Y {\displaystyle Y} ) is forbidden from using a pure strategy . Parthasarathy goes on to exhibit a game in which the analogous equality fails when neither player is so restricted, and which thus has no value. There is no contradiction because in this case neither player is restricted to absolutely continuous distributions (and the demonstration that the game has no value requires both players to use pure strategies).
https://en.wikipedia.org/wiki/Parthasarathy's_theorem
Parthenogenesis ( / ˌ p ɑːr θ ɪ n oʊ ˈ dʒ ɛ n ɪ s ɪ s , - θ ɪ n ə -/ ; [ 1 ] [ 2 ] from the Greek παρθένος , parthénos , 'virgin' + γένεσις , génesis , 'creation' [ 3 ] ) is a natural form of asexual reproduction in which the embryo develops directly from an egg without need for fertilization . In animals , parthenogenesis means development of an embryo from an unfertilized egg cell . In plants , parthenogenesis is a component process of apomixis . In algae , parthenogenesis can mean the development of an embryo from either an individual sperm or an individual egg. Parthenogenesis occurs naturally in some plants, algae , invertebrate animal species (including nematodes , some tardigrades , water fleas , some scorpions , aphids , some mites, some bees , some Phasmatodea , and parasitic wasps ), and a few vertebrates , such as some fish , amphibians , and reptiles . This type of reproduction has been induced artificially in animal species that naturally reproduce through sex, including fish, amphibians, and mice. Normal egg cells form in the process of meiosis and are haploid , with half as many chromosomes as their mother's body cells. Haploid individuals, however, are usually non-viable, and parthenogenetic offspring usually have the diploid chromosome number. Depending on the mechanism involved in restoring the diploid number of chromosomes, parthenogenetic offspring may have anywhere between all and half of the mother's alleles . In some types of parthenogenesis the offspring having all of the mother's genetic material are called full clones and those having only half are called half clones. Full clones are usually formed without meiosis. If meiosis occurs, the offspring get only a fraction of the mother's alleles since crossing over of DNA takes place during meiosis, creating variation. Parthenogenetic offspring in species that use either the XY or the X0 sex-determination system have two X chromosomes and are female. In species that use the ZW sex-determination system , they have either two Z chromosomes (male) or two W chromosomes (mostly non-viable but rarely a female), or they could have one Z and one W chromosome (female). Parthenogenesis is a form of asexual reproduction in which the embryo develops directly from an egg without need for fertilization . [ 4 ] [ 5 ] It occurs naturally in some plants, algae , invertebrate animal species (including nematodes , some tardigrades , water fleas , some scorpions , aphids , some mites, some bees , some Phasmatodea , and parasitic wasps ), and a few vertebrates , such as some fish , amphibians , reptiles , [ 6 ] [ 7 ] [ 8 ] and birds . [ 9 ] [ 10 ] [ 11 ] This type of reproduction has been induced artificially in a number of animal species that naturally reproduce through sex, including fish, amphibians, and mice. [ 12 ] [ 13 ] Some species reproduce exclusively by parthenogenesis (such as the bdelloid rotifers ), while others can switch between sexual reproduction and parthenogenesis. This is called facultative parthenogenesis (other terms are cyclical parthenogenesis, heterogamy [ 14 ] [ 15 ] or heterogony [ 16 ] [ 17 ] ). The switch between sexuality and parthenogenesis in such species may be triggered by the season ( aphid , some gall wasps ), or by a lack of males or by conditions that favour rapid population growth ( rotifers and cladocerans like Daphnia ). In these species asexual reproduction occurs either in summer (aphids) or as long as conditions are favourable. 
This is because in asexual reproduction a successful genotype can spread quickly without being modified by sex or wasting resources on male offspring that will not give birth. Some species can reproduce both sexually and through parthenogenesis, and offspring in the same clutch of a species of tropical lizard can be a mix of sexually produced and parthenogenetically produced offspring. [ 18 ] In California condors, facultative parthenogenesis can occur even when a male is present and available for a female to breed with. [ 19 ] In times of stress, offspring produced by sexual reproduction may be fitter, as they have new, possibly beneficial gene combinations. In addition, sexual reproduction provides the benefit of meiotic recombination between non- sister chromosomes , a process associated with repair of DNA double-strand breaks and other DNA damage that may be induced by stressful conditions. [ 20 ] Many taxa with heterogony have within them species that have lost the sexual phase and are now completely asexual. Many other cases of obligate parthenogenesis (or gynogenesis) are found among polyploids and hybrids, where the chromosomes cannot pair for meiosis. [ 21 ] The production of female offspring by parthenogenesis is referred to as thelytoky (e.g., aphids), while the production of males by parthenogenesis is referred to as arrhenotoky (e.g., bees). When unfertilized eggs develop into both males and females, the phenomenon is called deuterotoky. [ 22 ] Parthenogenesis can occur without meiosis through mitotic oogenesis. This is called apomictic parthenogenesis . Mature egg cells are produced by mitotic divisions, and these cells directly develop into embryos. In flowering plants, cells of the gametophyte can undergo this process. The offspring produced by apomictic parthenogenesis are full clones of their mother, as in aphids. [ 23 ] Parthenogenesis involving meiosis is more complicated. In some cases, the offspring are haploid (e.g., male ants ). In other cases, collectively called automictic parthenogenesis , the ploidy is restored to diploidy by various means. This is because haploid individuals are not viable in most species. In automictic parthenogenesis, the offspring differ from one another and from their mother. They are called half clones of their mother. [ 24 ] Automixis includes several reproductive mechanisms, some of which are parthenogenetic. [ 25 ] [ 26 ] Diploidy can be restored by the doubling of the chromosomes without cell division before meiosis begins or after meiosis is completed. This is an endomitotic cycle. Diploidy can also be restored by fusion of the first two blastomeres , or by fusion of the meiotic products: the chromosomes may not separate at one of the two anaphases (restitutional meiosis); the nuclei produced may fuse; or one of the polar bodies may fuse with the egg cell at some stage during its maturation. [ citation needed ] Some authors consider all forms of automixis sexual, as they involve recombination. Many others classify the endomitotic variants as asexual and consider the resulting embryos parthenogenetic. Among these authors, the threshold for classifying automixis as a sexual process depends on when the products of anaphase I or of anaphase II are joined. The criterion for sexuality varies from all cases of restitutional meiosis [ 27 ] to those where the nuclei fuse, or to only those where gametes are mature at the time of fusion.
[ 26 ] Those cases of automixis that are classified as sexual reproduction are compared to self-fertilization in their mechanism and consequences. [ citation needed ] The genetic composition of the offspring depends on what type of automixis takes place. When endomitosis occurs before meiosis [ 28 ] [ 29 ] or when central fusion occurs (restitutional meiosis of anaphase I or the fusion of its products), the offspring get all [ 28 ] [ 30 ] to more than half of the mother's genetic material, and heterozygosity is mostly preserved [ 31 ] (if the mother has two alleles for a locus, it is likely that the offspring will get both). This is because in anaphase I the homologous chromosomes are separated. Heterozygosity is not completely preserved when crossing over occurs in central fusion. [ 32 ] In the case of pre-meiotic doubling, recombination, if it happens, occurs between identical sister chromatids. [ 28 ] If terminal fusion (restitutional meiosis of anaphase II or the fusion of its products) occurs, a little over half the mother's genetic material is present in the offspring, and the offspring are mostly homozygous. [ 33 ] This is because at anaphase II the sister chromatids are separated, and whatever heterozygosity is present is due to crossing over. In the case of endomitosis after meiosis, the offspring are completely homozygous and have only half the mother's genetic material. [ citation needed ] This can result in parthenogenetic offspring being genetically distinct from each other and from their mother. [ citation needed ] In apomictic parthenogenesis, the offspring are clones of the mother and hence (except for aphids) are usually female. In the case of aphids, parthenogenetically produced males and females are clones of their mother, except that the males lack one of the X chromosomes (XO). [ 34 ] When meiosis is involved, the sex of the offspring depends on the type of sex determination system and the type of apomixis. In species that use the XY sex-determination system , parthenogenetic offspring have two X chromosomes and are female. In species that use the ZW sex-determination system , the offspring genotype may be ZW (female), [ 30 ] [ 31 ] ZZ (male), or WW (non-viable in most species, [ 33 ] but a fertile, [ dubious – discuss ] viable female in a few, e.g., boas ). [ 33 ] ZW offspring are produced by endoreplication before meiosis or by central fusion. [ 30 ] [ 31 ] ZZ and WW offspring occur either by terminal fusion [ 33 ] or by endomitosis in the egg cell. [ citation needed ] In polyploid obligate parthenogens, like the whiptail lizard, all the offspring are female. [ 29 ] In many hymenopteran insects, such as honeybees, female eggs are produced sexually, using sperm from a drone father, while the production of further drones (males) depends on the queen (and occasionally workers) producing unfertilized eggs. This means that females (workers and queens) are always diploid, while males (drones) are always haploid and produced parthenogenetically. [ citation needed ] Facultative parthenogenesis occurs when a female can produce offspring either sexually or via asexual reproduction. [ 35 ] Facultative parthenogenesis is extremely rare in nature, with only a few examples of animal taxa capable of it. [ 35 ] One of the best-known examples of a taxon exhibiting facultative parthenogenesis is the mayflies ; presumably, this is the default reproductive mode of all species in this insect order.
[ 36 ] Facultative parthenogenesis has generally been believed to be a response to a lack of a viable male. A female may undergo facultative parthenogenesis if males are absent from the habitat or unable to produce viable offspring. However, California condors and the tropical lizard Lepidophyma smithii can both produce parthenogenetic offspring in the presence of males, indicating that facultative parthenogenesis may be more common than previously thought and is not simply a response to a lack of males. [ 18 ] [ 10 ] In aphids , a generation sexually conceived by a male and a female produces only females. The reason for this is the non-random segregation of the sex chromosomes X and O during spermatogenesis . [ 37 ] Facultative parthenogenesis is often used to describe cases of spontaneous parthenogenesis in normally sexual animals. [ 38 ] For example, many cases of spontaneous parthenogenesis in sharks , some snakes , Komodo dragons , and a variety of domesticated birds were widely attributed to facultative parthenogenesis. [ 39 ] These cases are examples of spontaneous parthenogenesis. [ 35 ] [ 38 ] The occurrence of such asexually produced eggs in sexual animals can be explained by a meiotic error, leading to eggs produced via automixis . [ 38 ] [ 40 ] Obligate parthenogenesis is the process in which organisms exclusively reproduce through asexual means. [ 41 ] Many species have transitioned to obligate parthenogenesis over evolutionary time. Well documented transitions to obligate parthenogenesis have been found in numerous metazoan taxa, albeit through highly diverse mechanisms. These transitions often occur as a result of inbreeding or mutation within large populations. [ 42 ] Some documented species, specifically salamanders and geckos, rely on obligate parthenogenesis as their major method of reproduction. As such, there are over 80 species of unisexual reptiles (mostly lizards but including a single snake species), amphibians, and fishes in nature for which males are no longer a part of the reproductive process. [ 43 ] A female produces an ovum with a full chromosome complement (two sets of genes) provided solely by the mother, so a male is not needed to provide sperm to fertilize the egg. This form of asexual reproduction is thought in some cases to be a serious threat to biodiversity, owing to the resulting lack of gene variation and potentially decreased fitness of the offspring. [ 41 ] Some invertebrate species that feature (partial) sexual reproduction in their native range are found to reproduce solely by parthenogenesis in areas to which they have been introduced . [ 44 ] [ 45 ] Relying solely on parthenogenetic reproduction has several advantages for an invasive species : it obviates the need for individuals in a very sparse initial population to search for mates, and an exclusively female sex distribution allows a population to multiply and invade more rapidly (potentially twice as fast). Examples include several aphid species [ 44 ] and the willow sawfly, Nematus oligospilus , which is sexual in its native Holarctic habitat but parthenogenetic where it has been introduced into the Southern Hemisphere. [ 45 ] Parthenogenesis does not apply to isogamous species. [ 46 ] Parthenogenesis occurs naturally in aphids , Daphnia , rotifers , nematodes , and some other invertebrates, as well as in many plants. Among vertebrates , strict parthenogenesis is only known to occur in lizards, snakes, [ 47 ] birds, [ 48 ] and sharks.
[ 49 ] Fish, amphibians, and reptiles make use of various forms of gynogenesis and hybridogenesis (an incomplete form of parthenogenesis). [ 50 ] The first all-female (unisexual) reproduction in vertebrates was described in the fish Poecilia formosa in 1932. [ 51 ] Since then at least 50 species of unisexual vertebrate have been described, including at least 20 fish, 25 lizards, a single snake species, frogs, and salamanders. [ 50 ] An electrical or chemical stimulus can initiate parthenogenesis, beginning the asexual development of viable offspring. [ 52 ] During oocyte development, high metaphase promoting factor (MPF) activity causes mammalian oocytes to arrest at the metaphase II stage until fertilization by a sperm. The fertilization event causes intracellular calcium oscillations and targeted degradation of cyclin B, a regulatory subunit of MPF, thus permitting the MII-arrested oocyte to proceed through meiosis. [ 53 ] [ 54 ] To initiate unfertilised development of swine oocytes, various methods exist to induce an artificial activation that mimics sperm entry, such as calcium ionophore treatment, microinjection of calcium ions, or electrical stimulation. Treatment with cycloheximide, a non-specific protein synthesis inhibitor, enhances the development of unfertilised eggs in swine, presumably by continual inhibition of MPF/cyclin B. [ 54 ] As meiosis proceeds, extrusion of the second polar body is blocked by exposure to cytochalasin B. This treatment results in a diploid (two maternal genomes) parthenote. [ 53 ] The resulting embryos can be surgically transferred to a recipient oviduct for further development but will succumb to developmental failure after ≈30 days of gestation. The swine placenta in these cases often appears hypo-vascular (see the free image, Figure 1, in the linked reference). [ 53 ] Induced parthenogenesis of this type in mice and monkeys results in abnormal development. This is because mammals have imprinted genetic regions, where either the maternal or the paternal chromosome is inactivated in the offspring for development to proceed normally. A mammal developing from parthenogenesis would have double doses of maternally imprinted genes and lack paternally imprinted genes, leading to developmental abnormalities. It has been suggested that defects in placental folding or interdigitation are one cause of swine parthenote abortive development. [ 53 ] As a consequence, research on the induced development of unfertilised eggs in humans is focused on the production of embryonic stem cells for use in medical treatment, not as a reproductive strategy. In 2022, researchers reported that they had produced viable offspring from unfertilized eggs in mice, addressing the problems of genomic imprinting by "targeted DNA methylation rewriting of seven imprinting control regions". [ 13 ] In 1955, Helen Spurway , a geneticist specializing in the reproductive biology of the guppy ( Lebistes reticulatus ), claimed that parthenogenesis may occur (though very rarely) in humans, leading to so-called "virgin births". This created some sensation among her colleagues and the lay public alike. [ 55 ] Sometimes an embryo may begin to divide without fertilization, but it cannot fully develop on its own; so while it may create some skin and nerve cells, it cannot create others (such as skeletal muscle) and becomes a type of benign tumor called an ovarian teratoma . [ 56 ] Spontaneous ovarian activation is not rare and has been known about since the 19th century.
Some teratomas can even become primitive fetuses (fetiform teratomas) with imperfect heads, limbs, and other structures, but they are non-viable. [ citation needed ] In 1995, a case of partial human parthenogenesis was reported: a boy was found to have some cells (such as white blood cells ) that lacked any genetic content from his father. Scientists believe that an unfertilized egg began to self-divide but then had some (but not all) of its cells fertilized by a sperm cell; this must have happened early in development, as self-activated eggs quickly lose their ability to be fertilized. The unfertilized cells eventually duplicated their DNA, boosting their chromosomes to 46. When the unfertilized cells hit a developmental block, the fertilized cells took over and developed that tissue. The boy had asymmetrical facial features and learning difficulties but was otherwise healthy. This would make him a parthenogenetic chimera (a child with two cell lineages in his body). [ 57 ] While over a dozen similar cases have been reported since then (usually discovered after the patient demonstrated clinical abnormalities), there have been no scientifically confirmed reports of a non-chimeric, clinically healthy human parthenote (i.e. one produced from a single, parthenogenetically activated oocyte). [ 56 ] In 2007, the International Stem Cell Corporation of California announced that Elena Revazova had intentionally created human stem cells from unfertilized human eggs using parthenogenesis. The process may offer a way of creating stem cells genetically matched to a particular female for the treatment of degenerative diseases. The same year, Revazova and ISCC published an article describing how to produce human stem cells that are homozygous in the HLA region of DNA. [ 58 ] These stem cells are called HLA-homozygous parthenogenetic human stem cells (hpSC-Hhom), and derivatives of these cells could be implanted without immune rejection. With selection of oocyte donors according to HLA haplotype , it would be possible to generate a bank of cell lines whose tissue derivatives, collectively, could be MHC-matched with a significant number of individuals within the human population. [ 59 ] After an independent investigation, it was revealed that the discredited South Korean scientist Hwang Woo-Suk had unknowingly produced the first human embryos resulting from parthenogenesis. Initially, Hwang claimed he and his team had extracted stem cells from cloned human embryos, a result later found to be fabricated. Further examination of the chromosomes of these cells showed indicators of parthenogenesis in the extracted stem cells, similar to those found in the mice created by Tokyo scientists in 2004. Although Hwang deceived the world about being the first to create artificially cloned human embryos, he contributed a major breakthrough to stem cell research by creating human embryos using parthenogenesis. [ 60 ] A form of asexual reproduction related to parthenogenesis is gynogenesis. Here, offspring are produced by the same mechanism as in parthenogenesis, but with the requirement that the egg merely be stimulated by the presence of sperm in order to develop. However, the sperm cell does not contribute any genetic material to the offspring. Since gynogenetic species are all female, activation of their eggs requires mating with males of a closely related species for the needed stimulus. Some salamanders of the genus Ambystoma are gynogenetic and appear to have been so for over a million years.
The success of those salamanders may be due to rare fertilization of eggs by males (perhaps only one mating in a million), which introduces new material to the gene pool. In addition, the Amazon molly is known to reproduce by gynogenesis. [ 61 ] Hybridogenesis is a mode of reproduction of hybrids . Hybridogenetic hybrids (with, for example, an AB genome ), usually females, exclude one of the parental genomes (A) during gametogenesis and produce gametes carrying the unrecombined [ 62 ] genome of the second parental species (B), rather than gametes containing mixed, recombined parental genomes. The first genome (A) is restored by fertilization of these gametes with gametes from the first species (AA, the sexual host, [ 62 ] usually a male). [ 62 ] [ 64 ] [ 65 ] Hybridogenesis is thus not completely asexual but hemiclonal: half the genome (B) is passed to the next generation clonally , unrecombined and intact, while the other half (A) is passed sexually and recombined. This process continues, so that each generation is half (or hemi-) clonal on the mother's side and has half new genetic material from the father's side. [ 62 ] [ 66 ] This form of reproduction is seen in some live-bearing fish of the genus Poeciliopsis [ 64 ] [ 67 ] as well as in some of the Pelophylax spp. ("green frogs" or "waterfrogs"); hybridogenesis is also at least one of the modes of reproduction in several other taxa. Parthenogenesis, in the form of reproduction from a single individual (typically a god), is common in mythology, religion, and folklore around the world, including in ancient Greek myth ; for example, Athena was born from the head of Zeus . [ 73 ] [ clarification needed ] In Christianity and Islam, there is the virgin birth of Jesus , and stories of miraculous births also appear in other global religions. [ 74 ] The theme is one of several aspects of reproductive biology explored in science fiction . [ 75 ]
https://en.wikipedia.org/wiki/Parthenogenesis
Partial-wave analysis , in the context of quantum mechanics , refers to a technique for solving scattering problems by decomposing each wave into its constituent angular-momentum components and solving using boundary conditions . Partial-wave analysis is typically useful for low-energy scattering, where only a few angular-momentum components dominate. At high energy, where scattering is weak, an alternative called the Born approximation is used. [ 1 ] : 507 A steady beam of particles scatters off a spherically symmetric potential V ( r ) {\displaystyle V(r)} , which is short-ranged, so that for large distances r → ∞ {\displaystyle r\to \infty } , the particles behave like free particles. The incoming beam is assumed to be a collimated plane wave exp ⁡ ( i k z ) {\displaystyle \exp(ikz)} traveling along the z axis. Because the beam is switched on for times long compared to the time of interaction of the particles with the scattering potential, a steady state is assumed. This means that the stationary Schrödinger equation for the wave function Ψ ( r ) {\displaystyle \Psi (\mathbf {r} )} representing the particle beam should be solved: {\displaystyle \left[-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V(r)\right]\Psi (\mathbf {r} )=E\,\Psi (\mathbf {r} ).} We make the following ansatz : {\displaystyle \Psi (\mathbf {r} )=\Psi _{0}(\mathbf {r} )+\Psi _{\text{s}}(\mathbf {r} ),} where Ψ 0 ( r ) ∝ exp ⁡ ( i k z ) {\displaystyle \Psi _{0}(\mathbf {r} )\propto \exp(ikz)} is the incoming plane wave, and Ψ s ( r ) {\displaystyle \Psi _{\text{s}}(\mathbf {r} )} is a scattered part perturbing the original wave function. It is the asymptotic form of Ψ s ( r ) {\displaystyle \Psi _{\text{s}}(\mathbf {r} )} that is of interest, because observations near the scattering center (e.g. an atomic nucleus) are mostly not feasible, and detection of particles takes place far away from the origin. At large distances, the particles should behave like free particles, and Ψ s ( r ) {\displaystyle \Psi _{\text{s}}(\mathbf {r} )} should therefore be a solution to the free Schrödinger equation. For a spherically symmetric potential, these solutions should be outgoing spherical waves, Ψ s ( r ) ∝ exp ⁡ ( i k r ) / r {\displaystyle \Psi _{\text{s}}(\mathbf {r} )\propto \exp(ikr)/r} at large distances. Thus the asymptotic form of the scattered wave is chosen as [ 2 ] : 371 {\displaystyle \Psi _{\text{s}}(\mathbf {r} )\to f(\theta ,k)\,{\frac {e^{ikr}}{r}},} where f ( θ , k ) {\displaystyle f(\theta ,k)} is the so-called scattering amplitude , which in this case depends only on the elevation angle θ {\displaystyle \theta } and the energy. This gives the following asymptotic expression for the entire wave function: {\displaystyle \Psi (\mathbf {r} )\to e^{ikz}+f(\theta ,k)\,{\frac {e^{ikr}}{r}}.} In case of a spherically symmetric potential V ( r ) = V ( r ) {\displaystyle V(\mathbf {r} )=V(r)} , the scattering wave function may be expanded in spherical harmonics , which reduce to Legendre polynomials because of azimuthal symmetry (no dependence on ϕ {\displaystyle \phi } ): {\displaystyle \Psi (r,\theta )=\sum _{\ell =0}^{\infty }{\frac {u_{\ell }(r)}{r}}\,P_{\ell }(\cos \theta ).} In the standard scattering problem, the incoming beam is assumed to take the form of a plane wave of wave number k , which can be decomposed into partial waves using the plane-wave expansion in terms of spherical Bessel functions and Legendre polynomials : {\displaystyle e^{ikz}=\sum _{\ell =0}^{\infty }(2\ell +1)\,i^{\ell }\,j_{\ell }(kr)\,P_{\ell }(\cos \theta ).} Here we have assumed a spherical coordinate system in which the z axis is aligned with the beam direction. The radial part of this wave function consists solely of the spherical Bessel function, which can be rewritten as a sum of two spherical Hankel functions : {\displaystyle j_{\ell }(kr)={\tfrac {1}{2}}\left[h_{\ell }^{(1)}(kr)+h_{\ell }^{(2)}(kr)\right].} This has physical significance: h ℓ (1) asymptotically (i.e. for large r ) behaves as i −( ℓ +1) e ikr /( kr ) and is thus an outgoing wave, whereas h ℓ (2) asymptotically behaves as i ℓ +1 e −ikr /( kr ) and is thus an incoming wave.
The incoming wave is unaffected by the scattering, while the outgoing wave is modified by a factor known as the partial-wave S-matrix element S ℓ : where u ℓ ( r )/ r is the radial component of the actual wave function. The scattering phase shift δ ℓ is defined as half of the phase of S ℓ : If flux is not lost, then | S ℓ | = 1 , and thus the phase shift is real. This is typically the case, unless the potential has an imaginary absorptive component, which is often used in phenomenological models to simulate loss due to other reaction channels. Therefore, the full asymptotic wave function is Subtracting ψ in yields the asymptotic outgoing wave function: Making use of the asymptotic behavior of the spherical Hankel functions, one obtains Since the scattering amplitude f ( θ , k ) is defined from it follows that [ 2 ] : 386 and thus the differential cross section is given by This works for any short-ranged interaction. For long-ranged interactions (such as the Coulomb interaction ), the summation over ℓ may not converge. The general approach for such problems consists of treating the Coulomb interaction separately from the short-ranged interaction, as the Coulomb problem can be solved exactly in terms of Coulomb functions , which take on the role of the Hankel functions in this problem.
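As a numerical illustration of the partial-wave sum, the sketch below computes phase shifts and the total cross section for scattering off a hard sphere. The hard-sphere potential, its radius, and the cutoff l_max are assumptions chosen for the example; the phase-shift formula tan δ_ℓ = j_ℓ(ka)/y_ℓ(ka) follows from requiring the wave function to vanish at the sphere's surface r = a.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def hard_sphere_cross_section(k, a, l_max=20):
    """Total cross section for scattering off a hard sphere of radius a.

    The boundary condition u(a) = 0 gives tan(delta_l) = j_l(ka)/y_l(ka),
    and sigma = (4*pi/k**2) * sum_l (2l + 1) * sin(delta_l)**2.
    """
    l = np.arange(l_max + 1)
    delta = np.arctan(spherical_jn(l, k * a) / spherical_yn(l, k * a))
    return 4 * np.pi / k**2 * np.sum((2 * l + 1) * np.sin(delta)**2)

# Low-energy limit: only the s-wave (l = 0) contributes and the cross
# section approaches 4*pi*a**2, i.e. four times the geometric area.
a = 1.0
print(hard_sphere_cross_section(k=0.01, a=a) / (np.pi * a**2))  # ~ 4
```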
https://en.wikipedia.org/wiki/Partial-wave_analysis
In abstract algebra , a partial algebra is a generalization of universal algebra to partial operations . [ 1 ] [ 2 ] There is a "Meta Birkhoff Theorem" by Andreka, Nemeti and Sain (1982). [ 1 ]
https://en.wikipedia.org/wiki/Partial_algebra
In atomic physics , a partial charge (or net atomic charge ) is a non- integer charge value when measured in elementary charge units. It is represented by the Greek lowercase delta (𝛿), namely 𝛿− or 𝛿+. Partial charges are created due to the asymmetric distribution of electrons in chemical bonds . For example, in a polar covalent bond like HCl , the shared electrons are drawn more strongly toward the more electronegative chlorine atom. The resulting partial charges are a property only of zones within the distribution, and not the assemblage as a whole. For example, chemists often choose to look at a small space surrounding the nucleus of an atom : When an electrically neutral atom bonds chemically to another neutral atom that is more electronegative , its electrons are partially drawn away. This leaves the region about that atom's nucleus with a partial positive charge, and it creates a partial negative charge on the atom to which it is bonded. In such a situation, the distributed charges taken as a group always carry a whole number of elementary charge units. Yet one can point to zones within the assemblage where less than a full charge resides, such as the area around an atom's nucleus. This is possible in part because particles are not like mathematical points—which must be either inside a zone or outside it—but are smeared out by the uncertainty principle of quantum mechanics . Because of this smearing effect, if one defines a sufficiently small zone, a fundamental particle may be both partly inside and partly outside it. Partial atomic charges are used in molecular mechanics force fields to compute the electrostatic interaction energy using Coulomb's law , even though this leads to substantial failures for anisotropic charge distributions. [ 1 ] Partial charges are also often used for a qualitative understanding of the structure and reactivity of molecules. Occasionally, δδ+ is used to indicate a partial charge that is less positively charged than δ+ (likewise for δδ−) in cases where it is relevant to do so. [ 2 ] This can be extended to δδδ± to indicate even weaker partial charges as well. Generally, a single δ+ (or δ−) is sufficient for most discussions of partial charge in organic chemistry. Partial atomic charges can be used to quantify the degree of ionic versus covalent bonding of any compound across the periodic table. The necessity for such quantities arises, for example, in molecular simulations to compute bulk and surface properties in agreement with experiment. Experience with chemically diverse compounds shows that available experimental data and chemical understanding lead to well-justified atomic charges. [ 3 ] Atomic charges for a given compound can be derived in multiple ways. The discussion of individual compounds in prior work has shown convergence in atomic charges, i.e., a high level of consistency between the assigned degree of polarity and the physical-chemical properties mentioned above. The resulting uncertainty in atomic charges is ±0.1e to ±0.2e for highly charged compounds, and often <0.1e for compounds with atomic charges below ±1.0e. Often, the application of one or two of the above concepts already leads to very good values, especially taking into account a growing library of experimental benchmark compounds and compounds with tested force fields. [ 4 ] The published research literature on partial atomic charges varies in quality from extremely poor to extremely well-done.
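Since force fields evaluate electrostatics from partial charges via Coulomb's law, here is a minimal sketch of that pairwise sum. The charges, coordinates, and unit choices (elementary charges and ångströms, with an approximate conversion factor of 332.06 kcal·Å/(mol·e²)) are assumptions for illustration, not any specific force field's parameters.

```python
import numpy as np

COULOMB_KCAL = 332.06  # approx. conversion: e^2/angstrom -> kcal/mol

def electrostatic_energy(charges, coords):
    """Pairwise Coulomb energy, E = k * sum_{i<j} q_i * q_j / r_ij.

    charges: partial charges in units of the elementary charge
    coords:  (N, 3) array of positions in angstroms
    """
    q = np.asarray(charges, dtype=float)
    r = np.asarray(coords, dtype=float)
    energy = 0.0
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            energy += COULOMB_KCAL * q[i] * q[j] / np.linalg.norm(r[i] - r[j])
    return energy

# Toy HCl-like dipole: a small delta+ on H and delta- on Cl,
# roughly 1.27 angstroms apart (illustrative numbers only).
print(electrostatic_energy([+0.18, -0.18],
                           [[0.0, 0.0, 0.0], [1.27, 0.0, 0.0]]))
```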
Although a large number of different methods for assigning partial atomic charges from quantum chemistry calculations have been proposed over many decades, the vast majority of proposed methods do not work well across a wide variety of material types. [ 5 ] [ 6 ] Only as recently as 2016 was a method for theoretically computing partial atomic charges developed that performs consistently well across an extremely wide variety of material types. [ 5 ] [ 6 ] All of the earlier methods had fundamental deficiencies that prevented them from assigning accurate partial atomic charges in many materials. [ 5 ] [ 6 ] Mulliken and Löwdin partial charges are physically unreasonable, because they do not have a mathematical limit as the basis set is improved towards completeness. [ 7 ] Hirshfeld partial charges are usually too low in magnitude. [ 8 ] Some methods for assigning partial atomic charges do not converge to a unique solution. [ 5 ] In some materials, atoms in molecules analysis yields non-nuclear attractors describing electron density partitions that cannot be assigned to any atom in the material; in such cases, atoms in molecules analysis cannot assign partial atomic charges. [ 9 ] According to Cramer (2002), partial charge methods can be divided into four classes: [ 10 ] The following is a detailed list of methods, partly based on Meister and Schwarz (1994). [ 11 ]
https://en.wikipedia.org/wiki/Partial_charge
In the field of cell biology , the method of partial cloning ( PCL ) converts a fully differentiated old somatic cell into a partially reprogrammed young cell that retains all the specialised functions of the differentiated old cell but is simply younger. [ 1 ] The method of PCL reverses characteristics associated with old cells. For example, old, senescent cells rejuvenated by PCL are free of highly condensed senescence-associated heterochromatin foci (SAHF) and re-acquire the proliferation potential of young cells. [ 2 ] The method of PCL thus rejuvenates old cells without de-differentiation and passage through an embryonic, pluripotent, stage. PCL consists of introducing a somatic adult or senescent cell nucleus, or an entire cell with enlarged membrane pores, into an (activated) oocyte and withdrawing the treated cell before its de-differentiation and first cell division occur. Thus, the progressive rejuvenation capability of the oocyte is used only temporarily in order to obtain a partial natural rejuvenation. PCL makes it possible to envisage a chosen degree of partial rejuvenation by changing the duration for which the treated cell remains in the oocyte. With PCL, cell de-differentiation and age reprogramming might be, at least partially, separable. The existence of an isolated ageing clock would thus be confirmed, at least during a certain part of cellular evolution and involution . First experimental results show a possibly high efficiency of partial rejuvenation of senescent mouse cells. Notably, PCL rejuvenates only a single tissue or organ; in contrast to classical cloning, PCL is therefore unable to reconstitute an entire organism. Furthermore, PCL is feasible in a few hours, as opposed to classical cloning or induced pluripotent stem cells (iPS), which need weeks or months. Classical cloning can rejuvenate old cells, but the process demands that the old cells artificially pass through an embryonic cell stage. Partial cloning affords the advantage that the old cells to be rejuvenated do not have to pass through the embryonic cell stage and are simply made younger. The extension of human lifespan, in terms of useful, quality years added to life, has been a goal for many since time immemorial. And while it was a goal whose attainment was thought improbable, or at least achievable only in the far distant future, the discovery that animals can be cloned has brought the goal of rejuvenation much closer. The remarkable discovery that animals can be cloned showed that the nucleus of an old cell can be used as a donor in so-called “nuclear transfer” experiments, where an old nucleus is transferred into a recipient egg whose own nuclear material has been removed. The “reconstructed” egg is then prompted to engage in development and develops through an embryonic stage that results, once the embryo is implanted into a surrogate mother, in a newborn. Thus an old cell can give rise to a newborn, which has a typical lifespan: the age of the donor cell is “wiped clean” and returned to a youthful state. Notably, in classical animal cloning the rejuvenation process involves a return to an embryonic form. Thus the specialized functions of the adult cell are also “wiped clean” and returned to an embryonic cell type. And in classical cloning, passage through this embryonic state is a must for the age of the cell to be “wiped clean”.
The key notion that distinguishes “partial” cloning from “classical” cloning is the separation of the mechanism(s) that “wipe clean” the specialization of a cell from those that “wipe clean” the age of the cell. In short, partial cloning aims to retain the specialized functions of a cell and simply make it younger; e.g., a skin cell is rejuvenated without having to pass through the embryonic stage that is a must for rejuvenation via the classical cloning technique (see diagram). Diagram showing the difference between “classical” and “partial” cloning: classical cloning (the route given by the black arrows) can rejuvenate an old cell but requires passage through an embryonic stage; “partial cloning” (given by the red arrow) rejuvenates old cells without passage through an embryonic stage. In a new laboratory at the Forschungszentrum Borstel our work on partial cloning focuses, inter alia, on the restricted, temporary incubation of an “old” cell within the egg. In this way only the age of the cell is “wiped clean” and its specialized, differentiated state is retained. It is simply made younger – rejuvenated – without going through the embryonic state. The measure of rejuvenation in our system is, first, the re-acquisition of the ability of an old cell to divide, something that is lost in old cells, and, second, the loss of characteristics that are associated with old cells. Should such rejuvenation be achievable, the consequences for medicine would be profound. It would avoid the need to artificially pass through an embryonic stage – either by nuclear transfer or by the so-called iPS cells method – to rejuvenate cells. One would simply be able to take aged cells from a patient and then return to the patient their own, histocompatible, rejuvenated heart cells, liver cells, etc. This stands in sharp contrast to the cycle of artificial de-differentiation of somatic cells to stem cells and subsequent artificial re-differentiation of stem cells to the desired differentiated cell type, which is highly inefficient, time-consuming and results in unstable cell types. The process of partial cloning would be efficient and rapid, and thus cheap both in terms of materials and manpower. In short, partial cloning has enormous potential to relieve human suffering and disease: it is the most rapid and cheap route to successful regenerative medicine. Partial cloning also avoids the ethical problems associated with “classical” cloning in that it does not result in a live birth – it merely uses the oocyte briefly as a means to condition, and thereby rejuvenate, the old cell.
https://en.wikipedia.org/wiki/Partial_cloning
In theoretical computer science , an algorithm is correct with respect to a specification if it behaves as specified. Best explored is functional correctness, which refers to the input–output behavior of the algorithm: for each input it produces an output satisfying the specification. [ 1 ] Within the latter notion, partial correctness , requiring that if an answer is returned it will be correct, is distinguished from total correctness , which additionally requires that an answer is eventually returned, i.e. the algorithm terminates. Correspondingly, to prove a program's total correctness, it is sufficient to prove its partial correctness and its termination. [ 2 ] The latter kind of proof ( termination proof ) can never be fully automated, since the halting problem is undecidable . For example, when successively searching through integers 1, 2, 3, … to see if we can find an example of some phenomenon (say, an odd perfect number ), it is quite easy to write a partially correct program (see the sketch below). But to say this program is totally correct would be to assert something currently not known in number theory . A proof would have to be a mathematical proof, assuming both the algorithm and specification are given formally. In particular it is not expected to be a correctness assertion for a given program implementing the algorithm on a given machine. That would involve such considerations as limitations on computer memory . A deep result in proof theory , the Curry–Howard correspondence , states that a proof of functional correctness in constructive logic corresponds to a certain program in the lambda calculus . Converting a proof in this way is called program extraction . Hoare logic is a specific formal system for reasoning rigorously about the correctness of computer programs. [ 3 ] It uses axiomatic techniques to define programming language semantics and argue about the correctness of programs through assertions known as Hoare triples. Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we cannot completely test a program of even moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can also be used as a generic metric. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality. [ 4 ]
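The integer-search example mentioned above can be written out; a minimal Python sketch in place of the missing box. If this loop ever returns, its answer satisfies the specification (partial correctness); asserting that it terminates (total correctness) would amount to claiming that an odd perfect number exists, which is an open problem.

```python
def first_odd_perfect():
    """Search 1, 3, 5, ... for an odd perfect number.

    Partially correct: any returned value really is an odd perfect
    number.  Whether the program is totally correct (i.e. whether it
    terminates) is an open question in number theory.
    """
    n = 1
    while True:
        # n is perfect if it equals the sum of its proper divisors
        if sum(d for d in range(1, n) if n % d == 0) == n:
            return n
        n += 2  # odd numbers only
```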
https://en.wikipedia.org/wiki/Partial_correctness
In graph theory , a partial cube is a graph that is an isometric subgraph of a hypercube . [ 1 ] In other words, a partial cube can be identified with a subgraph of a hypercube in such a way that the distance between any two vertices in the partial cube is the same as the distance between those vertices in the hypercube. Equivalently, a partial cube is a graph whose vertices can be labeled with bit strings of equal length in such a way that the distance between two vertices in the graph is equal to the Hamming distance between their labels. Such a labeling is called a Hamming labeling ; it represents an isometric embedding of the partial cube into a hypercube. Firsov (1965) was the first to study isometric embeddings of graphs into hypercubes. The graphs that admit such embeddings were characterized by Djoković (1973) and Winkler (1984) , and were later named partial cubes. A separate line of research on the same structures, in the terminology of families of sets rather than of hypercube labelings of graphs, was followed by Kuzmin & Ovchinnikov (1975) and Falmagne & Doignon (1997) , among others. [ 2 ] Every tree is a partial cube. For, suppose that a tree T has m edges, and number these edges (arbitrarily) from 0 to m – 1 . Choose a root vertex r for the tree, arbitrarily, and label each vertex v with a string of m bits that has a 1 in position i whenever edge i lies on the path from r to v in T . For instance, r itself will have a label that is all zero bits, its neighbors will have labels with a single 1-bit, etc. Then the Hamming distance between any two labels is the distance between the two vertices in the tree, so this labeling shows that T is a partial cube. Every hypercube graph is itself a partial cube, which can be labeled with all the different bitstrings of length equal to the dimension of the hypercube. More complex examples include the following: Many of the theorems about partial cubes are based directly or indirectly upon a certain binary relation defined on the edges of the graph. This relation, first described by Djoković (1973) and given an equivalent definition in terms of distances by Winkler (1984) , is denoted by Θ {\displaystyle \Theta } . Two edges e = { x , y } {\displaystyle e=\{x,y\}} and f = { u , v } {\displaystyle f=\{u,v\}} are defined to be in the relation Θ {\displaystyle \Theta } , written e Θ f {\displaystyle e{\mathrel {\Theta }}f} , if d ( x , u ) + d ( y , v ) ≠ d ( x , v ) + d ( y , u ) {\displaystyle d(x,u)+d(y,v)\not =d(x,v)+d(y,u)} . This relation is reflexive and symmetric , but in general it is not transitive . Winkler showed that a connected graph is a partial cube if and only if it is bipartite and the relation Θ {\displaystyle \Theta } is transitive. [ 8 ] In this case, it forms an equivalence relation and each equivalence class separates two connected subgraphs of the graph from each other. A Hamming labeling may be obtained by assigning one bit of each label to each of the equivalence classes of the Djoković–Winkler relation; in one of the two connected subgraphs separated by an equivalence class of edges, all of the vertices have a 0 in that position of their labels, and in the other connected subgraph all of the vertices have a 1 in the same position. Partial cubes can be recognized, and a Hamming labeling constructed, in O ( n 2 ) {\displaystyle O(n^{2})} time, where n {\displaystyle n} is the number of vertices in the graph. 
[ 9 ] Given a partial cube, it is straightforward to construct the equivalence classes of the Djoković–Winkler relation by doing a breadth first search from each vertex, in total time O ( n m ) {\displaystyle O(nm)} ; the O ( n 2 ) {\displaystyle O(n^{2})} -time recognition algorithm speeds this up by using bit-level parallelism to perform multiple breadth first searches in a single pass through the graph, and then applies a separate algorithm to verify that the result of this computation is a valid partial cube labeling. The isometric dimension of a partial cube is the minimum dimension of a hypercube onto which it may be isometrically embedded, and is equal to the number of equivalence classes of the Djoković–Winkler relation. For instance, the isometric dimension of an n {\displaystyle n} -vertex tree is its number of edges, n − 1 {\displaystyle n-1} . An embedding of a partial cube onto a hypercube of this dimension is unique, up to symmetries of the hypercube. [ 10 ] Every hypercube and therefore every partial cube can be embedded isometrically into an integer lattice . The lattice dimension of a graph is the minimum dimension of an integer lattice into which the graph can be isometrically embedded. The lattice dimension may be significantly smaller than the isometric dimension; for instance, for a tree it is half the number of leaves in the tree (rounded up to the nearest integer). The lattice dimension of any graph, and a lattice embedding of minimum dimension, may be found in polynomial time by an algorithm based on maximum matching in an auxiliary graph. [ 11 ] Other types of dimension of partial cubes have also been defined, based on embeddings into more specialized structures. [ 12 ] Isometric embeddings of graphs into hypercubes have an important application in chemical graph theory . A benzenoid graph is a graph consisting of all vertices and edges lying on and in the interior of a cycle in a hexagonal lattice . Such graphs are the molecular graphs of the benzenoid hydrocarbons , a large class of organic molecules. Every such graph is a partial cube. A Hamming labeling of such a graph can be used to compute the Wiener index of the corresponding molecule, which can then be used to predict certain of its chemical properties. [ 13 ] A different molecular structure formed from carbon, the diamond cubic , also forms partial cube graphs. [ 14 ]
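The tree construction described earlier (number the edges, then label each vertex by the edges on its path from the root) is easy to state in code. The sketch below builds such a Hamming labeling for an arbitrarily chosen example tree and prints the Hamming distances, which by the argument above equal the tree distances; the data structures and helper name are assumptions for illustration.

```python
from itertools import combinations

def tree_hamming_labels(adj, root):
    """Label each vertex of a tree with a list of bits.

    Edge i contributes bit i; vertex v's label has a 1 in position i
    exactly when edge i lies on the root-to-v path, so the Hamming
    distance between two labels equals the path distance in the tree.
    """
    edges = sorted({tuple(sorted((u, v))) for u in adj for v in adj[u]})
    index = {e: i for i, e in enumerate(edges)}
    labels, stack = {root: [0] * len(edges)}, [root]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in labels:
                lab = labels[u][:]
                lab[index[tuple(sorted((u, v)))]] = 1
                labels[v] = lab
                stack.append(v)
    return labels

# Example tree (a path plus one branch): edges 0-1, 1-2, 1-3.
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
labels = tree_hamming_labels(adj, root=0)
for u, v in combinations(labels, 2):
    hamming = sum(a != b for a, b in zip(labels[u], labels[v]))
    print(u, v, hamming)   # equals the tree distance between u and v
```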
https://en.wikipedia.org/wiki/Partial_cube
In electrochemistry , partial current is defined as the electric current associated with one ( anodic or cathodic ) half of the electrode reaction . Depending on the electrode half-reaction, one can distinguish two types of partial current: The cathodic and anodic partial currents are defined by IUPAC . [ 1 ] The partial current densities ( i c and i a ) are the ratios of the partial currents with respect to the electrode areas ( A c and A a ): The sum of the cathodic partial current density i c (positive) and the anodic partial current density i a (negative) gives the net current density i : [ 2 ] In the case of the cathodic partial current density being equal in magnitude to the anodic partial current density (for example, in a corrosion process [ 3 ] ), the net current density on the electrode is zero: [ 2 ] When more than one reaction occurs on an electrode simultaneously, the total electrode current can be expressed as: [ 1 ] where the index j {\displaystyle j} refers to the particular reactions.
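A minimal numerical sketch of these definitions, with made-up values and the sign convention used in the text (cathodic partial current density positive, anodic negative): the net current density is the sum of the two, and it vanishes when they balance, as on a freely corroding electrode.

```python
def net_current_density(i_c, i_a):
    """Net current density i = i_c + i_a (cathodic positive,
    anodic negative, following the convention above)."""
    return i_c + i_a

# Hypothetical values in A/m^2:
print(net_current_density(i_c=2.5, i_a=-1.0))  # 1.5, net cathodic current
print(net_current_density(i_c=2.5, i_a=-2.5))  # 0.0, as in a corrosion process
```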
https://en.wikipedia.org/wiki/Partial_current
In mathematics, a partial cyclic order is a ternary relation that generalizes a cyclic order in the same way that a partial order generalizes a linear order . Over a given set, a partial cyclic order is a ternary relation R {\displaystyle R} that is cyclic (if R ( a , b , c ) holds, then so does R ( b , c , a ) ), asymmetric (if R ( a , b , c ) holds, then R ( c , b , a ) does not), and transitive (if R ( a , b , c ) and R ( a , c , d ) hold, then so does R ( a , b , d ) ). The relationship between partial and total cyclic orders is more complex than the relationship between partial and total linear orders. To begin with, not every partial cyclic order can be extended to a total cyclic order. An example is the following relation on the first thirteen letters of the alphabet: { acd, bde, cef, dfg, egh, fha, gac, hcb } ∪ { abi, cij, bjk, ikl, jlm, kma, lab, mbc }. This relation is a partial cyclic order, but it cannot be extended with either abc or cba ; either attempt would result in a contradiction. [ 4 ] The above was a relatively mild example. One can also construct partial cyclic orders with higher-order obstructions such that, for example, any 15 triples can be added but the 16th cannot. In fact, deciding whether a partial cyclic order can be extended to a total cyclic order is NP-complete , since 3SAT can be reduced to it. This is in stark contrast with the recognition problem for linear orders, which can be solved in linear time . [ 5 ] [ 6 ]
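For concreteness, a total cyclic order on a finite set can be represented by the set of triples that appear in clockwise order around a circle. The sketch below generates those triples and checks the cyclicity and asymmetry axioms stated above; the representation and helper name are illustrative assumptions.

```python
from itertools import combinations

def cyclic_triples(circle):
    """All triples (a, b, c) occurring in clockwise order on `circle`."""
    pos = {x: i for i, x in enumerate(circle)}
    n = len(circle)
    triples = set()
    for a, b, c in combinations(circle, 3):
        for t in ((a, b, c), (a, c, b)):
            i, j, k = pos[t[0]], pos[t[1]], pos[t[2]]
            # t is clockwise if, walking clockwise from t[0],
            # we meet t[1] strictly before t[2].
            if (j - i) % n < (k - i) % n:
                # close under rotation (cyclicity)
                triples.update({t, (t[1], t[2], t[0]), (t[2], t[0], t[1])})
    return triples

R = cyclic_triples(["a", "b", "c", "d"])
print(("a", "b", "c") in R, ("c", "b", "a") in R)  # True False (asymmetry)
assert all((b, c, a) in R for (a, b, c) in R)      # cyclicity holds
```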
https://en.wikipedia.org/wiki/Partial_cyclic_order
In mathematics , a partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant (as opposed to the total derivative , in which all variables are allowed to vary). Partial derivatives are used in vector calculus and differential geometry . The partial derivative of a function f ( x , y , … ) {\displaystyle f(x,y,\dots )} with respect to the variable x {\displaystyle x} is variously denoted by It can be thought of as the rate of change of the function in the x {\displaystyle x} -direction. Sometimes, for z = f ( x , y , … ) {\displaystyle z=f(x,y,\ldots )} , the partial derivative of z {\displaystyle z} with respect to x {\displaystyle x} is denoted as ∂ z ∂ x . {\displaystyle {\tfrac {\partial z}{\partial x}}.} Since a partial derivative generally has the same arguments as the original function, its functional dependence is sometimes explicitly signified by the notation, such as in: f x ′ ( x , y , … ) , ∂ f ∂ x ( x , y , … ) . {\displaystyle f'_{x}(x,y,\ldots ),{\frac {\partial f}{\partial x}}(x,y,\ldots ).} The symbol used to denote partial derivatives is ∂ . One of the first known uses of this symbol in mathematics is by Marquis de Condorcet from 1770, [ 1 ] who used it for partial differences . The modern partial derivative notation was created by Adrien-Marie Legendre (1786), although he later abandoned it; Carl Gustav Jacob Jacobi reintroduced the symbol in 1841. [ 2 ] Like ordinary derivatives, the partial derivative is defined as a limit . Let U be an open subset of R n {\displaystyle \mathbb {R} ^{n}} and f : U → R {\displaystyle f:U\to \mathbb {R} } a function. The partial derivative of f at the point a = ( a 1 , … , a n ) ∈ U {\displaystyle \mathbf {a} =(a_{1},\ldots ,a_{n})\in U} with respect to the i -th variable x i is defined as ∂ ∂ x i f ( a ) = lim h → 0 f ( a 1 , … , a i − 1 , a i + h , a i + 1 … , a n ) − f ( a 1 , … , a i , … , a n ) h = lim h → 0 f ( a + h e i ) − f ( a ) h . {\displaystyle {\begin{aligned}{\frac {\partial }{\partial x_{i}}}f(\mathbf {a} )&=\lim _{h\to 0}{\frac {f(a_{1},\ldots ,a_{i-1},a_{i}+h,a_{i+1}\,\ldots ,a_{n})\ -f(a_{1},\ldots ,a_{i},\dots ,a_{n})}{h}}\\&=\lim _{h\to 0}{\frac {f(\mathbf {a} +h\mathbf {e_{i}} )-f(\mathbf {a} )}{h}}\,.\end{aligned}}} Where e i {\displaystyle \mathbf {e_{i}} } is the unit vector of i -th variable x i . Even if all partial derivatives ∂ f / ∂ x i ( a ) {\displaystyle \partial f/\partial x_{i}(a)} exist at a given point a , the function need not be continuous there. However, if all partial derivatives exist in a neighborhood of a and are continuous there, then f is totally differentiable in that neighborhood and the total derivative is continuous. In this case, it is said that f is a C 1 function. This can be used to generalize for vector valued functions, f : U → R m {\displaystyle f:U\to \mathbb {R} ^{m}} , by carefully using a componentwise argument. The partial derivative ∂ f ∂ x {\textstyle {\frac {\partial f}{\partial x}}} can be seen as another function defined on U and can again be partially differentiated. If the direction of derivative is not repeated, it is called a mixed partial derivative . If all mixed second order partial derivatives are continuous at a point (or on a set), f is termed a C 2 function at that point (or on that set); in this case, the partial derivatives can be exchanged by Clairaut's theorem : ∂ 2 f ∂ x i ∂ x j = ∂ 2 f ∂ x j ∂ x i . 
{\displaystyle {\frac {\partial ^{2}f}{\partial x_{i}\partial x_{j}}}={\frac {\partial ^{2}f}{\partial x_{j}\partial x_{i}}}.} For the following examples, let f be a function in x , y , and z . First-order partial derivatives: ∂ f ∂ x = f x ′ = ∂ x f . {\displaystyle {\frac {\partial f}{\partial x}}=f'_{x}=\partial _{x}f.} Second-order partial derivatives: ∂ 2 f ∂ x 2 = f x x ″ = ∂ x x f = ∂ x 2 f . {\displaystyle {\frac {\partial ^{2}f}{\partial x^{2}}}=f''_{xx}=\partial _{xx}f=\partial _{x}^{2}f.} Second-order mixed derivatives : ∂ 2 f ∂ y ∂ x = ∂ ∂ y ( ∂ f ∂ x ) = ( f x ′ ) y ′ = f x y ″ = ∂ y x f = ∂ y ∂ x f . {\displaystyle {\frac {\partial ^{2}f}{\partial y\,\partial x}}={\frac {\partial }{\partial y}}\left({\frac {\partial f}{\partial x}}\right)=(f'_{x})'_{y}=f''_{xy}=\partial _{yx}f=\partial _{y}\partial _{x}f.} Higher-order partial and mixed derivatives: ∂ i + j + k f ∂ x i ∂ y j ∂ z k = f ( i , j , k ) = ∂ x i ∂ y j ∂ z k f . {\displaystyle {\frac {\partial ^{i+j+k}f}{\partial x^{i}\partial y^{j}\partial z^{k}}}=f^{(i,j,k)}=\partial _{x}^{i}\partial _{y}^{j}\partial _{z}^{k}f.} When dealing with functions of multiple variables, some of these variables may be related to each other, thus it may be necessary to specify explicitly which variables are being held constant to avoid ambiguity. In fields such as statistical mechanics , the partial derivative of f with respect to x , holding y and z constant, is often expressed as ( ∂ f ∂ x ) y , z . {\displaystyle \left({\frac {\partial f}{\partial x}}\right)_{y,z}.} Conventionally, for clarity and simplicity of notation, the partial derivative function and the value of the function at a specific point are conflated by including the function arguments when the partial derivative symbol (Leibniz notation) is used. Thus, an expression like ∂ f ( x , y , z ) ∂ x {\displaystyle {\frac {\partial f(x,y,z)}{\partial x}}} is used for the function, while ∂ f ( u , v , w ) ∂ u {\displaystyle {\frac {\partial f(u,v,w)}{\partial u}}} might be used for the value of the function at the point ( x , y , z ) = ( u , v , w ) {\displaystyle (x,y,z)=(u,v,w)} . However, this convention breaks down when we want to evaluate the partial derivative at a point like ( x , y , z ) = ( 17 , u + v , v 2 ) {\displaystyle (x,y,z)=(17,u+v,v^{2})} . In such a case, evaluation of the function must be expressed in an unwieldy manner as ∂ f ( x , y , z ) ∂ x ( 17 , u + v , v 2 ) {\displaystyle {\frac {\partial f(x,y,z)}{\partial x}}(17,u+v,v^{2})} or ∂ f ( x , y , z ) ∂ x | ( x , y , z ) = ( 17 , u + v , v 2 ) {\displaystyle \left.{\frac {\partial f(x,y,z)}{\partial x}}\right|_{(x,y,z)=(17,u+v,v^{2})}} in order to use the Leibniz notation. Thus, in these cases, it may be preferable to use the Euler differential operator notation with D i {\displaystyle D_{i}} as the partial derivative symbol with respect to the i -th variable. For instance, one would write D 1 f ( 17 , u + v , v 2 ) {\displaystyle D_{1}f(17,u+v,v^{2})} for the example described above, while the expression D 1 f {\displaystyle D_{1}f} represents the partial derivative function with respect to the first variable. [ 3 ] For higher order partial derivatives, the partial derivative (function) of D i f {\displaystyle D_{i}f} with respect to the j -th variable is denoted D j ( D i f ) = D i , j f {\displaystyle D_{j}(D_{i}f)=D_{i,j}f} . 
That is, D j ∘ D i = D i , j {\displaystyle D_{j}\circ D_{i}=D_{i,j}} , so that the variables are listed in the order in which the derivatives are taken, and thus, in reverse order of how the composition of operators is usually notated. Of course, Clairaut's theorem implies that D i , j = D j , i {\displaystyle D_{i,j}=D_{j,i}} as long as comparatively mild regularity conditions on f are satisfied. An important example of a function of several variables is the case of a scalar-valued function f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} on a domain in Euclidean space R n {\displaystyle \mathbb {R} ^{n}} (e.g., on R 2 {\displaystyle \mathbb {R} ^{2}} or R 3 {\displaystyle \mathbb {R} ^{3}} ). In this case f has a partial derivative ∂ f / ∂ x j {\displaystyle \partial f/\partial x_{j}} with respect to each variable x j . At the point a , these partial derivatives define the vector ∇ f ( a ) = ( ∂ f ∂ x 1 ( a ) , … , ∂ f ∂ x n ( a ) ) . {\displaystyle \nabla f(a)=\left({\frac {\partial f}{\partial x_{1}}}(a),\ldots ,{\frac {\partial f}{\partial x_{n}}}(a)\right).} This vector is called the gradient of f at a . If f is differentiable at every point in some domain, then the gradient is a vector-valued function ∇ f which takes the point a to the vector ∇ f ( a ) . Consequently, the gradient produces a vector field . A common abuse of notation is to define the del operator ( ∇ ) as follows in three-dimensional Euclidean space R 3 {\displaystyle \mathbb {R} ^{3}} with unit vectors i ^ , j ^ , k ^ {\displaystyle {\hat {\mathbf {i} }},{\hat {\mathbf {j} }},{\hat {\mathbf {k} }}} : ∇ = [ ∂ ∂ x ] i ^ + [ ∂ ∂ y ] j ^ + [ ∂ ∂ z ] k ^ {\displaystyle \nabla =\left[{\frac {\partial }{\partial x}}\right]{\hat {\mathbf {i} }}+\left[{\frac {\partial }{\partial y}}\right]{\hat {\mathbf {j} }}+\left[{\frac {\partial }{\partial z}}\right]{\hat {\mathbf {k} }}} Or, more generally, for n -dimensional Euclidean space R n {\displaystyle \mathbb {R} ^{n}} with coordinates x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} and unit vectors e ^ 1 , … , e ^ n {\displaystyle {\hat {\mathbf {e} }}_{1},\ldots ,{\hat {\mathbf {e} }}_{n}} : ∇ = ∑ j = 1 n [ ∂ ∂ x j ] e ^ j = [ ∂ ∂ x 1 ] e ^ 1 + [ ∂ ∂ x 2 ] e ^ 2 + ⋯ + [ ∂ ∂ x n ] e ^ n {\displaystyle \nabla =\sum _{j=1}^{n}\left[{\frac {\partial }{\partial x_{j}}}\right]{\hat {\mathbf {e} }}_{j}=\left[{\frac {\partial }{\partial x_{1}}}\right]{\hat {\mathbf {e} }}_{1}+\left[{\frac {\partial }{\partial x_{2}}}\right]{\hat {\mathbf {e} }}_{2}+\dots +\left[{\frac {\partial }{\partial x_{n}}}\right]{\hat {\mathbf {e} }}_{n}} The directional derivative of a scalar function f ( x ) = f ( x 1 , x 2 , … , x n ) {\displaystyle f(\mathbf {x} )=f(x_{1},x_{2},\ldots ,x_{n})} along a vector v = ( v 1 , … , v n ) {\displaystyle \mathbf {v} =(v_{1},\ldots ,v_{n})} is the function ∇ v f {\displaystyle \nabla _{\mathbf {v} }{f}} defined by the limit [ 4 ] ∇ v f ( x ) = lim h → 0 f ( x + h v ) − f ( x ) h = d d t f ( x + t v ) | t = 0 . {\displaystyle \nabla _{\mathbf {v} }{f}(\mathbf {x} )=\lim _{h\to 0}{\frac {f(\mathbf {x} +h\mathbf {v} )-f(\mathbf {x} )}{h}}=\left.{\frac {\mathrm {d} }{\mathrm {d} t}}f(\mathbf {x} +t\mathbf {v} )\right|_{t=0}.} Suppose that f is a function of more than one variable. For instance, z = f ( x , y ) = x 2 + x y + y 2 . {\displaystyle z=f(x,y)=x^{2}+xy+y^{2}.} The graph of this function defines a surface in Euclidean space . To every point on this surface, there are an infinite number of tangent lines . 
Partial differentiation is the act of choosing one of these lines and finding its slope . Usually, the lines of most interest are those that are parallel to the xz -plane, and those that are parallel to the yz -plane (which result from holding either y or x constant, respectively). To find the slope of the line tangent to the function at P (1, 1) and parallel to the xz -plane, we treat y as a constant. The graph and this plane are shown on the right. Below, we see how the function looks on the plane y = 1 . By finding the derivative of the equation while assuming that y is a constant, we find that the slope of f at the point ( x , y ) is: ∂ z ∂ x = 2 x + y . {\displaystyle {\frac {\partial z}{\partial x}}=2x+y.} So at (1, 1) , by substitution, the slope is 3 . Therefore, ∂ z ∂ x = 3 {\displaystyle {\frac {\partial z}{\partial x}}=3} at the point (1, 1) . That is, the partial derivative of z with respect to x at (1, 1) is 3 , as shown in the graph. The function f can be reinterpreted as a family of functions of one variable indexed by the other variables: f ( x , y ) = f y ( x ) = x 2 + x y + y 2 . {\displaystyle f(x,y)=f_{y}(x)=x^{2}+xy+y^{2}.} In other words, every value of y defines a function, denoted f y , which is a function of one variable x . [ 6 ] That is, f y ( x ) = x 2 + x y + y 2 . {\displaystyle f_{y}(x)=x^{2}+xy+y^{2}.} In this section the subscript notation f y denotes a function contingent on a fixed value of y , and not a partial derivative. Once a value of y is chosen, say a , then f ( x , y ) determines a function f a which traces a curve x 2 + ax + a 2 on the xz -plane: f a ( x ) = x 2 + a x + a 2 . {\displaystyle f_{a}(x)=x^{2}+ax+a^{2}.} In this expression, a is a constant , not a variable , so f a is a function of only one real variable, that being x . Consequently, the definition of the derivative for a function of one variable applies: f a ′ ( x ) = 2 x + a . {\displaystyle f_{a}'(x)=2x+a.} The above procedure can be performed for any choice of a . Assembling the derivatives together into a function gives a function which describes the variation of f in the x direction: ∂ f ∂ x ( x , y ) = 2 x + y . {\displaystyle {\frac {\partial f}{\partial x}}(x,y)=2x+y.} This is the partial derivative of f with respect to x . Here ' ∂ ' is a rounded 'd' called the partial derivative symbol ; to distinguish it from the letter 'd', ' ∂ ' is sometimes pronounced "partial". Second and higher order partial derivatives are defined analogously to the higher order derivatives of univariate functions. For the function f ( x , y , . . . ) {\displaystyle f(x,y,...)} the "own" second partial derivative with respect to x is simply the partial derivative of the partial derivative (both with respect to x ): [ 7 ] : 316–318 ∂ 2 f ∂ x 2 ≡ ∂ ∂ f / ∂ x ∂ x ≡ ∂ f x ∂ x ≡ f x x . {\displaystyle {\frac {\partial ^{2}f}{\partial x^{2}}}\equiv \partial {\frac {\partial f/\partial x}{\partial x}}\equiv {\frac {\partial f_{x}}{\partial x}}\equiv f_{xx}.} The cross partial derivative with respect to x and y is obtained by taking the partial derivative of f with respect to x , and then taking the partial derivative of the result with respect to y , to obtain ∂ 2 f ∂ y ∂ x ≡ ∂ ∂ f / ∂ x ∂ y ≡ ∂ f x ∂ y ≡ f x y . 
{\displaystyle {\frac {\partial ^{2}f}{\partial y\,\partial x}}\equiv \partial {\frac {\partial f/\partial x}{\partial y}}\equiv {\frac {\partial f_{x}}{\partial y}}\equiv f_{xy}.} Schwarz's theorem states that if the second derivatives are continuous, the expression for the cross partial derivative is unaffected by which variable the partial derivative is taken with respect to first and which is taken second. That is, ∂ 2 f ∂ x ∂ y = ∂ 2 f ∂ y ∂ x {\displaystyle {\frac {\partial ^{2}f}{\partial x\,\partial y}}={\frac {\partial ^{2}f}{\partial y\,\partial x}}} or equivalently f y x = f x y . {\displaystyle f_{yx}=f_{xy}.} Own and cross partial derivatives appear in the Hessian matrix which is used in the second order conditions in optimization problems. The higher order partial derivatives can be obtained by successive differentiation There is a concept for partial derivatives that is analogous to antiderivatives for regular derivatives. Given a partial derivative, it allows for the partial recovery of the original function. Consider the example of ∂ z ∂ x = 2 x + y . {\displaystyle {\frac {\partial z}{\partial x}}=2x+y.} The so-called partial integral can be taken with respect to x (treating y as constant, in a similar manner to partial differentiation): z = ∫ ∂ z ∂ x d x = x 2 + x y + g ( y ) . {\displaystyle z=\int {\frac {\partial z}{\partial x}}\,dx=x^{2}+xy+g(y).} Here, the constant of integration is no longer a constant, but instead a function of all the variables of the original function except x . The reason for this is that all the other variables are treated as constant when taking the partial derivative, so any function which does not involve x will disappear when taking the partial derivative, and we have to account for this when we take the antiderivative. The most general way to represent this is to have the constant represent an unknown function of all the other variables. Thus the set of functions x 2 + x y + g ( y ) {\displaystyle x^{2}+xy+g(y)} , where g is any one-argument function, represents the entire set of functions in variables x , y that could have produced the x -partial derivative 2 x + y {\displaystyle 2x+y} . If all the partial derivatives of a function are known (for example, with the gradient ), then the antiderivatives can be matched via the above process to reconstruct the original function up to a constant. Unlike in the single-variable case, however, not every set of functions can be the set of all (first) partial derivatives of a single function. In other words, not every vector field is conservative . The volume V of a cone depends on the cone's height h and its radius r according to the formula V ( r , h ) = π r 2 h 3 . {\displaystyle V(r,h)={\frac {\pi r^{2}h}{3}}.} The partial derivative of V with respect to r is ∂ V ∂ r = 2 π r h 3 , {\displaystyle {\frac {\partial V}{\partial r}}={\frac {2\pi rh}{3}},} which represents the rate with which a cone's volume changes if its radius is varied and its height is kept constant. The partial derivative with respect to h equals 1 3 π r 2 {\textstyle {\frac {1}{3}}\pi r^{2}} , which represents the rate with which the volume changes if its height is varied and its radius is kept constant. By contrast, the total derivative of V with respect to r and h are respectively d V d r = 2 π r h 3 ⏞ ∂ V ∂ r + π r 2 3 ⏞ ∂ V ∂ h d h d r , d V d h = π r 2 3 ⏞ ∂ V ∂ h + 2 π r h 3 ⏞ ∂ V ∂ r d r d h . 
{\displaystyle {\begin{aligned}{\frac {dV}{dr}}&=\overbrace {\frac {2\pi rh}{3}} ^{\frac {\partial V}{\partial r}}+\overbrace {\frac {\pi r^{2}}{3}} ^{\frac {\partial V}{\partial h}}{\frac {dh}{dr}}\,,\\{\frac {dV}{dh}}&=\overbrace {\frac {\pi r^{2}}{3}} ^{\frac {\partial V}{\partial h}}+\overbrace {\frac {2\pi rh}{3}} ^{\frac {\partial V}{\partial r}}{\frac {dr}{dh}}\,.\end{aligned}}} The difference between the total and partial derivative is the elimination of indirect dependencies between variables in partial derivatives. If (for some arbitrary reason) the cone's proportions have to stay the same, and the height and radius are in a fixed ratio k , k = h r = d h d r . {\displaystyle k={\frac {h}{r}}={\frac {dh}{dr}}.} This gives the total derivative with respect to r , d V d r = 2 π r h 3 + π r 2 3 k , {\displaystyle {\frac {dV}{dr}}={\frac {2\pi rh}{3}}+{\frac {\pi r^{2}}{3}}k\,,} which simplifies to d V d r = k π r 2 , {\displaystyle {\frac {dV}{dr}}=k\pi r^{2},} Similarly, the total derivative with respect to h is d V d h = π r 2 . {\displaystyle {\frac {dV}{dh}}=\pi r^{2}.} The total derivative with respect to both r and h of the volume intended as scalar function of these two variables is given by the gradient vector ∇ V = ( ∂ V ∂ r , ∂ V ∂ h ) = ( 2 3 π r h , 1 3 π r 2 ) . {\displaystyle \nabla V=\left({\frac {\partial V}{\partial r}},{\frac {\partial V}{\partial h}}\right)=\left({\frac {2}{3}}\pi rh,{\frac {1}{3}}\pi r^{2}\right).} Partial derivatives appear in any calculus-based optimization problem with more than one choice variable. For example, in economics a firm may wish to maximize profit π( x , y ) with respect to the choice of the quantities x and y of two different types of output. The first order conditions for this optimization are π x = 0 = π y . Since both partial derivatives π x and π y will generally themselves be functions of both arguments x and y , these two first order conditions form a system of two equations in two unknowns . Partial derivatives appear in thermodynamic equations like Gibbs-Duhem equation , in quantum mechanics as in Schrödinger wave equation , as well as in other equations from mathematical physics . 
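Returning to the cone example above, the distinction between partial and total derivatives lends itself to a quick numerical check. The sketch below compares ∂V/∂r (h held fixed) with dV/dr under the constraint h = kr, using central finite differences; the step size and the value k = 2 are arbitrary choices for illustration.

```python
import math

def V(r, h):
    """Volume of a cone: V = pi * r^2 * h / 3."""
    return math.pi * r**2 * h / 3.0

r0, k, eps = 1.5, 2.0, 1e-6
h0 = k * r0

# Partial derivative: vary r, hold h fixed.  Exact value: 2*pi*r*h/3.
dV_partial = (V(r0 + eps, h0) - V(r0 - eps, h0)) / (2 * eps)

# Total derivative along the constraint h = k*r.  Exact value: k*pi*r^2.
dV_total = (V(r0 + eps, k * (r0 + eps)) - V(r0 - eps, k * (r0 - eps))) / (2 * eps)

print(dV_partial, 2 * math.pi * r0 * h0 / 3)   # agree
print(dV_total, k * math.pi * r0**2)           # agree, and differ from the partial
```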
The variables being held constant in partial derivatives here can be ratios of simple variables like mole fractions x i in the following example involving the Gibbs energies in a ternary mixture system: G 2 ¯ = G + ( 1 − x 2 ) ( ∂ G ∂ x 2 ) x 1 x 3 {\displaystyle {\bar {G_{2}}}=G+(1-x_{2})\left({\frac {\partial G}{\partial x_{2}}}\right)_{\frac {x_{1}}{x_{3}}}} Express mole fractions of a component as functions of other components' mole fraction and binary mole ratios: x 1 = 1 − x 2 1 + x 3 x 1 x 3 = 1 − x 2 1 + x 1 x 3 {\textstyle {\begin{aligned}x_{1}&={\frac {1-x_{2}}{1+{\frac {x_{3}}{x_{1}}}}}\\x_{3}&={\frac {1-x_{2}}{1+{\frac {x_{1}}{x_{3}}}}}\end{aligned}}} Differential quotients can be formed at constant ratios like those above: ( ∂ x 1 ∂ x 2 ) x 1 x 3 = − x 1 1 − x 2 ( ∂ x 3 ∂ x 2 ) x 1 x 3 = − x 3 1 − x 2 {\displaystyle {\begin{aligned}\left({\frac {\partial x_{1}}{\partial x_{2}}}\right)_{\frac {x_{1}}{x_{3}}}&=-{\frac {x_{1}}{1-x_{2}}}\\\left({\frac {\partial x_{3}}{\partial x_{2}}}\right)_{\frac {x_{1}}{x_{3}}}&=-{\frac {x_{3}}{1-x_{2}}}\end{aligned}}} Ratios X, Y, Z of mole fractions can be written for ternary and multicomponent systems: X = x 3 x 1 + x 3 Y = x 3 x 2 + x 3 Z = x 2 x 1 + x 2 {\displaystyle {\begin{aligned}X&={\frac {x_{3}}{x_{1}+x_{3}}}\\Y&={\frac {x_{3}}{x_{2}+x_{3}}}\\Z&={\frac {x_{2}}{x_{1}+x_{2}}}\end{aligned}}} which can be used for solving partial differential equations like: ( ∂ μ 2 ∂ n 1 ) n 2 , n 3 = ( ∂ μ 1 ∂ n 2 ) n 1 , n 3 {\displaystyle \left({\frac {\partial \mu _{2}}{\partial n_{1}}}\right)_{n_{2},n_{3}}=\left({\frac {\partial \mu _{1}}{\partial n_{2}}}\right)_{n_{1},n_{3}}} This equality can be rearranged to have differential quotient of mole fractions on one side. Partial derivatives are key to target-aware image resizing algorithms. Widely known as seam carving , these algorithms require each pixel in an image to be assigned a numerical 'energy' to describe their dissimilarity against orthogonal adjacent pixels. The algorithm then progressively removes rows or columns with the lowest energy. The formula established to determine a pixel's energy (magnitude of gradient at a pixel) depends heavily on the constructs of partial derivatives. Partial derivatives play a prominent role in economics , in which most functions describing economic behaviour posit that the behaviour depends on more than one variable. For example, a societal consumption function may describe the amount spent on consumer goods as depending on both income and wealth; the marginal propensity to consume is then the partial derivative of the consumption function with respect to income.
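As a sketch of the seam-carving energy mentioned above, the gradient magnitude of an image can be approximated with finite differences. The random test image and the use of numpy.gradient are assumptions for illustration, not a specific published implementation.

```python
import numpy as np

def energy(image):
    """Gradient-magnitude energy per pixel:
    sqrt((dI/dx)^2 + (dI/dy)^2), via central finite differences."""
    gy, gx = np.gradient(image.astype(float))
    return np.sqrt(gx**2 + gy**2)

img = np.random.default_rng(0).integers(0, 256, size=(6, 8))
e = energy(img)
# Seam carving would repeatedly remove the connected path of pixels
# (one per row or column) with the lowest total energy.
print(e.shape, e.min(), e.max())
```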
https://en.wikipedia.org/wiki/Partial_derivative
In mathematics, a partial differential algebraic equation (PDAE) set is an incomplete system of partial differential equations that is closed with a set of algebraic equations . A general PDAE is defined as: where: The relationship between a PDAE and a partial differential equation (PDE) is analogous to the relationship between an ordinary differential equation (ODE) and a differential algebraic equation (DAE). PDAEs of this general form are challenging to solve. Simplified forms are studied in more detail in the literature. [ 1 ] [ 2 ] [ 3 ] Even as recently as 2000, the term "PDAE" was treated as unfamiliar by those in related fields. [ 4 ] Semi-discretization is a common method for solving PDAEs whose independent variables are those of time and space , and has been used for decades. [ 5 ] [ 6 ] This method involves removing the spatial variables using a discretization method, such as the finite volume method , and incorporating the resulting linear equations as part of the algebraic relations. This reduces the system to a DAE , for which conventional solution methods can be employed.
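To illustrate semi-discretization, the sketch below reduces the 1-D heat equation u_t = u_xx with zero Dirichlet boundary conditions to a system of ODEs by finite differences in space, then steps it in time. The choice of equation, grid size, and explicit Euler stepping are assumptions made for simplicity; a genuine PDAE would additionally carry algebraic constraints into the resulting DAE, of which the fixed boundary values here are the simplest instance.

```python
import numpy as np

# Semi-discretize u_t = u_xx on [0, 1] with u(0) = u(1) = 0:
# interior values u_i(t) satisfy du_i/dt = (u_{i-1} - 2*u_i + u_{i+1}) / dx^2.
n = 21
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u = np.sin(np.pi * x)          # initial condition
dt = 0.4 * dx**2               # below the explicit-Euler stability limit dx^2/2

for _ in range(200):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
    u = u + dt * lap           # boundary entries stay 0 (the algebraic relations)

# The exact solution decays like exp(-pi^2 t); compare at t = 200 * dt.
t = 200 * dt
print(np.max(np.abs(u - np.exp(-np.pi**2 * t) * np.sin(np.pi * x))))
```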
https://en.wikipedia.org/wiki/Partial_differential_algebraic_equation
In mathematics , a partial differential equation ( PDE ) is an equation which involves a multivariable function and one or more of its partial derivatives . The function is often thought of as an "unknown" that solves the equation, similar to how x is thought of as an unknown number solving, e.g., an algebraic equation like x 2 − 3 x + 2 = 0 . However, it is usually impossible to write down explicit formulae for solutions of partial differential equations. There is correspondingly a vast amount of modern mathematical and scientific research on methods to numerically approximate solutions of certain partial differential equations using computers. Partial differential equations also occupy a large sector of pure mathematical research , in which the usual questions are, broadly speaking, on the identification of general qualitative features of solutions of various partial differential equations, such as existence, uniqueness, regularity and stability. [ 1 ] Among the many open questions are the existence and smoothness of solutions to the Navier–Stokes equations , named as one of the Millennium Prize Problems in 2000. Partial differential equations are ubiquitous in mathematically oriented scientific fields, such as physics and engineering . For instance, they are foundational in the modern scientific understanding of sound , heat , diffusion , electrostatics , electrodynamics , thermodynamics , fluid dynamics , elasticity , general relativity , and quantum mechanics ( Schrödinger equation , Pauli equation etc.). They also arise from many purely mathematical considerations, such as differential geometry and the calculus of variations ; among other notable applications, they are the fundamental tool in the proof of the Poincaré conjecture from geometric topology . Partly due to this variety of sources, there is a wide spectrum of different types of partial differential equations, where the meaning of a solution depends on the context of the problem, and methods have been developed for dealing with many of the individual equations which arise. As such, it is usually acknowledged that there is no "universal theory" of partial differential equations, with specialist knowledge being somewhat divided between several essentially distinct subfields. [ 2 ] Ordinary differential equations can be viewed as a subclass of partial differential equations, corresponding to functions of a single variable . Stochastic partial differential equations and nonlocal equations are, as of 2020, particularly widely studied extensions of the "PDE" notion. More classical topics, on which there is still much active research, include elliptic and parabolic partial differential equations, fluid mechanics , Boltzmann equations , and dispersive partial differential equations . [ 3 ] A function u ( x , y , z ) of three variables is " harmonic " or "a solution of the Laplace equation " if it satisfies the condition ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 + ∂ 2 u ∂ z 2 = 0. {\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}=0.} Such functions were widely studied in the 19th century due to their relevance for classical mechanics , for example the equilibrium temperature distribution of a homogeneous solid is a harmonic function. If explicitly given a function, it is usually a matter of straightforward computation to check whether or not it is harmonic. 
For instance u ( x , y , z ) = 1 x 2 − 2 x + y 2 + z 2 + 1 {\displaystyle u(x,y,z)={\frac {1}{\sqrt {x^{2}-2x+y^{2}+z^{2}+1}}}} and u ( x , y , z ) = 2 x 2 − y 2 − z 2 {\displaystyle u(x,y,z)=2x^{2}-y^{2}-z^{2}} are both harmonic while u ( x , y , z ) = sin ⁡ ( x y ) + z {\displaystyle u(x,y,z)=\sin(xy)+z} is not. It may be surprising that the two examples of harmonic functions are of such strikingly different form. This is a reflection of the fact that they are not , in any immediate way, special cases of a "general solution formula" of the Laplace equation. This is in striking contrast to the case of ordinary differential equations (ODEs) roughly similar to the Laplace equation, with the aim of many introductory textbooks being to find algorithms leading to general solution formulas. For the Laplace equation, as for a large number of partial differential equations, such solution formulas fail to exist. The nature of this failure can be seen more concretely in the case of the following PDE: for a function v ( x , y ) of two variables, consider the equation ∂ 2 v ∂ x ∂ y = 0. {\displaystyle {\frac {\partial ^{2}v}{\partial x\partial y}}=0.} It can be directly checked that any function v of the form v ( x , y ) = f ( x ) + g ( y ) , for any single-variable functions f and g whatsoever, will satisfy this condition. This is far beyond the choices available in ODE solution formulas, which typically allow the free choice of some numbers. In the study of PDEs, one generally has the free choice of functions. The nature of this choice varies from PDE to PDE. To understand it for any given equation, existence and uniqueness theorems are usually important organizational principles. In many introductory textbooks, the role of existence and uniqueness theorems for ODE can be somewhat opaque; the existence half is usually unnecessary, since one can directly check any proposed solution formula, while the uniqueness half is often only present in the background in order to ensure that a proposed solution formula is as general as possible. By contrast, for PDE, existence and uniqueness theorems are often the only means by which one can navigate through the plethora of different solutions at hand. For this reason, they are also fundamental when carrying out a purely numerical simulation, as one must have an understanding of what data is to be prescribed by the user and what is to be left to the computer to calculate. To discuss such existence and uniqueness theorems, it is necessary to be precise about the domain of the "unknown function". Otherwise, speaking only in terms such as "a function of two variables", it is impossible to meaningfully formulate the results. That is, the domain of the unknown function must be regarded as part of the structure of the PDE itself. The following provides two classic examples of such existence and uniqueness theorems. Even though the two PDE in question are so similar, there is a striking difference in behavior: for the first PDE, one has the free prescription of a single function, while for the second PDE, one has the free prescription of two functions. Even more phenomena are possible. For instance, the following PDE , arising naturally in the field of differential geometry , illustrates an example where there is a simple and completely explicit solution formula, but with the free choice of only three numbers and not even one function. In contrast to the earlier examples, this PDE is nonlinear , owing to the square roots and the squares. 
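Checking harmonicity really is a straightforward computation, as noted above; a short sympy sketch verifying the three examples given earlier:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def is_harmonic(u):
    """True if the Laplacian of u simplifies to zero."""
    lap = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)
    return sp.simplify(lap) == 0

print(is_harmonic(1 / sp.sqrt(x**2 - 2*x + y**2 + z**2 + 1)))  # True
print(is_harmonic(2*x**2 - y**2 - z**2))                       # True
print(is_harmonic(sp.sin(x*y) + z))                            # False
```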
A linear PDE is one such that, if it is homogeneous, the sum of any two solutions is also a solution, and any constant multiple of any solution is also a solution. A partial differential equation is an equation that involves an unknown function of n ≥ 2 {\displaystyle n\geq 2} variables and (some of) its partial derivatives. [ 4 ] That is, for the unknown function u : U → R , {\displaystyle u:U\rightarrow \mathbb {R} ,} of variables x = ( x 1 , … , x n ) {\displaystyle x=(x_{1},\dots ,x_{n})} belonging to the open subset U {\displaystyle U} of R n {\displaystyle \mathbb {R} ^{n}} , the k t h {\displaystyle k^{th}} -order partial differential equation is defined as F [ D k u , D k − 1 u , … , D u , u , x ] = 0 , {\displaystyle F[D^{k}u,D^{k-1}u,\dots ,Du,u,x]=0,} where F : R n k × R n k − 1 ⋯ × R n × R × U → R , {\displaystyle F:\mathbb {R} ^{n^{k}}\times \mathbb {R} ^{n^{k-1}}\dots \times \mathbb {R} ^{n}\times \mathbb {R} \times U\rightarrow \mathbb {R} ,} and D {\displaystyle D} is the partial derivative operator. When writing PDEs, it is common to denote partial derivatives using subscripts. For example: u x = ∂ u ∂ x , u x x = ∂ 2 u ∂ x 2 , u x y = ∂ 2 u ∂ y ∂ x = ∂ ∂ y ( ∂ u ∂ x ) . {\displaystyle u_{x}={\frac {\partial u}{\partial x}},\quad u_{xx}={\frac {\partial ^{2}u}{\partial x^{2}}},\quad u_{xy}={\frac {\partial ^{2}u}{\partial y\,\partial x}}={\frac {\partial }{\partial y}}\left({\frac {\partial u}{\partial x}}\right).} In the general situation that u is a function of n variables, then u i denotes the first partial derivative relative to the i -th input, u ij denotes the second partial derivative relative to the i -th and j -th inputs, and so on. The Greek letter Δ denotes the Laplace operator ; if u is a function of n variables, then Δ u = u 11 + u 22 + ⋯ + u n n . {\displaystyle \Delta u=u_{11}+u_{22}+\cdots +u_{nn}.} In the physics literature, the Laplace operator is often denoted by ∇ 2 ; in the mathematics literature, ∇ 2 u may also denote the Hessian matrix of u . A PDE is called linear if it is linear in the unknown and its derivatives. For example, for a function u of x and y , a second order linear PDE is of the form a 1 ( x , y ) u x x + a 2 ( x , y ) u x y + a 3 ( x , y ) u y x + a 4 ( x , y ) u y y + a 5 ( x , y ) u x + a 6 ( x , y ) u y + a 7 ( x , y ) u = f ( x , y ) {\displaystyle a_{1}(x,y)u_{xx}+a_{2}(x,y)u_{xy}+a_{3}(x,y)u_{yx}+a_{4}(x,y)u_{yy}+a_{5}(x,y)u_{x}+a_{6}(x,y)u_{y}+a_{7}(x,y)u=f(x,y)} where a i and f are functions of the independent variables x and y only. (Often the mixed-partial derivatives u xy and u yx will be equated, but this is not required for the discussion of linearity.) If the a i are constants (independent of x and y ) then the PDE is called linear with constant coefficients . If f is zero everywhere then the linear PDE is homogeneous , otherwise it is inhomogeneous . (This is separate from asymptotic homogenization , which studies the effects of high-frequency oscillations in the coefficients upon solutions to PDEs.) Nearest to linear PDEs are semi-linear PDEs, where only the highest order derivatives appear as linear terms, with coefficients that are functions of the independent variables. The lower order derivatives and the unknown function may appear arbitrarily. 
For example, a general second order semi-linear PDE in two variables is a 1 ( x , y ) u x x + a 2 ( x , y ) u x y + a 3 ( x , y ) u y x + a 4 ( x , y ) u y y + f ( u x , u y , u , x , y ) = 0 {\displaystyle a_{1}(x,y)u_{xx}+a_{2}(x,y)u_{xy}+a_{3}(x,y)u_{yx}+a_{4}(x,y)u_{yy}+f(u_{x},u_{y},u,x,y)=0} In a quasilinear PDE the highest order derivatives likewise appear only as linear terms, but with coefficients possibly functions of the unknown and lower-order derivatives: a 1 ( u x , u y , u , x , y ) u x x + a 2 ( u x , u y , u , x , y ) u x y + a 3 ( u x , u y , u , x , y ) u y x + a 4 ( u x , u y , u , x , y ) u y y + f ( u x , u y , u , x , y ) = 0 {\displaystyle a_{1}(u_{x},u_{y},u,x,y)u_{xx}+a_{2}(u_{x},u_{y},u,x,y)u_{xy}+a_{3}(u_{x},u_{y},u,x,y)u_{yx}+a_{4}(u_{x},u_{y},u,x,y)u_{yy}+f(u_{x},u_{y},u,x,y)=0} Many of the fundamental PDEs in physics are quasilinear, such as the Einstein equations of general relativity and the Navier–Stokes equations describing fluid motion. A PDE without any linearity properties is called fully nonlinear , and possesses nonlinearities on one or more of the highest-order derivatives. An example is the Monge–Ampère equation , which arises in differential geometry . [ 5 ] The elliptic/parabolic/hyperbolic classification provides a guide to appropriate initial- and boundary conditions and to the smoothness of the solutions. Assuming u xy = u yx , the general linear second-order PDE in two independent variables has the form A u x x + 2 B u x y + C u y y + ⋯ (lower order terms) = 0 , {\displaystyle Au_{xx}+2Bu_{xy}+Cu_{yy}+\cdots {\mbox{(lower order terms)}}=0,} where the coefficients A , B , C ... may depend upon x and y . If A 2 + B 2 + C 2 > 0 over a region of the xy -plane, the PDE is second-order in that region. This form is analogous to the equation for a conic section: A x 2 + 2 B x y + C y 2 + ⋯ = 0. {\displaystyle Ax^{2}+2Bxy+Cy^{2}+\cdots =0.} More precisely, replacing ∂ x by X , and likewise for other variables (formally this is done by a Fourier transform ), converts a constant-coefficient PDE into a polynomial of the same degree, with the terms of the highest degree (a homogeneous polynomial , here a quadratic form ) being most significant for the classification. Just as one classifies conic sections and quadratic forms into parabolic, hyperbolic, and elliptic based on the discriminant B 2 − 4 AC , the same can be done for a second-order PDE at a given point. However, the discriminant in a PDE is given by B 2 − AC due to the convention of the xy term being 2 B rather than B ; formally, the discriminant (of the associated quadratic form) is (2 B ) 2 − 4 AC = 4( B 2 − AC ) , with the factor of 4 dropped for simplicity. If there are n independent variables x 1 , x 2 , …, x n , a general linear partial differential equation of second order has the form L u = ∑ i = 1 n ∑ j = 1 n a i , j ∂ 2 u ∂ x i ∂ x j + lower-order terms = 0. {\displaystyle Lu=\sum _{i=1}^{n}\sum _{j=1}^{n}a_{i,j}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}\quad +{\text{lower-order terms}}=0.} The classification depends upon the signature of the eigenvalues of the coefficient matrix a i , j . The theory of elliptic, parabolic, and hyperbolic equations has been studied for centuries, largely centered around or based upon the standard examples of the Laplace equation , the heat equation , and the wave equation . However, the classification only depends on linearity of the second-order terms and is therefore applicable to semi- and quasilinear PDEs as well.
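In code, the pointwise classification reduces to the sign of B² − AC. A minimal sketch (our own helper, not a library routine), using the text's convention that the u_xy coefficient is written 2B:

    def classify(A, B, C):
        # The sign of the discriminant B^2 - A*C decides the type at a point.
        d = B*B - A*C
        if d < 0:
            return "elliptic"
        elif d == 0:
            return "parabolic"
        return "hyperbolic"

    print(classify(1, 0, 1))    # Laplace equation u_xx + u_yy = 0: elliptic
    print(classify(1, 0, 0))    # heat equation u_xx - u_t = 0: parabolic
    print(classify(-4, 0, 1))   # wave equation u_tt - 4 u_xx = 0 (c = 2): hyperbolic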
The basic types also extend to hybrids such as the Euler–Tricomi equation ; varying from elliptic to hyperbolic for different regions of the domain, as well as higher-order PDEs, but such knowledge is more specialized. The classification of partial differential equations can be extended to systems of first-order equations, where the unknown u is now a vector with m components, and the coefficient matrices A ν are m by m matrices for ν = 1, 2, …, n . The partial differential equation takes the form L u = ∑ ν = 1 n A ν ∂ u ∂ x ν + B = 0 , {\displaystyle Lu=\sum _{\nu =1}^{n}A_{\nu }{\frac {\partial u}{\partial x_{\nu }}}+B=0,} where the coefficient matrices A ν and the vector B may depend upon x and u . If a hypersurface S is given in the implicit form φ ( x 1 , x 2 , … , x n ) = 0 , {\displaystyle \varphi (x_{1},x_{2},\ldots ,x_{n})=0,} where φ has a non-zero gradient, then S is a characteristic surface for the operator L at a given point if the characteristic form vanishes: Q ( ∂ φ ∂ x 1 , … , ∂ φ ∂ x n ) = det [ ∑ ν = 1 n A ν ∂ φ ∂ x ν ] = 0. {\displaystyle Q\left({\frac {\partial \varphi }{\partial x_{1}}},\ldots ,{\frac {\partial \varphi }{\partial x_{n}}}\right)=\det \left[\sum _{\nu =1}^{n}A_{\nu }{\frac {\partial \varphi }{\partial x_{\nu }}}\right]=0.} The geometric interpretation of this condition is as follows: if data for u are prescribed on the surface S , then it may be possible to determine the normal derivative of u on S from the differential equation. If the data on S and the differential equation determine the normal derivative of u on S , then S is non-characteristic. If the data on S and the differential equation do not determine the normal derivative of u on S , then the surface is characteristic , and the differential equation restricts the data on S : the differential equation is internal to S . Linear PDEs can be reduced to systems of ordinary differential equations by the important technique of separation of variables. This technique rests on a feature of solutions to differential equations: if one can find any solution that solves the equation and satisfies the boundary conditions, then it is the solution (this also applies to ODEs). We assume as an ansatz that the dependence of a solution on the parameters space and time can be written as a product of terms that each depend on a single parameter, and then see if this can be made to solve the problem. [ 8 ] In the method of separation of variables, one reduces a PDE to a PDE in fewer variables, which is an ordinary differential equation if in one variable – these are in turn easier to solve. This is possible for simple PDEs, which are called separable partial differential equations , and the domain is generally a rectangle (a product of intervals). Separable PDEs correspond to diagonal matrices – thinking of "the value for fixed x " as a coordinate, each coordinate can be understood separately. This generalizes to the method of characteristics , and is also used in integral transforms . The characteristic surface in n = 2 - dimensional space is called a characteristic curve . [ 9 ] In special cases, one can find characteristic curves on which the first-order PDE reduces to an ODE – changing coordinates in the domain to straighten these curves allows separation of variables, and is called the method of characteristics . More generally, applying the method to first-order PDEs in higher dimensions, one may find characteristic surfaces. 
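A quick symbolic check illustrates the product ansatz: for the heat equation u_t = u_xx, taking u(x, t) = X(x)T(t) with X(x) = sin(kx) forces T(t) = e^(−k²t), and the resulting product solves the equation. A sketch assuming SymPy (k is an arbitrary separation constant):

    import sympy as sp

    x, t, k = sp.symbols('x t k')
    u = sp.exp(-k**2 * t) * sp.sin(k*x)     # separated solution X(x)T(t)
    residual = sp.diff(u, t) - sp.diff(u, x, 2)
    print(sp.simplify(residual))            # 0: the ansatz solves u_t = u_xx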
An integral transform may transform the PDE to a simpler one, in particular, a separable PDE. This corresponds to diagonalizing an operator. An important example of this is Fourier analysis , which diagonalizes the heat equation using the eigenbasis of sinusoidal waves. If the domain is finite or periodic, an infinite sum of solutions such as a Fourier series is appropriate, but an integral of solutions such as a Fourier integral is generally required for infinite domains. The solution for a point source for the heat equation given above is an example of the use of a Fourier integral. Often a PDE can be reduced to a simpler form with a known solution by a suitable change of variables . For example, the Black–Scholes equation ∂ V ∂ t + 1 2 σ 2 S 2 ∂ 2 V ∂ S 2 + r S ∂ V ∂ S − r V = 0 {\displaystyle {\frac {\partial V}{\partial t}}+{\tfrac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}+rS{\frac {\partial V}{\partial S}}-rV=0} is reducible to the heat equation ∂ u ∂ τ = ∂ 2 u ∂ x 2 {\displaystyle {\frac {\partial u}{\partial \tau }}={\frac {\partial ^{2}u}{\partial x^{2}}}} by the change of variables [ 10 ] V ( S , t ) = v ( x , τ ) , x = ln ⁡ ( S ) , τ = 1 2 σ 2 ( T − t ) , v ( x , τ ) = e − α x − β τ u ( x , τ ) . {\displaystyle {\begin{aligned}V(S,t)&=v(x,\tau ),\\[5px]x&=\ln \left(S\right),\\[5px]\tau &={\tfrac {1}{2}}\sigma ^{2}(T-t),\\[5px]v(x,\tau )&=e^{-\alpha x-\beta \tau }u(x,\tau ).\end{aligned}}} Inhomogeneous equations can often be solved (for constant coefficient PDEs, they can always be solved) by finding the fundamental solution (the solution for a point source P ( D ) u = δ {\displaystyle P(D)u=\delta } ), then taking the convolution with the boundary conditions to get the solution. This is analogous in signal processing to understanding a filter by its impulse response . The superposition principle applies to any linear system, including linear systems of PDEs. A common visualization of this concept is the interaction of two waves in phase being combined to result in a greater amplitude, for example sin x + sin x = 2 sin x . The same principle can be observed in PDEs where the solutions may be real or complex and additive. If u 1 and u 2 are solutions of a linear PDE in some function space R , then u = c 1 u 1 + c 2 u 2 with any constants c 1 and c 2 is also a solution of that PDE in the same function space. There are no generally applicable analytical methods to solve nonlinear PDEs. Still, existence and uniqueness results (such as the Cauchy–Kowalevski theorem ) are often possible, as are proofs of important qualitative and quantitative properties of solutions (getting these results is a major part of analysis ). Nevertheless, some techniques can be used for several types of equations. The h -principle is the most powerful method to solve underdetermined equations. The Riquier–Janet theory is an effective method for obtaining information about many analytic overdetermined systems. The method of characteristics can be used in some very special cases to solve nonlinear partial differential equations. [ 11 ] In some cases, a PDE can be solved via perturbation analysis in which the solution is considered to be a correction to an equation with a known solution. Alternatives are numerical analysis techniques from simple finite difference schemes to the more mature multigrid and finite element methods . Many interesting problems in science and engineering are solved in this way using computers , sometimes high performance supercomputers .
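The diagonalizing effect of the Fourier transform can be seen directly in a small numerical sketch: on a periodic domain, one step of the heat equation u_t = u_xx multiplies each Fourier mode independently by exp(−k²·dt). This assumes NumPy; the grid size and time step are arbitrary illustrative choices.

    import numpy as np

    N, length, dt = 256, 2*np.pi, 0.01
    x = np.linspace(0, length, N, endpoint=False)
    u0 = np.sign(np.sin(x))                        # square-wave initial data
    k = 2*np.pi * np.fft.fftfreq(N, d=length/N)    # angular wavenumbers
    # Each mode decays on its own: the transform diagonalizes the operator.
    u1 = np.fft.ifft(np.exp(-k**2 * dt) * np.fft.fft(u0)).real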
From 1870 Sophus Lie 's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups , be referred to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact . A general approach to solving PDEs uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions ( Lie theory ). Continuous group theory , Lie algebras and differential geometry are used to understand the structure of linear and nonlinear partial differential equations for generating integrable equations, to find their Lax pairs , recursion operators and Bäcklund transforms , and finally to find exact analytic solutions to the PDE. Symmetry methods have long been used to study differential equations arising in mathematics, physics, engineering, and many other disciplines. The Adomian decomposition method , [ 12 ] the Lyapunov artificial small parameter method, and He's homotopy perturbation method are all special cases of the more general homotopy analysis method . [ 13 ] These are series expansion methods, and except for the Lyapunov method, are independent of small physical parameters as compared to the well known perturbation theory , thus giving these methods greater flexibility and solution generality. The three most widely used numerical methods to solve PDEs are the finite element method (FEM), finite volume methods (FVM) and finite difference methods (FDM), as well as other kinds of methods called meshfree methods , which were developed to solve problems where the aforementioned methods are limited. The FEM has a prominent position among these methods, especially its exceptionally efficient higher-order version hp-FEM . Other hybrid versions of FEM and meshfree methods include the generalized finite element method (GFEM), extended finite element method (XFEM), spectral finite element method (SFEM), meshfree finite element method , discontinuous Galerkin finite element method (DGFEM), element-free Galerkin method (EFGM), interpolating element-free Galerkin method (IEFGM), etc. The finite element method (FEM) (its practical application often known as finite element analysis (FEA)) is a numerical technique for finding approximate solutions of partial differential equations (PDE) as well as of integral equations. [ 14 ] [ 15 ] The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an approximating system of ordinary differential equations, which are then numerically integrated using standard techniques such as Euler's method, Runge–Kutta, etc. Finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives. In the finite volume method, similarly to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, surface integrals in a partial differential equation that contain a divergence term are converted to volume integrals, using the divergence theorem . These terms are then evaluated as fluxes at the surfaces of each finite volume.
Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods conserve mass by design. Weak solutions are functions that satisfy the PDE in a sense weaker than the classical one. The meaning of the term may differ with context, and one of the most commonly used definitions is based on the notion of distributions . An example [ 19 ] of the definition of a weak solution is as follows: Consider the boundary-value problem given by: L u = f in U , u = 0 on ∂ U , {\displaystyle {\begin{aligned}Lu&=f\quad {\text{in }}U,\\u&=0\quad {\text{on }}\partial U,\end{aligned}}} where L u = − ∑ i , j ∂ j ( a i j ∂ i u ) + ∑ i b i ∂ i u + c u {\displaystyle Lu=-\sum _{i,j}\partial _{j}(a^{ij}\partial _{i}u)+\sum _{i}b^{i}\partial _{i}u+cu} denotes a second-order partial differential operator in divergence form . We say that u ∈ H 0 1 ( U ) {\displaystyle u\in H_{0}^{1}(U)} is a weak solution if ∫ U [ ∑ i , j a i j ( ∂ i u ) ( ∂ j v ) + ∑ i b i ( ∂ i u ) v + c u v ] d x = ∫ U f v d x {\displaystyle \int _{U}[\sum _{i,j}a^{ij}(\partial _{i}u)(\partial _{j}v)+\sum _{i}b^{i}(\partial _{i}u)v+cuv]dx=\int _{U}fvdx} for every v ∈ H 0 1 ( U ) {\displaystyle v\in H_{0}^{1}(U)} , an identity which can be derived by a formal integration by parts. An example of a weak solution is as follows: ϕ ( x ) = 1 4 π 1 | x | {\displaystyle \phi (x)={\frac {1}{4\pi }}{\frac {1}{|x|}}} is a weak solution satisfying − ∇ 2 ϕ = δ in R 3 {\displaystyle -\nabla ^{2}\phi =\delta {\text{ in }}R^{3}} in the distributional sense, as formally, − ∫ R 3 ∇ 2 ϕ ( x ) ψ ( x ) d x = − ∫ R 3 ϕ ( x ) ∇ 2 ψ ( x ) d x = ψ ( 0 ) for ψ ∈ C c ∞ ( R 3 ) . {\displaystyle -\int _{R^{3}}\nabla ^{2}\phi (x)\psi (x)dx=-\int _{R^{3}}\phi (x)\nabla ^{2}\psi (x)dx=\psi (0){\text{ for }}\psi \in C_{c}^{\infty }(R^{3}).} As a branch of pure mathematics, the theoretical studies of PDEs focus on the criteria for a solution to exist and on the properties of a solution; finding its formula is often secondary. Well-posedness refers to a common schematic package of information about a PDE. To say that a PDE is well-posed, one must have: the existence of a solution for any suitable prescribed data; the uniqueness of that solution; and continuous dependence of the solution upon the data. This is, by the necessity of being applicable to several different PDE, somewhat vague. The requirement of "continuity", in particular, is ambiguous, since there are usually many inequivalent means by which it can be rigorously defined. It is, however, somewhat unusual to study a PDE without specifying a way in which it is well-posed. Regularity refers to the integrability and differentiability of weak solutions, which can often be represented by Sobolev spaces . This problem arises due to the difficulty in searching for classical solutions. Researchers often tend to find weak solutions first and then find out whether they are smooth enough to qualify as classical solutions. Results from functional analysis are often used in this field of study.
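Away from the origin, the function in the weak solution example above is classically harmonic, which a computer algebra system confirms; the delta function lives only in the weak, distributional identity. A sketch assuming SymPy:

    import sympy as sp

    x, y, z = sp.symbols('x y z', real=True)
    r = sp.sqrt(x**2 + y**2 + z**2)
    phi = 1 / (4*sp.pi*r)
    lap = sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2)
    print(sp.simplify(lap))  # 0 wherever r != 0; the delta appears only weakly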
https://en.wikipedia.org/wiki/Partial_differential_equation
In materials science , a partial dislocation is a decomposed form of dislocation that occurs within a crystalline material. An extended dislocation is a dislocation that has dissociated into a pair of partial dislocations. The vector sum of the Burgers vectors of the partial dislocations is the Burgers vector of the extended dislocation. A dislocation will decompose into partial dislocations if the energy state of the sum of the partials is less than the energy state of the original dislocation. This is summarized by Frank's energy criterion: since the elastic energy of a dislocation is proportional to the square of its Burgers vector, a dislocation with Burgers vector b will dissociate into partials b 1 and b 2 only if |b|² > |b 1 |² + |b 2 |². Shockley partial dislocations generally refer to a pair of dislocations which can lead to the presence of stacking faults . This pair of partial dislocations can enable dislocation motion by allowing an alternate path for atomic motion. In FCC systems, an example of Shockley decomposition is a/2[110] → a/6[211] + a/6[121̄], which is energetically favorable by Frank's criterion: a²/2 > a²/6 + a²/6. The components of the Shockley partials must add up to the original vector that is being decomposed: a/6[211] + a/6[121̄] = a/6[330] = a/2[110]. Frank partial dislocations are sessile (immobile), but can move by diffusion of atoms. [ 1 ] In FCC systems, Frank partials have Burgers vectors of the form a/3⟨111⟩. For FCC crystals, Thompson tetrahedra , or Thompson notation , are an invented notation for more easily describing partial dislocations. In a given unit cell, mark point A at the origin, point B at a/2[110], point C at a/2[011], and point D at a/2[101]; these points form the vertices of a tetrahedron. Then, mark the center of the opposite face for each point as α, β, γ, and δ, respectively. [ 2 ] With this, the geometric representation of a Thompson tetrahedron is complete. Any combination of Roman letters describes a member of the {111} slip planes in an FCC crystal. A vector made from two Roman letters describes the Burgers vector of a perfect dislocation. If the vector is made from a Roman letter and a Greek letter, then it is a Frank partial if the letters are corresponding (Aα, Bβ,...) or a Shockley partial otherwise (Aβ, Aγ,...). Vectors made from two Greek letters describe stair-rod dislocations. Using Thompson notation, Burgers vectors can be added to describe other dislocations and mechanisms. For example, two Shockley partial dislocations can be added to form a perfect dislocation: Aβ + βC = AC. [ 2 ] It is necessary that the interior letters of a given operation match, but many can be added in sequence to describe more complex mechanisms. It is useful to summarize this information using an unfolded Thompson tetrahedron. The Lomer-Cottrell dislocation forms via a more complex dislocation reaction. For example, consider two extended dislocations: DB = Dγ + γB and BC = Bδ + δC. When they meet, it is more energetically favorable to form a single dislocation, DC = DB + BC = Dγ + γB + Bδ + δC = Dγ + γδ + δC. The trailing partials of each extended dislocation now form a stair-rod partial. This structure leads to reduced mobility of the dislocations, as the core structure is non-planar (it does not lie along a single face of the tetrahedron). [ 2 ] This reduction of mobility transforms the Lomer-Cottrell dislocation into an obstacle for other dislocations, thus strengthening the material. When forming stacking faults, the partial dislocations reach an equilibrium when the repulsive energy between partial dislocations matches the attractive energy of the stacking fault. Because the equilibrium separation grows with the elastic repulsion (set by the shear modulus and the Burgers vectors) and shrinks with the stacking fault energy, materials with higher stacking fault energy will have a smaller distance between partial dislocations.
Conversely, low stacking fault energy materials will have large distances between partial dislocations. [ 3 ] In order to cross slip , both partial dislocations need to change slip planes. The common Friedel-Escaig mechanism requires that the partial dislocations recombine at a point before cross slipping onto a different slip plane. [ 2 ] Bringing the partials together entails applying sufficient shear stress to reduce the distance between them, so the widely separated partials in low stacking fault energy materials are inherently more difficult to bring together and thus more difficult to cross slip. [ 3 ] [ 4 ] Conversely, high stacking fault energy materials are easier to cross slip. The more easily a dislocation can cross slip, the more freely it can move around obstacles; this makes work hardening more difficult. Thus, materials that allow easy cross slip (high stacking fault energy) will see less work hardening and less strengthening from methods like solid-solution strengthening.
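Frank's criterion is easy to verify numerically for the standard FCC dissociation quoted earlier. A sketch assuming NumPy (the lattice parameter a is set to 1; the vectors are the textbook values):

    import numpy as np

    a = 1.0
    b  = (a/2) * np.array([1, 1, 0])     # perfect dislocation a/2[110]
    b1 = (a/6) * np.array([2, 1, 1])     # Shockley partial a/6[211]
    b2 = (a/6) * np.array([1, 2, -1])    # Shockley partial a/6[12-1]

    assert np.allclose(b1 + b2, b)       # partials sum to the original vector
    E_before = b @ b                     # elastic energy ~ |b|^2 = a^2/2
    E_after = b1 @ b1 + b2 @ b2          # a^2/6 + a^2/6 = a^2/3
    print(E_before > E_after)            # True: dissociation is favorable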
https://en.wikipedia.org/wiki/Partial_dislocation
In genetics , the partial dominance hypothesis states that inbreeding depression is the result of an increased frequency of individuals homozygous for deleterious recessive or partially recessive alleles . The hypothesis can be explained by considering a population that is divided into a large number of separately inbred lines. Deleterious alleles will eventually be eliminated from some lines and become fixed in others, while some lines disappear because of the fixation of deleterious alleles. This causes an overall decline in population size and mean trait value, followed by a recovery to a trait value equal to or greater than that of the original population. Crossing inbred lines restores fitness under the overdominance hypothesis and produces an increase in fitness under the partial dominance hypothesis. [ 1 ]
https://en.wikipedia.org/wiki/Partial_dominance_hypothesis
In mathematics , a partial function f from a set X to a set Y is a function from a subset S of X (possibly the whole X itself) to Y . The subset S , that is, the domain of f viewed as a function, is called the domain of definition or natural domain of f . If S equals X , that is, if f is defined on every element in X , then f is said to be a total function . In other words, a partial function is a binary relation over two sets that associates to every element of the first set at most one element of the second set; it is thus a univalent relation . (There may be some elements of the first set that are not mapped to any element of the second set.) This generalizes the concept of a (total) function by not requiring every element of the first set to be associated to an element of the second set. A partial function is often used when its exact domain of definition is not known, or is difficult to specify. However, even when the exact domain of definition is known, partial functions are often used for simplicity or brevity. This is the case in calculus , where, for example, the quotient of two functions is a partial function whose domain of definition cannot contain the zeros of the denominator; in this context, a partial function is generally simply called a function . In computability theory , a general recursive function is a partial function from the integers to the integers; no algorithm can exist for deciding whether an arbitrary such function is in fact total. When arrow notation is used for functions, a partial function f {\displaystyle f} from X {\displaystyle X} to Y {\displaystyle Y} is sometimes written as f : X ⇀ Y , {\displaystyle f:X\rightharpoonup Y,} f : X ↛ Y , {\displaystyle f:X\nrightarrow Y,} or f : X ↪ Y . {\displaystyle f:X\hookrightarrow Y.} However, there is no general convention, and the latter notation is more commonly used for inclusion maps or embeddings . Specifically, for a partial function f : X ⇀ Y , {\displaystyle f:X\rightharpoonup Y,} and any x ∈ X , {\displaystyle x\in X,} one has either f ( x ) = y for a single y ∈ Y (in which case f is defined at x ), or f ( x ) is undefined. For example, if f {\displaystyle f} is the square root function restricted to the integers then f ( n ) {\displaystyle f(n)} is only defined if n {\displaystyle n} is a perfect square (that is, 0 , 1 , 4 , 9 , 16 , … {\displaystyle 0,1,4,9,16,\ldots } ). So f ( 25 ) = 5 {\displaystyle f(25)=5} but f ( 26 ) {\displaystyle f(26)} is undefined. A partial function arises from the consideration of maps between two sets X and Y that may not be defined on the entire set X . A common example is the square root operation on the real numbers R {\displaystyle \mathbb {R} } : because negative real numbers do not have real square roots, the operation can be viewed as a partial function from R {\displaystyle \mathbb {R} } to R . {\displaystyle \mathbb {R} .} The domain of definition of a partial function is the subset S of X on which the partial function is defined; in this case, the partial function may also be viewed as a function from S to Y . In the example of the square root operation, the set S consists of the nonnegative real numbers [ 0 , + ∞ ) . {\displaystyle [0,+\infty ).} The notion of partial function is particularly convenient when the exact domain of definition is unknown or even unknowable. For a computer-science example of the latter, see Halting problem . In case the domain of definition S is equal to the whole set X , the partial function is said to be total . Thus, total partial functions from X to Y coincide with functions from X to Y .
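The integer square root example can be phrased directly in code. One common convention, of several (this sketch is our own illustration), is to model a partial function as an ordinary function that signals "undefined", here by returning None:

    from math import isqrt
    from typing import Optional

    def partial_sqrt(n: int) -> Optional[int]:
        # Defined only on perfect squares: the domain of definition
        # is {0, 1, 4, 9, 16, ...}, a proper subset of the integers.
        if n >= 0 and isqrt(n)**2 == n:
            return isqrt(n)
        return None  # undefined outside the domain of definition

    print(partial_sqrt(25))  # 5
    print(partial_sqrt(26))  # None: f(26) is undefined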
Many properties of functions can be extended in an appropriate sense to partial functions. A partial function is said to be injective , surjective , or bijective when the function given by the restriction of the partial function to its domain of definition is injective, surjective, or bijective, respectively. Because a function is trivially surjective when restricted to its image, the term partial bijection denotes a partial function which is injective. [ 1 ] An injective partial function may be inverted to an injective partial function, and a partial function which is both injective and surjective has an injective function as inverse. Furthermore, a function which is injective may be inverted to a bijective partial function. The notion of transformation can be generalized to partial functions as well. A partial transformation is a function f : A ⇀ B , {\displaystyle f:A\rightharpoonup B,} where both A {\displaystyle A} and B {\displaystyle B} are subsets of some set X . {\displaystyle X.} [ 1 ] For convenience, denote the set of all partial functions f : X ⇀ Y {\displaystyle f:X\rightharpoonup Y} from a set X {\displaystyle X} to a set Y {\displaystyle Y} by [ X ⇀ Y ] . {\displaystyle [X\rightharpoonup Y].} This set is the union of the sets of functions defined on subsets of X {\displaystyle X} with same codomain Y {\displaystyle Y} : the latter also written as ⋃ D ⊆ X Y D . {\textstyle \bigcup _{D\subseteq {X}}Y^{D}.} In the finite case, its cardinality is ( | Y | + 1 ) | X | , {\displaystyle (|Y|+1)^{|X|},} because any partial function can be extended to a function by any fixed value c {\displaystyle c} not contained in Y , {\displaystyle Y,} so that the codomain is Y ∪ { c } , {\displaystyle Y\cup \{c\},} an operation which is injective (unique and invertible by restriction). A binary relation that leaves some element of the first set unassociated is a partial function that is not a function, whereas a relation that associates every element of the first set with exactly one element of the second set is a function. Consider the natural logarithm function mapping the real numbers to themselves. The logarithm of a non-positive real is not a real number, so the natural logarithm function does not associate any real number in the codomain with any non-positive real number in the domain. Therefore, the natural logarithm function is not a function when viewed as a function from the reals to themselves, but it is a partial function. If the domain is restricted to only include the positive reals (that is, if the natural logarithm function is viewed as a function from the positive reals to the reals), then the natural logarithm is a function. Subtraction of natural numbers (in which N {\displaystyle \mathbb {N} } is the non-negative integers ) is a partial function: It is defined only when x ≥ y . {\displaystyle x\geq y.} In denotational semantics a partial function is considered as returning the bottom element when it is undefined. In computer science a partial function corresponds to a subroutine that raises an exception or loops forever. The IEEE floating point standard defines a not-a-number value which is returned when a floating point operation is undefined and exceptions are suppressed, e.g. when the square root of a negative number is requested.
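The counting claim above can be confirmed by brute force for small sets. A sketch (our own helper; each element of X is either left unmapped or sent to one of the |Y| values):

    from itertools import product

    def count_partial_functions(X, Y):
        # Enumerate all assignments of "undefined or some y in Y" to each x in X,
        # giving (|Y| + 1) ** |X| partial functions in total.
        undefined = object()
        return sum(1 for _ in product(list(Y) + [undefined], repeat=len(X)))

    X, Y = {1, 2, 3}, {'a', 'b'}
    print(count_partial_functions(X, Y))   # 27
    print((len(Y) + 1) ** len(X))          # 27, matching the formula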
In a programming language where function parameters are statically typed , a function may be defined as a partial function because the language's type system cannot express the exact domain of the function, so the programmer instead gives it the smallest domain which is expressible as a type and contains the domain of definition of the function. In category theory , when considering the operation of morphism composition in concrete categories , the composition operation ∘ : hom ⁡ ( C ) × hom ⁡ ( C ) → hom ⁡ ( C ) {\displaystyle \circ \;:\;\hom(C)\times \hom(C)\to \hom(C)} is a total function if and only if ob ⁡ ( C ) {\displaystyle \operatorname {ob} (C)} has one element. The reason for this is that two morphisms f : X → Y {\displaystyle f:X\to Y} and g : U → V {\displaystyle g:U\to V} can only be composed as g ∘ f {\displaystyle g\circ f} if Y = U , {\displaystyle Y=U,} that is, the codomain of f {\displaystyle f} must equal the domain of g . {\displaystyle g.} The category of sets and partial functions is equivalent to but not isomorphic with the category of pointed sets and point-preserving maps. [ 2 ] One textbook notes that "This formal completion of sets and partial maps by adding “improper,” “infinite” elements was reinvented many times, in particular, in topology ( one-point compactification ) and in theoretical computer science ." [ 3 ] The category of sets and partial bijections is equivalent to its dual . [ 4 ] It is the prototypical inverse category . [ 5 ] Partial algebra generalizes the notion of universal algebra to partial operations . An example would be a field , in which the multiplicative inversion is the only proper partial operation (because division by zero is not defined). [ 6 ] The set of all partial functions (partial transformations ) on a given base set, X , {\displaystyle X,} forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on X {\displaystyle X} ), typically denoted by P T X . {\displaystyle {\mathcal {PT}}_{X}.} [ 7 ] [ 8 ] [ 9 ] The set of all partial bijections on X {\displaystyle X} forms the symmetric inverse semigroup . [ 7 ] [ 8 ] Charts in the atlases which specify the structure of manifolds and fiber bundles are partial functions. In the case of manifolds, the domain is the point set of the manifold. In the case of fiber bundles, the domain is the space of the fiber bundle. In these applications, the most important construction is the transition map , which is the composite of one chart with the inverse of another. The initial classification of manifolds and fiber bundles is largely expressed in terms of constraints on these transition maps. The reason for the use of partial functions instead of functions is to permit general global topologies to be represented by stitching together local patches to describe the global structure. The "patches" are the domains where the charts are defined.
https://en.wikipedia.org/wiki/Partial_function
In abstract algebra , a partial groupoid (also called halfgroupoid , pargoid , or partial magma ) is a set endowed with a partial binary operation . [ 1 ] [ 2 ] A partial groupoid is a partial algebra . A partial groupoid ( G , ∘ ) {\displaystyle (G,\circ )} is called a partial semigroup if the following associative law holds: [ 3 ] for all x , y , z ∈ G {\displaystyle x,y,z\in G} such that x ∘ y ∈ G {\displaystyle x\circ y\in G} and y ∘ z ∈ G {\displaystyle y\circ z\in G} , the following two statements hold: ( x ∘ y ) ∘ z ∈ G if and only if x ∘ ( y ∘ z ) ∈ G , and ( x ∘ y ) ∘ z = x ∘ ( y ∘ z ) whenever either side (and hence both sides) is defined.
https://en.wikipedia.org/wiki/Partial_groupoid
Partial Information Decomposition is an extension of information theory that aims to generalize the pairwise relations described by information theory to the interaction of multiple variables. [ 1 ] Information theory can quantify the amount of information a single source variable X 1 {\displaystyle X_{1}} has about a target variable Y {\displaystyle Y} via the mutual information I ( X 1 ; Y ) {\displaystyle I(X_{1};Y)} . If we now consider a second source variable X 2 {\displaystyle X_{2}} , classical information theory can only describe the mutual information of the joint variable { X 1 , X 2 } {\displaystyle \{X_{1},X_{2}\}} with Y {\displaystyle Y} , given by I ( X 1 , X 2 ; Y ) {\displaystyle I(X_{1},X_{2};Y)} . In general however, it would be interesting to know how exactly the individual variables X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} and their interactions relate to Y {\displaystyle Y} . Consider that we are given two source variables X 1 , X 2 ∈ { 0 , 1 } {\displaystyle X_{1},X_{2}\in \{0,1\}} and a target variable Y = X O R ( X 1 , X 2 ) {\displaystyle Y=XOR(X_{1},X_{2})} . In this case the total mutual information I ( X 1 , X 2 ; Y ) = 1 {\displaystyle I(X_{1},X_{2};Y)=1} , while the individual mutual information I ( X 1 ; Y ) = I ( X 2 ; Y ) = 0 {\displaystyle I(X_{1};Y)=I(X_{2};Y)=0} . That is, there is synergistic information arising from the interaction of X 1 , X 2 {\displaystyle X_{1},X_{2}} about Y {\displaystyle Y} , which cannot be easily captured with classical information theoretic quantities. Partial information decomposition further decomposes the mutual information between the source variables { X 1 , X 2 } {\displaystyle \{X_{1},X_{2}\}} with the target variable Y {\displaystyle Y} as I ( X 1 , X 2 ; Y ) = Unq ( X 1 ; Y ∖ X 2 ) + Unq ( X 2 ; Y ∖ X 1 ) + Syn ( X 1 , X 2 ; Y ) + Red ( X 1 , X 2 ; Y ) {\displaystyle I(X_{1},X_{2};Y)={\text{Unq}}(X_{1};Y\setminus X_{2})+{\text{Unq}}(X_{2};Y\setminus X_{1})+{\text{Syn}}(X_{1},X_{2};Y)+{\text{Red}}(X_{1},X_{2};Y)} Here the individual information atoms are defined as: Unq, the unique information that one source carries about Y {\displaystyle Y} and the other source does not; Red, the redundant information about Y {\displaystyle Y} carried by both sources; and Syn, the synergistic information about Y {\displaystyle Y} that arises only from the combination of the two sources. There is, thus far, no universal agreement on how these terms should be defined, with different approaches that decompose information into redundant, unique, and synergistic components appearing in the literature. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Despite the lack of universal agreement, partial information decomposition has been applied to diverse fields, including climatology, [ 5 ] neuroscience, [ 6 ] [ 7 ] [ 8 ] sociology, [ 9 ] and machine learning. [ 10 ] Partial information decomposition has also been proposed as a possible foundation on which to build a mathematically robust definition of emergence in complex systems [ 11 ] and may be relevant to formal theories of consciousness. [ 12 ]
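The XOR example can be verified numerically from the joint distribution. A sketch assuming NumPy (the helper is ours; it computes mutual information in bits from an exhaustive enumeration of the equally likely inputs):

    import numpy as np
    from collections import Counter

    def mutual_information(pairs):
        # I(A;B) in bits from a list of equally likely (a, b) samples.
        n = len(pairs)
        p_ab = Counter(pairs)
        p_a = Counter(a for a, _ in pairs)
        p_b = Counter(b for _, b in pairs)
        return sum((c/n) * np.log2((c/n) / ((p_a[a]/n) * (p_b[b]/n)))
                   for (a, b), c in p_ab.items())

    samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
    print(mutual_information([(x1, y) for x1, x2, y in samples]))        # 0.0
    print(mutual_information([((x1, x2), y) for x1, x2, y in samples]))  # 1.0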
https://en.wikipedia.org/wiki/Partial_information_decomposition
A partial linear space (also semilinear or near-linear space) is a basic incidence structure in the field of incidence geometry that carries slightly less structure than a linear space . The notion is equivalent to that of a linear hypergraph . Let S = ( P , L , I ) {\displaystyle S=({\mathcal {P}},{\mathcal {L}},{\textbf {I}})} be an incidence structure, for which the elements of P {\displaystyle {\mathcal {P}}} are called points and the elements of L {\displaystyle {\mathcal {L}}} are called lines . S is a partial linear space if the following axioms hold: every line is incident with at least two points, and any two distinct points are incident with at most one line. If there is a unique line incident with every pair of distinct points, then we get a linear space. The De Bruijn–Erdős theorem shows that in any finite linear space S = ( P , L , I ) {\displaystyle S=({\mathcal {P}},{\mathcal {L}},{\textbf {I}})} which is not a single point or a single line, we have | P | ≤ | L | {\displaystyle |{\mathcal {P}}|\leq |{\mathcal {L}}|} .
https://en.wikipedia.org/wiki/Partial_linear_space
In thermodynamics , a partial molar property is a quantity which describes the variation of an extensive property of a solution or mixture with changes in the molar composition of the mixture at constant temperature and pressure . It is the partial derivative of the extensive property with respect to the amount (number of moles) of the component of interest. Every extensive property of a mixture has a corresponding partial molar property. The partial molar volume is broadly understood as the contribution that a component of a mixture makes to the overall volume of the solution. However, there is more to it than this: When one mole of water is added to a large volume of water at 25 °C, the volume increases by 18 cm 3 . The molar volume of pure water would thus be reported as 18 cm 3 mol −1 . However, addition of one mole of water to a large volume of pure ethanol results in an increase in volume of only 14 cm 3 . The reason that the increase is different is that the volume occupied by a given number of water molecules depends upon the identity of the surrounding molecules. The value 14 cm 3 is said to be the partial molar volume of water in ethanol. In general, the partial molar volume of a substance X in a mixture is the change in volume per mole of X added to the mixture. The partial molar volumes of the components of a mixture vary with the composition of the mixture, because the environment of the molecules in the mixture changes with the composition. It is the changing molecular environment (and the consequent alteration of the interactions between molecules) that results in the thermodynamic properties of a mixture changing as its composition is altered. If, by Z {\displaystyle Z} , one denotes a generic extensive property of a mixture, it will always be true that it depends on the pressure ( P {\displaystyle P} ), temperature ( T {\displaystyle T} ), and the amount of each component of the mixture (measured in moles , n ). For a mixture with q components, this is expressed as Z = Z ( T , P , n 1 , n 2 , ⋯ , n q ). Now if temperature T and pressure P are held constant, Z = Z ( n 1 , n 2 , ⋯ ) {\displaystyle Z=Z(n_{1},n_{2},\cdots )} is a homogeneous function of degree 1, since doubling the quantities of each component in the mixture will double Z {\displaystyle Z} . More generally, for any λ {\displaystyle \lambda } : Z ( λ n 1 , λ n 2 , ⋯ ) = λ Z ( n 1 , n 2 , ⋯ ). By Euler's first theorem for homogeneous functions , this implies [ 1 ] Z = ∑ i = 1 q n i Z i ¯ , where Z i ¯ {\displaystyle {\bar {Z_{i}}}} is the partial molar Z {\displaystyle Z} of component i {\displaystyle i} defined as Z i ¯ = ( ∂ Z / ∂ n i ) T , P , n j ≠ i . By Euler's second theorem for homogeneous functions , Z i ¯ {\displaystyle {\bar {Z_{i}}}} is a homogeneous function of degree 0 (i.e., Z i ¯ {\displaystyle {\bar {Z_{i}}}} is an intensive property), which means that for any λ {\displaystyle \lambda } : Z i ¯ ( λ n 1 , λ n 2 , ⋯ ) = Z i ¯ ( n 1 , n 2 , ⋯ ). In particular, taking λ = 1 / n T {\displaystyle \lambda =1/n_{T}} where n T = n 1 + n 2 + ⋯ {\displaystyle n_{T}=n_{1}+n_{2}+\cdots } , one has Z i ¯ ( n 1 , n 2 , ⋯ ) = Z i ¯ ( x 1 , x 2 , ⋯ ), where x i = n i n T {\displaystyle x_{i}={\frac {n_{i}}{n_{T}}}} is the concentration expressed as the mole fraction of component i {\displaystyle i} . Since the molar fractions satisfy the relation x 1 + x 2 + ⋯ = 1, the x i are not independent, and the partial molar property is a function of only q − 1 {\displaystyle q-1} mole fractions: Z i ¯ = Z i ¯ ( x 1 , x 2 , ⋯ , x q − 1 ). The partial molar property is thus an intensive property: it does not depend on the size of the system. The partial volume is not the partial molar volume.
Partial molar properties are useful because chemical mixtures are often maintained at constant temperature and pressure and, under these conditions, the value of any extensive property can be obtained from its partial molar property. They are especially useful when considering specific properties of pure substances (that is, properties of one mole of pure substance) and properties of mixing (such as the heat of mixing or entropy of mixing ). By definition, properties of mixing are related to those of the pure substances by Δz M = z − ∑ i x i z i ∗ . Here ∗ {\displaystyle *} denotes a pure substance, M {\displaystyle M} the mixing property, and z {\displaystyle z} corresponds to the specific property under consideration. From the definition of partial molar properties, substitution yields Δz M = ∑ i x i ( z i ¯ − z i ∗ ). So from knowledge of the partial molar properties, deviation of properties of mixing from single components can be calculated. Partial molar properties satisfy relations analogous to those of the extensive properties. For the internal energy U , enthalpy H , Helmholtz free energy A , and Gibbs free energy G , the following hold: H i ¯ = U i ¯ + P V i ¯ , A i ¯ = U i ¯ − T S i ¯ , G i ¯ = H i ¯ − T S i ¯ , where P {\displaystyle P} is the pressure, V {\displaystyle V} the volume , T {\displaystyle T} the temperature, and S {\displaystyle S} the entropy . The thermodynamic potentials also satisfy d U = T d S − P d V + ∑ i μ i d n i , d H = T d S + V d P + ∑ i μ i d n i , d A = − S d T − P d V + ∑ i μ i d n i , d G = − S d T + V d P + ∑ i μ i d n i , where μ i {\displaystyle \mu _{i}} is the chemical potential defined as (for constant n j with j≠i) μ i = ( ∂ U / ∂ n i ) S , V = ( ∂ H / ∂ n i ) S , P = ( ∂ A / ∂ n i ) T , V = ( ∂ G / ∂ n i ) T , P . This last partial derivative is the same as G i ¯ {\displaystyle {\bar {G_{i}}}} , the partial molar Gibbs free energy . This means that the partial molar Gibbs free energy and the chemical potential, one of the most important properties in thermodynamics and chemistry, are the same quantity. Under isobaric (constant P ) and isothermal (constant T ) conditions, knowledge of the chemical potentials, μ i ( x 1 , x 2 , ⋯ , x m ) {\displaystyle \mu _{i}(x_{1},x_{2},\cdots ,x_{m})} , yields every property of the mixture as they completely determine the Gibbs free energy. To measure the partial molar property Z 1 ¯ {\displaystyle {\bar {Z_{1}}}} of a binary solution, one begins with the pure component denoted as 2 {\displaystyle 2} and, keeping the temperature and pressure constant during the entire process, adds small quantities of component 1 {\displaystyle 1} , measuring Z {\displaystyle Z} after each addition. After sampling the compositions of interest one can fit a curve to the experimental data. This function will be Z ( n 1 ) {\displaystyle Z(n_{1})} . Differentiating with respect to n 1 {\displaystyle n_{1}} will give Z 1 ¯ {\displaystyle {\bar {Z_{1}}}} . Z 2 ¯ {\displaystyle {\bar {Z_{2}}}} is then obtained from the relation Z = n 1 Z 1 ¯ + n 2 Z 2 ¯ , so that Z 2 ¯ = ( Z − n 1 Z 1 ¯ ) / n 2 . The relation between partial molar properties and the apparent ones can be derived from the definition of the apparent quantities and of the molality. The relation also holds for multicomponent mixtures, except that in this case the subscript i is required.
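The measurement procedure just described translates directly into code: fit V(n1) and differentiate the fit. A sketch assuming NumPy; the data points below are invented for illustration only.

    import numpy as np

    # Hypothetical measurements: total volume V (cm^3) after adding
    # n1 moles of component 1 to a fixed amount of component 2.
    n1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    V = np.array([100.0, 107.2, 114.6, 122.2, 130.0])

    coeffs = np.polyfit(n1, V, 2)                # fit V(n1) with a quadratic
    dVdn1 = np.polyval(np.polyder(coeffs), n1)   # partial molar volume vs. n1
    print(dVdn1)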
https://en.wikipedia.org/wiki/Partial_molar_property
Partial oxidation ( POX ) is a type of chemical reaction . It occurs when a substoichiometric fuel-air mixture is partially combusted in a reformer, creating a hydrogen-rich syngas which can then be put to further use, for example in a fuel cell . A distinction is made between thermal partial oxidation (TPOX) and catalytic partial oxidation (CPOX). Partial oxidation is a technically mature process in which natural gas or a heavy hydrocarbon fuel ( heating oil ) is mixed with a limited amount of oxygen in an exothermic process. [ 1 ] The general reaction for a hydrocarbon fuel C n H m is C n H m + (n/2) O 2 → n CO + (m/2) H 2 ; for complex fuels such as coal and heating oil, any such formula shows only a typical representative. Water may be added to lower the combustion temperature and reduce soot formation. Yields are below stoichiometric because some of the fuel is fully combusted to carbon dioxide and water. TPOX ( thermal partial oxidation ) reaction temperatures depend on the air-fuel ratio or oxygen-fuel ratio; typical reaction temperatures are 1200 °C and above. In CPOX ( catalytic partial oxidation ) the use of a catalyst reduces the required temperature to around 800–900 °C. The choice of reforming technique depends on the sulfur content of the fuel being used. CPOX can be employed if the sulfur content is below 50 ppm ; a higher sulfur content would poison the catalyst, so the TPOX procedure is used for such fuels. However, recent research shows that CPOX is possible with sulfur contents up to 400 ppm. [ 2 ] An early milestone in the history of the process: 1926 – Vandeveer and Parr at the University of Illinois used oxygen to replace air. [ 3 ]
https://en.wikipedia.org/wiki/Partial_oxidation
In combinatorial mathematics , a partial permutation , or sequence without repetition , on a finite set S is a bijection between two specified subsets of S . That is, it is defined by two subsets U and V of equal size, and a one-to-one mapping from U to V . Equivalently, it is a partial function on S that can be extended to a permutation . [ 1 ] [ 2 ] It is common to consider the case when the set S is simply the set {1, 2, ..., n } of the first n integers. In this case, a partial permutation may be represented by a string of n symbols, some of which are distinct numbers in the range from 1 to n {\displaystyle n} and the remaining ones of which are a special "hole" symbol ◊. In this formulation, the domain U of the partial permutation consists of the positions in the string that do not contain a hole, and each such position is mapped to the number in that position. For instance, the string "1 ◊ 2" would represent the partial permutation that maps 1 to itself and maps 3 to 2. [ 3 ] The seven partial permutations on two items are ◊◊, 1◊, 2◊, ◊1, ◊2, 12, and 21. The number of partial permutations on n items, for n = 0, 1, 2, ..., is given by the integer sequence 1, 2, 7, 34, 209, 1546, ..., where the n th item in the sequence is given by the summation formula a ( n ) = ∑ i = 0 n C ( n , i ) 2 i ! , in which the i th term counts the number of partial permutations with support of size i , that is, the number of partial permutations with i non-hole entries. Alternatively, it can be computed by the recurrence relation a ( n ) = 2 n a ( n − 1 ) − ( n − 1 ) 2 a ( n − 2 ). Some authors restrict partial permutations so that either the domain [ 4 ] or the range [ 3 ] of the bijection is forced to consist of the first k items in the set of n items being permuted, for some k . In the former case, a partial permutation of length k from an n -set is just a sequence of k terms from the n -set without repetition. (In elementary combinatorics, these objects are sometimes confusingly called " k -permutations " of the n -set.)
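Both the summation formula and the values of the sequence are easy to cross-check by exhaustive enumeration. A sketch (our own helpers; a partial permutation on n items is generated as a bijection between equal-sized subsets):

    from itertools import combinations, permutations
    from math import comb, factorial

    def count_by_enumeration(n):
        items = range(1, n + 1)
        total = 0
        for k in range(n + 1):
            for U in combinations(items, k):       # domain of size k
                for V in combinations(items, k):   # range of size k
                    total += sum(1 for _ in permutations(V))  # bijections U -> V
        return total

    def count_by_formula(n):
        return sum(comb(n, i)**2 * factorial(i) for i in range(n + 1))

    print([count_by_enumeration(n) for n in range(5)])  # [1, 2, 7, 34, 209]
    print([count_by_formula(n) for n in range(5)])      # [1, 2, 7, 34, 209]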
https://en.wikipedia.org/wiki/Partial_permutation
In a mixture of gases , each constituent gas has a partial pressure which is the notional pressure of that constituent gas as if it alone occupied the entire volume of the original mixture at the same temperature . [ 1 ] The total pressure of an ideal gas mixture is the sum of the partial pressures of the gases in the mixture ( Dalton's Law ). In respiratory physiology , the partial pressure of a dissolved gas in liquid (such as oxygen in arterial blood) is also defined as the partial pressure of that gas as it would be undissolved in gas phase yet in equilibrium with the liquid. [ 2 ] [ 3 ] This concept is also known as blood gas tension . In this sense, the diffusion of a gas into or out of a liquid is said to be driven by differences in partial pressure (not concentration). In chemistry and thermodynamics , this concept is generalized to non-ideal gases and instead called fugacity . The partial pressure of a gas is a measure of its thermodynamic activity . Gases dissolve, diffuse, and react according to their partial pressures and not according to their concentrations in a gas mixture or as a solute in solution. [ 4 ] This general property of gases is also true in chemical reactions of gases in biology. The symbol for pressure is usually p or pp which may use a subscript to identify the pressure, and gas species are also referred to by subscript. When combined, these subscripts are applied recursively. [ 5 ] [ 6 ] Examples: p O 2 denotes the partial pressure of oxygen, and p a O 2 the partial pressure of oxygen in arterial blood (the subscript a for "arterial" combined with the species subscript). Dalton's law expresses the fact that the total pressure of a mixture of ideal gases is equal to the sum of the partial pressures of the individual gases in the mixture. [ 7 ] This equality arises from the fact that in an ideal gas, the molecules are so far apart that they do not interact with each other. Most actual real-world gases come very close to this ideal. For example, given an ideal gas mixture of nitrogen (N 2 ), hydrogen (H 2 ) and ammonia (NH 3 ): p = p N 2 + p H 2 + p NH 3 {\displaystyle p=p_{{\ce {N2}}}+p_{{\ce {H2}}}+p_{{\ce {NH3}}}} where p is the total pressure of the gas mixture and the three terms on the right are the partial pressures of nitrogen, hydrogen and ammonia respectively. Ideally the ratio of partial pressures equals the ratio of the number of molecules. That is, the mole fraction x i {\displaystyle x_{\mathrm {i} }} of an individual gas component in an ideal gas mixture can be expressed in terms of the component's partial pressure or the moles of the component: x i = p i p = n i n {\displaystyle x_{\mathrm {i} }={\frac {p_{\mathrm {i} }}{p}}={\frac {n_{\mathrm {i} }}{n}}} and the partial pressure of an individual gas component in an ideal gas can be obtained using this expression: p i = x i ⋅ p {\displaystyle p_{\mathrm {i} }=x_{\mathrm {i} }\cdot p} The mole fraction of a gas component in a gas mixture is equal to the volumetric fraction of that component in a gas mixture. [ 8 ] The ratio of partial pressures relies on the following isotherm relation: V X V t o t = p X p t o t = n X n t o t {\displaystyle {\frac {V_{\rm {X}}}{V_{\rm {tot}}}}={\frac {p_{\rm {X}}}{p_{\rm {tot}}}}={\frac {n_{\rm {X}}}{n_{\rm {tot}}}}} The partial volume of a particular gas in a mixture is the volume of one component of the gas mixture. It is useful in gas mixtures, e.g. air, to focus on one particular gas component, e.g. oxygen. It can be approximated both from partial pressure and molar fraction: [ 9 ] V X = V t o t × p X p t o t = V t o t × n X n t o t {\displaystyle V_{\rm {X}}=V_{\rm {tot}}\times {\frac {p_{\rm {X}}}{p_{\rm {tot}}}}=V_{\rm {tot}}\times {\frac {n_{\rm {X}}}{n_{\rm {tot}}}}} Vapor pressure is the pressure of a vapor in equilibrium with its non-vapor phases (i.e., liquid or solid).
Most often the term is used to describe a liquid 's tendency to evaporate . It is a measure of the tendency of molecules and atoms to escape from a liquid or a solid . A liquid's atmospheric pressure boiling point corresponds to the temperature at which its vapor pressure is equal to the surrounding atmospheric pressure and it is often called the normal boiling point . The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point of the liquid. A vapor pressure chart plots the vapor pressures versus temperatures of a variety of liquids. [ 10 ] As can be seen in such a chart, the liquids with the highest vapor pressures have the lowest normal boiling points. For example, at any given temperature, methyl chloride has one of the highest vapor pressures of common liquids. It also has a very low normal boiling point (−24.2 °C), the temperature at which its vapor pressure curve intersects the horizontal pressure line of one atmosphere ( atm ) of absolute vapor pressure. At higher altitudes, the atmospheric pressure is less than that at sea level, so boiling points of liquids are reduced. At the top of Mount Everest , the atmospheric pressure is approximately 0.333 atm, so by reading a vapor pressure chart, the boiling point of diethyl ether there would be approximately 7.5 °C, versus 34.6 °C at sea level (1 atm). It is possible to work out the equilibrium constant for a chemical reaction involving a mixture of gases given the partial pressure of each gas and the overall reaction formula. For a reversible reaction involving gas reactants and gas products, such as: a A + b B ↽ − − ⇀ c C + d D {\displaystyle {\ce {{{\mathit {a}}A}+{{\mathit {b}}B}<=>{{\mathit {c}}C}+{{\mathit {d}}D}}}} the equilibrium constant of the reaction would be: K p = p C c p D d p A a p B b {\displaystyle K_{\mathrm {p} }={\frac {p_{C}^{c}\,p_{D}^{d}}{p_{A}^{a}\,p_{B}^{b}}}} For reversible reactions, changes in the total pressure, temperature or reactant concentrations will shift the equilibrium so as to favor either the right or left side of the reaction in accordance with Le Chatelier's Principle . However, the reaction kinetics may either oppose or enhance the equilibrium shift. In some cases, the reaction kinetics may be the overriding factor to consider. Gases will dissolve in liquids to an extent that is determined by the equilibrium between the undissolved gas and the gas that has dissolved in the liquid (called the solvent ). [ 11 ] The equilibrium constant for that equilibrium is: k = p gas / C aq   ( 1 ), where k is the equilibrium constant, p gas is the partial pressure of the gas above the solution, and C aq is the concentration of the gas dissolved in the liquid. The form of the equilibrium constant shows that the concentration of a solute gas in a solution is directly proportional to the partial pressure of that gas above the solution . This statement is known as Henry's law and the equilibrium constant k {\displaystyle k} is quite often referred to as the Henry's law constant. [ 11 ] [ 12 ] [ 13 ] Henry's law is sometimes written as: [ 14 ] C aq = k′ p gas   ( 2 ), where k ′ {\displaystyle k'} is also referred to as the Henry's law constant. [ 14 ] As can be seen by comparing equations ( 1 ) and ( 2 ) above, k ′ {\displaystyle k'} is the reciprocal of k {\displaystyle k} . Since both may be referred to as the Henry's law constant, readers of the technical literature must be quite careful to note which version of the Henry's law equation is being used. Henry's law is an approximation that only applies for dilute, ideal solutions and for solutions where the liquid solvent does not react chemically with the gas being dissolved.
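As a small worked illustration of Henry's law in the solubility form C = k′p of equation (2): doubling the partial pressure doubles the dissolved concentration. The constant below is an invented value, not data for any real gas.

    k_prime = 1.3e-5   # hypothetical solubility constant, mol/(L*kPa)

    def dissolved_concentration(partial_pressure_kpa):
        # Henry's law, solubility form: C = k' * p
        return k_prime * partial_pressure_kpa

    print(dissolved_concentration(21.3))   # mol/L at p = 21.3 kPa
    print(dissolved_concentration(42.6))   # exactly twice as much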
In underwater diving , the physiological effects of individual component gases of breathing gases are a function of partial pressure. [ 15 ] Using diving terms, partial pressure is calculated as: partial pressure = (total absolute pressure) × (volume fraction of gas component). For the component gas "i": pp i = P × F i . For example, at 50 metres (164 ft) underwater, the total absolute pressure is 6 bar (600 kPa) (i.e., 1 bar of atmospheric pressure + 5 bar of water pressure) and the partial pressures of the main components of air , oxygen 21% by volume and nitrogen approximately 79% by volume, are pp O2 = 6 bar × 0.21 = 1.26 bar and pp N2 = 6 bar × 0.79 = 4.74 bar. The minimum safe lower limit for the partial pressures of oxygen in a breathing gas mixture for diving is 0.16 bars (16 kPa) absolute. Hypoxia and sudden unconsciousness can become a problem with an oxygen partial pressure of less than 0.16 bar absolute. [ 16 ] Oxygen toxicity , involving convulsions, becomes a problem when oxygen partial pressure is too high. The NOAA Diving Manual recommends a maximum single exposure of 45 minutes at 1.6 bar absolute, of 120 minutes at 1.5 bar absolute, of 150 minutes at 1.4 bar absolute, of 180 minutes at 1.3 bar absolute and of 210 minutes at 1.2 bar absolute. Oxygen toxicity becomes a risk when these oxygen partial pressures and exposures are exceeded. The partial pressure of oxygen also determines the maximum operating depth of a gas mixture. [ 15 ] Narcosis is a problem when breathing gases at high pressure. Typically, the maximum total partial pressure of narcotic gases used when planning for technical diving may be around 4.5 bar absolute, based on an equivalent narcotic depth of 35 metres (115 ft). The effect of a toxic contaminant such as carbon monoxide in breathing gas is also related to the partial pressure when breathed. A mixture which may be relatively safe at the surface could be dangerously toxic at the maximum depth of a dive, or a tolerable level of carbon dioxide in the breathing loop of a diving rebreather may become intolerable within seconds during descent when the partial pressure rapidly increases, and could lead to panic or incapacitation of the diver. [ 15 ] The partial pressures of oxygen ( p O 2 {\displaystyle p_{\mathrm {O_{2}} }} ) and carbon dioxide ( p C O 2 {\displaystyle p_{\mathrm {CO_{2}} }} ) in particular are important parameters in tests of arterial blood gases , but can also be measured in, for example, cerebrospinal fluid .
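The diving calculations above follow one pattern, and the oxygen limit can be inverted to give a maximum operating depth. A sketch (our own helpers, assuming 1 bar of ambient pressure per 10 m of water as in the example above; the 1.4 bar limit is one of the NOAA exposure limits quoted):

    def partial_pressure(depth_m, fraction):
        ambient = 1.0 + depth_m / 10.0   # bar: atmosphere + water column
        return ambient * fraction

    print(partial_pressure(50, 0.21))    # ppO2 = 1.26 bar, as in the example
    print(partial_pressure(50, 0.79))    # ppN2 = 4.74 bar

    def max_operating_depth(fraction_o2, ppo2_limit=1.4):
        # Depth at which ppO2 reaches the chosen limit.
        return 10.0 * (ppo2_limit / fraction_o2 - 1.0)

    print(max_operating_depth(0.21))     # about 56.7 m for air at 1.4 bar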
https://en.wikipedia.org/wiki/Partial_pressure
The grid method (also known as the box method or matrix method ) of multiplication is an introductory approach to multi-digit multiplication calculations that involve numbers larger than ten. Because it is often taught in mathematics education at the level of primary school or elementary school , this algorithm is sometimes called the grammar school method. [ 1 ] Compared to traditional long multiplication , the grid method differs in clearly breaking the multiplication and addition into two steps, and in being less dependent on place value. Whilst less efficient than the traditional method, grid multiplication is considered to be more reliable , in that children are less likely to make mistakes. Most pupils will go on to learn the traditional method, once they are comfortable with the grid method; but knowledge of the grid method remains a useful "fall back", in the event of confusion. It is also argued that since anyone doing a lot of multiplication would nowadays use a pocket calculator, efficiency for its own sake is less important; equally, since this means that most children will use the multiplication algorithm less often, it is useful for them to become familiar with a more explicit (and hence more memorable) method. Use of the grid method has been standard in mathematics education in primary schools in England and Wales since the introduction of a National Numeracy Strategy with its "numeracy hour" in the 1990s. It can also be found included in various curricula elsewhere. Essentially the same calculation approach, but not with the explicit grid arrangement, is also known as the partial products algorithm or partial products method . The grid method can be introduced by thinking about how to add up the number of points in a regular array, for example the number of squares of chocolate in a chocolate bar. As the size of the calculation becomes larger, it becomes easier to start counting in tens; and to represent the calculation as a box which can be sub-divided, rather than drawing a multitude of dots. [ 2 ] [ 3 ] At the simplest level, pupils might be asked to apply the method to a calculation like 3 × 17. Breaking up ("partitioning") the 17 as (10 + 7), this unfamiliar multiplication can be worked out as the sum of two simple multiplications:

×    10    7
3    30    21

so 3 × 17 = 30 + 21 = 51. This is the "grid" or "boxes" structure which gives the multiplication method its name. Faced with a slightly larger multiplication, such as 34 × 13, pupils may initially be encouraged to also break this into tens. So, expanding 34 as 10 + 10 + 10 + 4 and 13 as 10 + 3, the product 34 × 13 might be represented:

×    10    10    10    4
10   100   100   100   40
3    30    30    30    12

Totalling the contents of each row, it is apparent that the final result of the calculation is (100 + 100 + 100 + 40) + (30 + 30 + 30 + 12) = 340 + 102 = 442. Once pupils have become comfortable with the idea of splitting the whole product into contributions from separate boxes, it is a natural step to group the tens together, so that the calculation 34 × 13 becomes

×    30    4
10   300   40
3    90    12

giving the addition 300 + 40 + 90 + 12 = 442, so 34 × 13 = 442. This is the most usual form for a grid calculation. In countries such as the UK where teaching of the grid method is usual, pupils may spend a considerable period of time regularly setting out calculations like the above, until the method is entirely comfortable and familiar. The grid method extends straightforwardly to calculations involving larger numbers. For example, to calculate 345 × 28, the student could construct the grid

×    300    40    5
20   6000   800   100
8    2400   320   40

with six easy multiplications to find the answer 6900 + 2760 = 9660.
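The same partial products can be generated programmatically. A sketch (our own helper; each factor is split by place value exactly as in the grids above):

    def partial_products(a, b):
        # Split each factor into place-value parts, e.g. 345 -> [5, 40, 300].
        def parts(n):
            return [int(d) * 10**i
                    for i, d in enumerate(reversed(str(n))) if d != '0']
        return [pa * pb for pa in parts(a) for pb in parts(b)]

    grid = partial_products(345, 28)
    print(grid)        # [40, 100, 320, 800, 2400, 6000], one entry per cell
    print(sum(grid))   # 9660, agreeing with 345 x 28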
However, by this stage (at least in standard current UK teaching practice) pupils may be starting to be encouraged to set out such a calculation using the traditional long multiplication form without having to draw up a grid. Traditional long multiplication can be related to a grid multiplication in which only one of the numbers is broken into tens and units parts to be multiplied separately:

  ×     34
  10   340
   3   102

The traditional method is ultimately faster and much more compact; but it requires two significantly more difficult multiplications which pupils may at first struggle with. Compared to the grid method, traditional long multiplication may also be more abstract and less manifestly clear, so some pupils find it harder to remember what is to be done at each stage and why [ citation needed ] . Pupils may therefore be encouraged for quite a period to use the simpler grid method alongside the more efficient traditional long multiplication method, as a check and a fall-back.

While not normally taught as a standard method for multiplying fractions , the grid method can readily be applied to simple cases where it is easier to find a product by breaking it down. For example, the calculation 2 1/2 × 1 1/2 can be set out using the grid method

  ×      2    1/2
  1      2    1/2
  1/2    1    1/4

to find that the resulting product is 2 + 1/2 + 1 + 1/4 = 3 3/4.

The grid method can also be used to illustrate the multiplying out of a product of binomials , such as ( a + 3)( b + 2), a standard topic in elementary algebra (although one not usually met until secondary school ):

  ×     b     2
  a    ab    2a
  3    3b     6

Thus ( a + 3)( b + 2) = ab + 3 b + 2 a + 6.

32-bit CPUs usually lack an instruction to multiply two 64-bit integers. However, most CPUs support a "multiply with overflow" instruction, which takes two 32-bit operands, multiplies them, and puts the low 32 bits of the result in one register and the high 32 bits (the "overflow") in another. For example, these include the umull instruction added in the ARMv4t instruction set or the pmuludq instruction added in SSE2, which operates on the lower 32 bits of an SIMD register containing two 64-bit lanes. On platforms that support these instructions, a slightly modified version of the grid method is used: each 64-bit operand is split into 32-bit halves, the partial product of the two high halves is discarded (it lies entirely above bit 64), and the remaining partial products are accumulated with the appropriate shifts. Such a routine can be written in C, or directly in ARM assembly using umull; a hedged C sketch is given at the end of this section.

Mathematically, the ability to break up a multiplication in this way is known as the distributive law , which can be expressed in algebra as the property that a ( b + c ) = ab + ac . The grid method uses the distributive property twice to expand the product, once for the horizontal factor, and once for the vertical factor. Historically the grid calculation (tweaked slightly) was the basis of a method called lattice multiplication , which was the standard method of multiple-digit multiplication developed in medieval Arabic and Hindu mathematics. Lattice multiplication was introduced into Europe by Fibonacci at the start of the thirteenth century along with Arabic numerals themselves; although, like the numerals also, the ways he suggested to calculate with them were initially slow to catch on. Napier's bones were a calculating aid introduced by the Scot John Napier in 1617 to assist lattice-method calculations.
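The article's original C and ARM assembly routines are not reproduced in this extract. The following is a minimal C sketch of the grid idea under the assumption that only the low 64 bits of the product are wanted; the function name is illustrative.

#include <stdint.h>
#include <stdio.h>

/* Sketch: multiply two 64-bit values using only 32x32->64 multiplies,
   mirroring the grid method. Returns the low 64 bits of the product. */
uint64_t mul64_grid(uint64_t a, uint64_t b)
{
    uint32_t a_lo = (uint32_t)a, a_hi = (uint32_t)(a >> 32);
    uint32_t b_lo = (uint32_t)b, b_hi = (uint32_t)(b >> 32);

    /* Three grid cells; the fourth (a_hi * b_hi) is shifted past
       bit 64 and so contributes nothing to the low 64 bits. */
    uint64_t lo_lo = (uint64_t)a_lo * b_lo;
    uint64_t hi_lo = (uint64_t)a_hi * b_lo;
    uint64_t lo_hi = (uint64_t)a_lo * b_hi;

    return lo_lo + ((hi_lo + lo_hi) << 32);
}

int main(void)
{
    uint64_t a = 0x0123456789ABCDEFULL, b = 0xFEDCBA9876543210ULL;
    printf("%d\n", mul64_grid(a, b) == a * b);   /* prints 1 */
    return 0;
}

The three 32 x 32 -> 64 multiplications correspond to three cells of the two-by-two grid; the high x high cell is dropped because its contribution lies entirely above bit 64.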
https://en.wikipedia.org/wiki/Partial_products_algorithm
The partial specific volume v̄_i expresses the variation of the extensive volume of a mixture with respect to the composition of the masses. It is the partial derivative of volume with respect to the mass of the component of interest: v̄_i = (∂V/∂m_i) at constant temperature, pressure and masses of the other components, where v̄_i is the partial specific volume of component i. The partial specific volume (PSV) is usually measured in millilitres (mL) per gram (g); proteins > 30 kDa can be assumed to have a partial specific volume of 0.708 mL/g. [ 1 ] Experimental determination is possible by measuring the natural frequency of a U-shaped tube filled successively with air, buffer and protein solution. [ 2 ] The mass-weighted sum of the partial specific volumes of the components of a mixture or solution is the specific volume of the mixture, which is the inverse of the mixture's density.
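As a small numerical illustration of the last statement, the following C sketch computes a mixture density from mass fractions and partial specific volumes; the function name and component values are hypothetical, not from the article.

#include <stdio.h>

/* Sketch: the specific volume of a mixture is the mass-weighted sum of
   the partial specific volumes; the density is its reciprocal.
   w[i] are mass fractions summing to 1; vbar[i] are in mL/g. */
double mixture_density(const double w[], const double vbar[], int n)
{
    double v = 0.0;                 /* specific volume, mL/g */
    for (int i = 0; i < n; i++)
        v += w[i] * vbar[i];
    return 1.0 / v;                 /* density, g/mL */
}

int main(void)
{
    /* Hypothetical two-component protein solution: 1% protein
       (vbar = 0.73 mL/g) in water (vbar taken as 1.00 mL/g). */
    double w[]    = {0.01, 0.99};
    double vbar[] = {0.73, 1.00};
    printf("density = %.4f g/mL\n", mixture_density(w, vbar, 2));
    return 0;
}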
https://en.wikipedia.org/wiki/Partial_specific_volume
Partial stroke testing (or PST) is a technique used in a control system to allow the user to test a percentage of the possible failure modes of a shut down valve without the need to physically close the valve. PST is used to assist in determining that the safety function will operate on demand. PST is most often used on high integrity emergency shutdown valves (ESDVs) in applications where closing the valve will have a high cost burden yet proving the integrity of the valve is essential to maintaining a safe facility. In addition to ESDVs, PST is also used on high integrity pressure protection systems , or HIPPS. Partial stroke testing is not a replacement for the need to fully stroke valves, as proof testing is still a mandatory requirement.

Partial stroke testing is an accepted petroleum industry technique and is quantified in detail by standards bodies such as the International Electrotechnical Commission (IEC) and the International Society of Automation (ISA). The relevant standards from these bodies (notably IEC 61508 and IEC 61511 from the IEC, and S84 from the ISA) define the requirements for safety related systems and describe how to quantify the performance of PST systems.

IEC 61508 adopts a safety life cycle approach to the management of plant safety. During the design phase of this life cycle, the required safety performance level is determined using techniques such as Markov analysis , FMEA , fault tree analysis and HAZOP . These techniques allow the user to determine the potential frequency and consequence of hazardous activities and to quantify the level of risk. A common method for this quantification is the safety integrity level (SIL). This is quantified from one to four, with level four demanding the greatest risk reduction. Once the SIL is determined, it specifies the required performance level of the safety systems during the operational phase of the plant. The metric for measuring the performance of a safety function is called the average probability of failure on demand (or PFD avg ), and this correlates to the SIL as follows:

  SIL 1: 10^-1 > PFDavg >= 10^-2
  SIL 2: 10^-2 > PFDavg >= 10^-3
  SIL 3: 10^-3 > PFDavg >= 10^-4
  SIL 4: 10^-4 > PFDavg >= 10^-5

One method of calculating PFD avg for a basic safety function with no redundancy is the approximation PFDavg = λ_DU × TI / 2, where λ_DU is the rate of dangerous undetected failures and TI is the proof test interval. The proof test coverage (PTC) is a measure of how effective the partial stroke test is; the higher the PTC, the greater the effect of the test.

The benefits of using PST are not limited to the safety performance: gains can also be made in the production performance of a plant and in the capital cost of a plant. [ 1 ] [ 2 ] There are a number of areas where production efficiency can be improved by the successful implementation of a PST system. The main drawback of all PST systems is the increased probability of causing an accidental activation of the safety system, and thus a plant shutdown ; this is operators' primary concern about PST systems, and for this reason many PST systems remain dormant after installation. Different techniques mitigate this issue in different manners, but all systems carry an inherent risk. In addition, in some cases a PST cannot be performed due to the limitations inherent in the process or the valve being used. Further, as the PST introduces a disturbance into the process or system, it may not be appropriate for some processes or systems that are sensitive to disturbances.
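To make the role of proof test coverage concrete, the following C sketch applies a commonly quoted extension of the PFDavg approximation, in which the fraction PTC of dangerous undetected failures is revealed at the short PST interval and the remainder only at the full proof test. The formula split and all parameter values here are illustrative assumptions, not taken from the article.

#include <stdio.h>

/* Hedged sketch: PFDavg for a single (1oo1) final element with partial
   stroke testing. The PTC fraction of dangerous undetected failures is
   caught at the short PST interval, the rest at the full proof test. */
double pfd_avg(double lambda_du,   /* dangerous undetected failure rate, /h */
               double ptc,         /* proof test coverage of the PST, 0..1  */
               double t_pst,       /* partial stroke test interval, hours   */
               double t_proof)     /* full proof test interval, hours       */
{
    return ptc * lambda_du * t_pst / 2.0
         + (1.0 - ptc) * lambda_du * t_proof / 2.0;
}

int main(void)
{
    double lambda_du = 2.0e-6;              /* example failure rate */
    double year = 8760.0;
    printf("no PST:  %.2e\n", pfd_avg(lambda_du, 0.0, 0.0, year));
    printf("PTC 0.6: %.2e\n", pfd_avg(lambda_du, 0.6, year / 12, year));
    return 0;
}

With these example numbers, adding a monthly partial stroke test with 60% coverage reduces the computed PFDavg by roughly a factor of two.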
Finally, a PST cannot always differentiate between different faults or failures within the valve and actuator assembly, thus limiting the diagnostic capability. There are a number of different techniques available for partial stroke testing, and the selection of the most appropriate technique depends on the main benefits the operator is trying to gain.

Mechanical jammers are devices inserted into the valve and actuator assembly that physically prevent the valve from moving past a certain point. These are used in cases where accidentally shutting the valve would have severe consequences, or in any application where the end user prefers a mechanical device. [ 3 ] However, opinions differ on whether these devices are suitable for functional safety systems, as the safety function is offline for the duration of the test. Modern mechanical PST devices may be automated . Examples of this kind of device include direct interface products that mount between the valve and the actuator and may use cams fitted to the valve stem. [ 4 ] Other methods include adjustable actuator end stops.

The basic principle behind partial stroke testing is that the valve is moved to a predetermined position in order to determine the performance of the shut down valve. This led to the adaptation of pneumatic positioners, used on flow control valves, for use in partial stroke testing. These systems are often suitable for use on shutdown valves up to and including SIL 3. The main benefit of these systems is that positioners are common equipment on plants, and thus operators are familiar with their operation; the primary drawback is the increased risk of spurious trip caused by the introduction of additional control components that are not normally used on on/off valves. These systems are also limited to use on pneumatically actuated valves.

Some systems use an electrical switch to de-energise the solenoid valve and an electrical relay attached to the actuator to re-energise the solenoid coil when the desired PST point is reached. Electronic control systems use a configurable electronic module that connects between the supply from the ESD system and the solenoid valve . In order to perform a test, the timer de-energises the solenoid valve to simulate a shutdown and re-energises the solenoid when the required degree of partial stroke is reached. These systems are fundamentally a miniature PLC dedicated to the testing of the valve. Due to their nature, these devices do not actually form part of the safety function and are therefore 100% fail safe. With the addition of a pressure sensor and/or a position sensor for feedback, timer systems are also capable of providing intelligent diagnostics in order to diagnose the performance of all components, including the valve, actuator and solenoid valves. In addition, timers are capable of operating with any type of fluid power actuator and can also be used with subsea valves where the solenoid valve is located top-side. Another technique is to embed the control electronics into a solenoid valve enclosure, removing the need for additional control boxes; in addition there is no need to change the control schematic, as no dedicated components are required.
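The timer behaviour described above can be caricatured in a few lines. The following C sketch is a hypothetical simulation, not a real device API: it de-energises the solenoid, waits for the valve to reach the target travel, then re-energises it, re-energising on timeout as well so that a failed test still restores the valve.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical I/O: a crude simulation standing in for real valve I/O. */
static bool solenoid = true;
static double position = 0.0;               /* % of full closure */

static void set_solenoid(bool energised) { solenoid = energised; }
static double read_position(void)
{
    if (!solenoid) position += 0.5;         /* valve drifts closed */
    return position;
}

/* Illustrative PST sequence: de-energise, wait for the target travel,
   then re-energise; abort on timeout and flag the valve as stuck. */
static bool partial_stroke_test(double target_pct, int timeout_ticks)
{
    set_solenoid(false);                    /* simulate a shutdown demand */
    for (int t = 0; t < timeout_ticks; t++)
        if (read_position() >= target_pct) {
            set_solenoid(true);             /* PST point reached: reopen  */
            return true;                    /* valve moved: test passed   */
        }
    set_solenoid(true);                     /* valve stuck: test failed   */
    return false;
}

int main(void)
{
    printf("PST %s\n", partial_stroke_test(15.0, 100) ? "passed" : "failed");
    return 0;
}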
https://en.wikipedia.org/wiki/Partial_stroke_testing
In linear algebra and functional analysis , the partial trace is a generalization of the trace . Whereas the trace is a scalar -valued function on operators, the partial trace is an operator -valued function. The partial trace has applications in quantum information and decoherence , which is relevant for quantum measurement and thereby to the decoherent approaches to interpretations of quantum mechanics , including consistent histories and the relative state interpretation .

Suppose V, W are finite-dimensional vector spaces over a field , with dimensions m and n, respectively. For any space A, let L(A) denote the space of linear operators on A. The partial trace over W is then written as Tr_W : L(V ⊗ W) → L(V), where ⊗ denotes the tensor product . It is defined as follows: For T ∈ L(V ⊗ W), let e_1, ..., e_m and f_1, ..., f_n be bases for V and W respectively; then T has a matrix representation relative to the basis e_k ⊗ f_ℓ of V ⊗ W. Now for indices k, i in the range 1, ..., m, consider the sum

  b_{k,i} = Σ_{ℓ=1}^{n} T_{(k,ℓ),(i,ℓ)}.

This gives a matrix b_{k,i}. The associated linear operator on V is independent of the choice of bases and is by definition the partial trace . Among physicists, this is often called "tracing out" or "tracing over" W to leave only an operator on V in the context where W and V are Hilbert spaces associated with quantum systems (see below).

The partial trace operator can be defined invariantly (that is, without reference to a basis) as follows: it is the unique linear map Tr_W : L(V ⊗ W) → L(V) such that Tr_W(A ⊗ B) = A · Tr(B) for all A ∈ L(V) and B ∈ L(W). To see that this condition determines the partial trace uniquely, let v_1, ..., v_m form a basis for V, let w_1, ..., w_n form a basis for W, let E_ij : V → V be the map that sends v_i to v_j (and all other basis elements to zero), and let F_kl : W → W be the map that sends w_k to w_l. Since the vectors v_i ⊗ w_k form a basis for V ⊗ W, the maps E_ij ⊗ F_kl form a basis for L(V ⊗ W). A number of properties of the partial trace follow from this abstract definition.

It is the partial trace of linear transformations that is the subject of Joyal, Street, and Verity's notion of a traced monoidal category . A traced monoidal category is a monoidal category (C, ⊗, I) together with, for objects X, Y, U in the category, a function of Hom-sets Tr^U_{X,Y} : Hom(X ⊗ U, Y ⊗ U) → Hom(X, Y), satisfying certain axioms. Another case of this abstract notion of partial trace takes place in the category of finite sets and bijections between them, in which the monoidal product is disjoint union.
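In coordinates, the defining sum is straightforward to implement. The following C sketch is my own illustration, with an assumed row-major layout in which the W index varies fastest; it computes b_{k,i} = Σ_ℓ T_{(k,ℓ),(i,ℓ)} for a real matrix.

#include <stddef.h>

/* Sketch: partial trace over W of an operator T on V (x) W.
   T is an (m*n) x (m*n) real matrix stored row-major, with the
   composite basis e_k (x) f_l ordered so the W index l varies
   fastest, i.e. row index (k,l) = k*n + l. Writes the m x m
   result into b. */
void partial_trace_W(size_t m, size_t n, const double *T, double *b)
{
    size_t dim = m * n;
    for (size_t k = 0; k < m; k++)
        for (size_t i = 0; i < m; i++) {
            double s = 0.0;
            for (size_t l = 0; l < n; l++)
                s += T[(k * n + l) * dim + (i * n + l)];
            b[k * m + i] = s;   /* b_{k,i} = sum_l T_{(k,l),(i,l)} */
        }
}

The same indexing convention is used in the quantum example further below.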
One can show that for any finite sets X, Y, U and any bijection X + U ≅ Y + U, there exists a corresponding "partially traced" bijection X ≅ Y.

The partial trace generalizes to operators on infinite dimensional Hilbert spaces. Suppose V, W are Hilbert spaces, and let {f_ℓ}_ℓ be an orthonormal basis for W. Now there is an isometric isomorphism V ⊗ W ≅ ⊕_ℓ V (one copy of V for each basis vector f_ℓ). Under this decomposition, any operator T ∈ L(V ⊗ W) can be regarded as an infinite matrix of operators on V with entries T_{kℓ} ∈ L(V). First suppose T is a non-negative operator. In this case, all the diagonal entries of the above matrix are non-negative operators on V. If the sum Σ_ℓ T_{ℓℓ} converges in the strong operator topology of L(V), it is independent of the chosen basis of W. The partial trace Tr_W(T) is defined to be this operator. The partial trace of a self-adjoint operator is defined if and only if the partial traces of the positive and negative parts are defined.

Suppose W has an orthonormal basis, which we denote by ket vector notation as {|ℓ⟩}_ℓ. Then T can be expanded as T = Σ_{ℓ,ℓ'} T^{(ℓ,ℓ')} ⊗ |ℓ⟩⟨ℓ'| with T^{(ℓ,ℓ')} ∈ L(V), and Tr_W(T) = Σ_ℓ T^{(ℓ,ℓ)}. The superscripts in parentheses do not represent matrix components, but instead label the matrix itself.

In the case of finite dimensional Hilbert spaces, there is a useful way of looking at partial trace involving integration with respect to a suitably normalized Haar measure μ over the unitary group U(W) of W. Suitably normalized means that μ is taken to be a measure with total mass dim(W). Theorem . Suppose V, W are finite dimensional Hilbert spaces. Then the operator ∫_{U(W)} (I_V ⊗ U) T (I_V ⊗ U*) dμ(U) commutes with all operators of the form I_V ⊗ S and hence is uniquely of the form R ⊗ I_W. The operator R is the partial trace of T.

The partial trace can be viewed as a quantum operation . Consider a quantum mechanical system whose state space is the tensor product H_A ⊗ H_B of Hilbert spaces. A mixed state is described by a density matrix ρ, that is a non-negative trace-class operator of trace 1 on the tensor product H_A ⊗ H_B. The partial trace of ρ with respect to the system B, denoted by ρ^A, is called the reduced state of ρ on system A. In symbols, [ 1 ] ρ^A = Tr_B ρ. To show that this is indeed a sensible way to assign a state on the A subsystem to ρ, we offer the following justification. Let M be an observable on the subsystem A; then the corresponding observable on the composite system is M ⊗ I. However one chooses to define a reduced state ρ^A, there should be consistency of measurement statistics. The expectation value of M after the subsystem A is prepared in ρ^A and that of M ⊗ I when the composite system is prepared in ρ should be the same, i.e. the following equality should hold: Tr(M ρ^A) = Tr((M ⊗ I) ρ). We see that this is satisfied if ρ^A is as defined above via the partial trace. Furthermore, such an operation is unique. Let T(H) be the Banach space of trace-class operators on the Hilbert space H. It can be easily checked that the partial trace, viewed as a map T(H_A ⊗ H_B) → T(H_A), is completely positive and trace-preserving.
The density matrix ρ is Hermitian , positive semi-definite , and has a trace of 1. It has a spectral decomposition ρ = Σ_m p_m |Ψ_m⟩⟨Ψ_m|, with p_m ≥ 0 and Σ_m p_m = 1. It's easy to see that the partial trace ρ^A also satisfies these conditions. For example, for any pure state |ψ_A⟩ in H_A, we have

  ⟨ψ_A| ρ^A |ψ_A⟩ = Σ_m p_m Tr_B[⟨ψ_A|Ψ_m⟩⟨Ψ_m|ψ_A⟩] ≥ 0.

Note that the term Tr_B[⟨ψ_A|Ψ_m⟩⟨Ψ_m|ψ_A⟩] represents the probability of finding the state |ψ_A⟩ when the composite system is in the state |Ψ_m⟩. This proves the positive semi-definiteness of ρ^A.

The partial trace map as given above induces a dual map Tr_B* between the C*-algebras of bounded operators on H_A and H_A ⊗ H_B, given by Tr_B*(M) = M ⊗ I. Tr_B* maps observables to observables and is the Heisenberg picture representation of Tr_B.

Suppose instead of quantum mechanical systems, the two systems A and B are classical. The space of observables for each system is then an abelian C*-algebra. These are of the form C(X) and C(Y) respectively, for compact spaces X, Y. The state space of the composite system is simply C(X × Y) ≅ C(X) ⊗ C(Y). A state on the composite system is a positive element ρ of the dual of C(X × Y), which by the Riesz–Markov theorem corresponds to a regular Borel measure on X × Y. The corresponding reduced state is obtained by projecting the measure ρ to X. Thus the partial trace is the quantum mechanical equivalent of this operation.
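As a small self-contained check of the reduced-state formula (my own example, not from the article): tracing out the second qubit of the Bell state |Φ+⟩ = (|00⟩ + |11⟩)/√2 leaves the maximally mixed state I/2.

#include <stdio.h>

/* Illustrative check: reduced state of the Bell state on a 2x2-level
   system. The 4x4 density matrix is real here, so doubles suffice. */
int main(void)
{
    double rho[4][4] = {{0}};
    rho[0][0] = rho[0][3] = rho[3][0] = rho[3][3] = 0.5; /* |Phi+><Phi+| */

    /* Partial trace over B: rhoA[k][i] = sum_l rho[2k+l][2i+l] */
    double rhoA[2][2] = {{0}};
    for (int k = 0; k < 2; k++)
        for (int i = 0; i < 2; i++)
            for (int l = 0; l < 2; l++)
                rhoA[k][i] += rho[2 * k + l][2 * i + l];

    for (int k = 0; k < 2; k++)
        printf("%.2f %.2f\n", rhoA[k][0], rhoA[k][1]);
    /* Prints the maximally mixed state: 0.50 0.00 / 0.00 0.50 */
    return 0;
}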
https://en.wikipedia.org/wiki/Partial_trace
In abstract algebra , a partially ordered group is a group ( G , +) equipped with a partial order "≤" that is translation-invariant ; in other words, "≤" has the property that, for all a , b , and g in G , if a ≤ b then a + g ≤ b + g and g + a ≤ g + b . An element x of G is called positive if 0 ≤ x . The set of elements 0 ≤ x is often denoted with G + , and is called the positive cone of G . By translation invariance, we have a ≤ b if and only if 0 ≤ - a + b . So we can reduce the partial order to a monadic property: a ≤ b if and only if - a + b ∈ G + .

For the general group G , the existence of a positive cone specifies an order on G . A group G is a partially orderable group if and only if there exists a subset H (which is G + ) of G such that: 0 ∈ H ; if a ∈ H and b ∈ H then a + b ∈ H ; if a ∈ H then - x + a + x ∈ H for each x of G ; and if a ∈ H and - a ∈ H then a = 0.

A partially ordered group G with positive cone G + is said to be unperforated if n · g ∈ G + for some positive integer n implies g ∈ G + . Being unperforated means there is no "gap" in the positive cone G + .

If the order on the group is a linear order , then it is said to be a linearly ordered group . If the order on the group is a lattice order , i.e. any two elements have a least upper bound, then it is a lattice-ordered group (shortly l-group , though usually typeset with a script l: ℓ-group). A Riesz group is an unperforated partially ordered group with a property slightly weaker than being a lattice-ordered group. Namely, a Riesz group satisfies the Riesz interpolation property : if x 1 , x 2 , y 1 , y 2 are elements of G and x i ≤ y j , then there exists z ∈ G such that x i ≤ z ≤ y j .

If G and H are two partially ordered groups, a map from G to H is a morphism of partially ordered groups if it is both a group homomorphism and a monotonic function . The partially ordered groups, together with this notion of morphism, form a category . Partially ordered groups are used in the definition of valuations of fields .

The Archimedean property of the real numbers can be generalized to partially ordered groups. A partially ordered group G is called integrally closed if for all elements a and b of G , if a^n ≤ b for all natural n then a ≤ 1. [ 1 ] This property is somewhat stronger than the fact that a partially ordered group is Archimedean , though for a lattice-ordered group to be integrally closed and to be Archimedean is equivalent. [ 2 ] There is a theorem that every integrally closed directed group is already abelian . This has to do with the fact that a directed group is embeddable into a complete lattice-ordered group if and only if it is integrally closed. [ 1 ]

Everett, C. J.; Ulam, S. (1945). "On Ordered Groups". Transactions of the American Mathematical Society . 57 (2): 208–216. doi : 10.2307/1990202 . JSTOR 1990202 .
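A hedged sketch of how a positive cone induces the order: for the integers under addition, taking H = {0, 1, 2, ...} and defining a ≤ b if and only if b - a ∈ H reproduces the usual order, and translation invariance is immediate. The C snippet below is illustrative only.

#include <stdbool.h>
#include <stdio.h>

/* H = G+ = the non-negative integers; recover the order from the cone. */
static bool in_cone(long x) { return x >= 0; }
static bool leq(long a, long b) { return in_cone(b - a); }

int main(void)
{
    long a = -3, b = 5, g = 7;
    /* Translation invariance: a <= b implies a + g <= b + g. */
    printf("%d %d\n", leq(a, b), leq(a + g, b + g));   /* 1 1 */
    return 0;
}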
https://en.wikipedia.org/wiki/Partially_ordered_group
In mathematics , especially order theory , a partial order on a set is an arrangement such that, for certain pairs of elements, one precedes the other. The word partial is used to indicate that not every pair of elements needs to be comparable; that is, there may be pairs for which neither element precedes the other. Partial orders thus generalize total orders , in which every pair is comparable. Formally, a partial order is a homogeneous binary relation that is reflexive , antisymmetric , and transitive . A partially ordered set ( poset for short) is an ordered pair P = (X, ≤) consisting of a set X (called the ground set of P) and a partial order ≤ on X. When the meaning is clear from context and there is no ambiguity about the partial order, the set X itself is sometimes called a poset.

The term partial order usually refers to the reflexive partial order relations, referred to in this article as non-strict partial orders. However some authors use the term for the other common type of partial order relations, the irreflexive partial order relations, also called strict partial orders. Strict and non-strict partial orders can be put into a one-to-one correspondence , so for every strict partial order there is a unique corresponding non-strict partial order, and vice versa.

A reflexive , weak , [ 1 ] or non-strict partial order , [ 2 ] commonly referred to simply as a partial order , is a homogeneous relation ≤ on a set P that is reflexive , antisymmetric , and transitive . That is, for all a, b, c ∈ P, it must satisfy: a ≤ a (reflexivity); if a ≤ b and b ≤ a, then a = b (antisymmetry); and if a ≤ b and b ≤ c, then a ≤ c (transitivity). A non-strict partial order is also known as an antisymmetric preorder .

An irreflexive , strong , [ 1 ] or strict partial order is a homogeneous relation < on a set P that is irreflexive , asymmetric and transitive ; that is, it satisfies the following conditions for all a, b, c ∈ P: not a < a (irreflexivity); if a < b, then not b < a (asymmetry); and if a < b and b < c, then a < c (transitivity). A transitive relation is asymmetric if and only if it is irreflexive. [ 3 ] So the definition is the same if it omits either irreflexivity or asymmetry (but not both). A strict partial order is also known as an asymmetric strict preorder .

Strict and non-strict partial orders on a set P are closely related. A non-strict partial order ≤ may be converted to a strict partial order by removing all relationships of the form a ≤ a; that is, the strict partial order is the set < := ≤ ∖ Δ_P, where Δ_P := {(p, p) : p ∈ P} is the identity relation on P × P and ∖ denotes set subtraction . Conversely, a strict partial order < on P may be converted to a non-strict partial order by adjoining all relationships of that form; that is, ≤ := Δ_P ∪ < is a non-strict partial order.
Thus, if ≤ is a non-strict partial order, then the corresponding strict partial order < is the irreflexive kernel given by a < b if a ≤ b and a ≠ b. Conversely, if < is a strict partial order, then the corresponding non-strict partial order ≤ is the reflexive closure given by: a ≤ b if a < b or a = b.

The dual (or opposite ) R^op of a partial order relation R is defined by letting R^op be the converse relation of R, i.e. x R^op y if and only if y R x. The dual of a non-strict partial order is a non-strict partial order, [ 4 ] and the dual of a strict partial order is a strict partial order. The dual of a dual of a relation is the original relation.

Given a set P and a partial order relation, typically the non-strict partial order ≤, we may uniquely extend our notation to define four partial order relations ≤, <, ≥, and >, where ≤ is a non-strict partial order relation on P, < is the associated strict partial order relation on P (the irreflexive kernel of ≤), ≥ is the dual of ≤, and > is the dual of <. Strictly speaking, the term partially ordered set refers to a set with all of these relations defined appropriately. But practically, one need only consider a single relation, (P, ≤) or (P, <), or, in rare instances, the non-strict and strict relations together, (P, ≤, <). [ 5 ]

The term ordered set is sometimes used as a shorthand for partially ordered set , as long as it is clear from the context that no other kind of order is meant. In particular, totally ordered sets can also be referred to as "ordered sets", especially in areas where these structures are more common than posets. Some authors use different symbols than ≤, such as ⊑ [ 6 ] or ⪯, [ 7 ] to distinguish partial orders from total orders.

When referring to partial orders, ≤ should not be taken as the complement of >. The relation > is the converse of the irreflexive kernel of ≤, which is always a subset of the complement of ≤, but > is equal to the complement of ≤ if, and only if , ≤ is a total order. [ a ]

Another way of defining a partial order, found in computer science , is via a notion of comparison . Specifically, given ≤, <, ≥, and > as defined previously, it can be observed that two elements x and y may stand in any of four mutually exclusive relationships to each other: either x < y, or x = y, or x > y, or x and y are incomparable . This can be represented by a function compare : P × P → {<, >, =, |} that returns one of four codes when given two elements.
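For finite ground sets, both the axioms and the four-valued comparison are easy to check mechanically. The following C sketch (an illustration, not from the article) represents a relation as a boolean matrix, verifies reflexivity, antisymmetry and transitivity, and implements compare for the divisibility order on {1, 2, 3, 6}.

#include <stdbool.h>
#include <stdio.h>

#define N 4

/* Verify that leq[i][j] (meaning i <= j) is a non-strict partial order. */
static bool is_partial_order(const bool leq[N][N])
{
    for (int a = 0; a < N; a++) {
        if (!leq[a][a]) return false;                        /* reflexive   */
        for (int b = 0; b < N; b++) {
            if (a != b && leq[a][b] && leq[b][a]) return false; /* antisym. */
            for (int c = 0; c < N; c++)
                if (leq[a][b] && leq[b][c] && !leq[a][c])
                    return false;                            /* transitive  */
        }
    }
    return true;
}

/* Four-valued comparison: '<', '>', '=' or '|' (incomparable). */
static char compare(const bool leq[N][N], int x, int y)
{
    if (leq[x][y] && leq[y][x]) return '=';
    if (leq[x][y]) return '<';
    if (leq[y][x]) return '>';
    return '|';
}

int main(void)
{
    /* Divisibility on {1, 2, 3, 6} (indices 0..3). */
    bool leq[N][N] = {
        {1, 1, 1, 1},   /* 1 divides everything */
        {0, 1, 0, 1},   /* 2 | 2, 2 | 6         */
        {0, 0, 1, 1},   /* 3 | 3, 3 | 6         */
        {0, 0, 0, 1},   /* 6 | 6                */
    };
    printf("%d %c\n", is_partial_order(leq), compare(leq, 1, 2)); /* 1 | */
    return 0;
}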
[ 8 ] [ 9 ] This definition is equivalent to a partial order on a setoid , where equality is taken to be a defined equivalence relation rather than set equality. [ 10 ] Wallis defines a more general notion of a partial order relation as any homogeneous relation that is transitive and antisymmetric . This includes both reflexive and irreflexive partial orders as subtypes. [ 1 ]

A finite poset can be visualized through its Hasse diagram . [ 11 ] Specifically, taking a strict partial order relation (P, <), a directed acyclic graph (DAG) may be constructed by taking each element of P to be a node and each element of < to be an edge. The transitive reduction of this DAG [ b ] is then the Hasse diagram. Similarly this process can be reversed to construct strict partial orders from certain DAGs. In contrast, the graph associated to a non-strict partial order has self-loops at every node and therefore is not a DAG; when a non-strict order is said to be depicted by a Hasse diagram, actually the corresponding strict order is shown.

Standard examples of posets arising in mathematics include the real numbers ordered by the usual ≤, the subsets of a given set ordered by inclusion, and the natural numbers ordered by divisibility. One familiar example of a partially ordered set is a collection of people ordered by genealogical descendancy. Some pairs of people bear the descendant-ancestor relationship, but other pairs of people are incomparable, with neither being a descendant of the other.

In order of increasing strength, i.e., decreasing sets of pairs, three of the possible partial orders on the Cartesian product of two partially ordered sets are (see Fig. 4): the lexicographical order, the product order, and the reflexive closure of the direct product of the corresponding strict orders. All three can similarly be defined for the Cartesian product of more than two sets. Applied to ordered vector spaces over the same field , the result is in each case also an ordered vector space. See also orders on the Cartesian product of totally ordered sets .

Another way to combine two (disjoint) posets is the ordinal sum [ 12 ] (or linear sum ), [ 13 ] Z = X ⊕ Y, defined on the union of the underlying sets X and Y by the order a ≤_Z b if and only if: a, b ∈ X with a ≤_X b, or a, b ∈ Y with a ≤_Y b, or a ∈ X and b ∈ Y. If two posets are well-ordered , then so is their ordinal sum. [ 14 ] Series-parallel partial orders are formed from the ordinal sum operation (in this context called series composition) and another operation called parallel composition. Parallel composition is the disjoint union of two partially ordered sets, with no order relation between elements of one set and elements of the other set.

The examples use the poset (P({x, y, z}), ⊆) consisting of the set of all subsets of a three-element set {x, y, z}, ordered by set inclusion (see Fig. 1). There are several notions of "greatest" and "least" element in a poset P, notably: greatest and least elements, maximal and minimal elements, and upper and lower bounds. As another example, consider the positive integers , ordered by divisibility: 1 is a least element, as it divides all other elements; on the other hand this poset does not have a greatest element. This partially ordered set does not even have any maximal elements, since any g divides for instance 2g, which is distinct from it, so g is not maximal. If the number 1 is excluded, while keeping divisibility as ordering on the elements greater than 1, then the resulting poset does not have a least element, but any prime number is a minimal element for it.
In this poset, 60 is an upper bound (though not a least upper bound) of the subset {2, 3, 5, 10}, which does not have any lower bound (since 1 is not in the poset); on the other hand 2 is a lower bound of the subset of powers of 2, which does not have any upper bound. If the number 0 is included, this will be the greatest element, since this is a multiple of every integer (see Fig. 6).

Given two partially ordered sets ( S , ≤) and ( T , ≼), a function f : S → T is called order-preserving , or monotone , or isotone , if for all x, y ∈ S, x ≤ y implies f(x) ≼ f(y). If ( U , ≲) is also a partially ordered set, and both f : S → T and g : T → U are order-preserving, their composition g ∘ f : S → U is order-preserving, too. A function f : S → T is called order-reflecting if for all x, y ∈ S, f(x) ≼ f(y) implies x ≤ y. If f is both order-preserving and order-reflecting, then it is called an order-embedding of ( S , ≤) into ( T , ≼). In the latter case, f is necessarily injective , since f(x) = f(y) implies x ≤ y and y ≤ x, and in turn x = y according to the antisymmetry of ≤. If an order-embedding between two posets S and T exists, one says that S can be embedded into T . If an order-embedding f : S → T is bijective , it is called an order isomorphism , and the partial orders ( S , ≤) and ( T , ≼) are said to be isomorphic . Isomorphic orders have structurally similar Hasse diagrams (see Fig. 7a). It can be shown that if order-preserving maps f : S → T and g : T → U exist such that g ∘ f and f ∘ g yield the identity function on S and T , respectively, then S and T are order-isomorphic. [ 15 ]

For example, a mapping f : N → P(N) from the set of natural numbers (ordered by divisibility) to the power set of natural numbers (ordered by set inclusion) can be defined by taking each number to the set of its prime divisors . It is order-preserving: if x divides y , then each prime divisor of x is also a prime divisor of y . However, it is neither injective (since it maps both 12 and 6 to {2, 3}) nor order-reflecting (since 12 does not divide 6). Taking instead each number to the set of its prime power divisors defines a map g : N → P(N) that is order-preserving, order-reflecting, and hence an order-embedding. It is not an order-isomorphism (since it, for instance, does not map any number to the set {4}), but it can be made one by restricting its codomain to g(N). Fig. 7b shows a subset of N and its isomorphic image under g . The construction of such an order-isomorphism into a power set can be generalized to a wide class of partial orders, called distributive lattices ; see Birkhoff's representation theorem .
Sequence A001035 in OEIS gives the number of partial orders on a set of n labeled elements: 1, 1, 3, 19, 219, 4231, 130023, ... (The number of preorders on n labeled elements can be obtained from these counts via the Stirling numbers of the second kind S(n, k).) The number of strict partial orders is the same as that of partial orders. If the count is made only up to isomorphism, the sequence 1, 1, 2, 5, 16, 63, 318, ... (sequence A000112 in the OEIS ) is obtained.

A poset P* = (X*, ≤*) is called a subposet of another poset P = (X, ≤) provided that X* is a subset of X and ≤* is a subset of ≤. The latter condition is equivalent to the requirement that for any x and y in X* (and thus also in X), if x ≤* y then x ≤ y. If P* is a subposet of P and furthermore, for all x and y in X*, whenever x ≤ y we also have x ≤* y, then we call P* the subposet of P induced by X*, and write P* = P[X*].

A partial order ≤* on a set X is called an extension of another partial order ≤ on X provided that for all elements x, y ∈ X, whenever x ≤ y, it is also the case that x ≤* y. A linear extension is an extension that is also a linear (that is, total) order. As a classic example, the lexicographic order of totally ordered sets is a linear extension of their product order. Every partial order can be extended to a total order ( order-extension principle ). [ 16 ] In computer science , algorithms for finding linear extensions of partial orders (represented as the reachability orders of directed acyclic graphs ) are called topological sorting .

Every poset (and every preordered set ) may be considered as a category where, for objects x and y, there is at most one morphism from x to y. More explicitly, let hom(x, y) = {(x, y)} if x ≤ y (and otherwise the empty set ) and (y, z) ∘ (x, y) = (x, z). Such categories are sometimes called posetal . Posets are equivalent to one another if and only if they are isomorphic . In a poset, the smallest element, if it exists, is an initial object , and the largest element, if it exists, is a terminal object . Also, every preordered set is equivalent to a poset. Finally, every subcategory of a poset is isomorphism-closed .

If P is a partially ordered set that has also been given the structure of a topological space , then it is customary to assume that {(a, b) : a ≤ b} is a closed subset of the topological product space P × P. Under this assumption partial order relations are well behaved at limits in the sense that if lim_{i→∞} a_i = a, and lim_{i→∞} b_i = b, and for all i, a_i ≤ b_i, then a ≤ b.
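Topological sorting is simple enough to sketch. The following C program (an illustration; it assumes the input relation is acyclic, as any finite strict partial order is) applies Kahn's algorithm to the divisibility order on {1, 2, 3, 6}, repeatedly removing a minimal element to produce a linear extension.

#include <stdio.h>

#define N 4

int main(void)
{
    int edge[N][N] = {{0}}, indeg[N] = {0}, out[N], done[N] = {0}, n = 0;
    int labels[N] = {1, 2, 3, 6};
    /* Strict order as edges: 1<2, 1<3, 2<6, 3<6. */
    edge[0][1] = edge[0][2] = edge[1][3] = edge[2][3] = 1;

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            indeg[j] += edge[i][j];

    while (n < N) {                            /* assumes acyclicity */
        for (int i = 0; i < N; i++)
            if (!done[i] && indeg[i] == 0) {   /* pick a minimal element */
                out[n++] = i;
                done[i] = 1;
                for (int j = 0; j < N; j++)
                    if (edge[i][j]) indeg[j]--; /* remove its edges */
                break;
            }
    }
    for (int i = 0; i < N; i++) printf("%d ", labels[out[i]]);
    printf("\n");                               /* e.g. 1 2 3 6 */
    return 0;
}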
[ 17 ]

A convex set in a poset P is a subset I of P with the property that, for any x and y in I and any z in P, if x ≤ z ≤ y, then z is also in I. This definition generalizes the definition of intervals of real numbers . When there is possible confusion with convex sets of geometry , one uses order-convex instead of "convex". A convex sublattice of a lattice L is a sublattice of L that is also a convex set of L. Every nonempty convex sublattice can be uniquely represented as the intersection of a filter and an ideal of L.

An interval in a poset P is a subset that can be defined with interval notation: [a, b] = {x : a ≤ x ≤ b}, [a, b) = {x : a ≤ x < b}, (a, b] = {x : a < x ≤ b}, and (a, b) = {x : a < x < b}. Whenever a ≤ b does not hold, all these intervals are empty. Every interval is a convex set, but the converse does not hold; for example, in the poset of divisors of 120, ordered by divisibility (see Fig. 7b), the set {1, 2, 4, 5, 8} is convex, but not an interval.

An interval I is bounded if there exist elements a, b ∈ P such that I ⊆ [a, b]. Every interval that can be represented in interval notation is obviously bounded, but the converse is not true. For example, let P = (0, 1) ∪ (1, 2) ∪ (2, 3) as a subposet of the real numbers. The subset (1, 2) is a bounded interval, but it has no infimum or supremum in P, so it cannot be written in interval notation using elements of P.

A poset is called locally finite if every bounded interval is finite. For example, the integers are locally finite under their natural ordering. The lexicographical order on the cartesian product N × N is not locally finite, since (1, 2) ≤ (1, 3) ≤ (1, 4) ≤ (1, 5) ≤ ... ≤ (2, 1). Using the interval notation, the property " a is covered by b " can be rephrased equivalently as [a, b] = {a, b}. This concept of an interval in a partial order should not be confused with the particular class of partial orders known as the interval orders .
https://en.wikipedia.org/wiki/Partially_ordered_set
Partially premixed combustion (PPC), also known as PPCI (partially premixed compression ignition) or GDCI (gasoline direct-injection compression ignition), [ 1 ] [ 2 ] [ 3 ] [ 4 ] is a modern combustion process intended for use in the internal combustion engines of automobiles and other motorized vehicles in the future. Its high specific power , high fuel efficiency and low exhaust pollution have made it a promising technology. As a compression-ignition process, the fuel mixture ignites due to the increase in temperature that occurs with compression rather than a spark from a spark plug. [ 5 ] A PPC engine injects and premixes a charge during the compression stroke. This premixed charge is too lean to ignite during the compression stroke; the charge ignites only after the last fuel injection ends near top dead centre (TDC). The fuel efficiency and working principle of a PPC engine resemble those of a Diesel engine , but the PPC engine can be run on a variety of fuels, and the partially premixed charge burns clean. [ 6 ] Challenges with using gasoline in a PPC engine arise from the low lubricity and the low cetane value of gasoline. Use of fuel additives or gasoline-diesel or gasoline- biodiesel blends can mitigate the various problems with gasoline. [ 7 ]
https://en.wikipedia.org/wiki/Partially_premixed_combustion
Participatory design (originally co-operative design , now often co-design ) is an approach to design attempting to actively involve all stakeholders (e.g. employees, partners, customers, citizens, end users) in the design process to help ensure the result meets their needs and is usable . Participatory design is an approach which is focused on processes and procedures of design and is not a design style. The term is used in a variety of fields, e.g. software design , urban design , architecture , landscape architecture , product design , sustainability , graphic design , industrial design , planning, and health services development, as a way of creating environments that are more responsive and appropriate to their inhabitants' and users' cultural, emotional, spiritual and practical needs. It is also one approach to placemaking .

Recent research suggests that designers create more innovative concepts and ideas when working within a co-design environment with others than they do when creating ideas on their own. [ 1 ] [ 2 ] Companies increasingly rely on their user communities to generate new product ideas , marketing them as "user-designed" products to the wider consumer market . Consumers who do not actively participate but observe this user-driven approach show a preference for products from such firms over those driven by designers. This preference is attributed to an enhanced identification with firms adopting a user-driven philosophy , and to the empowerment consumers experience by being indirectly involved in the design process. If consumers feel dissimilar to participating users, especially in demographics or expertise, the effects are weakened. Additionally, if a user-driven firm is only selectively open to user participation, rather than fully inclusive, observing consumers may not feel socially included, attenuating the identified preference. [ 3 ]

Participatory design has been used in many settings and at various scales. For some, this approach has a political dimension of user empowerment and democratization. [ 4 ] This inclusion of external parties in the design process does not excuse designers of their responsibilities. In their article "Participatory Design and Prototyping", Wendy Mackay and Michel Beaudouin-Lafon support this point by stating that "[a] common misconception about participatory design is that designers are expected to abdicate their responsibilities as designers and leave the design to users. This is never the case: designers must always consider what users can and cannot contribute." [ 5 ]

In several Scandinavian countries, during the 1960s and 1970s, participatory design was rooted in work with trade unions; its ancestry also includes action research and sociotechnical design . [ 6 ] In participatory design, participants (putative, potential or future) are invited to cooperate with designers, researchers and developers during an innovation process. Co-design requires the end user's participation not only in decision making but also in idea generation. [ 7 ] Potentially, they participate during several stages of an innovation process: they participate during the initial exploration and problem definition, both to help define the problem and to focus ideas for solution, and during development, they help evaluate proposed solutions.
[ 2 ] Maarten Pieters and Stefanie Jansen describe co-design as part of a complete co-creation process, which refers to the "transparent process of value creation in ongoing, productive collaboration with, and supported by all relevant parties, with end-users playing a central role" and covers all stages of a development process. [ 8 ] In "Co-designing for Society", Deborah Szebeko and Lauren Tan list various precursors of co-design, starting with the Scandinavian participatory design movement, and then state "Co-design differs from some of these areas as it includes all stakeholders of an issue not just the users, throughout the entire process from research to implementation." [ 9 ] In contrast, Elizabeth Sanders and Pieter Stappers state that "the terminology used until the recent obsession with what is now called co-creation/co-design" was "participatory design". [ 7 ] They also discuss the differences between co-design and co-creation and how they are "often confused and/or treated synonymously with one another". [ 7 ] In their words, "Co-creation is a very broad term with applications ranging from the physical to the metaphysical and from the material to the spiritual", while seeing "co-design [as] a specific instance of co-creation". [ 7 ] Pulling from the idea of what co-creation is, the definition of co-design in the context of their paper developed into "the creativity of designers and people not trained in design working together in the design development process". [ 7 ] Another term brought up in this article is front end design, which was formerly known as pre-design. "The goal of the explorations in the front end is to determine what is to be designed and sometimes what should not be designed and manufactured"; the front end provides a space for the initial stages of co-design to take place. [ 7 ]

An alternate definition of co-design has been brought up by Maria Gabriela Sanchez and Lois Frankel. They proposed that "Co-design may be considered, for the purpose of this study, as an interdisciplinary process that involves designers and non-designers in the development of design solutions" and that "the success of the interdisciplinary process depends on the participation of all the stakeholders in the project". [ 10 ] "Co-design is a perfect example of interdisciplinary work, where designer, researcher, and user work collaboratively in order to reach a common goal. The concept of interdisciplinarity, however, becomes broader in this context where it not only results from the union of different academic disciplines, but from the combination of different perspectives on a problem or topic." [ 10 ]

Similarly, another perspective comes from Golsby-Smith's "Fourth Order Design", which outlines a design process in which end-user participation is required and favours individual process over outcome. [ 11 ] Buchanan's definition of culture as a verb is a key part of Golsby-Smith's argument in favour of fourth order design. [ 11 ] In Buchanan's words, "Culture is not a state, expressed in an ideology or a body of doctrines. It is an activity. Culture is the activity of ordering, disordering and reordering in the search for understanding and for values which guide action." [ 12 ] Therefore, to design for the fourth order one must design within the widest scope. The system is discussion, and the focus falls onto process rather than outcome.
[ 11 ] The idea that culture and people are an integral part of participatory design is supported by the idea that a "key feature of the field is that it involves people or communities: it is not merely a mental place or a series of processes". [ 11 ] "Just as a product is not only a thing, but exists within a series of connected processes, so these processes do not live in a vacuum, but move through a field of less tangible factors such as values, beliefs and the wider context of other contingent processes." [ 11 ] As described by Sanders and Stappers, [ 7 ] one could position co-design as a form of human-centered design across two different dimensions. One dimension is the emphasis on research or design, another dimension is how much people are involved. Therefore, there are many forms of co-design, with different degrees of emphasis on research or design and different degrees of stakeholder involvement. For instance, one of the forms of co-design which involves stakeholders strongly early at the front end design process in the creative activities is generative co-design. [ 13 ] Generative co-design is increasingly being used to involve different stakeholders as patient, care professionals and designers actively in the creative making process to develop health services. [ 14 ] [ 15 ] Another dimension to consider is that of the crossover between design research and education. An example of this is a study that was completed at the Middle East Technical University in Turkey, the purpose of which was to look into the use of “team development [in] enhancing interdisciplinary collaboration between design and engineering students using design thinking”. [ 16 ] The students in this study were tasked with completing a group project and reporting on the experience of working together. One of the main takeaways was that "Interdisciplinary collaboration is an effective way to address complex problems with creative solutions. However, a successful collaboration requires teams first to get ready to work in harmony towards a shared goal and to appreciate interdisciplinarity" [ 16 ] From the 1960s onward there was a growing demand for greater consideration of community opinions in major decision-making. In Australia many people believed that they were not being planned 'for' but planned 'at'. (Nichols 2009). A lack of consultation made the planning system seem paternalistic and without proper consideration of how changes to the built environment affected its primary users. In Britain "the idea that the public should participate was first raised in 1965." [ 17 ] However the level of participation is an important issue. At a minimum public workshops and hearings have now been included in almost every planning endeavour. [ 18 ] Yet this level of consultation can simply mean information about change without detailed participation. Involvement that 'recognises an active part in plan making' [ 17 ] has not always been straightforward to achieve. Participatory design has attempted to create a platform for active participation in the design process, for end users. Participatory design was actually born in Scandinavia and called cooperative design . However, when the methods were presented to the US community 'cooperation' was a word that didn't resonate with the strong separation between workers and managers - they weren't supposed to discuss ways of working face-to-face. 
Hence, 'participatory' was used instead: the initial participatory design sessions were not a direct cooperation between workers and managers sitting in the same room discussing how to improve their work environment and tools; separate sessions were held for workers and for managers. Each group participated in the process, but did not directly cooperate (as recounted in a historical review of cooperative design at a Scandinavian conference).

In Scandinavia, research projects on user participation in systems development date back to the 1970s. [ 19 ] The so-called "collective resource approach" developed strategies and techniques for workers to influence the design and use of computer applications at the workplace: the Norwegian Iron and Metal Workers Union (NJMF) project took a first move from traditional research to working with people, directly changing the role of the union clubs in the project. [ 20 ]

The Scandinavian projects developed an action research approach, emphasizing active co-operation between researchers and workers of the organization to help improve the latter's work situation. While researchers got their results, the people whom they worked with were equally entitled to get something out of the project. The approach built on people's own experiences, providing for them resources to be able to act in their current situation. The view of organizations as fundamentally harmonious—according to which conflicts in an organization are regarded as pseudo-conflicts or "problems" dissolved by good analysis and increased communication—was rejected in favor of a view of organizations recognizing fundamental "un-dissolvable" conflicts in organizations (Ehn & Sandberg, 1979).

In the Utopia project (Bødker et al., 1987; Ehn, 1988), the major achievements were the experience-based design methods, developed through the focus on hands-on experiences, emphasizing the need for technical and organizational alternatives (Bødker et al., 1987). The parallel Florence project (Gro Bjerkness & Tone Bratteteig) started a long line of Scandinavian research projects in the health sector. In particular, it worked with nurses and developed approaches for nurses to get a voice in the development of work and IT in hospitals. The Florence project put gender on the agenda with its starting point in a highly gendered work environment.

The 1990s led to a number of projects including the AT project (Bødker et al., 1993) and the EureCoop/EuroCode projects (Grønbæk, Kyng & Mogensen, 1995). In recent years, it has been a major challenge to participatory design to embrace the fact that much technology development no longer happens as design of isolated systems in well-defined communities of work (Beck, 2002). At the dawn of the 21st century, we use technology at work, at home, in school, and while on the move.

As mentioned above, one definition of co-design states that it is the process of working with one or more non-designers throughout the design process. This method is focused on the insights, experiences and input of end-users on a product or service, with the aim of developing strategies for improvement. [ 21 ] It is often used by trained designers who recognize the difficulty in properly understanding the cultural, societal, or usage scenarios encountered by their user. C. K.
Prahalad and Venkat Ramaswamy are usually given credit for bringing co-creation/co-design to the minds of those in the business community with the 2004 publication of their book, The Future of Competition: Co-Creating Unique Value with Customers. They propose: The meaning of value and the process of value creation are rapidly shifting from a product and firm-centric view to personalized consumer experiences. Informed, networked, empowered and active consumers are increasingly co-creating value with the firm. [ 22 ] The phrase co-design is also used in reference to the simultaneous development of interrelated software and hardware systems. The term has become popular in mobile phone development, where the two perspectives of hardware and software design are brought into a co-design process. [ 23 ] One result directly related to integrating co-design into existing frameworks is that "researchers and practitioners have seen that co-creation practiced at the early front end of the design development process can have an impact with positive, long-range consequences." [ 24 ] Co-design is an attempt to define a new evolution of the design process, and with it, an evolution of the designer. Within the co-design process, the designer is required to shift from a role of expertise to one of an egalitarian mindset. [ 7 ] The designer must believe that all people are capable of creativity and problem solving. The designer no longer operates in the isolated roles of researcher and creator, but must shift to roles such as philosopher and facilitator. [ 11 ] This shift allows designers to position themselves and their designs within the context of the world around them, creating better awareness. This awareness is important because in the designer's attempt to answer a question, "[they] must address all other related questions about values, perceptions, and worldview". [ 11 ] Therefore, by shifting the role of the designer, not only do the designs better address their cultural context, but so do the discussions around them. Discourses in the PD literature have been sculpted by three main concerns: (1) the politics of design, (2) the nature of participation, and (3) methods, tools and techniques for carrying out design projects (Finn Kensing & Jeanette Blomberg, 1998, p. 168). [ 25 ] The politics of design have been a concern for many design researchers and practitioners. Kensing and Blomberg outline the main concerns related to the introduction of new frameworks, such as system design for computer-based systems, and the power dynamics that emerge within the workspace. The automation introduced by system design created concerns within unions and among workers, as it threatened their involvement in production and their ownership of their work situation. Asaro (2000) offers a detailed analysis of the politics of design and the inclusion of "users" in the design process. Major international organizations such as Project for Public Spaces create opportunities for rigorous participation in the design and creation of place, believing that it is the essential ingredient for successful environments. Rather than simply consulting the public, PPS creates a platform for the community to participate in and co-design new areas that reflect their intimate knowledge, providing insights which independent design professionals, such as architects or even local government planners, may not have.
Using a method called Place Performance Evaluation (or the "Place Game"), groups from the community are taken to the site of a proposed development, where they use their knowledge to develop design strategies that would benefit the community. "Whether the participants are schoolchildren or professionals, the exercise produces dramatic results because it relies on the expertise of people who use the place every day, or who are the potential users of the place." [ 26 ] This successfully engages with the ultimate idea of participatory design, in which the various stakeholders who will be the users of the end product are involved in the design process as a collective. Similar projects have had success in Melbourne, Australia, particularly in relation to contested sites, where design solutions are often harder to establish. The Talbot Reserve in the suburb of St. Kilda faced numerous problems of use, such as becoming a regular spot for sex workers and drug users to congregate. A 'Design In', which consulted a variety of key users in the community about what they wanted for the future of the reserve, allowed traditionally marginalised voices to participate in the design process. Participants described it as 'a transforming experience as they saw the world through different eyes' (Press, 2003, p. 62). This is perhaps the key attribute of participatory design: a process which allows multiple voices to be heard and involved in the design, resulting in outcomes which suit a wider range of users. It builds empathy between the system and the users it serves, which helps larger problems be solved more holistically. As planning affects everyone, it is believed that "those whose livelihoods, environments and lives are at stake should be involved in the decisions which affect them" (Sarkissian and Perglut, 1986, p. 3). C. West Churchman said systems thinking "begins when first you view the world through the eyes of another". [ 27 ] Participatory design has many applications in development and changes to the built environment. It has particular currency for planners and architects in relation to placemaking and community regeneration projects. It potentially offers a far more democratic approach to the design process because it involves more than one stakeholder. By incorporating a variety of views there is greater opportunity for successful outcomes. Many universities and major institutions are beginning to recognise its importance. The UN Global Studio involved students from Columbia University, the University of Sydney and Sapienza University of Rome in providing design solutions for Vancouver's Downtown Eastside, which suffered from drug- and alcohol-related problems. The process allowed cross-discipline participation from planners, architects and industrial designers, and focused on collaboration and the sharing of ideas and stories, as opposed to rigid and singular design outcomes (Kuiper, 2007, p. 52). Public interest design is a design movement, extending to architecture, with the main aim of structuring design around the needs of the community. At the core of its application is participatory design. [ 28 ] By allowing individuals to have a say in the design of their own surrounding built environment, design can become proactive and tailored towards addressing wider social issues facing that community. [ 29 ] Public interest design is meant to reshape conventional modern architectural practice.
Instead of having each construction project solely meet the needs of the individual, public interest design addresses wider social issues at their core. This shift in architectural practice is a structural and systemic one, allowing design to serve communities responsibly. [ 29 ] Solutions to social issues can be addressed in a long-term manner through such design, serving the public and involving it directly in the process through participatory design. The built environment can become the very reason social and community issues arise if it is not executed properly and responsibly. Conventional architectural practice often does cause such problems, since only the paying client has a say in the design process. [ 29 ] That is why many architects throughout the world are employing participatory design and practicing their profession more responsibly, encouraging a wider shift in architectural practice. Several architects have largely succeeded in disproving theories that deem public interest design and participatory design financially and organizationally infeasible. Their work is setting the stage for the expansion of this movement, providing valuable data on its effectiveness and the ways in which it can be carried out. Participatory design is a growing practice within the field of design but has not yet been widely implemented. Some barriers to its adoption are listed below. A belief that creativity is a restricted skill would invalidate the proposal of participatory design to allow a wider range of affected people to participate in the creative process of designing. [ 30 ] However, this belief is based on a limited view of creativity which does not recognize that creativity can manifest in a wide range of activities and experiences. This doubt can be damaging not only to individuals but also to society as a whole. By assuming that only a select few possess creative talent, we may overlook the unique perspectives, ideas, and solutions that others can offer. Collaborative design tools often assume that all users have equal knowledge of the technology being used. For example, a collaborative 3D-design program may let multiple people design at the same time but offer no support for guided help, such as telling another user what to do through on-screen markings and text rather than by talking to them. Collaborative programming tools have a similar gap: multiple people can program at the same time, but there is little support for guidance such as inline hints from another user or the ability to mark relevant code on screen. This is a problem in pair programming, where communication becomes a bottleneck; ideally, one should be able to mark, configure and guide another user within the tool regardless of that user's prior knowledge. In a profit-motivated system, the commercial field of design may be fearful of relinquishing some control in order to empower those who are typically not involved in the process of design. [ 30 ] Commercial organizational structures often prioritize profit, individual gain, or status over the well-being of the community or other externalities. However, participatory practices are not impossible to implement in commercial settings. It may be difficult for those who have acquired success in a hierarchical structure to imagine alternative systems of open collaboration. Although participatory design has been of interest in design academia, applied uses require funding and dedication from many individuals.
The high time and financial costs make research and development of participatory design less appealing for speculative investors. [ 30 ] It may also be difficult to find or convince enough stakeholders or community members to commit their time and effort to a project. [ 31 ] However, widespread and involved participation is critical to the process. Successful examples of participatory design are important because they demonstrate the benefits of this approach and inspire others to adopt it. A lack of funding or interest can cause participatory projects to revert to practices where the designer initiates and dominates rather than facilitating design by the community. [ 31 ] Participatory design projects which involve a professional designer as a facilitator to a larger group can have difficulty with competing objectives. Designers may prioritize aesthetics while end-users may prioritize functionality and affordability. [ 31 ] Addressing these differing priorities may involve finding creative solutions that balance the needs of all stakeholders, such as using low-cost materials that meet functional requirements while also being aesthetically pleasing. Whatever the designers' assumptions, "the users' knowledge has to be considered as important as the knowledge of the other professionals in the team"; failing to treat it as such "can be an obstacle to the co-design practice." [ 10 ] "[The future of] co-designing will be a close collaboration between all the stakeholders in the design development process together with a variety of professionals having hybrid design/research skills." [ 7 ] Recent scholarship has highlighted the complex emotional landscape navigated by researchers engaged in participatory design, especially in contexts involving vulnerable or marginalized communities. Emotional challenges such as guilt and shame often emerge as researchers confront the disparity between their professional objectives and the lived realities of the communities they engage with. These emotions may stem from unmet expectations, perceived exploitation, or limited project impact. For instance, researchers may experience a sense of guilt when project outcomes fail to meet community needs or when research goals appear to benefit academic careers more than the communities themselves. The ethical dilemmas associated with balancing research agendas, funding constraints, and community needs can create a conflict between professional obligations and personal commitments, potentially leading to emotional burnout or moral distress. Consequently, there is a growing call within the field for frameworks that address these emotional aspects, advocate for ethical reflexivity, and promote sustained engagement strategies that align more closely with community well-being and autonomy. This perspective broadens the traditional scope of participatory design by acknowledging the emotional toll on researchers, thereby emphasizing the need for supportive structures that account for these emotional and ethical intricacies. [ 32 ] Many local governments require community consultation in any major changes to the built environment. Community involvement in the planning process is almost a standard requirement in most strategic changes. Community involvement in local decision-making creates a sense of empowerment. The City of Melbourne Swanston Street redevelopment project received over 5000 responses from the public, allowing them to participate in the design process by commenting on seven different design options.
[ 33 ] The City of Yarra recently held a "Stories in the Street" [ 34 ] consultation to record people's ideas about the future of Smith Street. It offered participants a variety of mediums through which to explore their opinions, such as mapping, photo surveys and storytelling. Although local councils are taking positive steps towards participatory design, as opposed to traditional top-down approaches to planning, many communities are moving to take design into their own hands. The City Repair Project [ 35 ] in Portland, Oregon is a form of participatory design which involves the community co-designing problem areas together to make positive changes to their environment. It involves collaborative decision-making and design without traditional involvement from local government or professionals, instead running on volunteers from the community. The process has created successful projects such as intersection repair, [ 36 ] which saw a misused intersection develop into a successful community square. In Malawi, a UNICEF WASH programme trialled participatory design development for latrines in order to ensure that users participate in creating and selecting sanitation technologies that are appropriate and affordable for them. The process provided an opportunity for community members to share their traditional knowledge and skills in partnership with designers and researchers. [ 37 ] Peer-to-peer urbanism [ 38 ] [ 39 ] is a form of decentralized, participatory design for urban environments and individual buildings. It borrows organizational ideas from the open-source software movement, so that knowledge about construction methods and urban design schemes is freely exchanged. In the English-speaking world, the term has a particular currency in the world of software development, especially in circles connected to Computer Professionals for Social Responsibility (CPSR), who have put on a series of Participatory Design Conferences. It overlaps with the approach extreme programming takes to user involvement in design, but (possibly because of its European trade union origins) the participatory design tradition puts more emphasis on the involvement of a broad population of users rather than a small number of user representatives. Participatory design can be seen as a move of end-users into the world of researchers and developers, whereas empathic design can be seen as a move of researchers and developers into the world of end-users. A significant difference between user-design and user-centered design is the emancipatory theoretical foundation and the systems theory bedrock (Ivanov, 1972, 1995) on which user-design is founded. User-centered design is a useful and important construct, but one in which users are taken as centers of the design process and consulted heavily, without being allowed to make the decisions or being empowered with the tools that the experts use. For example, Wikipedia content is user-designed: users are given the necessary tools to make their own entries. Wikipedia's underlying wiki software, by contrast, is based on user-centered design: while users are allowed to propose changes or have input on the design, a smaller and more specialized group decides on features and system design. Participatory work in software development has historically tended toward two distinct trajectories, one in Scandinavia and northern Europe, and the other in North America.
The Scandinavian and northern European tradition has remained closer to its roots in the labor movement (e.g., Beck, 2002; Bjerknes, Ehn, and Kyng, 1987). The North American and Pacific Rim tradition has tended to be both broader (e.g., including managers and executives as "stakeholders" in design) and more circumscribed (e.g., design of individual features, as contrasted with the Scandinavian approach of designing entire systems and the work that the system is supposed to support) (e.g., Beyer and Holtzblatt, 1998; Noro and Imada, 1991). However, some more recent work has tended to combine the two approaches (Bødker et al., 2004; Muller, 2007). Increasingly, researchers are focusing on co-design as a way of doing research, and are therefore developing parts of its research methodology. For instance, in the field of generative co-design, Vandekerckhove et al. [ 40 ] have proposed a methodology for assembling a group of stakeholders to participate in generative co-design activities in the early innovation process. They propose first sampling a group of potential stakeholders through snowball sampling, then interviewing these people to assess their knowledge and inference experience, and finally assembling a diverse group of stakeholders according to that knowledge and inference experience. [ 40 ] Though not completely synonymous, the research methods of participatory design can be defined under Participatory Research (PR): [ 41 ] a term for research designs and frameworks using direct collaboration with those affected by the studied issue. [ 42 ] More specifically, participatory design has evolved from Community-Based Research and Participatory Action Research (PAR). PAR is a qualitative research methodology involving "three types of change, including critical consciousness development of researchers and participants, improvement of lives of those participating in research, and transformation of societal 'decolonizing' research methods with the power of healing and social justice". [ 43 ] Participatory Action Research (PAR) is a subset of Community-Based Research aimed explicitly at including participants and empowering people to create measurable action. [ 43 ] PAR is practiced across various disciplines, with research in participatory design being an application of its different qualitative methodologies. Just as PAR is often used in the social sciences, for example to investigate a person's lived experience in relation to systemic structures and social power relations, participatory design seeks to deeply understand stakeholders' experiences by directly engaging them in the problem-defining and problem-solving processes. Therefore, in participatory design, research methods extend beyond simple qualitative and quantitative data collection. Rather than being concentrated within data collection, the research methods of participatory design are tools and techniques used throughout: co-designing research questions, collecting, analyzing, and interpreting data, disseminating knowledge, and enacting change. [ 41 ] When facilitating research in participatory design, decisions are made in all research phases to assess what will produce genuine stakeholder participation. [ 41 ] By doing so, one of participatory design's goals is to dismantle the power imbalance existing between 'designers' and 'users.' Applying PR and PAR research methods seeks to engage communities and question power hierarchies, which "makes us aware of the always contingent character of our presumptions and truths...
truths are logical, contingent and intersubjective... not directed toward some specific and predetermined end goal... committed to denying us the (seeming) firmness of our commonsensical assumptions". [ 44 ] Participatory design offers this denial of our "commonsensical assumptions" because it forces designers to consider knowledge beyond their craft and education. Therefore, a designer conducting research for participatory design assumes the role of facilitator and co-creator. [ 45 ]
https://en.wikipedia.org/wiki/Participatory_design
Particle-Induced X-Ray Emission or Proton-Induced X-Ray Emission (PIXE) is a technique used for determining the elemental composition of a material or a sample. When a material is exposed to an ion beam, atomic interactions occur that give off EM radiation of wavelengths in the x-ray part of the electromagnetic spectrum specific to an element. PIXE is a powerful yet non-destructive elemental analysis technique now used routinely by geologists, archaeologists, art conservators and others to help answer questions of provenance, dating and authenticity. The technique was first proposed in 1970 by Sven Johansson of Lund University, Sweden, and developed over the next few years with his colleagues Roland Akselsson and Thomas B Johansson. [ 1 ] Recent extensions of PIXE using tightly focused beams (down to 1 μm) give the additional capability of microscopic analysis. This technique, called microPIXE, can be used to determine the distribution of trace elements in a wide range of samples. A related technique, particle-induced gamma-ray emission (PIGE), can be used to detect some light elements. Additionally, a multiplexed instrument combines PIXE with mass spectrometry of molecules: PDI-PIXE-MS, or PIXE-MS (see below). Three types of spectra can be collected from a PIXE experiment: X-ray emission spectra, Rutherford backscattering spectra, and proton transmission spectra. Quantum theory states that orbiting electrons of an atom must occupy discrete energy levels in order to be stable. Bombardment with ions of sufficient energy (usually MeV protons) produced by an ion accelerator will cause inner-shell ionization of atoms in a specimen. Outer-shell electrons drop down to replace inner-shell vacancies; however, only certain transitions are allowed. X-rays of an energy characteristic of the element are emitted. An energy-dispersive detector is used to record and measure these X-rays. Only elements heavier than fluorine can be detected. The lower detection limit for a PIXE beam is given by the ability of the X-rays to pass through the window between the chamber and the X-ray detector. The upper limit is given by the ionisation cross section, the probability of K-shell ionisation. This is maximal when the velocity of the proton matches the velocity of the electron (10% of the speed of light); therefore, 3 MeV proton beams are optimal. [ 2 ] Protons can also interact with the nucleus of the atoms in the sample through elastic collisions, Rutherford backscattering, often repelling the proton at angles close to 180 degrees. The backscatter gives information on the sample thickness and composition. The bulk sample properties allow for the correction of X-ray photon loss within the sample. The transmission of protons through a sample can also be used to get information about the sample. Channeling is one of the processes that can be used to study crystals. Protein analysis using microPIXE allows for the determination of the elemental composition of liquid and crystalline proteins. microPIXE can quantify the metal content of protein molecules with a relative accuracy of between 10% and 20%. [ 3 ] The advantage of microPIXE is that, given a protein of known sequence, the X-ray emission from sulfur can be used as an internal standard to calculate the number of metal atoms per protein monomer. Because only relative concentrations are calculated, there are only minimal systematic errors, and the results are totally internally consistent. The relative concentrations of DNA to protein (and metals) can also be measured using the phosphate groups of the bases as an internal calibration.
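To make the internal-standard idea concrete, here is a minimal sketch of the arithmetic (the counts, per-atom yields and sulfur content are all hypothetical; real analyses use dedicated codes such as the GUPIX software mentioned below):

```python
# Hypothetical sketch: metal atoms per protein monomer from microPIXE
# peak areas, using sulfur (Cys + Met) as the internal standard.
# Assumes background-corrected peak counts and relative per-atom
# detection yields are already known; every number here is invented.

n_sulfur = 9                        # S atoms per monomer, known from the sequence

s_counts, s_yield = 41200.0, 1.00   # sulfur K-alpha counts, relative yield per atom
zn_counts, zn_yield = 9150.0, 2.05  # zinc K-alpha counts, relative yield per atom

# Atom numbers are proportional to counts/yield; sulfur fixes the scale,
# so only relative quantities enter and systematic errors largely cancel.
zn_per_monomer = (zn_counts / zn_yield) / (s_counts / s_yield) * n_sulfur
print(f"~{zn_per_monomer:.2f} Zn atoms per monomer")   # ~1 for these numbers
```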
Analysis of the data collected can be performed by the program Dan32, [ 4 ] the front end to GUPIX. [ 5 ] [ 6 ] In order to get a meaningful sulfur signal from the analysis, the buffer should not contain sulfur (i.e. no BES, DTT, HEPES, MES, MOPSO or PIPES compounds). Excessive amounts of chlorine in the buffer should also be avoided, since chlorine will overlap with the sulfur peak; KBr and NaBr are suitable alternatives. Due to the low penetration depth of protons and heavy charged particles, PIXE is limited to analyzing the top micrometer of a given sample. There are many advantages to using a proton beam over an electron beam. There is less crystal charging from bremsstrahlung radiation; although some arises from the emission of Auger electrons, it is significantly less than if the primary beam were itself an electron beam. Because of the higher mass of protons relative to electrons, there is less lateral deflection of the beam; this is important for proton beam writing applications. Two-dimensional maps of elemental compositions can be generated by scanning the microPIXE beam across the target. Whole-cell and tissue analysis is possible using a microPIXE beam; this method is also referred to as nuclear microscopy. [ 7 ] MicroPIXE is a useful technique for the non-destructive analysis of paintings and antiques. Although it provides only an elemental analysis, it can be used to distinguish and measure layers within the thickness of an artifact. [ 8 ] The technique is comparable with destructive techniques such as the ICP family of analyses. [ 9 ] Proton beams can be used for writing ( proton beam writing ) through either the hardening of a polymer (by proton-induced cross-linking) or the degradation of a proton-sensitive material. This may have important effects in the field of nanotechnology. This technique, PIXE-MS for short, combines PIXE with mass spectrometry of molecules. Elemental determinations are performed by PIXE with a heavy ion, such as oxygen, while the molecular ions are simultaneously collected for mass analysis in a quadrupole mass spectrometer or time-of-flight (TOF) instrument. [ 10 ] ICP-MS, by contrast, determines only elemental constituents using mass spectrometry, not molecular information. Sequential scanning may be done with a hydrogen ion beam and then a heavy ion beam to desorb and ionize the analyte sample. This technique allows for the analysis of both the elemental constituents and, simultaneously, the molecular ions, or molecular speciation, present in a sample, using a heavy ion beam. This typically makes use of a 4 MeV accelerator, with samples prepared in glycerol on carbon felt. [ 11 ] [ 12 ]
https://en.wikipedia.org/wiki/Particle-induced_X-ray_emission
Particle-laden flows refers to a class of two-phase fluid flow, in which one of the phases is continuously connected (referred to as the continuous or carrier phase) and the other phase is made up of small, immiscible, and typically dilute particles (referred to as the dispersed or particle phase). Fine aerosol particles in air are an example of a particle-laden flow; the aerosols are the dispersed phase, and the air is the carrier phase. [ 1 ] The modeling of two-phase flows has a tremendous variety of engineering and scientific applications: pollution dispersion in the atmosphere, fluidization in combustion processes, and aerosol deposition in spray medication, among many others. The starting point for a mathematical description of almost any type of fluid flow is the classical set of Navier–Stokes equations. To describe particle-laden flows, we must modify these equations to account for the effect of the particles on the carrier, or vice versa, or both; a suitable choice of such added complications depends on a variety of parameters, for instance how dense the particles are, how concentrated they are, or whether or not they are chemically reactive. In most real-world cases, the particles are very small and occur in low concentrations, hence the dynamics are governed primarily by the continuous phase. A possible way to represent the dynamics of the carrier phase is by a modified Navier–Stokes momentum equation, for instance (in incompressible form) {\displaystyle {\frac {\partial u_{i}}{\partial t}}+u_{j}{\frac {\partial u_{i}}{\partial x_{j}}}=-{\frac {1}{\rho }}{\frac {\partial p}{\partial x_{i}}}+\nu {\frac {\partial ^{2}u_{i}}{\partial x_{j}\partial x_{j}}}+S_{i}} where S i {\displaystyle S_{i}} is a momentum source or sink term arising from the presence of the particle phase. The above equation is an Eulerian equation; that is, the dynamics are understood from the viewpoint of a fixed point in space. The dispersed phase is typically (though not always) treated in a Lagrangian framework; that is, the dynamics are understood from the viewpoint of fixed particles as they move through space. A usual choice of momentum equation for a particle is {\displaystyle {\frac {dv_{i}}{dt}}={\frac {u_{i}-v_{i}}{\tau _{p}}}} where u i {\displaystyle u_{i}} represents the carrier phase velocity and v i {\displaystyle v_{i}} represents the particle velocity. τ p {\displaystyle \tau _{p}} is the particle relaxation time, and represents a typical timescale of the particle's reaction to changes in the carrier phase velocity; loosely speaking, this can be thought of as the particle's inertia with respect to the fluid which contains it. The interpretation of the above equation is that particle motion is hindered by a drag force. In reality, there are a variety of other forces which act on the particle (such as gravity, Basset history and added mass), as described through, for instance, the Basset–Boussinesq–Oseen equation. However, for many physical examples, in which the density of the particle far exceeds the density of the medium, the above equation is sufficient. [ 2 ] A typical assumption is that the particles are spherical, in which case the drag is modeled using the Stokes drag assumption, which gives the relaxation time {\displaystyle \tau _{p}={\frac {\rho _{p}d_{p}^{2}}{18\mu }}}. Here d p {\displaystyle d_{p}} is the particle diameter, ρ p {\displaystyle \rho _{p}} the particle density, and μ {\displaystyle \mu } the dynamic viscosity of the carrier phase. More sophisticated models contain a correction factor, such as {\displaystyle f=1+0.15\,Re_{p}^{0.687}}, where R e p {\displaystyle Re_{p}} is the particle Reynolds number, defined as {\displaystyle Re_{p}={\frac {\rho |u-v|d_{p}}{\mu }}} with ρ {\displaystyle \rho } the carrier phase density. If the mass fraction of the dispersed phase is small, then one-way coupling between the phases is a reasonable assumption; that is, the dynamics of the particle phase are affected by the carrier phase, but the reverse is not the case.
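A minimal sketch of one-way-coupled Lagrangian tracking under the assumptions above (Stokes drag with the correction factor; the carrier velocity field and every parameter value are invented for illustration):

```python
import numpy as np

# One particle in a prescribed 1-D carrier velocity field, integrated with
# explicit Euler. Stokes relaxation time plus a Schiller-Naumann-type drag
# correction, as in the equations above. All values are illustrative.

rho_p = 2000.0     # particle density (kg/m^3)
rho_f = 1.2        # carrier density (kg/m^3)
mu    = 1.8e-5     # carrier dynamic viscosity (Pa s)
d_p   = 50e-6      # particle diameter (m)

tau_p = rho_p * d_p**2 / (18.0 * mu)          # Stokes relaxation time

def carrier_velocity(x, t):
    """Toy oscillating carrier-phase velocity field (m/s)."""
    return 1.0 + 0.3 * np.sin(2.0 * np.pi * (x - t))

x, v, t, dt = 0.0, 0.0, 0.0, 1e-4
for _ in range(20000):                         # simulate 2 s of motion
    u = carrier_velocity(x, t)
    re_p = rho_f * abs(u - v) * d_p / mu       # particle Reynolds number
    f = 1.0 + 0.15 * re_p**0.687               # drag correction factor
    v += f * (u - v) / tau_p * dt              # particle momentum equation
    x += v * dt
    t += dt

print(f"tau_p = {tau_p:.2e} s, final particle velocity = {v:.3f} m/s")
```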
However, if the mass fraction of the dispersed phase is large, the interaction of the dynamics between the two phases must be considered; this is two-way coupling. A problem with the Lagrangian treatment of the dispersed phase is that once the number of particles becomes large, tracking a sufficiently large sample of particles for statistical convergence may require a prohibitive amount of computational power. In addition, if the particles are sufficiently light, they behave essentially like a second fluid. In this case, an Eulerian treatment of the dispersed phase is sensible. Like all fluid dynamics-related disciplines, the modelling of particle-laden flows is an enormous challenge for researchers; this is because most flows of practical interest are turbulent. Direct numerical simulations (DNS) for single-phase flow, let alone two-phase flow, are computationally very expensive; the computing power required for models of practical engineering interest is far out of reach. Since one is often interested in modeling only the large-scale qualitative behavior of the flow, a possible approach is to decompose the flow velocity into mean and fluctuating components, as in the Reynolds-averaged Navier–Stokes (RANS) approach. A compromise between DNS and RANS is large eddy simulation (LES), in which the small scales of fluid motion are modeled and the larger, resolved scales are simulated directly. Experimental observations, as well as DNS, indicate that an important phenomenon to model is preferential concentration. Particles (particularly those with Stokes number close to 1) are known to accumulate in regions of high shear and low vorticity (such as turbulent boundary layers), and the mechanisms behind this phenomenon are not well understood. Moreover, particles are known to migrate down turbulence intensity gradients (a process known as turbophoresis). These features are particularly difficult to capture using RANS or LES-based models, since too much time-varying information is lost. Due to these difficulties, existing turbulence models tend to be ad hoc; that is, the range of applicability of a given model is usually suited to a highly specific set of parameters (such as geometry, dispersed-phase mass loading and particle reaction time), and is also restricted to low Reynolds numbers (whereas the Reynolds numbers of flows of engineering interest tend to be very high). An interesting aspect of particle-laden flows is the preferential migration of particles to certain regions within the fluid flow. This is often characterized by the Stokes number (St) of the particles. At low St, particles tend to act as tracers and are uniformly distributed. At high St, particles are heavy and are influenced less by the fluid and more by their own inertia. At intermediate St, particles are affected by both the fluid motion and their inertia, which gives rise to several interesting behaviors. This is especially true in wall-bounded flows, where there is a velocity gradient near the wall. One of the earliest works describing preferential migration is the experimental work of Segre and Silberberg. [ 3 ] [ 4 ] They showed that a neutrally buoyant particle in a laminar pipe flow comes to an equilibrium position between the wall and the axis. This is referred to as the Segré–Silberberg effect. Saffman explained this in terms of the force acting on a particle when it experiences a velocity gradient across it. Feng et al.
have studied this through detailed direct numerical simulations and have elaborated on the physical mechanism of this migration. Recently it was found that similar preferential migration occurs even for non-neutrally buoyant particles. [ 5 ] [ 6 ] At low St, the particles tend to settle at an equilibrium position, while for high St, the particles begin to oscillate about the center of the channel. The behavior becomes even more interesting in turbulent flows. Here, the turbophoretic force (transport of particles down gradients of turbulent kinetic energy) causes a high concentration of particles near the walls. Experimental and particle-resolved DNS studies have explained the mechanism of this migration in terms of the Saffman lift and the turbophoretic force. [ 7 ] [ 8 ] These preferential migrations are of significant importance to several applications where wall-bounded particle-laden flows are encountered, and they remain an active area of research.
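The Stokes-number regimes described above can be illustrated with a short sketch (the flow timescale, regime thresholds and particle properties are rough, illustrative assumptions only):

```python
# Classify particle response by Stokes number St = tau_p / tau_f, where
# tau_p is the Stokes relaxation time and tau_f a characteristic flow
# timescale. Thresholds of 0.1 and 10 are rules of thumb, not exact limits.

mu, rho_p = 1.8e-5, 2500.0     # carrier viscosity (Pa s), particle density (kg/m^3)
tau_f = 1e-2                   # assumed characteristic flow timescale (s)

for d_p in (1e-6, 20e-6, 200e-6):              # particle diameters (m)
    tau_p = rho_p * d_p**2 / (18.0 * mu)       # relaxation time
    st = tau_p / tau_f
    if st < 0.1:
        regime = "tracer-like: follows the fluid"
    elif st > 10.0:
        regime = "inertia-dominated: barely responds to the fluid"
    else:
        regime = "intermediate: preferential concentration possible"
    print(f"d_p = {d_p*1e6:6.1f} um  St = {st:8.3g}  -> {regime}")
```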
https://en.wikipedia.org/wiki/Particle-laden_flow
In granulometry, the particle-size distribution (PSD) of a powder, granular material, or particles dispersed in fluid is a list of values or a mathematical function that defines the relative amount, typically by mass, of particles present according to size. [ 1 ] Significant energy is usually required to disintegrate soil and similar particles into the PSD, which is then called a grain size distribution. [ 2 ] The PSD of a material can be important in understanding its physical and chemical properties. It affects the strength and load-bearing properties of rocks and soils. It affects the reactivity of solids participating in chemical reactions, and needs to be tightly controlled in many industrial products, such as the manufacture of printer toner, cosmetics, and pharmaceutical products. Particle size distribution can greatly affect the efficiency of any collection device. Settling chambers will normally only collect very large particles, those that can be separated using sieve trays. Centrifugal collectors will normally collect particles down to about 20 μm; higher-efficiency models can collect particles down to 10 μm. Fabric filters are one of the most efficient and cost-effective types of dust collectors available and can achieve a collection efficiency of more than 99% for very fine particles. Scrubbers that use liquid are commonly known as wet scrubbers. In these systems, the scrubbing liquid (usually water) comes into contact with a gas stream containing dust particles; the greater the contact of the gas and liquid streams, the higher the dust removal efficiency. Electrostatic precipitators use electrostatic forces to separate dust particles from exhaust gases; they can be very efficient at the collection of very fine particles. Filter presses are used for filtering liquids by a cake filtration mechanism. The PSD plays an important part in cake formation, cake resistance, and cake characteristics, and the filterability of the liquid is determined largely by the size of the particles. Commonly used symbols and parameters include:
ρ p : actual particle density (g/cm 3 ).
ρ g : gas or sample matrix density (g/cm 3 ).
r 2 : least-squares coefficient of determination. The closer this value is to 1.0, the better the data fit a hyperplane representing the relationship between the response variable and a set of covariate variables; a value equal to 1.0 indicates all data fit perfectly within the hyperplane.
λ: gas mean free path (cm).
D 50 : mass-median diameter (MMD), the log-normal distribution mass median diameter. The MMD is considered to be the average particle diameter by mass.
σ g : geometric standard deviation. For a log-normal distribution this value is determined by the equation {\displaystyle \sigma _{g}=D_{84.13}/D_{50}=D_{50}/D_{15.87}}. The value of σ g determines the slope of the least-squares regression curve.
α: relative standard deviation or degree of polydispersity. This value is also determined mathematically; for values less than 0.1, the particulate sample can be considered monodisperse.
Re (P) : particle Reynolds number. In contrast to the large numerical values noted for flow Reynolds number, the particle Reynolds number for fine particles in gaseous mediums is typically less than 0.1.
Re f : flow Reynolds number.
Kn: particle Knudsen number.
PSD is usually defined by the method by which it is determined. The most easily understood method of determination is sieve analysis, where powder is separated on sieves of different sizes. Thus, the PSD is defined in terms of discrete size ranges: e.g.
"% of sample between 45 μm and 53 μm", when sieves of these sizes are used. The PSD is usually determined over a list of size ranges that covers nearly all the sizes present in the sample. Some methods of determination allow much narrower size ranges to be defined than can be obtained by use of sieves, and are applicable to particle sizes outside the range available in sieves. However, the idea of the notional "sieve", that "retains" particles above a certain size, and "passes" particles below that size, is universally used in presenting PSD data of all kinds. The PSD may be expressed as a "range" analysis, in which the amount in each size range is listed in order. It may also be presented in "cumulative" form, in which the total of all sizes "retained" or "passed" by a single notional "sieve" is given for a range of sizes. Range analysis is suitable when a particular ideal mid-range particle size is being sought, while cumulative analysis is used where the amount of "under-size" or "over-size" must be controlled. The way in which "size" is expressed is open to a wide range of interpretations. A simple treatment assumes the particles are spheres that will just pass through a square hole in a "sieve". In practice, particles are irregular – often extremely so, for example in the case of fibrous materials – and the way in which such particles are characterized during analysis is very dependent on the method of measurement used. Before a PSD can be determined, it is vital that a representative sample is obtained. In the case where the material to be analysed is flowing, the sample must be withdrawn from the stream in such a way that the sample has the same proportions of particle sizes as the stream. The best way to do this is to take many samples of the whole stream over a period, instead of taking a portion of the stream for the whole time. [ 3 ] p. 6 In the case where the material is in a heap, scoop or thief sampling needs to be done, which is inaccurate: the sample should ideally have been taken while the powder was flowing towards the heap. [ 3 ] p. 10 After sampling, the sample volume typically needs to be reduced. The material to be analysed must be carefully blended, and the sample withdrawn using techniques that avoid size segregation, for example using a rotary divider [ 3 ] p. 5 . Particular attention must be paid to avoidance of loss of fines during manipulation of the sample. Sieve analysis is often used because of its simplicity, cheapness, and ease of interpretation. Methods may be simple shaking of the sample in sieves until the amount retained becomes more or less constant. Alternatively, the sample may be washed through with a non-reacting liquid (usually water) or blown through with an air current. Advantages : this technique is well-adapted for bulk materials. A large amount of materials can be readily loaded into 8-inch-diameter (200 mm) sieve trays. Two common uses in the powder industry are wet-sieving of milled limestone and dry-sieving of milled coal. Disadvantages : many PSDs are concerned with particles too small for separation by sieving to be practical. A very fine sieve, such as 37 μm sieve, is exceedingly fragile, and it is very difficult to get material to pass through it. Another disadvantage is that the amount of energy used to sieve the sample is arbitrarily determined. Over-energetic sieving causes attrition of the particles and thus changes the PSD, while insufficient energy fails to break down loose agglomerates. 
Although manual sieving procedures can be ineffective, automated sieving technologies using image fragmentation analysis software are available. These technologies can sieve material by capturing and analyzing a photo of it. Material may be separated by means of air elutriation, which employs an apparatus with a vertical tube through which fluid is passed at a controlled velocity. When the particles are introduced, often through a side tube, the smaller particles are carried over in the fluid stream while the large particles settle against the upward current. If we start with low flow rates, small, less dense particles attain terminal velocity and flow with the stream; these particles are collected in the overflow and hence separated from the feed. Flow rates can be increased to separate larger size ranges. Further size fractions may be collected if the overflow from the first tube is passed vertically upwards through a second tube of greater cross-section, and any number of such tubes can be arranged in series. Advantages: a bulk sample is analyzed using centrifugal classification, and the technique is non-destructive. Each cut-point can be recovered for future size-respective chemical analyses. This technique has been used for decades in the air pollution control industry (data used for the design of control devices). It determines particle size as a function of settling velocity in an air stream (as opposed to water or some other liquid). Disadvantages: a bulk sample (about ten grams) must be obtained. It is a fairly time-consuming analytical technique. The actual test method [ 4 ] has been withdrawn by ASME due to obsolescence; instrument calibration materials are therefore no longer available. Materials can now be analysed through photoanalysis procedures. Unlike sieve analyses, which can be time-consuming and inaccurate, taking a photo of a sample of the materials to be measured and using software to analyze the photo can result in rapid, accurate measurements. Another advantage is that the material can be analyzed without being handled. This is beneficial in the agricultural industry, as handling of food products can lead to contamination. Photoanalysis equipment and software are currently being used in the mining, forestry and agricultural industries worldwide. PSDs can be measured microscopically by sizing against a graticule and counting, but for a statistically valid analysis, millions of particles must be measured. This is impossibly arduous when done manually, but automated analysis of electron micrographs is now commercially available. The electrical sensing zone method is used to determine particle size within the range of 0.2 to 100 micrometers. An example of this is the Coulter counter, which measures the momentary changes in the conductivity of a liquid passing through an orifice that take place when individual non-conducting particles pass through. The particle count is obtained by counting pulses, and each pulse is proportional to the volume of the sensed particle. Advantages: very small sample aliquots can be examined. Disadvantages: the sample must be dispersed in a liquid medium... some particles may (partially or fully) dissolve in the medium, altering the size distribution. The results are only related to the projected cross-sectional area that a particle displaces as it passes through an orifice. This is a physical diameter, not really related to mathematical descriptions of particles (e.g. terminal settling velocity).
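Since each pulse is proportional to the sensed particle's volume, converting a pulse to a volume-equivalent sphere diameter is simple arithmetic; the sketch below uses a hypothetical calibration constant:

```python
import math

# Convert Coulter-counter pulse heights (proportional to displaced volume)
# into volume-equivalent sphere diameters. K is an assumed calibration.

K = 2.0e-3                                   # um^3 per ADC count (hypothetical)
for pulse in (500, 4000, 32000):             # measured pulse heights (counts)
    volume = K * pulse                       # displaced particle volume (um^3)
    d = (6.0 * volume / math.pi) ** (1/3)    # equivalent sphere diameter (um)
    print(f"pulse = {pulse:6d} -> volume = {volume:8.2f} um^3, d = {d:5.2f} um")
```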
These are based upon study of the terminal velocity acquired by particles suspended in a viscous liquid. Sedimentation time is longest for the finest particles, so this technique is useful for sizes below 10 μm, but sub-micrometer particles cannot be reliably measured due to the effects of Brownian motion. Typical apparatus disperses the sample in liquid, then measures the density of the column at timed intervals. Other techniques determine the optical density of successive layers using visible light or x-rays. Advantages: this technique determines particle size as a function of settling velocity. Disadvantages: the sample must be dispersed in a liquid medium... some particles may (partially or fully) dissolve in the medium, altering the size distribution, which requires careful selection of the dispersion medium. The measurement also depends on the fluid density, and hence on the fluid temperature remaining constant. X-rays will not count carbon (organic) particles. Many of these instruments can require a bulk sample (e.g. two to five grams). These depend upon analysis of the "halo" of diffracted light produced when a laser beam passes through a dispersion of particles in air or in a liquid. The angle of diffraction increases as particle size decreases, so this method is particularly good for measuring sizes between 0.1 and 3,000 μm. Advances in sophisticated data processing and automation have allowed this to become the dominant method used in industrial PSD determination. This technique is relatively fast and can be performed on very small samples. A particular advantage is that the technique can generate a continuous measurement for analyzing process streams. Laser diffraction measures particle size distributions by measuring the angular variation in the intensity of light scattered as a laser beam passes through a dispersed particulate sample. Large particles scatter light at small angles relative to the laser beam, and small particles scatter light at large angles. The angular scattering intensity data are then analyzed to calculate the size of the particles responsible for creating the scattering pattern, using the Mie theory or the Fraunhofer approximation of light scattering. The particle size is reported as a volume-equivalent sphere diameter. A focused laser beam rotates at a constant frequency and interacts with particles within the sample medium. Each randomly scanned particle obscures the laser beam to its dedicated photodiode, which measures the time of obscuration. The time of obscuration relates directly to the particle's diameter, by the simple principle of multiplying the known beam rotation velocity by the directly measured time of obscuration (D = V × t). Instead of light, this method employs ultrasound for collecting information on the particles that are dispersed in fluid. Dispersed particles absorb and scatter ultrasound similarly to light. This has been known since Lord Rayleigh developed the first theory of ultrasound scattering and published the book "The Theory of Sound" in 1878. [ 5 ] There have been hundreds of papers studying ultrasound propagation through fluid particulates in the 20th century. [ 6 ] It turns out that, instead of measuring scattered energy versus angle as with light, in the case of ultrasound measuring the transmitted energy versus frequency is a better choice. The resulting ultrasound attenuation frequency spectra are the raw data for calculating particle size distribution. They can be measured for any fluid system with no dilution or other sample preparation.
This is a big advantage of this method. Calculation of the particle size distribution is based on theoretical models that are well verified for up to 50% by volume of dispersed particles on micron and nanometer scales. However, as concentration increases and the particle sizes approach the nanoscale, conventional modelling gives way to the necessity of including shear-wave re-conversion effects in order for the models to accurately reflect the real attenuation spectra. [ 7 ] Cascade impactors – particulate matter is withdrawn isokinetically from a source and segregated by size in a cascade impactor at the sampling point's exhaust conditions of temperature, pressure, etc. Cascade impactors use the principle of inertial separation to size-segregate particle samples from a particle-laden gas stream. The mass of each size fraction is determined gravimetrically. The California Air Resources Board Method 501 [ 8 ] is currently the most widely accepted test method for particle size distribution emissions measurements. The Weibull distribution, now named for Waloddi Weibull, was first identified by Fréchet (1927) and first applied by Rosin & Rammler (1933) to describe particle size distributions. It is still widely used in mineral processing to describe particle size distributions in comminution processes. The mass fraction passing size x can be written {\displaystyle f(x;P_{80},m)=1-e^{\ln(0.2)\left({\frac {x}{P_{80}}}\right)^{m}}} where P 80 {\displaystyle P_{80}} is the 80th percentile of the particle size distribution and m {\displaystyle m} is a parameter describing the spread of the distribution. The inverse distribution is given by {\displaystyle x=P_{80}\left({\frac {\ln(1-f)}{\ln(0.2)}}\right)^{1/m}} where f {\displaystyle f} is the mass fraction passing size x {\displaystyle x} . The parameters of the Rosin–Rammler distribution can be determined by refactoring the distribution function to the form [ 11 ] {\displaystyle \ln(-\ln(1-f))=m\ln(x)-m\ln(P_{80})+\ln(-\ln(0.2))} Hence the slope of the line in a plot of ln ⁡ ( − ln ⁡ ( 1 − f ) ) {\displaystyle \ln(-\ln(1-f))} against ln ⁡ ( x ) {\displaystyle \ln(x)} yields the parameter m {\displaystyle m} , and P 80 {\displaystyle P_{80}} is determined by substituting the fitted slope and intercept back into this equation.
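Following the linearized form above, the parameters can be estimated by an ordinary least-squares fit of ln(−ln(1−f)) against ln(x); a minimal sketch with invented sieve data:

```python
import math

# Fit Rosin-Rammler (Weibull) parameters m and P80 from cumulative
# fraction-passing data via the linearization
#   ln(-ln(1-f)) = m*ln(x) - m*ln(P80) + ln(-ln(0.2)).
# The sizes and fractions below are invented for illustration.

sizes_um     = [45, 75, 106, 150, 212]
frac_passing = [0.12, 0.35, 0.60, 0.82, 0.95]

X = [math.log(x) for x in sizes_um]
Y = [math.log(-math.log(1.0 - f)) for f in frac_passing]

# Ordinary least squares: slope m and intercept c.
n = len(X)
xbar, ybar = sum(X) / n, sum(Y) / n
m = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y)) / \
    sum((x - xbar) ** 2 for x in X)
c = ybar - m * xbar

# Intercept c = -m*ln(P80) + ln(-ln(0.2))  =>  solve for P80.
p80 = math.exp((math.log(-math.log(0.2)) - c) / m)
print(f"m = {m:.2f}, P80 = {p80:.0f} um")
```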
https://en.wikipedia.org/wiki/Particle-size_distribution
In marine and freshwater ecology, a particle is a small object. Particles can remain in suspension in the ocean or freshwater, but they eventually settle (at a rate determined by Stokes' law) and accumulate as sediment. Some can enter the atmosphere through wave action, where they can act as cloud condensation nuclei (CCN). Many organisms filter particles out of the water with unique filtration mechanisms ( filter feeders ). Particles are often associated with high loads of toxins, which attach to their surface. As these toxins are passed up the food chain they accumulate in fatty tissue and become increasingly concentrated in predators (see bioaccumulation ). Very little is known about the dynamics of particles, especially when they are re-suspended by dredging. They can remain floating in the water and drift over long distances. The decomposition of some particles by bacteria consumes much oxygen and can cause the water to become hypoxic. Particle levels in water (or air) can be measured with a turbidity meter and analyzed with a particle counter. They can also be scanned with an underwater microscope, such as the ecoSCOPE. It takes a few days until plankton organisms have filtered the particles and incorporated the toxins into their body fat and tissue: in the southwards flow of the waters of the Hudson off the coast of New Jersey, the highest levels of mercury in copepods have been found not directly in front of the river off New York but 150 km south, off Atlantic City. Many copepods are then captured by mysids, krill and the smallest fish, such as juvenile Atlantic herring – and at each step of the food chain the toxin concentrations increase by a factor of 10. Filter of krill: the first-degree filter setae carry, in a V-form, two rows of second-degree setae pointing towards the inside of the feeding basket; the openings of this particle filtration structure are on the scale of a micrometer. Filter basket of a mysid: these 3 cm long animals live close to shore and hover above the sea floor, constantly collecting particles. Mysids are an important food source for herring, cod, flounder and striped bass. In polluted areas they have high toxin levels in their tissue, but they are very robust and can tolerate a great deal of poison before dying.
https://en.wikipedia.org/wiki/Particle_(ecology)
The Particle Astrophysics Magnet Facility (commonly known as ASTROMAG) was a NASA project designed to investigate antimatter. It consisted of a series of experiments which would culminate in an experiment, to be launched in 1995, externally attached to the Freedom space station. Experiments and theoretical work conducted during the 1970s and 1980s revealed a higher number of anti-protons than had been expected; to verify and investigate this further, a series of experiments was designed, culminating in an experiment to be launched for attachment to the space station. In preparation for the building of the detectors and superconducting magnets to be used in the experiment, some smaller experiments were conducted in the upper atmosphere, mounted underneath high-altitude balloons: ALICE (A Large Isotropic Composition Experiment) and LEAP (Low Energy Antiproton Experiment) being the most notable. ALICE was launched from Prince Albert Airport, Canada on 15 August 1987. It was designed to measure the isotopic composition of the cosmic rays entering Earth's atmosphere and so identify the types of particles which ASTROMAG would study in more detail. [ 1 ] LEAP was launched twice, also from Prince Albert, in July and August 1987, and measured the ratios between protons and anti-protons to try to verify earlier experiments that had reported higher-than-expected numbers of anti-protons. [ 1 ] The original proposal was made in 1987 and announced in 1988 [ 1 ] for implementation on the Freedom space station. The experiment was tested, accepted in 1989, and due for launch in 1995, [ 2 ] but after various problems with other flights it was demoted from first to fifth place on the schedule. The experiment, called the Particle Astrophysics Magnet Facility, was given the name ASTROMAG (NASA designation ASTRMAG) as it used a large superconducting magnet to deflect particles into its detectors. The magnet was made superconducting by being cooled to 2 kelvins. The hope was that the detectors would discover the oppositely charged anti-protons and so help physicists to use matter–antimatter reactions to develop new propulsion systems based on the resulting release of energy. The experiment was to be mounted on the outside of the space station, measured 30 by 13 feet (9.1 by 4.0 m), and its cost was projected at $30 million. [ 1 ] This was one of the first experiments aimed at capturing material and particle data to further the understanding of the origins and evolution of matter in the composition of the Universe. The experiment was to collect data from collisions of very-high-velocity particles by measuring their spectrum and attempting to find negatively charged helium or heavier elements. Eventually, the delays in NASA missions and the shutdown of the space station programme meant that ASTROMAG was never launched, and the mission was shelved in 1991. [ 2 ] The free-flyer version was to be launched in 2005 into Earth orbit at a height of 310 miles (500 km). It aimed to detect high-energy (>1 GeV per nucleon) cosmic-ray nuclei, as well as electrons, and to search for antimatter and dark matter candidates. [ 3 ] [ 4 ] After the experiment was not launched, researchers continued experiments using BESS and the methods employed by ALICE and LEAP in 1987. [ 5 ] The latest attempt was a new Nuclear Compton Telescope (NCT), which was successfully test-flown on 1 June 2005 from the Scientific Balloon Flight Facility, Fort Sumner, New Mexico.
[ 6 ] [ 7 ] Its subsequent missions went well and some useful data were collected [ 8 ] until a failed launch in April 2010 at Alice Springs , Australia , when the balloon broke its tether to the crane in high winds. [ 9 ] [ 10 ] The experiment was superseded by the Alpha Magnetic Spectrometer , which was approved by Congress. [ 11 ] An earlier, smaller test version, called the AMS-01 , was flown in 1998 on the shuttle Discovery during a flight to the Russian space station Mir . AMS-02 was delivered to the International Space Station in 2011.
https://en.wikipedia.org/wiki/Particle_Astrophysics_Magnet_Facility
The Particle Data Group ( PDG ) is an international collaboration of particle physicists that compiles and reanalyzes published results related to the properties of particles and fundamental interactions . It also publishes reviews of theoretical results that are phenomenologically relevant, including those in related fields such as cosmology . The PDG currently publishes the Review of Particle Physics and its pocket version, the Particle Physics Booklet , which are printed biennially as books and updated annually via the World Wide Web . In previous years, the PDG published the Pocket Diary for Physicists , a calendar with the dates of key international conferences and contact information of major high energy physics institutions, which is now discontinued. [ 1 ] The PDG also maintains the standard numbering scheme for particles in event generators , in association with the event generator authors. The Review of Particle Physics [ 2 ] (formerly Review of Particle Properties , Data on Particles and Resonant States , and Data on Elementary Particles and Resonant States ) is a voluminous, 1,200+ page reference work that summarizes particle properties and reviews the current status of elementary particle physics , general relativity and Big Bang cosmology. It is currently the most cited article in high energy physics, being cited more than 2,000 times annually in the scientific literature (as of 2009), and is usually singled out in citation analyses. [ 3 ] [ 4 ] The Review is currently divided into three sections: the Summary Tables, the Reviews, Tables, and Plots, and the Particle Listings. A condensed version of the Review , containing the Summary Tables and a significantly shortened Reviews, Tables and Plots section but omitting the Particle Listings , is available as a 300-page, pocket-sized Particle Physics Booklet . The history of the Review of Particle Physics can be traced back to the 1957 article Hyperons and Heavy Mesons (Systematics and Decay) by Murray Gell-Mann and Arthur H. Rosenfeld , [ 5 ] and the unpublished update tables for its data with the title Data for Elementary Particle Physics ( University of California Radiation Laboratory Technical Report UCRL-8030 ) [ 6 ] [ 7 ] that were circulated before the actual publication of the original article. In 1963, Matts Roos independently published a compilation Data on Elementary Particles and Resonant States . [ 8 ] [ 9 ] On his suggestion, the two publications were merged a year later into the 1964 Data on Elementary Particles and Resonant States . The publication underwent three renamings thereafter: in 1965 to Data on Particles and Resonant States , in 1970 to Review of Particle Properties , and in 1996 to the present form Review of Particle Physics . Starting in 1972, the Review no longer appeared exclusively in Reviews of Modern Physics , but also in Physics Letters B , European Physical Journal C , Journal of Physics G , Physical Review D , and Chinese Physics C (depending on the year).
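The particle numbering scheme maintained by the PDG assigns each particle species a standard integer code used by event generators, with antiparticles taking the negated code. As a minimal sketch (the mapping below is a small, illustrative subset; the full scheme is defined in the Review), a lookup could be expressed as:

```python
# A few standard PDG Monte Carlo particle codes (illustrative subset).
# Antiparticles are represented by the negative of the particle's code.
PDG_IDS = {
    11: "e-",      # electron
    -11: "e+",     # positron
    13: "mu-",     # muon
    22: "gamma",   # photon
    211: "pi+",    # charged pion
    2112: "n",     # neutron
    2212: "p",     # proton
}

def name_of(pdg_id):
    """Look up a human-readable name for a PDG code, if known."""
    return PDG_IDS.get(pdg_id, f"unknown (PDG ID {pdg_id})")

print(name_of(2212))   # -> p
print(name_of(-11))    # -> e+
```

A shared scheme of this kind lets different event generators and analysis programs exchange particle records without ambiguity.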
https://en.wikipedia.org/wiki/Particle_Data_Group
Particle agglomeration refers to the formation of assemblages in a suspension and represents a mechanism leading to the functional destabilization of colloidal systems. During this process, particles dispersed in the liquid phase stick to each other and spontaneously form irregular particle assemblages, flocs, or agglomerates. The phenomenon is also referred to as coagulation or flocculation , and such a suspension is called unstable . Particle agglomeration can be induced by adding salts or other chemicals, referred to as coagulants or flocculants . [ 1 ] Particle agglomeration can be a reversible or an irreversible process. Particle agglomerates described as "hard agglomerates" are more difficult to redisperse into the initial single particles. In the course of agglomeration, the agglomerates grow in size, and as a consequence they may settle to the bottom of the container, which is referred to as sedimentation . Alternatively, a colloidal gel may form in concentrated suspensions, which changes their rheological properties . The reverse process, whereby particle agglomerates are re-dispersed as individual particles, is referred to as peptization ; it hardly occurs spontaneously, but may occur under stirring or shear . Colloidal particles may also remain dispersed in liquids for long periods of time (days to years). This phenomenon is referred to as colloidal stability , and such a suspension is said to be functionally stable . Stable suspensions are often obtained at low salt concentrations or by addition of chemicals referred to as stabilizers or stabilizing agents . The stability of particles, colloidal or otherwise, is most commonly evaluated in terms of the zeta potential . This parameter provides a readily quantifiable measure of interparticle repulsion, which is the key inhibitor of particle aggregation. Similar agglomeration processes occur in other dispersed systems too. In emulsions , they may also be coupled to droplet coalescence , and lead not only to sedimentation but also to creaming . In aerosols , airborne particles may equally aggregate and form larger clusters (e.g., soot ). A well dispersed colloidal suspension consists of individual, separated particles and is stabilized by repulsive inter-particle forces. When the repulsive forces weaken or become attractive through the addition of a coagulant, particles start to aggregate. Initially, particle doublets A₂ form from singlets A₁ according to the scheme [ 2 ]

$$\mathrm{A}_1 + \mathrm{A}_1 \longrightarrow \mathrm{A}_2$$

In the early stage of the aggregation process, the suspension mainly contains individual particles. The rate of this process is characterized by the aggregation rate coefficient k . Since doublet formation is a second-order rate process, the units of this coefficient are m³ s⁻¹, because particle concentrations are expressed as particle number per unit volume (m⁻³). Since absolute aggregation rates are difficult to measure, one often refers to the dimensionless stability ratio W , defined as

$$W = \frac{k_{\text{fast}}}{k}$$

where k_fast is the aggregation rate coefficient in the fast regime, and k the coefficient at the conditions of interest. The stability ratio is close to unity in the fast regime, increases in the slow regime, and becomes very large when the suspension is stable.
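To make the early-stage kinetics concrete, consider a minimal sketch (not from the article; factor conventions for identical particles vary, and the simplest second-order form dN/dt = −kN² is used here). The Smoluchowski fast-aggregation coefficient for water at 25 °C is k_fast = 8kBT/(3η) ≈ 1.2 × 10⁻¹⁷ m³/s; the stability ratio and initial concentration below are assumed example values:

```python
# Early-stage aggregation: singlets are lost by doublet formation,
#   dN/dt = -k * N**2        (second-order kinetics, k in m^3/s)
# with k = k_fast / W, where W is the stability ratio.
import numpy as np

k_fast = 1.2e-17     # m^3/s, diffusion-limited (fast) rate coefficient in water
W = 100.0            # stability ratio (slow regime, illustrative)
k = k_fast / W
N0 = 1.0e17          # initial particle number concentration, 1/m^3

# Analytical solution of dN/dt = -k N^2:  N(t) = N0 / (1 + k*N0*t)
t_half = 1.0 / (k * N0)              # time for N to drop to N0/2
for t in np.array([0.0, 0.5, 1.0, 5.0]) * t_half:
    N = N0 / (1.0 + k * N0 * t)
    print(f"t = {t:8.1f} s   N = {N:.3e} 1/m^3")
print(f"aggregation half-time: {t_half:.1f} s")
```

With these numbers the half-time is about 83 s; setting W = 1 (fast regime) would shorten it a hundredfold, which is why fast and slow regimes are so clearly distinguishable experimentally.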
Often, colloidal particles are suspended in water. In this case, they accumulate a surface charge and an electrical double layer forms around each particle. [ 3 ] The overlap between the diffuse layers of two approaching particles results in a repulsive double layer interaction potential, which leads to particle stabilization. When salt is added to the suspension, the electrical double layer repulsion is screened, and van der Waals attraction becomes dominant and induces fast aggregation. The stability ratio W typically falls with increasing electrolyte concentration, passing from a slow-aggregation regime to a fast one. The critical coagulation concentration (CCC) depends strongly on the net charge of the counter ion , [ 4 ] expressed in units of elementary charge . This dependence reflects the Schulze–Hardy rule, [ 5 ] [ 6 ] which states that the CCC varies as the inverse sixth power of the counter ion charge. The CCC also depends somewhat on the type of ion, even for ions carrying the same charge. This dependence may reflect different particle properties or different ion affinities to the particle surface. Since particles are frequently negatively charged, multivalent metal cations represent highly effective coagulants. Adsorption of oppositely charged species (e.g., protons, specifically adsorbing ions, surfactants , or polyelectrolytes ) may destabilize a particle suspension by charge neutralization or stabilize it by buildup of charge, leading to fast aggregation near the charge neutralization point and slow aggregation away from it. Quantitative interpretation of colloidal stability was first formulated within the DLVO theory . [ 2 ] This theory confirms the existence of slow and fast aggregation regimes, even though in the slow regime the dependence on the salt concentration is often predicted to be much stronger than observed experimentally. The Schulze–Hardy rule can be derived from DLVO theory as well. Other mechanisms of colloid stabilization are equally possible, particularly those involving polymers. Adsorbed or grafted polymers may form a protective layer around the particles, induce steric repulsive forces, and lead to steric stabilization, as is the case with polycarboxylate ether (PCE), the latest generation of chemically tailored superplasticizer specifically designed to increase the workability of concrete while reducing its water content to improve its properties and durability. When polymer chains adsorb to particles loosely, a polymer chain may bridge two particles and induce bridging forces. This situation is referred to as bridging flocculation. When particle aggregation is solely driven by diffusion, one refers to perikinetic aggregation. Aggregation can be enhanced through shear stress (e.g., stirring); this case is called orthokinetic aggregation. As the aggregation process continues, larger clusters form. The growth occurs mainly through encounters between different clusters, and therefore one refers to a cluster-cluster aggregation process. The resulting clusters are irregular, but statistically self-similar. They are examples of mass fractals , whereby their mass M grows with their typical size, characterized by the radius of gyration R_g, as a power law [ 2 ] $M \propto R_g^{d}$, where d is the mass fractal dimension . Depending on whether the aggregation is fast or slow, one refers to diffusion limited cluster aggregation (DLCA) or reaction limited cluster aggregation (RLCA). The clusters have different characteristics in each regime: DLCA clusters are loose and ramified ( d ≈ 1.8), while RLCA clusters are more compact ( d ≈ 2.1). [ 7 ]
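A quick numerical illustration of the mass fractal scaling M ∝ R_g^d, using the d values quoted above (a sketch; the prefactor is set to one and the size ratio is arbitrary):

```python
# Mass fractal scaling of aggregates: M ~ Rg**d (prefactor set to 1 here).
# DLCA clusters are ramified (d ~ 1.8); RLCA clusters are more compact (d ~ 2.1).

def relative_mass(rg_ratio, d):
    """Mass growth factor when the radius of gyration grows by rg_ratio."""
    return rg_ratio ** d

for regime, d in (("DLCA", 1.8), ("RLCA", 2.1)):
    # How much more mass does a cluster 10x larger in Rg contain?
    print(f"{regime} (d = {d}): a 10x larger cluster holds "
          f"{relative_mass(10.0, d):.0f}x the mass")
# Compare with a dense solid sphere (d = 3), for which the factor is 1000.
```

The factors of roughly 63 (DLCA) and 126 (RLCA), versus 1000 for a solid sphere, show how much empty space these ramified structures contain.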
The cluster size distribution is also different in these two regimes: DLCA clusters are relatively monodisperse, while the size distribution of RLCA clusters is very broad. The larger the cluster, the faster its settling velocity; aggregating particles therefore sediment, and this mechanism provides a way of separating them from suspension. At higher particle concentrations, the growing clusters may interlink and form a particle gel. Such a gel is an elastic solid body, but differs from ordinary solids by having a very low elastic modulus . When aggregation occurs in a suspension composed of similar monodisperse colloidal particles, the process is called homoaggregation (or homocoagulation ). When aggregation occurs in a suspension composed of dissimilar colloidal particles, one refers to heteroaggregation (or heterocoagulation ). The simplest heteroaggregation process occurs when two types of monodisperse colloidal particles are mixed. In the early stages, three types of doublets may form: [ 8 ]

$$\mathrm{A} + \mathrm{A} \longrightarrow \mathrm{A}_2 \qquad \mathrm{B} + \mathrm{B} \longrightarrow \mathrm{B}_2 \qquad \mathrm{A} + \mathrm{B} \longrightarrow \mathrm{AB}$$

While the first two processes correspond to homoaggregation in pure suspensions containing particles A or B, the last reaction represents the actual heteroaggregation process. Each of these reactions is characterized by the respective aggregation coefficients k_AA, k_BB, and k_AB. For example, when particles A and B bear positive and negative charge, respectively, the homoaggregation rates may be slow while the heteroaggregation rate is fast. In contrast to homoaggregation, the heteroaggregation rate accelerates with decreasing salt concentration. Clusters formed at later stages of such heteroaggregation processes are even more ramified than those obtained during DLCA ( d ≈ 1.4). [ 9 ] An important special case of a heteroaggregation process is the deposition of particles on a substrate. [ 1 ] Early stages of the process correspond to the attachment of individual particles to the substrate, which can be pictured as another, much larger particle. Later stages may reflect blocking of the substrate through repulsive interactions between the particles, while attractive interactions may lead to multilayer growth, also referred to as ripening. These phenomena are relevant in membrane or filter fouling . Numerous experimental techniques have been developed to study particle aggregation. The most frequently used are time-resolved optical techniques based on the transmittance or scattering of light. [ 10 ] Light transmission: the variation of light transmitted through an aggregating suspension can be studied with a regular spectrophotometer in the visible region. As aggregation proceeds, the medium becomes more turbid and its absorbance increases. The increase of the absorbance can be related to the aggregation rate constant k , and the stability ratio can be estimated from such measurements. The advantage of this technique is its simplicity. Light scattering: these techniques are based on probing, in a time-resolved fashion, the light scattered from an aggregating suspension. Static light scattering yields the change in the scattering intensity, while dynamic light scattering yields the variation in the apparent hydrodynamic radius .
At early stages of aggregation, the variation of each of these quantities is directly proportional to the aggregation rate constant k . [ 11 ] At later stages, one can obtain information on the clusters formed (e.g., their fractal dimension). [ 7 ] Light scattering works well for a wide range of particle sizes. Multiple scattering effects may have to be considered, since scattering becomes increasingly important for larger particles or larger aggregates; such effects can be neglected in weakly turbid suspensions. Aggregation processes in strongly scattering systems have been studied with transmittance , backscattering techniques, or diffusing-wave spectroscopy . Single particle counting: this technique offers excellent resolution, whereby clusters made of tens of particles can be resolved individually. [ 11 ] The aggregating suspension is forced through the narrow capillary of a particle counter and the size of each aggregate is analyzed by light scattering. From the scattering intensity, one can deduce the size of each aggregate and construct a detailed aggregate size distribution. If the suspension contains high amounts of salt, one could equally use a Coulter counter . As time proceeds, the size distribution shifts towards larger aggregates, and from this variation the aggregation and breakup rates involving different clusters can be deduced. The disadvantage of the technique is that the aggregates are forced through a narrow capillary under high shear, and they may break up under these conditions. Indirect techniques: since many properties of colloidal suspensions depend on the state of aggregation of the suspended particles, various indirect techniques have been used to monitor particle aggregation as well. While it can be difficult to obtain quantitative information on aggregation rates or cluster properties from such experiments, they can be most valuable for practical applications. Among these techniques, settling tests are the most relevant. When one inspects a series of test tubes with suspensions prepared at different concentrations of flocculant, stable suspensions often remain dispersed, while the unstable ones settle. Automated instruments based on light scattering or transmittance have been developed to monitor suspension settling, and they can be used to probe particle aggregation. One must realize, however, that these techniques may not always reflect the actual aggregation state of a suspension correctly: larger primary particles may settle even in the absence of aggregation, while aggregates that have formed a colloidal gel will remain in suspension. Other indirect techniques capable of monitoring the state of aggregation include, for example, filtration , rheology , absorption of ultrasonic waves , or dielectric properties . [ 10 ] Particle aggregation is a widespread phenomenon that occurs spontaneously in nature and is also widely exploited in manufacturing. Some examples follow. Formation of river deltas : when river water carrying suspended sediment particles reaches salty water, particle aggregation may be one of the factors responsible for river delta formation. Charged particles are stable in the river's fresh water, which contains low levels of salt, but they become unstable in sea water, which contains high levels of salt. In the latter medium the particles aggregate, the larger aggregates sediment, and thus a river delta is created. Papermaking : retention aids are added to the pulp to accelerate paper formation.
These retention aids are coagulating aids that accelerate the aggregation between the cellulose fibers and filler particles; frequently, cationic polyelectrolytes are used for this purpose. Water treatment : treatment of municipal waste water normally includes a phase in which fine solid particles are removed. This separation is achieved by addition of a flocculating or coagulating agent, which induces the aggregation of the suspended solids. The aggregates are normally separated by sedimentation, leaving sewage sludge. Commonly used flocculating agents in water treatment include multivalent metal ions (e.g., Fe³⁺ or Al³⁺), polyelectrolytes , or both. Cheese making : the key step in cheese production is the separation of the milk into solid curds and liquid whey. This separation is achieved by inducing aggregation between casein micelles, either by acidifying the milk or by adding rennet. The acidification neutralizes the carboxylate groups on the micelles and induces their aggregation.
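Returning to the Schulze–Hardy rule discussed earlier in this article, the inverse-sixth-power dependence is easy to make concrete. A minimal sketch (the monovalent reference concentration is an assumed example value, not a measured CCC):

```python
# Schulze-Hardy rule: the critical coagulation concentration (CCC)
# scales as the inverse sixth power of the counter-ion charge z:
#   CCC(z) = CCC(1) / z**6
ccc_monovalent = 100.0  # mM, assumed reference value for z = 1 (illustrative)

for z in (1, 2, 3):
    ccc = ccc_monovalent / z**6
    print(f"z = {z}: CCC ~ {ccc:.3g} mM  (factor 1/{z**6})")
# z = 1 -> 100 mM, z = 2 -> 1.56 mM (1/64), z = 3 -> 0.137 mM (1/729):
# multivalent counter-ions are dramatically more effective coagulants.
```

The three-orders-of-magnitude gap between monovalent and trivalent ions explains why salts such as iron(III) and aluminium(III) are favored coagulants in practice.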
https://en.wikipedia.org/wiki/Particle_aggregation
Particle chauvinism is the term used by the British astrophysicist Martin Rees to describe the (allegedly erroneous) assumption that what we think of as normal matter (atoms, quarks, electrons, etc., excluding dark matter and other exotic matter) is the basis of matter in the universe, rather than a rare phenomenon. [ 1 ] With the growing recognition in the late 20th century of the presence of dark matter in the universe, ordinary baryonic matter has come to be seen as something of a cosmic afterthought, [ 2 ] a point made memorably by J. D. Barrow. The 21st century saw the share of baryonic matter in the total mass-energy of the universe downgraded further, to perhaps as low as 1%, [ 4 ] further extending what has been called the demise of particle chauvinism, [ 5 ] before being revised up to some 5% of the contents of the universe. [ 6 ]
https://en.wikipedia.org/wiki/Particle_chauvinism
Particle damping is the use of particles moving freely in a cavity to produce a damping effect. Active and passive damping techniques are common methods of attenuating the resonant vibrations excited in a structure. Active damping techniques are not applicable under all circumstances, due, for example, to power requirements, cost, or environment. Under such circumstances, passive damping techniques are a viable alternative. Various forms of passive damping exist, including viscous damping, viscoelastic damping, friction damping, and impact damping. Viscous and viscoelastic damping usually have a relatively strong dependence on temperature. Friction dampers, while applicable over wide temperature ranges, may degrade with wear. Due to these limitations, attention has been focused on impact dampers, particularly for application in cryogenic environments or at elevated temperatures. Particle damping technology is a derivative of impact damping with several advantages. Impact damping refers to a single (somewhat larger) auxiliary mass in a cavity, whereas particle damping implies multiple auxiliary masses of small size in a cavity. The principle behind particle damping is the removal of vibratory energy through losses that occur during the impacts of granular particles moving freely within the boundaries of a cavity attached to a primary system. In practice, particle dampers are highly nonlinear dampers whose energy dissipation , or damping, derives from a combination of loss mechanisms, including friction and momentum exchange. Because particle dampers perform across a wide range of temperatures and frequencies and survive long service lives, they have been used in applications such as the weightless environments of outer space, [ 1 ] [ 2 ] in aircraft structures, to attenuate vibrations of civil structures, [ 3 ] and even in tennis rackets. [ 4 ] They are therefore suited to applications requiring long service in harsh environments. The analysis of particle dampers is mainly conducted by experimental testing, by simulations using the discrete element method or the finite element method , and by analytical calculations. The discrete element method makes use of particle mechanics, whereby individual particles are modeled with 6-degree-of-freedom dynamics and their interactions determine the amount of energy absorbed or dissipated. Although this approach requires high-power computing to resolve the dynamic interactions of millions of particles, it is promising and may be used to estimate the effects of various mechanisms on damping. For instance, a study was performed [ 5 ] using a model that simulated 10,000 particles in a cavity and studied the damping under various gravitational force effects. A significant amount of research has been carried out in the area of analysis of particle dampers. Olson [ 6 ] presented a mathematical model that allows particle damper designs to be evaluated analytically. The model utilized the particle dynamics method and took into account the physics involved in particle damping, including frictional contact interactions and energy dissipation due to the viscoelasticity of the particle material. Fowler et al. [ 7 ] discussed results of studies into the effectiveness and predictability of particle damping. Efforts were concentrated on characterizing and predicting the behaviour of a range of potential particle materials, shapes, and sizes in the laboratory environment, as well as at elevated temperature.
Methodologies used to generate data and extract the characteristics of the nonlinear damping phenomena were illustrated with test results. Fowler et al. [ 8 ] developed an analytical method, based on the particle dynamics method, that used characterized particle damping data to predict damping in structural systems. A methodology to design particle damping for dynamic structures was discussed, and the design methodology was correlated with tests on a structural component in the laboratory. Mao et al. [ 9 ] utilized DEM for computer simulation of particle damping. By treating thousands of particles as Hertzian spheres, the discrete element model was used to describe the motion of these multi-body systems and determine the energy dissipation. Prasad et al. [ 10 ] investigated the damping performance of twenty different granular materials that can be used to design particle dampers for different industries. They also introduced the hybrid particle damper concept, in which two different types of granular material are mixed in order to achieve significantly higher vibration reduction than particle dampers with a single type of granular material. Prasad et al. [ 11 ] developed a honeycomb damping plate concept, based on the particle damping technique, to reduce low-frequency vibration amplitude in an onshore wind turbine generator. Prasad et al. [ 12 ] suggested three different strategies for implementing particle dampers in a wind turbine blade to reduce the vibration amplitude.
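The single-mass impact damper from which particle damping derives can be sketched numerically. The following toy simulation is entirely illustrative (all parameter values are assumptions, and real particle dampers involve many interacting particles with friction): it tracks a vibrating primary mass with a loose ball in a cavity, removing energy at each inelastic impact via a coefficient of restitution:

```python
# Toy impact damper: a primary mass-spring oscillator with a loose ball
# inside a cavity of total clearance 2*gap. Energy is removed at each
# inelastic ball/wall impact (coefficient of restitution e < 1).
import numpy as np

M, m = 1.0, 0.1          # primary and auxiliary mass, kg (illustrative)
k_spring = 400.0         # spring stiffness, N/m
e = 0.5                  # coefficient of restitution of the impacts
gap = 0.01               # half-clearance of the cavity, m
dt, t_end = 1e-5, 5.0

x, v = 0.02, 0.0         # primary mass: initial displacement 2 cm
xp, vp = 0.0, 0.0        # ball position and velocity (ground frame)

for _ in range(int(t_end / dt)):
    # free flight (semi-implicit Euler integration)
    v += (-k_spring * x / M) * dt
    x += v * dt
    xp += vp * dt
    # impact when the ball reaches a cavity wall
    rel = xp - x
    if abs(rel) >= gap:
        xp = x + np.sign(rel) * gap           # place the ball at the wall
        # 1-D inelastic collision: conserve momentum, apply restitution e
        v_new = (M * v + m * vp - m * e * (v - vp)) / (M + m)
        vp_new = (M * v + m * vp + M * e * (v - vp)) / (M + m)
        v, vp = v_new, vp_new

E = 0.5 * k_spring * x**2 + 0.5 * M * v**2
print(f"primary-mass amplitude after {t_end} s: "
      f"{np.sqrt(2 * E / k_spring):.4f} m (initial 0.0200 m)")
```

Since the spring itself is undamped here, any amplitude reduction comes solely from momentum exchange and restitution losses at the impacts, which is the loss mechanism the passage above describes.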
https://en.wikipedia.org/wiki/Particle_damping
In particle physics , particle decay is the spontaneous process of one unstable subatomic particle transforming into multiple other particles. The particles created in this process (the final state ) must each be less massive than the original, although the total invariant mass of the system must be conserved. A particle is unstable if there is at least one allowed final state that it can decay into. Unstable particles often have several ways of decaying, each with its own associated probability . Decays are mediated by one or several fundamental forces . The particles in the final state may themselves be unstable and subject to further decay. The term is typically distinct from radioactive decay , in which an unstable atomic nucleus is transformed into a lighter nucleus accompanied by the emission of particles or radiation , although the two are conceptually similar and are often described using the same terminology. Particle decay is a Poisson process , and hence the probability that a particle survives for time t before decaying (the survival function ) is given by an exponential distribution whose time constant depends on the particle's velocity:

$$P(t) = \exp\left(-\frac{t}{\gamma\tau}\right)$$

where γ is the Lorentz factor of the particle and τ its mean lifetime at rest. This section uses natural units , where c = ℏ = 1. The lifetime of a particle is given by the inverse of its decay rate, Γ , the probability per unit time that the particle will decay. For a particle of mass M and four-momentum P decaying into particles with momenta p_i, the differential decay rate is given by the general formula (expressing Fermi's golden rule )

$$d\Gamma_n = \frac{S\,\left|\mathcal{M}\right|^2}{2M}\,d\Phi_n(P; p_1, p_2, \dots, p_n)$$

where $\mathcal{M}$ is the invariant matrix element. The factor S accounts for identical particles in the final state and is given by

$$S = \prod_{j=1}^{m} \frac{1}{k_j!}$$

with k_j the number of identical particles of type j. The phase space element can be determined from

$$d\Phi_n(P; p_1, p_2, \dots, p_n) = (2\pi)^4\,\delta^4\!\left(P - \sum_{i=1}^{n} p_i\right) \prod_{i=1}^{n} \frac{d^3\vec{p}_i}{2(2\pi)^3 E_i}$$

One may integrate over the phase space to obtain the total decay rate for the specified final state. If a particle has multiple decay branches or modes with different final states, its full decay rate is obtained by summing the decay rates over all branches. The branching ratio for each mode is given by its decay rate divided by the full decay rate.
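To put the survival law into numbers, here is a short sketch (illustrative; it uses the well-known muon rest lifetime of about 2.197 μs, while the Lorentz factor and flight distance are assumed example values) showing how time dilation stretches the laboratory lifetime:

```python
# Survival probability of a relativistic particle: P(t) = exp(-t / (gamma*tau)).
# Example: a muon produced high in the atmosphere traversing 10 km.
import math

tau = 2.197e-6          # muon mean lifetime at rest, s
c = 2.998e8             # speed of light, m/s
gamma = 20.0            # Lorentz factor (illustrative, ~2.1 GeV muon)

beta = math.sqrt(1.0 - 1.0 / gamma**2)
distance = 10e3                      # 10 km of atmosphere, m
t_lab = distance / (beta * c)        # lab-frame flight time

p_survive = math.exp(-t_lab / (gamma * tau))
print(f"flight time: {t_lab * 1e6:.1f} us, survival probability: {p_survive:.3f}")
# Without time dilation (gamma = 1) the survival probability would be tiny:
print(f"naive (no dilation): {math.exp(-t_lab / tau):.2e}")
```

The dilated calculation gives a survival probability near 0.47, versus about 10⁻⁷ without the γ factor, which is essentially the classic cosmic-ray muon argument for special relativity.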
This section uses natural units , where c = ℏ = 1. Say a parent particle of mass M decays into two particles, labeled 1 and 2 . In the rest frame of the parent particle,

$$|\vec{p}_1| = |\vec{p}_2| = \frac{\sqrt{\left[M^2 - (m_1 + m_2)^2\right]\left[M^2 - (m_1 - m_2)^2\right]}}{2M}$$

which is obtained by requiring that four-momentum be conserved in the decay, i.e.

$$(M, \vec{0}) = (E_1, \vec{p}_1) + (E_2, \vec{p}_2)$$

Also, in spherical coordinates, $d^3\vec{p} = |\vec{p}|^2\,d|\vec{p}|\,d\phi\,d(\cos\theta)$. Using the delta function to perform the $d^3\vec{p}_2$ and $d|\vec{p}_1|$ integrals in the phase space for a two-body final state, one finds that the decay rate in the rest frame of the parent particle is

$$d\Gamma = \frac{\left|\mathcal{M}\right|^2}{32\pi^2}\,\frac{|\vec{p}_1|}{M^2}\,d\phi_1\,d(\cos\theta_1)$$

The angle of an emitted particle in the lab frame is related to the angle at which it is emitted in the center-of-momentum frame by

$$\tan\theta' = \frac{\sin\theta}{\gamma\left(\beta/\beta' + \cos\theta\right)}$$

This section uses natural units , where c = ℏ = 1. The mass of an unstable particle is formally a complex number , with the real part being its mass in the usual sense, and the imaginary part being its decay rate in natural units . When the imaginary part is large compared to the real part, the particle is usually thought of as a resonance rather than a particle. This is because in quantum field theory a particle of mass M (a real number ) is often exchanged between two other particles when there is not enough energy to create it, provided the time to travel between these other particles is short enough, of order 1/M, according to the uncertainty principle . For a particle of mass M + iΓ, the particle can travel for a time of order 1/M, but decays after a time of order 1/Γ. If Γ > M, the particle usually decays before it completes its travel. [ 4 ]
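As a worked example of the two-body momentum formula (a sketch; the masses are the familiar charged-pion and muon values, with the neutrino mass taken as zero):

```python
# Momentum of the daughters in a two-body decay, in the parent rest frame:
#   |p| = sqrt([M^2-(m1+m2)^2] * [M^2-(m1-m2)^2]) / (2M)   (natural units)
# Example: pi+ -> mu+ + nu_mu, with the neutrino mass taken as zero.
import math

def two_body_momentum(M, m1, m2):
    """Daughter momentum in the parent rest frame (same units as the masses, c = 1)."""
    return math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)

M_pi, m_mu, m_nu = 139.570, 105.658, 0.0   # masses in MeV
p = two_body_momentum(M_pi, m_mu, m_nu)
print(f"|p| = {p:.2f} MeV/c")   # ~29.79 MeV/c for both daughters
```

Because the parent is at rest, both daughters carry the same momentum magnitude, back to back; the fixed value of about 29.79 MeV/c is what makes this decay a classic monochromatic signature.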
https://en.wikipedia.org/wiki/Particle_decay
Particle deposition is the spontaneous attachment of particles to surfaces. The particles in question are normally colloidal particles , while the surfaces involved may be planar or curved, or may belong to particles much larger than the depositing ones (e.g., sand grains). Deposition processes may be triggered by appropriate hydrodynamic flow conditions and favorable particle-surface interactions. Depositing particles may form just a monolayer, which then inhibits additional particle deposition; in this case one refers to surface blocking . Initially attached particles may also serve as seeds for further particle deposition, which leads to the formation of thicker particle deposits; this process is termed surface ripening or fouling . While deposition processes are normally irreversible, initially deposited particles may also detach. The latter process is known as particle release and is often triggered by the addition of appropriate chemicals or by a modification of the flow conditions. Microorganisms may deposit on surfaces in a similar fashion to colloidal particles. When macromolecules, such as proteins , polymers or polyelectrolytes , attach to surfaces, the process is instead called adsorption . While the adsorption of macromolecules largely resembles particle deposition, macromolecules may substantially deform during adsorption. The present article mainly deals with particle deposition from liquids, but a similar process occurs when aerosols or dust deposit from the gas phase. A particle may diffuse to a surface in quiescent conditions, but this process is inefficient, as a thick depletion layer develops and leads to a progressive slowing of the deposition. When particle deposition is efficient, it proceeds almost exclusively in a system under flow. In such conditions, the hydrodynamic flow transports the particles close to the surface. Once a particle is situated close to the surface, it attaches spontaneously when the particle-surface interactions are attractive; in this situation, one refers to favorable deposition conditions . When the interaction is repulsive at larger distances but attractive at shorter distances, deposition still occurs but is slowed down; one refers here to unfavorable deposition conditions . The initial stages of the deposition process can be described with the rate equation [ 1 ]

$$\frac{d\Gamma}{dt} = k\,c$$

where Γ is the number density of deposited particles, t is the time, c the particle number concentration, and k the deposition rate coefficient. The rate coefficient depends on the flow velocity, the flow geometry, and the interaction potential of the depositing particle with the substrate. In many situations, this potential can be approximated by a superposition of attractive van der Waals forces and repulsive electrical double layer forces, and can be described by DLVO theory . When the charge of the particles is of the same sign as that of the substrate, deposition is favorable at high salt levels, while it is unfavorable at lower salt levels. When the charge of the particles is of the opposite sign to that of the substrate, deposition is favorable at all salt levels, and one observes a small enhancement of the deposition rate with decreasing salt level due to attractive electrostatic double layer forces. The initial stages of the deposition process are relatively similar to the early stages of particle heteroaggregation , whereby one of the particles is much larger than the other.
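A minimal sketch of these deposition kinetics (parameter values are illustrative assumptions) integrates the rate equation together with the Langmuir blocking function B(Γ) = 1 − Γ/Γ₀ introduced in the next paragraph, showing how the surface coverage saturates:

```python
# Deposition kinetics with surface blocking:
#   dGamma/dt = k * c * B(Gamma),  with Langmuir blocking B = 1 - Gamma/Gamma0
# (see the blocking discussion below). Parameter values are illustrative.
import numpy as np

k = 1e-6            # deposition rate coefficient, m/s
c = 1e15            # bulk particle number concentration, 1/m^3
gamma0 = 1e12       # saturation coverage, 1/m^2

dt, t_end = 1.0, 2000.0
gamma = 0.0
for _ in range(int(t_end / dt)):
    gamma += k * c * (1.0 - gamma / gamma0) * dt   # explicit Euler step

# Analytical solution for comparison: Gamma(t) = Gamma0 * (1 - exp(-k*c*t/Gamma0))
exact = gamma0 * (1.0 - np.exp(-k * c * t_end / gamma0))
print(f"numerical: {gamma:.3e} 1/m^2, analytical: {exact:.3e} 1/m^2")
```

With these numbers the characteristic filling time Γ₀/(kc) is about 1000 s, so after 2000 s the surface has reached roughly 86% of its saturation coverage.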
When depositing particles repel each other, deposition stops once enough particles have deposited. At that point, the surface layer repels any particles that still attempt to deposit, and the surface is said to be saturated, or blocked, by the deposited particles. The blocking process can be described by the equation [ 2 ]

$$\frac{d\Gamma}{dt} = k\,c\,B(\Gamma)$$

where B(Γ) is the surface blocking function. When there are no deposited particles, Γ = 0 and B(0) = 1. With increasing number density of deposited particles, the blocking function decreases. The surface saturates at Γ = Γ₀, where B(Γ₀) = 0. The simplest blocking function is [ 3 ]

$$B(\Gamma) = 1 - \frac{\Gamma}{\Gamma_0}$$

and it is referred to as the Langmuir blocking function, as it is related to the Langmuir isotherm . The blocking process has been studied in detail in terms of the random sequential adsorption (RSA) model. [ 4 ] The simplest RSA model relevant to the deposition of spherical particles considers the irreversible adsorption of circular disks. One disk after another is placed randomly on a surface. Once a disk is placed, it sticks at the same spot and cannot be removed. When an attempt to deposit a disk would result in an overlap with an already deposited disk, the attempt is rejected. Within this model, the surface is initially filled rapidly, but the closer one approaches saturation the more slowly the surface fills. Within the RSA model, saturation is referred to as jamming. For circular disks, jamming occurs at a coverage of 0.547. When the depositing particles are polydisperse, a much higher surface coverage can be reached, since small particles can deposit in the holes between the larger deposited particles. On the other hand, rod-like particles may lead to much smaller coverage, since a few misaligned rods may block a large portion of the surface. Since the repulsion between particles in aqueous suspensions originates from electric double layer forces, the presence of salt has an important effect on surface blocking. For small particles and low salt, the diffuse layer extends far beyond the particle and thus creates an exclusion zone around it. As a result, the surface is blocked at a much lower coverage than would be expected from the RSA model. [ 5 ] At higher salt levels and for larger particles, this effect is less important, and the deposition can be well described by the RSA model. When the depositing particles attract each other, they deposit and aggregate at the same time. This results in a porous layer of particle aggregates at the surface and is referred to as ripening. The porosity of this layer depends on whether the particle aggregation process is fast or slow: slow aggregation leads to a more compact layer, while fast aggregation yields a more porous one. The structure of the layer resembles the structure of the aggregates formed in the later stages of the aggregation process. Particle deposition can be followed by various experimental techniques. Direct observation of deposited particles is possible with an optical microscope , a scanning electron microscope , or the atomic force microscope . Optical microscopy has the advantage that the deposition of particles can be followed in real time by video techniques, and the sequence of images can be analyzed quantitatively. [ 6 ]
On the other hand, the resolution of optical microscopy requires that the particle size investigated exceed at least 100 nm. An alternative is to use surface-sensitive techniques to follow particle deposition, such as reflectivity , ellipsometry , surface plasmon resonance , or the quartz crystal microbalance . [ 5 ] These techniques can provide information on the amount of particles deposited as a function of time with good accuracy, but they do not permit obtaining information on the lateral arrangement of the particles. Another approach to studying particle deposition is to investigate the transport of particles in a chromatographic column. The column is packed with larger particles or with the porous medium to be investigated. Subsequently, the column is flushed with the solvent of interest, and a suspension of the small particles is injected at the column inlet. The particles are detected at the outlet with a standard chromatographic detector. When particles deposit in the porous medium, they do not arrive at the outlet, and from the observed difference the deposition rate coefficient can be inferred. Particle deposition occurs in numerous natural and industrial systems.
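The random sequential adsorption model described above is straightforward to simulate. The following Monte Carlo sketch (illustrative; the box size and attempt count are arbitrary choices) deposits non-overlapping disks at random positions and reports the coverage, which for long runs creeps towards the jamming limit of about 0.547:

```python
# Random sequential adsorption (RSA) of circular disks on a square surface.
# Disks are placed at random; attempts overlapping an existing disk are
# rejected. Coverage slowly approaches the jamming limit (~0.547 for disks).
import math
import random

L = 50.0           # box side length (periodic boundaries), in units of disk radii
r = 1.0            # disk radius
attempts = 100_000

disks = []
for _ in range(attempts):
    x, y = random.uniform(0, L), random.uniform(0, L)
    ok = True
    for (px, py) in disks:
        # minimum-image distance for periodic boundaries
        dx = min(abs(x - px), L - abs(x - px))
        dy = min(abs(y - py), L - abs(y - py))
        if dx * dx + dy * dy < (2 * r) ** 2:
            ok = False
            break
    if ok:
        disks.append((x, y))

coverage = len(disks) * math.pi * r**2 / L**2
print(f"{len(disks)} disks deposited, coverage = {coverage:.3f}")
```

The run reproduces the behaviour described in the article: the surface fills quickly at first, then rejected attempts dominate and the coverage inches towards jamming without ever reaching the close-packing density of ordered disks.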
https://en.wikipedia.org/wiki/Particle_deposition