Prefoldin (GimC) is a family of proteins that take part in protein-folding complexes. It is classified as a heterohexameric molecular chaperone in both archaea and eukarya, including humans. A prefoldin molecule works as a transfer protein in conjunction with a chaperonin to form a chaperone complex that correctly folds other nascent proteins. One of prefoldin's main roles in eukarya is the folding of actin for use in the eukaryotic cytoskeleton. Prefoldin acts in combination with other molecules to promote protein folding in cells, where many competing folding pathways exist. [ 1 ] Chaperone proteins perform non-covalent assembly of other polypeptide-containing structures in vivo and are implicated in the folding of most other proteins. In archaea, prefoldins are believed to function in combination with group II chaperonins [ 2 ] in de novo protein folding. In eukarya, however, prefoldins have acquired a more specific function: they help establish the correct folding of cytoskeletal proteins such as actin. [ 3 ] Actin accounts for 5-10% of all protein found in eukaryotic cells, so prefoldin is correspondingly abundant. Actin filaments consist of two strands of subunits wound around each other and form one of the three main components of the cytoskeleton of eukaryotic cells. [ 4 ] Prefoldin binds specifically to the cytosolic chaperonin. This complex of prefoldin and chaperonin then folds actin in the cytosol: the prefoldin acts as a transporter molecule that delivers bound, unfolded target proteins to the chaperonin (C-CPN). [ 3 ] The prefoldin involved in actin folding also transfers α- or β-tubulin to the cytosolic chaperonin, although it does not form a ternary complex with tubulin and chaperonin. Because prefoldin has a high affinity for the chaperonin, it releases the tubulin and leaves the binding site once contact with the chaperonin is made: on binding the chaperonin, prefoldin loses its affinity for the unfolded target protein. Prefoldin binds only nonnative target proteins in the cytosol, so it engages unfolded proteins exclusively. Unlike many other molecular chaperones, prefoldin does not use chemical energy, in the form of adenosine triphosphate (ATP), to promote protein folding. [ 5 ] Prefoldin was discovered by the laboratory of Nicholas J. Cowan at the Department of Biochemistry of the New York University Medical Center, using chromatography. Unfolded, labeled β-actin from bovine testes was placed in a solution containing an excess of cytosolic chaperonin (C-CPN), a eukaryotic chaperone protein necessary for actin folding. After gel filtration, an actin complex consisting of actin and its bound proteins formed, and its molecular weight was measured. Gel electrophoresis of the complex produced a single band, which was excised and run on an SDS gel, where it resolved into five bands, showing that a hetero-oligomeric protein binds unfolded actin. [ 3 ] An archaeal homolog of prefoldin that also functions as a molecular chaperone has been identified. [ 6 ] Eukaryotic prefoldin likely evolved from archaea, as it is not present in (or has been lost from) bacteria.
Prefoldin is a heterohexameric protein consisting of two α subunits and four β subunits. [ 7 ] [ 2 ] The β subunits contain 120 amino acid residues each, while the α subunits contain 140 amino acid residues each. [ 2 ] Each subunit was found to have a width of 8.4 nm in the archaeon Methanococcus thermoautotrophicum . [ 2 ] The height was calculated at 1.8-2.6 nm. [ 2 ] The subunits are arranged by hydrophobic interactions, with two β barrels at the center and coiled-coil α helices protruding down from them like the tentacles of a jellyfish . The lower "tentacles" of this jellyfish shape form the interface between prefoldin and chaperonin. [ 8 ]
https://en.wikipedia.org/wiki/Prefoldin
Prefrontal synthesis (PFS, also known as mental synthesis ) is the conscious purposeful process of synthesizing novel mental images . PFS is neurologically different from the other types of imagination, such as simple memory recall and dreaming. Unlike dreaming , which is spontaneous and not controlled by the prefrontal cortex (PFC), [ 1 ] PFS is controlled by and completely dependent on the intact lateral prefrontal cortex . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] Unlike simple memory recall that involves activation of a single neuronal ensemble (NE) encoded at some point in the past, PFS involves active combination of two or more object-encoding neuronal ensembles (objectNE). The mechanism of PFS is hypothesized to involve synchronization of several independent objectNEs. [ 8 ] When objectNEs fire out-of-sync, the objects are perceived one at a time. However, once those objectNEs are time-shifted by the lateral PFC to fire in-phase with each other, they are consciously experienced as one unified object or scene. The earliest reference [ citation needed ] to mental synthesis is found in the doctoral dissertation of S. J. Rowton written in 1864. Paraphrasing Cicero’s description of nature that can only be unified in someone’s mind, S. J. Rowton writes: "... there cannot be one thing unless by a mental synthesis of many things or parts ..." [ 9 ] In the 20th century the term mental synthesis was often used in psychology to describe the experiments of combinatorial nature. In a common experimental setup, subjects are instructed to mentally assemble the verbally described shapes in various ways. For example, the shapes may have been the capital letters ‘J’ and ‘D’, and the subject would then be asked to combine them into as many objects as possible, with size being flexible. A suitable answer in this example would be: an umbrella. The performance in this task is then quantified by counting the number of legitimate patterns that participants construct using the presented shapes. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] As the neurobiological study of imagination advanced in the 21st century, there was a need to distinguish the neurologically distinct components of imagination: first in terms of their dependence on the lateral PFC and second in terms of the number of involved neuronal ensembles. As a result, "mental synthesis" was adapted to describe the active process of assembling two or more independent objectNEs from memory into novel combinations. [ 8 ] [ 15 ] [ 16 ] The term "prefrontal synthesis" was later proposed for use in place of "mental synthesis" in order to emphasize the role of the PFC and further distance this type of voluntary imagination from other types of involuntary imagination, such as REM-sleep dreaming , day-time dreaming, hallucination , and spontaneous insight . [ 17 ] There is evidence that a deficit in PFS in humans presents as language which is "impoverished and show[s] an apparent diminution of the capacity to 'propositionize'. The length and complexity of sentences are reduced. There is a dearth of dependent clauses and, more generally, an underutilization of what Chomsky characterizes as the potential for recursiveness of language " [ 18 ] [ 19 ] The mechanism of PFS is hypothesized to involve synchronization of several independent object-encoding neuronal ensembles (objectNEs). When objectNEs fire out-of-sync, the objects are perceived one at a time. 
However, once those objectNEs are time-shifted by the lateral prefrontal cortex (LPFC) to fire in-phase with each other, they are consciously experienced as one unified object or scene. The synchronization hypothesis has never been directly tested but is indirectly supported by several lines of experimental evidence. [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] Furthermore, it is the most parsimonious way to explain the formation of new imaginary memories, since the same mechanism of Hebbian learning ("neurons that fire together wire together") that is responsible for externally-driven sensory memories of objects and scenes can also be responsible for memorizing internally-constructed novel images, such as plans and engineering designs. In the process of formation of novel receptive memories, neurons are synchronized by simultaneous external stimulation (e.g., light reflected from a moving object falls on the retina at the same time). In the process of formation of novel imaginary memories, neurons are synchronized by the LPFC during waking or spontaneously during dreaming . In both cases it is the synchronous firing of neurons that wires them together into new stable objectNEs that can later be consolidated into long-term memory .
https://en.wikipedia.org/wiki/Prefrontal_synthesis
The pregap on a Red Book audio CD is the portion of the audio track that precedes "index 01" for a given track in the table of contents (TOC). The pregap ("index 00") is typically two seconds long and usually, but not always, contains silence. Popular uses for having the pregap contain audio are live CDs, track interludes, and hidden songs in the pregap of the first track (detailed below). The track 01 pregap was used to hide computer data, allowing computers to detect a data track whereas conventional CD players would continue to see the CD as an audio CD. This method was made obsolete in mid-1996, when an update to Windows 95 in the driver SCSI1HLP.VXD made the pregap track inaccessible. It is unclear whether this change in Microsoft Windows' behavior was intentional: for instance, it may have been intended to steer developers away from the pregap method and encourage what became the Blue Book specification "CD Extra" format. On certain CDs, such as Light Years by Kylie Minogue , HoboSapiens by John Cale , or Factory Showroom by They Might Be Giants , the pregap before track 1 contains a hidden track . The track is truly hidden in the sense that most conventional standalone players and software CD players will not see it. Such hidden tracks can be played by playing the first song and "rewinding" (more accurately, seeking in reverse) until the actual start of the whole CD audio track. Not all CD drives can properly extract such hidden tracks. Some drives will report errors when reading these tracks, and some will seem to extract them properly, but the extracted file will contain only silence. Other CDs contain additional audio information in the pregap area of other tracks, so that the audio is heard on a conventional CD player only if the CD is allowed to "play through," but not if the listener skips directly to the next track. Some CDs also contain phantom tracks consisting only of index 00 data, meaning the track can be played on a conventional CD player only by letting a previous track play through into it. Mac OS X does not currently support more than a 2-second pregap in the first track under its CD burning utilities ; using a combination of Roxio Toast and a custom .cue file can provide a way around this. Ripping of pregap audio is supported by the application X Lossless Decoder. [ 1 ]
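Since a pregap is simply the span between a track's "index 00" and "index 01" points, its length can be read directly off a cue sheet. The following is a minimal sketch, not taken from the article: the cue text and helper names are made-up illustrations, and only the Red Book facts it relies on (MM:SS:FF timestamps at 75 frames per second, index 00 marking the pregap and index 01 the track proper) come from the description above.

```python
# A minimal, hypothetical sketch of how cue-sheet INDEX lines encode pregaps.
# The cue text below is invented for illustration only.
CUE = """
  TRACK 01 AUDIO
    INDEX 00 00:00:00
    INDEX 01 00:02:00
  TRACK 02 AUDIO
    INDEX 00 03:41:33
    INDEX 01 03:43:33
"""

def to_frames(msf: str) -> int:
    """Convert a MM:SS:FF timestamp into a frame count (75 frames per second)."""
    m, s, f = (int(part) for part in msf.split(":"))
    return (m * 60 + s) * 75 + f

def pregap_lengths(cue: str) -> dict:
    """Return the pregap length in seconds for each track that declares INDEX 00."""
    pregaps, track, index00 = {}, None, None
    for line in cue.splitlines():
        fields = line.split()
        if fields[:1] == ["TRACK"]:
            track, index00 = int(fields[1]), None
        elif fields[:2] == ["INDEX", "00"]:
            index00 = to_frames(fields[2])
        elif fields[:2] == ["INDEX", "01"] and index00 is not None:
            pregaps[track] = (to_frames(fields[2]) - index00) / 75.0
    return pregaps

print(pregap_lengths(CUE))   # {1: 2.0, 2: 2.0} -- both example pregaps are two seconds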
https://en.wikipedia.org/wiki/Pregap
In physics , a pregeometry is a hypothetical structure from which the geometry of the universe develops. Some cosmological models feature a pregeometric universe before the Big Bang. The term was championed by John Archibald Wheeler in the 1960s and 1970s as a possible route to a theory of quantum gravity . Since quantum mechanics allowed a metric to fluctuate, it was argued that the merging of gravity with quantum mechanics required a set of more fundamental rules regarding connectivity that were independent of topology and dimensionality . Where geometry could describe the properties of a known surface or the physics of a hypothetical region with predefined properties, "pregeometry" might allow one to work with deeper underlying rules of physics that are not so strongly dependent on simplified classical assumptions about the properties of space. No single proposal for pregeometry has gained wide consensus support in the physics community. Some notions related to pregeometry predate Wheeler, while other notions depart considerably from his outline of pregeometry but are still associated with it. A 2006 paper [ 1 ] provided a survey and critique of pregeometry and near-pregeometry proposals up to that time.
https://en.wikipedia.org/wiki/Pregeometry_(physics)
Pregnancy over the age of 50 has become possible for more women because of advances in assisted reproductive technology , in particular egg donation . Typically, a woman's fecundity ends with menopause , which, by definition, is 12 consecutive months without any menstrual flow. During perimenopause , the menstrual cycle and the periods become irregular and eventually stop altogether. The female biological clock can vary greatly from woman to woman; a woman's individual level of fertility can be tested through a variety of methods. [ 1 ] In the United States, between 1997 and 1999, 539 births were reported among mothers over age 50 (four per 100,000 births), with 194 being over 55. [ 2 ] The oldest recorded mother to date conceived at the age of 74. According to statistics from the Human Fertilisation and Embryology Authority , in the UK more than 20 babies are born to women over age 50 per year through in vitro fertilization with the use of donor oocytes (eggs). [ 3 ] Maria del Carmen Bousada de Lara formerly held the record of being the oldest verified mother; she was aged 66 years 358 days when she gave birth to twins, 130 days older than Adriana Iliescu , who gave birth in 2005 to a baby girl. In both cases, the children were conceived through IVF with donor eggs. [ 4 ] The oldest verified mother to conceive naturally (listed currently as of 26 January 2017 [update] in the Guinness Records [ 5 ] ) is Dawn Brooke (Guernsey); she conceived a son at the age of 59 in 1997. [ 6 ] Erramatti Mangamma , who gave birth at the age of 73 through in-vitro fertilisation via caesarean section in the city of Hyderabad , India, currently holds the record for being the oldest living mother. She delivered twin baby girls, making her also the oldest mother to give birth to twins. [ 7 ] The previous record for being the oldest living mother was held by Daljinder Kaur Gill from Amritsar , India, who gave birth to a baby boy at age 72 through in-vitro fertilisation. Menopause typically occurs between 44 and 58 years of age. [ 8 ] DNA testing is rarely carried out to confirm claims of maternity at advanced ages, but in one large study of 12,549 African and Middle Eastern immigrant mothers confirmed by DNA testing, only two mothers were found to be older than fifty, the oldest being 52.1 years at conception (and the youngest mother 10.7 years old). [ 9 ] The risk of pregnancy complications increases as the mother's age increases. Risks associated with childbearing over the age of 50 include an increased incidence of gestational diabetes , hypertension , delivery by caesarean section , miscarriage , preeclampsia , and placenta previa . [ 2 ] [ 10 ] [ unreliable medical source? ] In comparison to mothers between 20 and 29 years of age, mothers over 50 are at almost three times the risk of low birth weight , premature birth , and extremely premature birth; their risk of extremely low birth weight, small size for gestational age , and fetal mortality is almost double. [ 11 ] Bousada died on July 11, 2009, from stomach cancer, which she developed soon after giving birth to her twins; her sons were only 2½ years old at the time. [ 169 ] Pregnancies among older women have been a subject of controversy and debate.
Some argue against motherhood late in life on the basis of the health risks involved, or out of concern that an older mother might not be able to give proper care for a child as she ages, while others contend that having a child is a fundamental right and that it is commitment to a child's wellbeing, not the parents' ages, that matters. [ 179 ] [ 180 ] [ 181 ] A survey of attitudes towards pregnancy over age 50 among Australians found that 54.6% believed it was acceptable for a post-menopausal woman to have her own eggs transferred and that 37.9% believed it was acceptable for a post-menopausal woman to receive donated ova or embryos . [ 182 ] Governments have sometimes taken actions to regulate or restrict later-in-life childbearing. In the 1990s, France approved a bill which prohibited post-menopausal pregnancy, which the French Minister of Health at the time, Philippe Douste-Blazy , said was "... immoral as well as dangerous to the health of mother and child". In Italy , the Association of Medical Practitioners and Dentists prevented its members from providing women aged 50 and over with fertility treatment. Britain's then- Secretary of State for Health , Virginia Bottomley , stated, "Women do not have the right to have a child; the child has a right to a suitable home". [ 181 ] However, in 2005, age restrictions on IVF in the United Kingdom were officially withdrawn. [ 183 ] Legal restrictions are only one of the barriers confronting women seeking IVF, as many fertility clinics and hospitals set age limits of their own. [ 168 ]
https://en.wikipedia.org/wiki/Pregnancy_over_age_50
Pregnancy-specific biological substances , which include the placenta , umbilical cord , amniotic fluid , and amniotic membrane , are being studied for a number of health uses. [ 1 ] For example, placental-derived stem cells are being studied as a potential treatment method for cell therapy . [ 2 ] Hepatocyte-like cells (HLC) are generated from differentiated human amniotic epithelial cells (hAEC), which are abundant in the placenta. [ 2 ] [ 3 ] HLC may replace hepatocytes for hepatocyte transplantation to treat acute or chronic liver damage. [ 3 ] Recent research has shown that the placenta and placenta derivatives are being investigated as regenerative cell therapies and also have immunological features. The placenta's structure has unique characteristics; it not only regulates the organ's function but also offers the possibility of efficient use in the clinic and in biotechnology. [ 4 ] According to a research study by Bhattacharya N., anemia caused by diabetes mellitus in patients with albuminuria can be treated with cord blood transfusion . [ 5 ] The study, which assessed albuminuria as albumin per gram of creatinine, reported an increase in this measure in patients who received cord blood transfusions. [ 5 ]
https://en.wikipedia.org/wiki/Pregnancy_specific_biological_substances
Pregnanediol , or 5β-pregnane-3α,20α-diol , is an inactive metabolic product of progesterone . A test can be done to measure the amount of pregnanediol in urine, which offers an indirect way to measure progesterone levels in the body. [ 1 ] From the urine of pregnant women attending London clinics, Guy Frederic Marrian isolated a substance that contained two hydroxyl groups and could be converted into a diacetate with acetic anhydride; its structural formula, however, had not yet been fully elucidated. [ 2 ] At almost the same time, Adolf Butenandt at the Chemical University Laboratory in Göttingen investigated the constituents of pregnancy urine and clarified the structure of the diol. [ 3 ] The name pregnandiol, coined by Butenandt, is derived from the Latin praegnans (pregnant) or the English words pregnant and pregnancy; this in turn gave rise to the name pregnane for the underlying parent hydrocarbon. In 1936, Venning and Browne demonstrated the presence of pregnanediol, specifically the glucuronide of pregnanediol, in pregnancy urine. Their study extracted pregnanediol from pregnancy urine and revealed that the pregnanediol concentration in urine indicates the amount of progesterone excreted. Since progesterone levels indicate the functionality of a corpus luteum, and pregnanediol accounts for 40-45% of the progesterone excreted, estimations of pregnanediol reflect corpus luteum function. However, pregnanediol concentrations vary with menstrual cycle phases, so it is essential to consider the phase of the menstrual cycle when interpreting them. [ 4 ] Furthermore, current research has demonstrated that pregnanediol concentration in urine is also a measure of ovarian activity. [ 5 ]
https://en.wikipedia.org/wiki/Pregnanediol
Pregnanediol glucuronide , or 5β-pregnane-3α,20α-diol 3α-glucuronide , is the major metabolite of progesterone and the C3α glucuronide conjugate of pregnanediol (5β-pregnane-3α,20α-diol). [ 1 ] [ 2 ] Approximately 15 to 30% of a parenteral dose of progesterone is metabolized into pregnanediol glucuronide. [ 1 ] [ 2 ] While this specific isomer is the one referred to as pregnanediol glucuronide and is the predominant form, there are actually many possible isomers of the metabolite. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Pregnanediol_glucuronide
Pregnanetriol , or 5β-pregnane-3α,17α,20α-triol , is a steroid and inactive metabolite of progesterone . Urine excretion of pregnanetriol can be measured over a period of 24 hours. Elevated urine pregnanetriol levels suggest adrenogenital syndrome . In monitoring treatment with cortisol replacement , elevated urine pregnanetriol levels indicate an insufficient dosage of cortisol. [ 1 ] [ unreliable medical source? ] Reference ranges for urinary pregnanetriol differ for females and males. [ 1 ]
https://en.wikipedia.org/wiki/Pregnanetriol
Pregnant leach solution or pregnant liquor solution ( PLS ) is acidic, metal-laden water generated from stockpile leaching and heap leaching . Pregnant leach solution is used in the solvent extraction and electrowinning (SX/EW) process. [ 1 ] [ 2 ] The portion of the original liquid that remains after the desired components have been removed by the solvent is called raffinate .
https://en.wikipedia.org/wiki/Pregnant_leach_solution
Prehensility is the quality of an appendage or organ that has adapted for grasping or holding. The word is derived from the Latin term prehendere , meaning "to grasp". The ability to grasp likely derives from a number of different origins; the most common are tree-climbing and the need to manipulate food. [ 1 ] A wide range of appendages can become prehensile. Prehensility affords animals a great natural advantage in manipulating their environment for feeding, climbing, digging , and defense. It enables many animals, such as primates, to use tools to complete tasks that would otherwise be impossible without highly specialized anatomy. For example, chimpanzees can use sticks to obtain termites and grubs in a manner similar to human fishing . However, not all prehensile organs are used for tool use; the giraffe tongue, for instance, is instead used in feeding and self-cleaning .
https://en.wikipedia.org/wiki/Prehensility
A prehormone is a biochemical substance that is secreted by glandular tissue and has minimal or no significant biological activity , but that is converted in peripheral tissues into an active hormone . Calcifediol is an example of a prehormone; it is produced by hydroxylation of vitamin D 3 (cholecalciferol) in the liver. [ 1 ] Other examples are the adrenal androgens dehydroepiandrosterone and androstenedione , which can be converted into testosterone and dihydrotestosterone . [ 2 ]
https://en.wikipedia.org/wiki/Prehormone
Prehydrated electrons are free electrons that occur in water under irradiation. [ which? ] Usually they form complexes with water molecules and become hydrated electrons . They can also react with the bases of the nucleotides dGMP and dTMP in aqueous solution. This suggests they may also react with the bases of the DNA double helix , ultimately breaking molecular bonds and causing DNA damage. This mechanism is hypothesized to be a cause of radiation damage to DNA. [ 1 ]
https://en.wikipedia.org/wiki/Prehydrated_electrons
Preimplantation factor (PIF) is a peptide secreted by trophoblast cells prior to placenta formation in early embryonic development . [ 1 ] Human embryos begin to express PIF at the 4-cell stage , with expression increasing by the morula stage and continuing to do so throughout the first trimester. [ 2 ] [ 1 ] [ 3 ] Expression of preimplantation factor in the blastocyst was discovered as an early correlate of the viability of the eventual pregnancy . [ 1 ] [ 4 ] Preimplantation factor was identified in 1994 by a lymphocyte platelet-binding assay, where it was thought to be an early biomarker of pregnancy. [ 5 ] It has a simple primary structure, a short sequence of fifteen amino acids, without any known quaternary structure . [ 6 ] A synthetic analogue of preimplantation factor (commonly abbreviated in studies as sPIF or PIF*) that has an identical amino acid sequence and mimics the normal biological activity of PIF has been developed and is commonly used in research studies, particularly those that aim to study potential adult therapeutics. [ 7 ] [ 8 ] [ 9 ] Preimplantation factor acts by paracrine signaling ; that is to say, trophoblast cells, which collectively form extra-embryonic tissues, secrete it onto the surface of the endometrium. PIF is known to influence many events in the implantation process , the process by which an early embryo implants into the uterine wall. A crucial event in human implantation is when trophoblast cells expressing preimplantation factor invade the uterine wall and found the placenta, the organ that connects the maternal blood supply, and with it nutrients, to the growing fetus. This requires changes to the histology of the endometrium, a process called decidualisation . Upregulated expression of PIF increases the presence of integrins on the endometrial wall, promoting the embryo's adhesion to the uterine wall. [ 10 ] PIF is thought to modulate and facilitate the depth of the trophoblast's invasion into the uterus at physiological doses. [ 1 ] Maternal immune system regulation is also a critical event in implantation, as the early embryo is essentially a partial allograft , that is, a tissue that is not recognised as fully identical to that of the mother. [ 11 ] [ 12 ] Consequently, the embryo may be rejected and attacked if it is not tolerated, an event that normally causes spontaneous miscarriage . [ 11 ] [ 12 ] Preimplantation factor regionally modulates the mother's immune system, decreasing the activity of peripheral maternal leukocytes , reducing inflammation and consequently also increasing the chance that the embryo will be tolerated . [ 13 ] Preimplantation factor is also an anti-apoptotic effector , maintaining trophoblast cell integrity through the intrinsic p53 signalling pathway . [ 14 ] Moreover, preimplantation factor protects the central nervous system by downregulating pathways that promote neuron death and by promoting neurogenesis. [ 7 ] [ 9 ] PIF is also known to protect against neonatal prematurity and to rescue embryos from toxic uterine environments. [ 7 ] [ 11 ] [ 15 ] Due to its multiple autoimmune and neuroprotective effects in the embryonic environment, preimplantation factor has been studied in clinical environments as a potential novel therapy for reproductive, autoimmune and neurodegenerative diseases. PIF has been successfully studied as a therapy for recurrent pregnancy loss , as it is able to rescue non-viable embryos from a hostile maternal environment.
[ 16 ] It has also been shown to prevent diabetes mellitus type 1 in mice due to its ability to modulate immunological tolerance in the pancreas. [ 8 ] Finally, it reverses paralysis and neuroinflammation whilst promoting neurogenesis in adult patients with neurodegenerative diseases . [ 11 ] [ 17 ] It also may be able to decrease the severity of brain injuries by modulating the behaviour of supporting cells in the nervous system. [ 9 ] Preimplantation factor has a simple primary peptide structure with a 15 amino acid sequence (MVRIKPGSANKPSDD). [ 18 ] As the regulation of the maternal immune system is a requisite for successful implantation, the immune system shows different characteristics in pregnant women and non-pregnant women. In 1994, preimplantation factor was isolated by a lymphocyte platelet-binding assay that compared immune responses and proteins found in pregnant women and non-pregnant women. [ 5 ] The assay also compared immune responses with men to verify if the proteins were specific to female reproductive tissues. [ 5 ] Results generated in the preliminary study showed that "a preimplantation factor" was being expressed exclusively in pregnant women. [ 5 ] On the fourth day after embryo transfer in women who had undergone successful in-vitro fertilisation , this protein was also found, suggesting that it had a role in the determination of the viability of the embryo. [ 5 ] Subsequent studies, most seminally including a 1996 study that partially characterised the biological activity of PIF, adopted and established the current term "preimplantation factor" as the name for this novel peptide. [ 6 ] Trophoblast cells form the outer lining of the blastocyst in preimplantation development, eventually forming more differentiated extra-embryonic tissues including the placenta. [ 19 ] Before this differentiation can occur the embryo's invasion and infiltration into the uterine wall must be tightly regulated by both maternal and foetal signals, including secretion of PIF by trophoblast cells. [ 20 ] In particular, preimplantation factor is thought to have a paracrine effect on the decidualisation process, which ultimately primes trophoblast cells to invade appropriately into the endometrium. [ 1 ] When compared to non-functional short peptides at the same concentration, application of PIF to the endometrium at the implantation stage promoted deeper invasion of the embryo. [ 1 ] This effect was not observed to occur indefinitely with successive increases of concentration and any artificial increases of PIF above the human physiological concentration (approximately 50 nmol/L ) did not meaningfully increase the invasion of the embryo. [ 1 ] Consequently, it is thought that PIF is limited in its promotion of trophoblast invasion by maternal signals. [ 1 ] [ 12 ] The outermost layer of the uterine wall is an epithelial tissue called the endometrium that requires cell surface adhesion molecules called integrins to adhere the embryo. This additional paracrine effect of PIF has been shown to increase the expression of the integrin molecule α2β3 on the cell membranes of cells in the endometrium. [ 10 ] Integrins are a broad class of cell adhesion molecules that allow cells to bind to extracellular matrix . [ 10 ] In this way, they assist the entire embryo in binding to the uterine wall, an important event in successfully generating a placenta. [ 10 ] The embryo is immunologically characterised as a partial allograft as it is not a maternal tissue. 
[ 3 ] [ 11 ] During fertilisation , a paternal spermatozoon fuses with a maternal oocyte, producing a zygote . Phenotypically, the zygote expresses certain epitopes that are controlled by genes inherited from the father, making the embryo a foreign material. In order for successful implantation to occur, the maternal immune system must tolerate the presence of the embryo while not completely inactivating its innate responsiveness to foreign pathogens. This process is not always successful; indeed, maternal immune rejection of the embryo is a common and well-characterised cause of recurrent pregnancy loss. [ 16 ] Preimplantation factor has a significant role in signalling this grafting behaviour; it has, for instance, been shown to signal an anti-inflammatory response in a broad range of peripheral blood mononuclear cells . [ 3 ] PIF also impacts similar cytoskeletal proteins in CD14+ , CD8+ and CD4+ cells, suggesting that it has a broad and integrative role in modulating the immune system of the mother. [ 21 ] In particular, PIF inhibits the process of platelet aggregation in helper T lymphocytes and skeletal proteins in cytotoxic T cells. [ 21 ] While PIF attenuates or modulates the immune system, it does not affect the response to other pathogens or foreign material. [ 11 ] This modulatory effect on immunological tolerance is responsible for a strong correlation between PIF expression and the viability of pregnancy. [ 4 ] The expression of preimplantation factor in the embryo is strongly correlated with the likelihood of a live birth . [ 4 ] [ 21 ] This observed viability is not solely due to PIF's ability to mediate the implantation and allografting process but also due to its ability to promote the upregulation and integrity of certain intracellular targets that are positively associated with normal developmental processes. [ 21 ] For instance, PIF is known to target the enzyme disulfide isomerase , which reduces intracellular oxidative stress, and also heat-shock proteins , which are molecular chaperones that ensure proteins produced by a cell will fold into the correct conformation for their function. [ 22 ] Additionally, PIF is known to promote the production of vital cytoskeletal proteins, including actin and tubulin, that are required for the correct morphological development of nerve axons and the viscera of vital organs. [ 15 ] Axons use cylindrical tubulin polymers called microtubules to transport intracellular material between the cell body and the axon terminal and require actin to form synapses . [ 23 ] These proteins are hence important for the organisation and function of the growing nervous system. Additionally, when uterine serum from patients with recurrent pregnancy loss is applied to embryos that are positive for PIF, they display the capacity to resist its toxicity and are able to survive. [ 22 ] Combined, these observations and the combination of intracellular effects suggest that PIF has multifaceted impacts directed towards viable pregnancy. In the prenatal environment, PIF has neuroprotective impacts: it protects the growing fetus against neonatal prematurity , preventing the fetus from being delivered before adequate neural development has taken place. [ 7 ] [ 11 ] The neurogenic effects of PIF are not isolated to the prenatal environment; in fact, PIF is thought to have impacts throughout life. In adult models, PIF has multiple neurogenic effects: it promotes the growth of neurons and reduces neuroinflammation.
[ 7 ] [ 11 ] [ 17 ] It is thought to have these impacts by modulating signalling through the ubiquitous protein kinase A and protein kinase C intracellular signalling pathways. [ 7 ] PIF also inhibits microRNA let-7, a sequence that is highly upregulated in the central nervous system. The Let-7 system has been associated with cell death in neurons, and PIF is known to inhibit this process from occurring. [ 9 ] In rats that were induced to have a hypoxic-ischemic brain injury, PIF was able to promote neuron growth, reduced detrimental responses by neuroglia and was able to generate a significant cerebral cortex volume, suggesting it could rescue rats from side effects of brain damage. [ 9 ] PIF also has a series of anti-apoptotic impacts in human extravillous trophoblasts, mediated by the TP53 gene . [ 14 ] Apoptosis is a controlled cell death process that must not occur if a cell is to proliferate. PIF has specific anti-apoptotic impacts by reducing the phosphorylation of the p53 protein at the serine-15 residue. Without phosphorylation p53 is unstable and undergoes ubiquitylation , signalling the trophoblast and endometrial cells to degrade it in proteasomes and attenuating downstream apoptotic effects. PIF, in particular, has been correlated with increasing the expression of anti-apoptotic effector BCL2 and decreasing the expression of pro-apoptotic effector BAX . [ 14 ] BCL2, which is upregulated by PIF, ensures that cytochrome c remains within the inner mitochondrial membrane and hence does not trigger the production of an apoptosome in the cell cytosol. BAX, which is downregulated by PIF, produces transmembrane transport channels that liberate cytochrome c, triggering apoptosis. Collectively, these biochemical effects show that PIF signals against the internal mechanisms of apoptosis in extravillous trophoblast cells, allowing them to proliferate before they implant into the uterine wall. Given its multifaceted functionality, including autoimmune, neuroprotective and anti-apoptotic effects, preimplantation factor has been extensively studied as a potential therapeutic agent in both reproductive and non-reproductive medical contexts. PIF is also advantageous because of its easily replicable biochemical structure. [ 6 ] In reproductive contexts, PIF has been studied as a treatment for infertility . In women with recurrent pregnancy loss, treatment with PIF is able to rescue a non-viable embryo and promotes a successful implantation and pregnancy. [ 16 ] It does this by mitigating the toxic influence of certain factors that naturally occur in the uterus, such as acidity. [ 16 ] PIF has also been studied in a range of other non-reproductive contexts. Due to the ability of PIF to attenuate the attack mechanisms of mononuclear immune cells, it has been implicated as a successful treatment for autoimmune diseases including diabetes mellitus type 1 in mice studies. Diabetes mellitus type 1 is characterised by the misrecognition of pancreatic beta islet cells as foreign material. [ 8 ] These studies show that PIF is able to preserve the pancreatic beta islet cell's integrity, rescuing them from the autoimmune attacks which cause diabetes. [ 8 ] In adult models, PIF also reverses the pathological neuroinflammation caused by autoimmune diseases such as multiple sclerosis . [ 17 ] It also reverses paralysis and promotes growth of neurons in patients with neurodegeneration. [ 11 ]
https://en.wikipedia.org/wiki/Preimplantation_factor
In electromagnetism , the Preisach model of hysteresis is a model of magnetic hysteresis . Originally, it generalized hysteresis as the relationship between the magnetic field and magnetization of a magnetic material as the parallel connection of independent relay hysterons . It was first suggested in 1935 by Ferenc (Franz) Preisach in the German academic journal Zeitschrift für Physik . [ 1 ] In the field of ferromagnetism , the Preisach model is sometimes thought to describe a ferromagnetic material as a network of small independently acting domains , each magnetized to a value of either h or −h. A sample of iron , for example, may have evenly distributed magnetic domains, resulting in a net magnetic moment of zero. Mathematically similar models seem to have been independently developed in other fields of science and engineering. One notable example is the model of capillary hysteresis in porous materials developed by Everett and co-workers. Since then, following the work of people like M. Krasnosel'skii, A. Pokrovskii, A. Visintin, and I. D. Mayergoyz, the model has become widely accepted as a general mathematical tool for the description of hysteresis phenomena of different kinds. [ 2 ] [ 3 ] The relay hysteron is the fundamental building block of the Preisach model. It is described as a two-valued operator denoted by R_{α,β}. Its input-output map takes the form of a loop: for a relay of magnitude 1, α defines the "switch-off" threshold and β defines the "switch-on" threshold. Graphically, if the input x is less than α, the output y is "low" or "off." As x increases, the output remains low until x reaches β, at which point the output switches "on." Further increasing x produces no change. When x decreases, y does not go low again until x reaches α. Thus the relay operator R_{α,β} traces a loop, and its next state depends on its past state. Mathematically, the output of R_{α,β} is expressed as y(x) = \begin{cases} 1 & \text{if } x \geq \beta \\ 0 & \text{if } x \leq \alpha \\ k & \text{if } \alpha < x < \beta \end{cases}, where k = 0 if the last time x was outside the interval α < x < β it was in the region x ≤ α, and k = 1 if the last time x was outside that interval it was in the region x ≥ β. This definition of the hysteron shows that the current value y of the complete hysteresis loop depends upon the history of the input variable x. The Preisach model consists of many relay hysterons connected in parallel, given weights, and summed; this can be visualized by a block diagram. Each of these relays has different α and β thresholds and is scaled by a weight μ. With an increasing number of relays N, the true hysteresis curve is approximated better.
In the limit as N approaches infinity, we obtain the continuous Preisach model. [ 4 ] [ 5 ] One of the easiest ways to look at the Preisach model is through a geometric interpretation. Consider a plane of coordinates (α, β). On this plane, each point (α_i, β_i) is mapped to a specific relay hysteron R_{α_i,β_i}, so each relay can be plotted on this so-called Preisach plane with its (α, β) values. Depending on their distribution on the Preisach plane, the relay hysterons can represent hysteresis with good accuracy. We consider only the half-plane α < β, as any other case has no physical equivalent in nature. Next, we take a specific point on the half-plane and build a right triangle by drawing two lines parallel to the axes, both from the point to the line α = β. We now introduce the Preisach density function, denoted μ(α, β), which describes the number of relay hysterons at each distinct pair of values (α_i, β_i). By default, μ(α, β) = 0 outside the right triangle. A modified formulation of the classical Preisach model has been presented, allowing analytical expression of the Everett function. [ 6 ] This makes the model considerably faster and especially adequate for inclusion in electromagnetic field computation or electric circuit analysis codes. The vector Preisach model is constructed as the linear superposition of scalar models. [ 7 ] To take account of the uniaxial anisotropy of the material, the Everett functions are expanded in Fourier coefficients; in this case, the measured and simulated curves are in very good agreement. [ 8 ] Another approach uses a different relay hysteron: closed surfaces defined on the 3D input space. In general, a spherical hysteron is used for vector hysteresis in 3D, [ 9 ] and a circular hysteron is used for vector hysteresis in 2D. [ 10 ] The Preisach model has been applied to model hysteresis in a wide variety of fields, including the study of irreversible changes in soil hydraulic conductivity as a result of saline and sodic conditions, [ 11 ] the modeling of soil water retention [ 12 ] [ 13 ] [ 14 ] [ 15 ] and the effect of stress and strains on soil and rock structures. [ 16 ]
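As an illustration of the discrete model described above, the following is a minimal sketch, not taken from the article: the class names, the uniform weights, and the random threshold distribution are illustrative assumptions. It connects weighted relay hysterons in parallel, using the article's convention that α is the switch-off and β the switch-on threshold with α < β; sweeping the input up and then back down gives different outputs at the same input value, tracing a hysteresis loop.

```python
# A minimal sketch of a discrete Preisach model: weighted relays in parallel.
import random

class Relay:
    """Two-valued relay hysteron R_{alpha,beta} with output 0 or 1."""
    def __init__(self, alpha, beta, state=0):
        assert alpha < beta
        self.alpha, self.beta, self.state = alpha, beta, state

    def update(self, x):
        if x >= self.beta:
            self.state = 1      # switch on
        elif x <= self.alpha:
            self.state = 0      # switch off
        # otherwise keep the previous state (this is the hysteresis)
        return self.state

class DiscretePreisach:
    """Parallel connection of N weighted relays; output = sum of mu_i * R_i(x)."""
    def __init__(self, n=500, lo=-1.0, hi=1.0, seed=0):
        rng = random.Random(seed)
        self.relays, self.weights = [], []
        for _ in range(n):
            a, b = sorted((rng.uniform(lo, hi), rng.uniform(lo, hi)))
            if a == b:
                continue
            self.relays.append(Relay(a, b))
            self.weights.append(1.0 / n)   # assumed uniform Preisach density

    def __call__(self, x):
        return sum(w * r.update(x) for w, r in zip(self.weights, self.relays))

if __name__ == "__main__":
    model = DiscretePreisach()
    # Sweep the input up and back down; the two branches differ at the same x.
    up = [model(x / 50.0) for x in range(-50, 51)]
    down = [model(x / 50.0) for x in range(50, -51, -1)]
    print(f"output at x=0 on the way up:   {up[50]:.3f}")
    print(f"output at x=0 on the way down: {down[50]:.3f}")
```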
https://en.wikipedia.org/wiki/Preisach_model_of_hysteresis
Preload is an engineering term with several meanings. In the general sense, it refers to the internal application of stress to certain mechanical systems. The most common usage is to describe the load applied to a fastener as a result of its being installed, i.e., before any external loads are applied (e.g., by tightening the nut on a bolt). Preload in such cases is important for several reasons. First, a tightened bolt experiences only a small fraction of any external load that is applied later, so that a fully tightened bolt can (depending on the exact application) sustain a much greater load than a loosely tightened bolt. Second, a nut that is correctly tightened will resist becoming loose under the influence of vibration, temperature cycling, and similar effects. Internal stress applied to a bearing through negative clearance is known as bearing preloading. Advantages of preloading include the following: maintaining axial and radial position for accurate displacements and angular movements; increasing bearing rigidity; preventing sliding or gyroscope-like movements, especially at high acceleration or rotation rates; and maintaining the relative position of bearing elements. Preloading methods include position preload and constant-pressure preload. [ 1 ] The term is also used in specimen testing, for a stage in which the crosshead moves to load the specimen to a specified value before the test starts; data are not captured during the preload segment. When tensile specimens are initially placed into testing grips, they can be subjected to small compressive forces. These forces can cause specimens to bend imperceptibly, producing inaccurate and inconsistent results. Establishing a small preload as part of the test method eliminates those compressive forces on specimens and improves the repeatability of results. In civil engineering, soil preloading refers to the process of applying a compressive load to a soil or rock layer to consolidate it before starting construction. [ 2 ] The applied vertical stress helps prevent uneven settlement due to the weight of structures built upon the soil. Preload is also very important for large mechanical and high-performance systems such as large telescopes. [ 3 ] By tensioning, preloading increases the natural frequency of a structure, avoiding resonance due to external disturbances. It also prevents buckling if stresses change depending on position in certain systems. In the particular case of bearings and fasteners, preload reduces or cancels backlash or dead zones. In addition, preload helps to limit the loads applied to a system.
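For bolted joints, the preload obtained from a given tightening torque is commonly estimated with the short-form relation T = K · d · F, where K is an empirical "nut factor". The sketch below is a rough illustration only, not taken from the text: the nut factor of 0.2 (roughly typical for dry steel threads) and the M10 example figures are assumptions, and real joints should use the fastener manufacturer's data.

```python
# A minimal sketch of the short-form torque-preload relation T = K * d * F.
def tightening_torque(preload_n: float, diameter_m: float, nut_factor: float = 0.2) -> float:
    """Torque (N*m) needed to reach a target preload, by the short-form relation."""
    return nut_factor * diameter_m * preload_n

def preload_from_torque(torque_nm: float, diameter_m: float, nut_factor: float = 0.2) -> float:
    """Preload (N) produced by a given tightening torque, inverting the same relation."""
    return torque_nm / (nut_factor * diameter_m)

# Illustrative example: an M10 bolt (d = 10 mm) tightened to a 20 kN preload.
torque = tightening_torque(preload_n=20_000, diameter_m=0.010)
print(f"required torque: {torque:.1f} N*m")                   # about 40 N*m
print(f"preload at that torque: {preload_from_torque(torque, 0.010):.0f} N")
```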
https://en.wikipedia.org/wiki/Preload_(engineering)
In organic chemistry , transannular strain (also called Prelog strain after chemist Vladimir Prelog ) is the unfavorable interaction of ring substituents on non-adjacent carbons. These interactions, called transannular interactions, arise from a lack of space in the interior of the ring , which forces substituents into conflict with one another. In medium-sized cycloalkanes , which have between 8 and 11 carbons constituting the ring, transannular strain can be a major source of the overall strain , especially in some conformations , to which there is also a contribution from large-angle strain and Pitzer strain . [ 1 ] [ 2 ] In larger rings, transannular strain drops off until the ring is sufficiently large that it can adopt conformations devoid of any negative interactions. [ 1 ] [ 3 ] Transannular strain can also be demonstrated in other cyclic organic molecules, such as lactones , lactams , ethers , cycloalkenes , and cycloalkynes ; these compounds are significant because they are particularly useful in the study of transannular strain. Furthermore, transannular interactions are not restricted to conflicts between hydrogen atoms, but can also arise from larger, more complicated substituents interacting across a ring. By definition, strain implies destabilization, so molecules with large amounts of transannular strain should have higher energies than those without. Cyclohexane, for the most part, is without strain and is therefore quite stable and low in energy. Rings smaller than cyclohexane , like cyclopropane and cyclobutane , have significant strain caused by small-angle strain , but no transannular strain. While there is no small-angle strain in medium-sized rings, there is what is called large-angle strain , and rings with more than nine members take on some angle and torsional strain to relieve part of the transannular strain. [ 1 ] [ 3 ] The relative energies of cycloalkanes increase as the size of the ring increases, with a peak at cyclononane (nine ring members). Beyond this point, the flexibility of the rings increases with increasing size, which allows conformations that significantly mitigate transannular interactions. [ 1 ] Rates of reaction can be affected by the size of rings. Essentially, each reaction should be studied on a case-by-case basis, but some general trends have been observed. Molecular mechanics calculations of the strain energy difference ΔSI between an sp2 and an sp3 state in cycloalkanes show linear correlations with the rates (as log k) of many reactions involving a transition between sp2 and sp3 states, such as ketone reduction, alcohol oxidation or nucleophilic substitution; the contribution of transannular strain is below 3%. [ 4 ] Rings with transannular strain have faster SN1 , SN2 , and free-radical reactions compared to most smaller and normal-sized rings; five-membered rings are an exception to this trend. On the other hand, some nucleophilic addition reactions involving addition to a carbonyl group generally show the opposite trend: smaller and normal-sized rings, with five-membered rings again being the anomaly, have faster reaction rates, while those with transannular strain are slower. [ 5 ] One specific example is a study of the relative rates of an SN1 solvolysis reaction across ring sizes.
Rings of various sizes, ranging from four to seventeen members, were used to compare the relative rates and better understand the effect of transannular strain on this reaction. The solvolysis reaction in acetic acid involved the formation of a carbocation as the chloride ion left the cyclic molecule. This study fits the general trend noted above that rings with transannular strain show increased reaction rates compared to smaller rings in SN1 reactions. [ 5 ] The regioselectivity of water elimination is highly influenced by ring size. When water is eliminated from cyclic tertiary alcohols by an E1 route , three major products are formed. The semicyclic isomer (so called because the double bond is shared by a ring atom and an exocyclic atom) and the (E) endocyclic isomer are expected to predominate; the (Z) endocyclic isomer is not expected to be formed until the ring size is large enough to accommodate the awkward angles of the trans configuration. The exact population of each product relative to the others differs considerably depending upon the size of the ring involved. As the ring size increases, the proportion of the semicyclic isomer decreases rapidly and that of the (E) endocyclic isomer increases, but after a certain point the semicyclic isomer begins to increase again. This can be attributed to transannular strain, which is significantly reduced in the (E) endocyclic isomer because it has one less substituent in the ring than the semicyclic isomer. [ 6 ] One of the effects of transannular strain is the difficulty of synthesizing medium-sized rings. Illuminati et al. have studied the kinetics of intramolecular ring closing using the simple nucleophilic substitution reaction of ortho-bromoalkoxyphenoxides. Specifically, they studied the ring closing of 5- to 10-carbon cyclic ethers. They found that as the number of carbons increased, so did the enthalpy of activation for the reaction, indicating that strain within the cyclic transition states is higher when there are more carbons in the ring. Since transannular strain is the largest source of strain in rings of this size, the larger enthalpies of activation result in much slower cyclizations due to transannular interactions in the cyclic ethers. [ 7 ] Transannular strain can be eliminated by the simple addition of a carbon bridge. E,Z,E,Z,Z-[10]-annulene is quite unstable; while it has the requisite number of π-electrons to be aromatic, they are for the most part isolated, and the molecule itself is very difficult to observe. However, by the simple addition of a methylene bridge between the 1 and 6 positions, a stable, flat, aromatic molecule can be made and observed. [ 8 ]
https://en.wikipedia.org/wiki/Prelog_strain
Premature convergence is an unwanted effect in evolutionary algorithms (EA), a metaheuristic that mimics the basic principles of biological evolution as a computer algorithm for solving an optimization problem . The effect means that the population of an EA has converged too early, resulting in a suboptimal solution. In this context, the parental solutions, through the aid of genetic operators , are not able to generate offspring that are superior to, or outperform, their parents. Premature convergence is a common problem found in evolutionary algorithms, as it leads to a loss, or convergence, of a large number of alleles, subsequently making it very difficult to search for a specific gene in which those alleles were present. [ 1 ] [ 2 ] An allele is considered lost if all individuals in the population share the same value for the gene in which that allele was present. An allele is, as defined by De Jong, considered converged when 95% of a population share the same value for a certain gene. [ 3 ] Several strategies exist to regain genetic variation once it has been lost. Genetic variation can also be regained by mutation, though this process is highly random. A general strategy to reduce the risk of premature convergence is to use structured populations instead of the commonly used panmictic ones . It is hard to determine when premature convergence has occurred, and it is equally hard to predict its presence in the future. [ 2 ] [ 1 ] One measure is to use the difference between the average and maximum fitness values, as used by Patnaik & Srinivas, to then vary the crossover and mutation probabilities. [ 6 ] Population diversity is another measure which has been extensively used in studies to measure premature convergence. However, although it is widely accepted that a decrease in population diversity directly leads to premature convergence, there have been few studies on the analysis of population diversity itself. In other words, an argument that a study prevents premature convergence lacks robustness unless the study specifies what definition of population diversity it uses. [ 7 ] There are a number of presumed or hypothesized causes for the occurrence of premature convergence. Rechenberg introduced the idea of self-adaptation of mutation distributions in evolution strategies . [ 8 ] According to Rechenberg, the control parameters for these mutation distributions evolved internally through self-adaptation, rather than being predetermined. He called it the 1/5-success rule of the (1 + 1)-ES evolution strategy: the step-size control parameter is increased by some factor if the relative frequency of positive mutations over a given period is larger than 1/5, and decreased if it is smaller than 1/5. Self-adaptive mutations may very well be one of the causes of premature convergence. [ 7 ] Self-adaptive mutation can enhance the accurate location of optima, as well as accelerate the search for them. This has been widely recognized, though the underpinnings of this mechanism have been poorly studied, as it is often unclear whether the optimum found is local or global. [ 7 ] Self-adaptive methods can achieve global convergence to the global optimum, provided that the selection method uses elitism and that the rule of self-adaptation does not interfere with the mutation distribution's property of assigning a positive minimum probability to hitting any random subset of the search space.
[ 9 ] This holds for non-convex objective functions whose lower level sets are bounded and of non-zero measure. A study by Rudolph suggests that self-adaptation mechanisms in elitist evolution strategies do resemble the 1/5-success rule and could very well become trapped in a local optimum with positive probability. [ 7 ] Most EAs use unstructured or panmictic populations, in which basically every individual in the population is eligible for mate selection based on fitness. [ 10 ] [ 11 ] Thus, the genetic information of an only slightly better individual can spread through a population within a few generations, provided that no other, better offspring is produced during this time. Especially in comparatively small populations, this can quickly lead to a loss of genotypic diversity and thus to premature convergence. [ 1 ] A well-known countermeasure is to switch to alternative population models which introduce substructures into the population [ 12 ] [ 13 ] that preserve genotypic diversity over a longer period of time and thus counteract the tendency towards premature convergence. This has been shown for various EAs such as genetic algorithms, [ 12 ] the evolution strategy, [ 14 ] other EAs [ 15 ] and memetic algorithms . [ 15 ] [ 16 ]
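De Jong's 95% criterion mentioned above is straightforward to compute for a bitstring-encoded population. The following is a minimal illustrative sketch, not code from the cited literature; the population layout and the 0.95 threshold simply follow the criterion as described in the text.

```java
import java.util.Random;

/** Minimal sketch: flag premature convergence in a bitstring GA population
 *  using De Jong's 95% allele-convergence criterion described above. */
public class ConvergenceCheck {
    /** An allele (gene position) counts as converged if at least 95% of the
     *  population shares the same value at that position. */
    static boolean isAlleleConverged(boolean[][] population, int gene) {
        int ones = 0;
        for (boolean[] individual : population) {
            if (individual[gene]) ones++;
        }
        double share = Math.max(ones, population.length - ones) / (double) population.length;
        return share >= 0.95;
    }

    /** Fraction of converged alleles; a value close to 1 signals that the
     *  population has lost most of its genotypic diversity. */
    static double convergedFraction(boolean[][] population) {
        int genes = population[0].length;
        int converged = 0;
        for (int g = 0; g < genes; g++) {
            if (isAlleleConverged(population, g)) converged++;
        }
        return converged / (double) genes;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        boolean[][] population = new boolean[100][32];
        for (boolean[] ind : population)
            for (int g = 0; g < ind.length; g++)
                ind[g] = rng.nextDouble() < 0.02;   // nearly homogeneous population, so most alleles count as converged
        System.out.printf("converged alleles: %.0f%%%n", 100 * convergedFraction(population));
    }
}
```

The same monitoring loop could instead track the difference between average and maximum fitness, as in the adaptive probabilities of Patnaik & Srinivas cited above.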
https://en.wikipedia.org/wiki/Premature_convergence
Premature thelarche (PT) is a medical condition , characterised by isolated breast development in female infants. It occurs in females younger than 8 years, with the highest occurrence before the age of 2. PT is rare, occurring in 2.2-4.7% of females aged 0 to 2 years old. [ 1 ] The exact cause of the condition is still unknown, but it has been linked to a variety of genetic , dietary and physiological factors. [ 2 ] PT is a form of Incomplete Precocious Puberty (IPP). IPP is the presence of a secondary sex characteristic in an infant, without a change in their sex hormone levels. Central Precocious Puberty (CPP) is a more severe condition than IPP. CPP is the presentation of secondary sex characteristics, with a change in sex hormones due to alteration of the hypothalamic-pituitary-gonadal (HPG) axis . [ 1 ] CPP is an aggressive endocrine disorder with harmful developmental consequences for the patient. At the presentation of PT, diagnostics are used to ensure it is not early stage CPP. CPP can be differentiated from PT through biochemical testing, ultrasounds and ongoing observation . [ 3 ] There is no treatment for PT but regular observation is important to ensure it does not progress to CPP. CPP diagnosis is important as treatment is necessary. [ 1 ] Premature thelarche is breast hypertrophy before puberty. This form of hypertrophy is an increase in breast tissue . PT occurs in pre-pubescent females, under the age of 8, having a peak occurrence in the first two years of life. [ 4 ] The breast development is usually bilateral : both breasts show development. In some cases development may be unilateral : one breast develops. [ citation needed ] There are four patterns of PT development. Most patients have hypertrophy followed by complete loss of the excess breast tissue (51% of cases) or loss of most excess tissue, but some remains until puberty (36% of cases). Less commonly patients have ongoing patterns of thelarche: 9.7% suffer from a cyclic pattern where the size of the breast tissue varies over time, and 3.2% experience continual increase in tissue size. [ 1 ] The main symptom of PT is enlarged breast tissue in infants. Estrogen 's role in PT also leads to increased bone age and growth in some cases. [ 5 ] In PT these secondary symptoms are minimal: bone age only varies from actual age by a few months and growth velocity only slightly varies from the norm. Diagnostic tests will distinguish these PT secondary symptoms from the more severe bone aging and growth occurring in early CPP . [ 3 ] The direct pathophysiology behind PT is still unknown, but there are many postulated causes. [ 2 ] PT is linked to increased sensitivity of the breast tissue to estradiol , an estrogen derivative, in certain prepubertal individuals. [ 1 ] Sporadic estrogen or estradiol production in the adrenal glands , follicles or ovarian cysts is also linked to the condition. [ 2 ] [ 6 ] Follicle Stimulating Hormone (FSH) is secreted from the anterior pituitary . FSH plays a key role in development, growth and puberty, thus it is suspected to play a role in PT. Gonadotropin-releasing hormone (GnRH) stimulation testing in some patients with PT has shown a dominant response from FSH. This response is linked to activating mutations in the FSH receptor and Gs-a subunit in PT. Genetic investigation indicated these mutations only account for a few cases of PT. [ 2 ] [ 7 ] PT may also be caused by transient partial activation of the HPG axis . 
Partial activation would release a surplus of FSH from the anterior pituitary without further disruption of the HPG axis. [ 6 ] Consumption of or exposure to certain endocrine disrupters has also been linked to PT. [ 2 ] PT is the benign growth of breasts in infants, while CPP is a condition that involves the frequent activation of the HPG axis in patients. PT does not require treatment, as the condition is limited to enlarged breast tissue that usually subsides with time. CPP is associated with a wider range of symptoms including thelarche , pubic hair growth, accelerated bone aging , increased growth velocity and early epiphyseal growth . If an individual is affected by CPP, they will need to begin treatment immediately. CPP is treated with luteinizing hormone (LH) releasing hormone agonists. PT can impact growth velocity and bone age slightly, but CPP affects these characteristics to the point of detriment to the adult stature . [ 1 ] Patients with suspected PT must undergo diagnostic testing to ensure it is not CPP or exaggerated thelarche, the intermediate stage before CPP. [ 3 ] Notable hormone differences occur between CPP and PT patients, so studying these hormone levels is the main biochemical diagnostic used in CPP. [ 4 ] Individuals with CPP usually have higher basal LH levels and LH:FSH ratios. [ 1 ] [ 4 ] Few PT patients, 9 to 14%, are predicted to develop CPP. [ 1 ] [ 4 ] Observation allows clinicians to identify the presentation of CPP-indicative symptoms in PT patients. No diagnostic tests can indicate if a PT patient is at risk of developing CPP. [ 4 ] Premature thelarche does not require treatment. In PT, breast hypertrophy will usually stop completely and patients will experience regression of the breast tissue over 3 to 60 months. Less commonly, patients may remain with residual breast tissue or continue through cycles of breast hypertrophy and regression until puberty. [ 1 ] Diagnostics are utilised in individuals with PT, especially at the presentation of other secondary sex characteristics. Diagnostics aim to ensure PT patients are not suffering from CPP . [ 1 ] Pelvic ultrasounds are important in diagnosing CPP. [ 3 ] Patients with CPP have an increased ovary and uterus size. The ovary and uterus volume of CPP patients is similar to that of females undergoing puberty. [ 1 ] The pelvic ultrasound is problematic as a diagnostic, as there is not a specific cut-off for the uterine and ovary volumes that indicates the patient has CPP. Patients with PT should have a uterine and ovarian volume within the normal range for their age. Pelvic ultrasounds are a desirable diagnostic as they are non-invasive and easy to continually review. The pelvic ultrasound should be paired with biochemical tests to determine the presence of CPP. [ 3 ] Biochemical tests study the hormone levels in patients. CPP patients have elevated LH levels and peak LH:FSH ratios when compared to PT patients. It is hard to use LH as a diagnostic for CPP, as the LH assay has varying sensitivity and specificity. [ 1 ] The GnRH stimulation test is the main diagnostic biochemical test used to distinguish PT from CPP. [ 3 ] The GnRH test demonstrates the pituitary responsiveness to GnRH. GnRH stimulates the release of LH and FSH from the anterior pituitary . The peak LH:FSH ratio in CPP patients is similar to the ratio of pubertal females. Females with PT demonstrate an LH:FSH ratio lower than that of pubertal females. 
[ 8 ] A disadvantage of the GnRH stimulation test is that it takes a long time to perform and requires multiple collections from the patient, making the process time-consuming and inconvenient. The test is highly specific but has low sensitivity, as the LH hormone response is usually observed in later stages of CPP. [ 3 ] There are also overlaps in the expected values in the GnRH test results of individuals with CPP and PT. [ 1 ] The diagnostic inconsistency in CPP means that a combination of pelvic ultrasounds and biochemical tests should be paired with observation, to ensure PT does not progress to CPP. [ 1 ] Natural commodities like fennel , lavender and tea tree oils have been linked to PT. Lavender and tea tree oil have weak estrogenic activities. These estrogenic properties may cause an imbalance in endocrine signalling pathways , leading to PT in regular users of these products. [ 1 ] Fennel tea has been studied as an endocrine disrupter linked to PT. Fennel seed oil contains anethole , a compound with estrogenic effects. The tea contains fennel seed oil and regular use results in increased estradiol levels in the infant. Infants with fennel-tea-related PT were given the tea as a homeopathic remedy for restlessness . The tea was consumed for at least four months before the presentation of PT symptoms. PT resulting from fennel tea subsides approximately six months after stopping the use of fennel tea. [ 9 ] Leptin is an adipocyte hormone that has important implications for puberty and sex hormone secretion . Increased leptin has been linked to estrogen and estradiol secretion. Leptin has key roles in maintaining age-appropriate body composition and desired weight . Leptin receptors are also found in mammary epithelial cells and leptin has been observed as a growth factor in breast tissue. Increased leptin levels have been observed in some cases of PT. The increase in leptin levels causes increased estradiol levels and development of breast tissue. [ 6 ] The form of PT with fluctuating hypertrophy in patients has been linked to activating mutations in the GNAS1 gene. This mutation accounts for a small number of cases of PT. [ 5 ] [ 7 ]
https://en.wikipedia.org/wiki/Premature_thelarche
Premelting (also surface melting ) refers to a quasi-liquid film that can occur on the surface of a solid even below the melting point ( $T_m$ ). The thickness of the film is temperature ( $T$ ) dependent. This effect is common to all crystalline materials. Premelting shows its effects in frost heave and, taking grain boundary interfaces into account, maybe even in the movement of glaciers . Considering a solid–vapour interface, complete and incomplete premelting can be distinguished. During a temperature rise from below to above $T_m$ , in the case of complete premelting the solid melts homogeneously from the outside to the inside; in the case of incomplete premelting the liquid film stays very thin during the beginning of the melting process, and droplets start to form on the interface. In either case, the solid always melts from the outside inwards, never from the inside. The first to mention premelting may have been Michael Faraday in 1842, for ice surfaces. [ 1 ] He compared the effect which holds a snowball together to that which makes buildings of moistened sand stable. He also noted that two blocks of ice can freeze together. Later, Tammann (1910) and Stranski (1942) suggested that all crystals might, due to the reduction of surface energy, start melting at their surfaces. [ 2 ] [ 3 ] Frenkel strengthened this by noting that, in contrast to liquids, no overheating can be found for solids. [ 4 ] After extensive studies on many materials, it can be concluded that it is a common attribute of the solid state that the melting process begins at the surface. [ 5 ] There are several ways to approach the topic of premelting; the most figurative is the thermodynamic one. A more detailed or abstract view of which physics is important for premelting is given by the Lifshitz and Landau theories. One always starts by looking at a crystalline solid phase (fig. 1: (1) solid) and another phase. This second phase (fig. 1: (2)) can be vapour , liquid or solid . Furthermore, it can consist of the same chemical material or a different one. In the case of the second phase being a solid of the same chemical material, one speaks of grain boundaries. This case is very important when looking at polycrystalline materials. In the following, thermodynamic equilibrium is assumed and, for simplicity, (2) is taken to be a vapour phase. The first (1) and the second (2) phase are always divided by some form of interface, which results in an interfacial energy $\gamma_{1-2}$ . One can now ask whether this energy can be lowered by inserting a third phase (l) in between (1) and (2). Written in terms of interfacial energies, this means $\gamma_{1-2} > \gamma_{1-l} + \gamma_{l-2}$ . If this is the case, then it is more efficient for the system to form a separating phase (l). The only possibility for the system to form such a layer is to take material of the solid and "melt" it into a quasi-liquid. In the further notation no distinction will be made between quasi-liquid and liquid, but one should always keep in mind that there is a difference. This difference from a real liquid becomes clear when looking at a very thin layer (l): due to the long-range forces of the molecules of the solid material, the liquid very near the solid still "feels" the order of the crystalline solid and hence is itself in a state with a degree of order unlike that of a liquid. 
Considering such a very thin layer, it is clear that the whole separating layer (l) is too well ordered for a liquid. Further comments on ordering can be found in the paragraph on Landau theory . Now, looking closer at the thermodynamics of the newly introduced phase (l), its Gibbs energy can be written as a function of $T$ , $P$ and $d$ , where $T$ is the temperature, $P$ the pressure and $d$ the thickness of (l), corresponding in this case to the number of particles $N$ . $n_l$ and $\mu_l$ are the atomic density and the chemical potential in (l), and $\gamma_{total} = \gamma_{1-l} + \gamma_{l-2}$ . Note that the interfacial energies can simply be added to the Gibbs energy in this case. As noted before, $d$ corresponds to $N$ , so the derivative with respect to $d$ can be taken, where $\gamma_{total} = \Delta\gamma_{1-l}\cdot f(d) + \gamma_{1-2}$ . Hence $\mu_1$ and $\mu_l$ differ, and $\Delta\mu = \mu_1 - \mu_l$ can be defined. Assuming that a Taylor expansion around the melting point $(T_m, P_m)$ is possible and using the Clausius–Clapeyron equation , one obtains the following results: $d = \left(-\frac{2\sigma^2\,\Delta\gamma}{n_l q_m t}\right)^{1/3}$ for long-range interactions and $d \propto |\ln|t||$ for short-range interactions, where $\sigma$ is of the order of molecular dimensions, $q_m$ is the specific melting heat and $t = \frac{T_m - T}{T_m}$ is the reduced undercooling. These formulas also show that the closer the temperature comes to the melting point, the thicker the premelted film becomes, as this is energetically advantageous. This explains why no overheating exists for this type of phase transition . [ 5 ] With the help of the Lifshitz theory of Casimir, or van der Waals, interactions between macroscopic bodies, premelting can be viewed from an electrodynamic perspective. A good example for illustrating the difference between complete and incomplete premelting is ice. From vacuum ultraviolet (VUV) frequencies upwards, the polarizability of ice is greater than that of water; at lower frequencies this is reversed. Assuming there is already a film of thickness $d$ on the solid, it is easy for the components of electromagnetic waves to travel through the film in the direction perpendicular to the solid surface as long as $d$ is small. Hence, as long as the film is thin compared to the wavelengths involved, interaction from the solid across the whole film is possible. But when $d$ becomes large compared to the wavelengths of typical VUV frequencies, the electronic structure of the film is too slow to pass these high frequencies on to the other end of the liquid phase. That end of the liquid phase then feels only a retarded van der Waals interaction with the solid phase. Hence the attraction between the liquid molecules themselves predominates and they start forming droplets instead of thickening the film further. So the speed of light limits complete premelting. This makes it a question of solid and surface free energies whether complete premelting occurs. 
Complete surface melting will occur when $\gamma_{total}(d)$ is monotonically decreasing. If $\gamma_{total}(d)$ instead shows a global minimum at finite $d$ , then the premelting will be incomplete. This implies: when the long-range interactions in the system are attractive, there will be incomplete premelting, assuming the film thickness is larger than the range of any repulsive interactions. If the film thickness is small compared to the range of the repulsive interactions present, and the repulsive interactions are stronger than the attractive ones, then complete premelting can occur. For van der Waals interactions, Lifshitz theory can then calculate which type of premelting should occur for a specific system. In fact, small differences between systems can affect the type of premelting. For example, ice in an atmosphere of water vapour shows incomplete premelting, whereas the premelting of ice in air is complete. For solid–solid interfaces it cannot be predicted in general whether the premelting is complete or incomplete when only considering van der Waals interactions. Here other types of interactions become very important. This also applies to grain boundaries. [ 5 ] Most insight into the problem probably emerges when approaching the effect from Landau theory, which is a little problematic, as the melting of a bulk in general has to be considered a first-order phase transition, meaning the order parameter $\eta$ jumps at $t = 0$ . The derivation of Lipowski (basic geometry shown in fig. 2) leads to the following results when $T \leq T_m$ : $\eta_0 \propto \begin{cases} \text{const.} & a < \sqrt{a_m} \\ |t|^{1/4} & a = \sqrt{a_m} \\ |t|^{1/2} & a > \sqrt{a_m} \end{cases}$ where $\eta_0$ is the order parameter at the border between (2) and (l), $1/a$ the so-called extrapolation length and $a_m$ a constant that enters the model and has to be determined using experiment and other models. Hence one can see that the order parameter in the liquid film can undergo a continuous phase transition for large enough extrapolation length. A further result is that $d \propto |\ln|t||$ , which corresponds to the result of the thermodynamic model in the case of short-range interactions. Landau theory does not consider fluctuations such as capillary waves; these could change the results qualitatively. [ 6 ] There are several techniques to prove the existence of a liquid layer on a well-ordered surface. Basically, it is all about showing that there is a phase on top of the solid which has hardly any order (quasi-liquid, see fig. order parameter). One possibility was demonstrated by Frenken and van der Veen using proton scattering on a lead (Pb) single-crystal (110) surface. First the surface was atomically cleaned in ultra-high vacuum (UHV), because one obviously has to have a very well ordered surface for such experiments. Then they performed proton shadowing and blocking measurements. An ideal shadowing and blocking measurement results in an energy spectrum of the scattered protons that shows only a peak for the first surface layer and nothing else. Due to the non-ideality of the experiment, the spectrum also shows effects of the underlying layers. 
That means the spectrum is not one well-defined peak but has a tail towards lower energies due to protons scattered on deeper layers, which lose energy because of stopping. This is different for a liquid film on the surface: such a film has hardly any order (for the meaning of "hardly", see Landau theory). The effects of shadowing and blocking therefore vanish, which means the whole liquid film contributes the same amount of scattered protons to the signal. Therefore, the peak not only has a tail, but also becomes broadened. During their measurements, Frenken and van der Veen raised the temperature towards the melting point and could thus show that with increasing temperature a disordered film formed on the surface, in equilibrium with a still well ordered Pb crystal. [ 7 ] Up to here an ideal surface was considered, but going beyond the idealized case there are several effects which influence premelting. The friction coefficient for ice, without a liquid film on the surface, is measured to be $\mu = 0.6$ . [ 8 ] A comparable friction coefficient is that of rubber or bitumen (roughly 0.8), which would be very difficult to ice skate on. The friction coefficient needs to be around or below 0.005 for ice skating to be possible. [ 9 ] The reason ice skating is possible is that there is a thin film of water present between the blade of the ice skate and the ice. The origin of this water film has been a long-standing debate. There are three proposed mechanisms that could account for a film of liquid water on the ice surface: pressure melting, premelting, and frictional heating. [ 10 ] While contributions from all three of these factors are usually in effect when ice skating, the scientific community has long debated which is the dominant mechanism. For several decades it was common to explain the low friction of skates on ice by pressure melting, but there are several recent arguments that contradict this hypothesis. [ 10 ] The strongest argument against pressure melting is that ice skating is still possible at temperatures below -20 °C (253 K). At this temperature, a great deal of pressure (>100 MPa) is required to induce melting. Just below -23 °C (250 K), increasing the pressure can only form a different solid structure of ice ( Ice III ), since the isotherm no longer passes through the liquid phase on the phase diagram . While impurities in the ice will suppress the melting temperature, many materials scientists agree that pressure melting is not the dominant mechanism. [ 11 ] The thickness of the water film due to premelting is also limited at low temperatures. While the water film can reach thicknesses on the order of μm, at temperatures around -10 °C the thickness is on the order of nm. However, De Koning et al. found in their measurements that adding impurities to the ice can lower the friction coefficient by up to 15%. The friction coefficient increases with skating speed, which could yield different results depending on the skating technique and speed. [ 9 ] While the pressure melting hypothesis may have been put to rest, the debate between premelting and frictional heating as the dominant mechanism still goes on.
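The pressure-melting argument above can be checked with a back-of-the-envelope Clausius–Clapeyron estimate. The sketch below is illustrative only; the constants are rounded textbook values for ice and water, not figures from the cited studies, and the linearized slope overestimates the exact pressure somewhat.

```java
/** Back-of-the-envelope check of the pressure-melting argument (illustrative only;
 *  the constants are rounded textbook values, not figures from the cited studies). */
public class PressureMelting {
    public static void main(String[] args) {
        // Clausius-Clapeyron slope of the ice-water coexistence line:
        // dT/dP = T * (v_water - v_ice) / L_fusion  (negative for water, since ice is less dense)
        double T = 273.15;          // melting temperature at ambient pressure, K
        double vWater = 1.000e-3;   // specific volume of liquid water, m^3/kg (approx.)
        double vIce   = 1.091e-3;   // specific volume of ice, m^3/kg (approx.)
        double latent = 3.34e5;     // latent heat of fusion, J/kg (approx.)

        double slope = T * (vWater - vIce) / latent;   // K per Pa
        System.out.printf("dT/dP ~ %.3f K per MPa%n", slope * 1e6);

        // Linearized estimate of the pressure needed to depress the melting point by 20 K,
        // i.e. down to the -20 C skating temperature mentioned in the text.
        double deltaT = -20.0;
        double pressure = deltaT / slope;              // Pa
        System.out.printf("Pressure for a %.0f K depression: ~%.0f MPa%n", Math.abs(deltaT), pressure / 1e6);
    }
}
```

The estimate comes out at a few hundred MPa, consistent with the statement above that well over 100 MPa would be required, which is far more than a skate blade delivers.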
https://en.wikipedia.org/wiki/Premelting
The Premio Presidente della Repubblica is an Italian award introduced by the former president and academic Luigi Einaudi . Since 1949 it has been awarded on a regular basis by the Accademia dei Lincei , the Accademia di San Luca , and the Accademia Nazionale di Santa Cecilia . It is among the most distinguished awards of the three prestigious academies. [ 1 ] [ 2 ] The award was established on 11 October 1948 by Luigi Einaudi with a letter to the president of the Lincei National Academy to continue the tradition of royal awards. The prize was first introduced to the class of physical, mathematical, and natural sciences and the class of moral, historical, and philological sciences. [ 3 ] In the same year, Einaudi established a national prize for artists and architects awarded by the academies of San Luca and Santa Cecilia. The prize is given by the President of Italy in charge in an official ceremony. [ 2 ] Among the people awarded, there are several winners of other important awards such as the Nobel Prize , the Wolf Prize , and the Academy Award . [ 4 ]
https://en.wikipedia.org/wiki/Premio_Presidente_della_Repubblica_(prize)
A premise or premiss [ a ] is a proposition —a true or false declarative statement—used in an argument to prove the truth of another proposition called the conclusion . [ 1 ] Arguments consist of a set of premises and a conclusion. An argument is meaningful for its conclusion only when all of its premises are true . If one or more premises are false, the argument says nothing about whether the conclusion is true or false. For instance, a false premise on its own does not justify rejecting an argument's conclusion; to assume otherwise is a logical fallacy called denying the antecedent . One way to prove that a proposition is false is to formulate a sound argument with a conclusion that negates that proposition. An argument is sound and its conclusion logically follows (it is true) if and only if the argument is valid and its premises are true. An argument is valid if and only if it is the case that whenever the premises are all true, the conclusion must also be true. If there exists a logical interpretation where the premises are all true but the conclusion is false, the argument is invalid. Key to evaluating the quality of an argument is determining if it is valid and sound. That is, whether its premises are true and whether their truth necessarily results in a true conclusion. In logic , an argument requires a set of declarative sentences (or "propositions" ) known as the "premises" (or "premisses"), along with another declarative sentence (or "proposition"), known as the conclusion . Complex arguments can use a sequence of rules to connect several premises to one conclusion, or to derive a number of conclusions from the original premises which then act as premises for additional conclusions. An example of this is the use of the rules of inference found within symbolic logic . Aristotle held that any logical argument could be reduced to two premises and a conclusion. [ 2 ] Premises are sometimes left unstated, in which case, they are called missing premises, for example: Socrates is mortal because all men are mortal. It is evident that a tacitly understood claim is that Socrates is a man. The fully expressed reasoning is thus: Because all men are mortal and Socrates is a man, Socrates is mortal. In this example, the dependent clauses preceding the comma (namely, "all men are mortal" and "Socrates is a man") are the premises, while "Socrates is mortal" is the conclusion. The proof of a conclusion depends on both the truth of the premises and the validity of the argument. Also, additional information is required over and above the meaning of the premise to determine if the full meaning of the conclusion coincides with what is. [ 3 ] For Euclid , premises constitute two of the three propositions in a syllogism , with the other being the conclusion. [ 4 ] These categorical propositions contain three terms: subject and predicate of the conclusion, and the middle term. The subject of the conclusion is called the minor term while the predicate is the major term. The premise that contains the middle term and major term is called the major premise while the premise that contains the middle term and minor term is called the minor premise. [ 5 ] A premise can also be an indicator word if statements have been combined into a logical argument and such word functions to mark the role of one or more of the statements. [ 6 ] It indicates that the statement it is attached to is a premise. [ 6 ]
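Validity, as defined above, can be checked mechanically for propositional arguments by enumerating truth assignments and looking for a case where every premise is true but the conclusion is false. The following is a minimal sketch, not part of the article; it encodes modus ponens and the fallacy of denying the antecedent mentioned above with two propositional variables.

```java
import java.util.List;
import java.util.function.Predicate;

/** Minimal sketch: brute-force validity check for two-variable propositional
 *  arguments. An argument is valid iff no truth assignment makes every premise
 *  true while making the conclusion false. */
public class Validity {
    record Assignment(boolean p, boolean q) {}

    static boolean isValid(List<Predicate<Assignment>> premises, Predicate<Assignment> conclusion) {
        for (boolean p : new boolean[]{true, false}) {
            for (boolean q : new boolean[]{true, false}) {
                Assignment a = new Assignment(p, q);
                boolean allPremisesTrue = premises.stream().allMatch(pr -> pr.test(a));
                if (allPremisesTrue && !conclusion.test(a)) return false;  // counterexample found
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Predicate<Assignment> pImpliesQ = a -> !a.p() || a.q();   // P -> Q
        // Modus ponens: P -> Q, P, therefore Q  (prints true: valid)
        System.out.println(isValid(List.of(pImpliesQ, Assignment::p), Assignment::q));
        // Denying the antecedent: P -> Q, not P, therefore not Q  (prints false: invalid)
        System.out.println(isValid(List.of(pImpliesQ, a -> !a.p()), a -> !a.q()));
    }
}
```

Note that such a check establishes validity only; soundness additionally requires the premises to actually be true, which no truth-table enumeration can decide.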
https://en.wikipedia.org/wiki/Premise
A premixed flame is a flame formed under certain conditions during the combustion of a premixed charge (also called pre-mixture) of fuel and oxidiser . Since the fuel and oxidiser, the key chemical reactants of combustion, are available throughout a homogeneous stoichiometric premixed charge, the combustion process once initiated sustains itself by way of its own heat release. The majority of the chemical transformation in such a combustion process occurs primarily in a thin interfacial region which separates the unburned and the burned gases. The premixed flame interface propagates through the mixture until the entire charge is depleted. [ 1 ] The propagation speed of a premixed flame is known as the flame speed (or burning velocity), which depends on the convection-diffusion-reaction balance within the flame, i.e. on its inner chemical structure. The premixed flame is characterised as laminar or turbulent depending on the velocity distribution in the unburned pre-mixture (which provides the medium of propagation for the flame). Under controlled conditions (typically in a laboratory) a laminar flame may be formed in one of several possible flame configurations. The inner structure of a laminar premixed flame is composed of layers over which the decomposition, reaction and complete oxidation of fuel occur. These chemical processes are much faster than physical processes such as vortex motion in the flow and, hence, the inner structure of a laminar flame remains intact in most circumstances. The constitutive layers of the inner structure correspond to specified intervals over which the temperature increases from that of the unburned mixture up to as high as the adiabatic flame temperature (AFT). In the presence of volumetric heat transfer and/or aerodynamic stretch, or under the development of intrinsic flame instabilities , the extent of reaction and, hence, the temperature attained across the flame may differ from the AFT. For a one-step irreversible chemistry, i.e., $\nu_F \mathrm{F} + \nu_O \mathrm{O}_2 \rightarrow \mathrm{Products}$ , the planar, adiabatic flame has an explicit expression for the burning velocity derived from activation energy asymptotics when the Zel'dovich number $\beta \gg 1$ . The reaction rate $\omega$ (number of moles of fuel consumed per unit volume per unit time) is taken to be of Arrhenius form , where $B$ is the pre-exponential factor , $\rho$ is the density , $Y_F$ is the fuel mass fraction , $Y_{O_2}$ is the oxidizer mass fraction , $E_a$ is the activation energy , $R$ is the universal gas constant , $T$ is the temperature , $W_F$ and $W_{O_2}$ are the molecular weights of fuel and oxidizer, respectively, and $m$ and $n$ are the reaction orders. Let the unburnt conditions far ahead of the flame be denoted with subscript $u$ and, similarly, the burnt gas conditions by $b$ ; then an equivalence ratio $\phi$ can be defined for the unburnt mixture. The planar laminar burning velocity for a fuel-rich mixture ( $\phi > 1$ ) is then given in refs. [ 2 ] [ 3 ] in terms of a parameter $a = \beta(\phi - 1)/\mathrm{Le}_F$ . 
Here $\lambda$ is the thermal conductivity , $c_p$ is the specific heat at constant pressure and $\mathrm{Le}$ is the Lewis number . Similarly, one can write the formula for lean ( $\phi < 1$ ) mixtures. This result was first obtained by T. Mitani in 1980. [ 4 ] Second-order corrections to this formula with more complicated transport properties were derived by Forman A. Williams and co-workers in the 1980s. [ 5 ] [ 6 ] [ 7 ] Variations in the local propagation speed of a laminar flame arise due to what is called flame stretch. Flame stretch can happen due to straining by the outer flow velocity field or the curvature of the flame; the difference of the propagation speed from the corresponding laminar speed is a function of these effects [ 8 ] [ 9 ] and involves $\delta_L$ , the laminar flame thickness, $\kappa$ , the flame curvature, $\mathbf{n}$ , the unit normal on the flame surface pointing towards the unburnt gas side, $\mathbf{v}$ , the flow velocity, and $\mathcal{M}_c$ and $\mathcal{M}_a$ , the respective Markstein numbers of curvature and strain. In practical scenarios, turbulence is inevitable and, under moderate conditions, turbulence aids the premixed burning process as it enhances the mixing of fuel and oxidiser. If the premixed charge of gases is not homogeneously mixed, the variations in equivalence ratio may affect the propagation speed of the flame. In some cases this is desirable, as in stratified combustion of blended fuels. A turbulent premixed flame can be assumed to propagate as a surface composed of an ensemble of laminar flames so long as the processes that determine the inner structure of the flame are not affected. [ 10 ] Under such conditions, the flame surface is wrinkled by virtue of turbulent motion in the premixed gases increasing the surface area of the flame. The wrinkling process increases the burning velocity of the turbulent premixed flame in comparison to its laminar counterpart. The propagation of such a premixed flame may be analysed using the field equation called the G equation [ 11 ] [ 12 ] for a scalar $G$ , which in its usual level-set form reads $\frac{\partial G}{\partial t} + \mathbf{v}\cdot\nabla G = U_L\,|\nabla G|$ and is defined such that the level-sets of $G$ represent the various interfaces within the premixed flame propagating with a local velocity $U_L$ . This, however, is typically not the case, as the propagation speed of the interface (with respect to the unburned mixture) varies from point to point due to the aerodynamic stretch induced by gradients in the velocity field. Under contrasting conditions, however, the inner structure of the premixed flame may be entirely disrupted, causing the flame to extinguish either locally (known as local extinction) or globally (known as global extinction or blow-off). Such opposing cases govern the operation of practical combustion devices such as SI engines as well as aero-engine afterburners. The prediction of the extent to which the inner structure of a flame is affected in turbulent flow is a topic of extensive research. The flow configuration of the premixed gases affects the stabilization and burning characteristics of the flame. In a Bunsen flame, a steady flow rate is provided which matches the flame speed so as to stabilize the flame. If the flow rate is below the flame speed, the flame will move upstream until the fuel is consumed or until it encounters a flame holder . 
If the flow rate is equal to the flame speed, we would expect a stationary flat flame front normal to the flow direction. If the flow rate is above the flame speed, the flame front will become conical such that the component of the velocity vector normal to the flame front is equal to the flame speed. Here, the pre-mixed gases flow in such a way so as to form a region of stagnation (zero velocity) where the flame may be stabilized. In this configuration, the flame is typically initiated by way of a spark within a homogeneous pre-mixture. The subsequent propagation of the developed premixed flame occurs as a spherical front until the mixture is transformed entirely or the walls of the combustion vessel are reached. Since the equivalence ratio of the premixed gases may be controlled, premixed combustion offers a means to attain low temperatures and, thereby, reduce NO x emissions. Due to improved mixing in comparison with diffusion flames , soot formation is mitigated as well. Premixed combustion has therefore gained significance in recent times. The uses involve lean-premixed-prevaporized (LPP) gas turbines and SI engines .
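The conical Bunsen flame described above follows a simple geometric relation: the component of the flow velocity normal to the flame surface equals the burning velocity, so the cone half-angle satisfies sin(α) = S_L / U. The sketch below is illustrative only; the numerical values are placeholders of roughly the right order of magnitude (a methane–air laminar burning velocity is around 0.4 m/s) and are not taken from the article.

```java
/** Illustrative sketch for the Bunsen configuration described above: when the flow
 *  speed U exceeds the burning velocity S_L, the flame settles into a cone whose
 *  surface is inclined so that the normal component of U equals S_L, i.e.
 *  sin(alpha) = S_L / U for the half-angle alpha between cone surface and flow axis. */
public class BunsenCone {
    static double coneHalfAngleDegrees(double flameSpeed, double flowSpeed) {
        if (flowSpeed < flameSpeed) {
            throw new IllegalArgumentException("flashback regime: no stationary cone");
        }
        return Math.toDegrees(Math.asin(flameSpeed / flowSpeed));
    }

    public static void main(String[] args) {
        double sL = 0.4;   // laminar burning velocity, m/s (placeholder, methane-air order of magnitude)
        double u  = 1.2;   // mean unburned-gas flow speed, m/s (placeholder)
        System.out.printf("cone half-angle: %.1f degrees%n", coneHalfAngleDegrees(sL, u));
    }
}
```

A higher flow rate at fixed burning velocity gives a narrower (more elongated) cone, which matches the everyday observation that opening the gas valve stretches a Bunsen flame.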
https://en.wikipedia.org/wiki/Premixed_flame
In a premixed turbulent flame , fuel and oxidizer are mixed by turbulence for a sufficiently long time before combustion is initiated. The deposition of energy from a spark generates a flame kernel that grows at first by laminar and then by turbulent flame propagation. The oxidizer has thus been mixed with the fuel before it reaches the flame front , which creates a thin flame front, as all of the reactants are readily available.
https://en.wikipedia.org/wiki/Premixed_turbulent_flames
Premunition , also known as infection-immunity, [ 1 ] is a host response that protects against high parasite numbers and illness without eliminating the infection. [ 2 ] This type of immunity is relatively rapid, progressively acquired, short-lived, and partially effective. [ 3 ] For malaria , premunition is maintained by repeated antigen exposure from infective bites. [ 3 ] Thus, if an individual departs from an endemic area, he or she may lose premunition and become susceptible to malaria. [ 3 ] Antibody action contributes to premunition. [ 4 ] However, premunition is probably much more complex than simple antibody and antigen interaction. [ 3 ] In the case of malaria, the sporozoite and merozoite stages of Plasmodium elicit the antibody response which leads to premunition. [ 4 ] Immunoglobulin E targets the parasites and leads to eosinophil degranulation, which releases major basic protein that damages the parasites, and other factors elicit a local inflammatory response. [ 4 ] However, Plasmodium can change its surface antigens, so the development of an antibody repertoire that can recognize multiple surface antigens is important for premunition to be achieved. [ 5 ] Premunition has not been well studied, and although it likely occurs broadly, it is mainly emphasized for its role in malaria, tuberculosis , syphilis and relapsing fever . [ 6 ] Premunization is the artificial induction of premunition. [ 7 ] Premunity is the progressive development of immunity in individuals exposed to an infective agent , [ 8 ] mainly one belonging to the protozoa or Rickettsia , but not to the viruses. [ 9 ] After the initial infection , which generally occurs in childhood, the effect of subsequent infections is diminished. Infections thereafter may exhibit little or no symptomatology in spite of parasitemia . The next stage is resistance to infection altogether. Loss of premunity is estimated to be the cause of the rebound of malaria [ 10 ] in India in 1965, after the dramatic success of the National Malaria Control Programme that was launched for rural India in 1953. Premunity occurs in infections of babesiosis , [ 11 ] [ 12 ] malaria , [ 8 ] [ 13 ] Onchocerca volvulus , [ 14 ] and Trichomonas . [ 15 ]
https://en.wikipedia.org/wiki/Premunition
PreonVM is an implementation of the Java virtual machine developed by Virtenio. The PreonVM was initially developed to run on the Atmel AVR ATmega256, but has been ported to ARM Cortex-M 3 systems. The VM can therefore run on a microcontroller with as little as 8 kB RAM and 256 kB ROM. The PreonVM requires no additional operating system and runs directly on the microcontroller. Every class file of the application is transformed via a ClassLinker to strip all parts of the class files that are not required. This makes it possible to reduce the class file size by about 80%, which is necessary for a small device. The ClassLinker builds a .vmm file which combines all application class files in a special format that can be read and executed by the PreonVM on the microcontroller. The VM supports all Java data types, including long and double, as well as threads, synchronization, garbage collection with memory defragmentation, exceptions, system properties and an IRQ/event system. The PreonVM comes with a library of driver classes for IO such as I2C , SPI , USART , CAN , PWM , IRQ , RTC , GPIO , ADC , DAC , and with drivers for some sensors and ICs. The following code example uses an SHT21 sensor and reads the relative humidity.
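The original example is not reproduced in this text, so the sketch below is a plausible reconstruction only. The class names I2C and SHT21, their constructors, and the method readHumidity() are hypothetical stand-ins for Virtenio's actual driver API, which is not documented here; only the general shape (a driver class bound to an I2C bus, read in a loop) follows the description above.

```java
// Hypothetical sketch only: the classes I2C and SHT21 and the method readHumidity()
// are illustrative stand-ins, not the documented PreonVM driver API.
public class HumidityExample {
    public static void main(String[] args) throws Exception {
        I2C bus = new I2C(0);                 // open the first I2C bus (assumed constructor)
        SHT21 sensor = new SHT21(bus);        // SHT21 driver bound to that bus (assumed constructor)
        while (true) {
            float rh = sensor.readHumidity(); // relative humidity in percent (assumed method)
            System.out.println("Relative humidity: " + rh + " %RH");
            Thread.sleep(1000);               // PreonVM supports threads, so sleep is available
        }
    }
}
```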
https://en.wikipedia.org/wiki/PreonVM
Preoperational anxiety , or preoperative anxiety , is a common reaction experienced by patients who are admitted to a hospital for surgery . [ 1 ] It can be described as an unpleasant state of tension or uneasiness that results from a patient's doubts or fears before an operation. [ 1 ] The State-Trait Anxiety Inventory (STAI) is a widespread method of measuring preoperative anxiety for research purposes. It consists of two 20-item scales on which patients are asked to rate particular symptoms. [ 2 ] The STAI is based on the theory that there are two distinct aspects of anxiety. The State scale is designed to measure the circumstantial or temporary arousal of anxiety, and the Trait scale is designed to measure longstanding personality characteristics related to anxiety. The items on each scale are based on a two-factor model: "anxiety present" or "anxiety absent". [ 2 ] In a 2009 paper in The Journal of Nursing Measurement , researchers argued that fast-paced hospital environments make it difficult to get each patient through all 20 items, especially when other assessments must also be done. [ 2 ] Shorter versions of the STAI have been developed. For example, Marteau and Bekker's six-item version of the State scale was found in 2009 to have "favorable internal consistency reliability and validity when correlated with the parent 20-item State scale". [ 2 ] A variety of fears can cause preoperative anxiety. They include fear of: Other factors in the intensity of preoperative anxiety are: Irving Janis separates the factor trends that are commonly seen affecting anxiety into three different levels: [ 6 ] [ needs update ] Anxiety can cause physiological responses such as tachycardia , hypertension , elevated temperature, sweating , nausea , and a heightened sense of touch, smell, or hearing. [ 1 ] [ 3 ] A patient may also experience peripheral vasoconstriction , which makes it difficult for the hospital staff to obtain blood. [ 1 ] Anxiety may cause behavioral and cognitive changes that result in increased tension, apprehension, nervousness, and aggression. [ 1 ] Some patients may become so apprehensive that they cannot understand or follow simple instructions. Some may be so aggressive and demanding that they require constant attention of the nursing staff. [ 1 ] In research conducted by Irving Janis, common reactions and strategies were separated into three different levels of preoperative anxiety: Low anxiety Patients in this category tend to adopt a joking attitude or to say things like "there's nothing to it!" Because most pain is not preconceived by the patient, the patients tends to blame their pain on the hospital staff. [ 6 ] In this case, the patient feels as if they have been mistreated. This is because the patient doesn't have the usual mindset that pain is an unavoidable result of an operation. [ 6 ] Other trends include displaying a calm and relaxed attitude during preoperative care. They don't usually experience any sleeping disturbances. [ 6 ] They also tend to make little effort to seek more information about medical procedures. This may be due to the fact that they are unaware of the potential threats, or it may just be because they have succeeded in shutting themselves out and eliminating all thought of doubt and fear. [ 6 ] The main concern that low anxiety patients tend to express is finances, and they usually deny apprehension about operational dangers. [ 6 ] Moderate anxiety Patients in this category may only experience minor emotional tension. 
The occasional worry or fear that is experienced by a patient with moderate anxiety can usually be suppressed. [ 6 ] Some may develop insomnia , but they also usually respond well to mild sedatives. Their outward manner may seem relatively calm and well controlled, except for small moments where it is apparent to others that the patient is suffering from an inner conflict. They can usually perform daily tasks, only becoming restless from time to time. [ 6 ] These patients are usually very motivated to develop reliable information from medical authority in order to reach a point of comfortable relief. [ 6 ] High anxiety Patients in this category will usually try to reassure themselves by seeking information, but these attempts, in the long-run, are unsuccessful at helping the patient reach a comfortable point because the fear is so dominant. [ 6 ] It is common for patients in this level of anxiety to engage in mentally distracting activities in an attempt to get their mind off of anticipated danger. They have a hard time idealizing their situation or maintaining any sort of conception that things could turn out well in the end. This because they tend to dwell on improbable dangers. [ 6 ] On the positive side, if a patient experiences moderate amounts of anxiety, the anxiety can aid in the preparation for surgery. [ 1 ] On the negative side, the anxiety can cause harm if the patient experiences an excessive or diminutive amount. One reason for this is that small amounts of anxiety will not adequately prepare the patient for pain. [ 1 ] Also, higher levels of anxiety can over-sensitize the patient to unpleasant stimuli, which would heighten their senses of touch, smell or hearing. This results in intense pain, dizziness, and nausea. It can also increase the patient's feelings of uneasiness in the unfamiliar surroundings. [ 4 ] Anxiety has also been proven to cause higher analgesic and anaesthetic requirement, postoperative pain, and prolonged hospital stay. [ 7 ] Irving L. Janis separates the effects of preoperative anxiety on postoperative reactions into three levels: [ 6 ] Treatment of preoperative anxiety may include:
https://en.wikipedia.org/wiki/Preoperational_anxiety
The preorbital gland is a paired exocrine gland found in many species of artiodactyls , which is homologous to the lacrimal gland found in humans. These glands are trenchlike slits of dark blue to black, nearly bare skin extending from the medial canthus of each eye. They are lined by a combination of sebaceous and sudoriferous glands , and they produce secretions which contain pheromones and other semiochemical compounds. [ 1 ] Ungulates frequently deposit these secretions on twigs and grass as a means of communication with other animals. [ 2 ] [ 3 ] The preorbital gland serves different roles in different species. Pheromone-containing secretions from the preorbital gland may serve to establish an animal's dominance (especially in preparation for breeding ), [ 4 ] mark its territory , or simply to produce a pleasurable sensation to the animal. [ 5 ] Because of its critical role in scent marking, the preorbital gland is usually considered as a type of scent gland . A further function of these glands may be to produce antimicrobial compounds to fight against skin pathogens . Antimicrobial compounds found in these glands may be biosynthesized by the animal itself, or by microorganisms that live in these glands. [ 6 ] Deer have seven types of external scent glands distributed across their bodies. These are the forehead glands (on the forehead), the preorbital glands (below the eyes), the nasal glands (inside the nostrils), the interdigital glands (between the toes), the preputial gland (inside the foreskin of the deer's penis ), the metatarsal glands (outside of the hind legs), and the tarsal glands (located inside of the hind legs). [ 7 ] Although it is not their primary function, the salivary glands also function as scent glands. Deer rely heavily on the scent glands to communicate with other members of their species, and possibly even with members of other species. A deer may rub its preorbital gland (e.g., on a branch) purely for pleasure. [ 5 ] The two major species of deer found in North America are the white-tailed deer ( Odocoileus virginianus ) and the mule deer ( Odocoileus hemionus ). The most important sense in these animals is olfaction (the sense of smell)—so much so that they have an accessory olfaction system . The vomeronasal organ , located at the base of the nasal cavity , is the sensory organ for this system. Besides locating food and water, deer rely on their two separate olfactory systems to detect the presence of predators, as well as to supply them with information about the identity, sex, dominance status and reproductive status of other deer. [ 8 ] The preorbital gland of O. virginianus is about 22 millimeters (0.87 in) in length, while that of O. hemionus is roughly 40 millimeters (1.6 in) in length. In black-tailed deer ( O. h. columbianus ), a subspecies of O. hemionus , the preorbital gland measures about 32 millimeters (1.3 in). [ 1 ] In all of these animals, the preorbital glands are surrounded by muscle which is under voluntary control, at least to some extent. [ 8 ] It is not entirely clear whether the preorbital gland secretions of North American deer are significant for chemical communication. Most of the time the glands remain closed, but deer are capable of opening them to emit an odor in certain circumstances. For example, a rutting male may dilate its preorbital glands in order to signal aggression to another nearby male. Female deer often open their glands while caring for their young. 
[ 8 ] In juvenile red deer ( Cervus elaphus ), the preorbital gland appears to play a role in the response to stress. The preorbital gland is closed in a relaxed calf, whereas it is opened in a stressed calf. [ 10 ] One example of this is the signalling of hunger and satiety. Fawns open their preorbital glands as a signal that they are hungry, and close the gland after feeding, when they are no longer hungry. [ 11 ] The adult Indian muntjac ( Muntiacus muntjac ) is a solitary animal, other than during the rut (mating season) and for the first six months after giving birth. Adult males in particular are widely separated. Marking grass and bushes with secretions from their preorbital glands appears to be involved in the acquisition and maintenance of territory. [ 12 ] The bovids ( family Bovidae) comprise some 140 species of ruminants in which at least the males bear unbranched, hollow horns covered in a permanent sheath of keratin . Most species of bovids have means of spacing themselves across their habitat; territorial behavior is the most consistent type of spacing behavior. [ 14 ] Caprids (dwarf antelope, such as the sheep , goats , muskox , serows , gorals , and several similar species) use their preorbital glands to establish social rank . For example, when competition arises between two grazing sheep ( Ovis aries ), they have been observed to nuzzle each other's preorbital glands. By sending and receiving olfactory cues, this behavior appears to be a means of establishing dominance and of avoiding a fight, which would otherwise involve potentially injurious butting or clashing with the forehead. [ 15 ] The antilopine bovids (dwarf antelope, such as the springbok , blackbuck , gazelles , dik-diks , oribi , and several similar species) have well-developed preorbital glands. [ 3 ] Among the cephalophines , members of the Philantomba and Sylvicapra genera are all solitary animals which display territorial behavior and have well-developed preorbital glands. Maxwell's duiker ( Philantomba maxwellii ) is a solitary animal which utilizes preorbital gland secretions to mark its territory. This behavior is observed most in adult males, less frequently in females, and less still in subadults of this species. [ 16 ] Secretions from the preorbital gland of the common duiker contain at least 33 different chemical compounds . Two thiazole compounds and an epoxy ketone are present in significantly higher concentrations in male than in female secretions, suggesting that they could serve as sex recognition cues. [ 17 ] The alcephine bovids ( wildebeests , hartebeests , hirola , bontebok , blesbok , and several similar species) have preorbital glands which secrete complex mixtures of chemical compounds. [ 3 ] The preorbital glands of the bontebok ( Damaliscus pygargus pygarus ) are larger in males than in females. Their secretions contain at least forty different chemical compounds, and are deposited on grass and twigs at the borders of their territory. They then appear to transfer the secretions from the grass to their horns and forehead by waving the head from side to side across the stalk bearing the secretion. Marking of plant stalks with preorbital gland secretions is seen in both sexes. [ 18 ] In contrast to the duikers and raphicerids, the klipspringer ( Oreotragus oreotragus ) is a semi- gregarious species, while the hirola ( Beatragus hunteri ) is fully gregarious. Nevertheless, these animals display territorial scent marking of grasses with secretions from their preorbital glands. 
[ 16 ] [ 19 ] Differences in the social structure and marking behavior among different species may lead to a different size and position of the preorbital glands on the animal's face. For example, Günther's dik-dik ( Madoqua guentheri ) is a monogamous species of antelope that lives in a permanent territory, the boundaries of which the animals mark several times a day by actively pressing the preorbital glands to grasses and low-lying plants and applying the secretions. In this territorial animal, the preorbital glands remain of considerable size throughout the year. The glands are located in large preorbital pits in the lacrimal bone , and are surrounded by specialized facial muscles that compress them to express the secretions more effectively. In contrast, the saiga antelope ( Saiga tatarica ) is a polygamous and somewhat nomadic species which does not occupy any permanent territory at any time during the year. For most of the year the preorbital glands remain small, only growing to substantial size during the rut. At that time of year, secretions ooze more or less continuously from the glands. In this nonterritorial animal, the preorbital glands are not as well-developed, lack well-developed surrounding facial muscles, and are positioned in an inconspicuous and shallow depression of the lacrimal bone. [ 20 ] The recent identification of several antimicrobial compounds from the secretions of animal dermal scent glands may be the beginning of a promising new area of drug development . Assuming functional analogs of these lead compounds can be synthesized and found to be effective in vivo , the potential exists for producing new antimicrobial agents against pathogenic skin microorganisms. [ 6 ]
https://en.wikipedia.org/wiki/Preorbital_gland
In mathematics , especially in order theory , a preorder or quasiorder is a binary relation that is reflexive and transitive . The name preorder is meant to suggest that preorders are almost partial orders , but not quite, as they are not necessarily antisymmetric . A natural example of a preorder is the divides relation "x divides y" between integers, polynomials , or elements of a commutative ring . For example, the divides relation is reflexive, as every integer divides itself. But the divides relation is not antisymmetric, because $1$ divides $-1$ and $-1$ divides $1$ . It is to this preorder that "greatest" and "lowest" refer in the phrases " greatest common divisor " and " lowest common multiple " (except that, for integers, the greatest common divisor is also the greatest for the natural order of the integers). Preorders are closely related to equivalence relations and (non-strict) partial orders. Both of these are special cases of a preorder: an antisymmetric preorder is a partial order, and a symmetric preorder is an equivalence relation. Moreover, a preorder on a set $X$ can equivalently be defined as an equivalence relation on $X$ , together with a partial order on the set of equivalence classes. Like partial orders and equivalence relations, preorders (on a nonempty set) are never asymmetric . A preorder can be visualized as a directed graph , with elements of the set corresponding to vertices, and the order relation between pairs of elements corresponding to the directed edges between vertices. The converse is not true: most directed graphs are neither reflexive nor transitive. A preorder that is antisymmetric no longer has cycles; it is a partial order, and corresponds to a directed acyclic graph . A preorder that is symmetric is an equivalence relation; it can be thought of as having lost the direction markers on the edges of the graph. In general, a preorder's corresponding directed graph may have many disconnected components. As a binary relation, a preorder may be denoted $\lesssim$ or $\leq$ . In words, when $a \lesssim b$ , one may say that b covers a or that a precedes b , or that b reduces to a . Occasionally, the notation ← or → is also used. Let $\lesssim$ be a binary relation on a set $P$ , so that by definition $\lesssim$ is some subset of $P \times P$ and the notation $a \lesssim b$ is used in place of $(a,b) \in {\lesssim}$ . Then $\lesssim$ is called a preorder or quasiorder if it is reflexive and transitive ; that is, if it satisfies $a \lesssim a$ for all $a \in P$ (reflexivity), and $a \lesssim b$ and $b \lesssim c$ imply $a \lesssim c$ (transitivity). A set that is equipped with a preorder is called a preordered set (or proset ). [ 1 ] Given a preorder $\lesssim$ on $S$ one may define an equivalence relation $\sim$ on $S$ such that $a \sim b$ if and only if $a \lesssim b$ and $b \lesssim a$ . 
{\displaystyle a\sim b\quad {\text{ if and only if }}\quad a\lesssim b\;{\text{ and }}\;b\lesssim a.} The resulting relation ∼ {\displaystyle \,\sim \,} is reflexive since the preorder ≲ {\displaystyle \,\lesssim \,} is reflexive; transitive by applying the transitivity of ≲ {\displaystyle \,\lesssim \,} twice; and symmetric by definition. Using this relation, it is possible to construct a partial order on the quotient set of the equivalence, S / ∼ , {\displaystyle S/\sim ,} which is the set of all equivalence classes of ∼ . {\displaystyle \,\sim .} If the preorder is denoted by R + = , {\displaystyle R^{+=},} then S / ∼ {\displaystyle S/\sim } is the set of R {\displaystyle R} - cycle equivalence classes: x ∈ [ y ] {\displaystyle x\in [y]} if and only if x = y {\displaystyle x=y} or x {\displaystyle x} is in an R {\displaystyle R} -cycle with y {\displaystyle y} . In any case, on S / ∼ {\displaystyle S/\sim } it is possible to define [ x ] ≤ [ y ] {\displaystyle [x]\leq [y]} if and only if x ≲ y . {\displaystyle x\lesssim y.} That this is well-defined, meaning that its defining condition does not depend on which representatives of [ x ] {\displaystyle [x]} and [ y ] {\displaystyle [y]} are chosen, follows from the definition of ∼ . {\displaystyle \,\sim .\,} It is readily verified that this yields a partially ordered set. Conversely, from any partial order on a partition of a set S , {\displaystyle S,} it is possible to construct a preorder on S {\displaystyle S} itself. There is a one-to-one correspondence between preorders and pairs (partition, partial order). Example : Let S {\displaystyle S} be a formal theory , which is a set of sentences with certain properties (details of which can be found in the article on the subject ). For instance, S {\displaystyle S} could be a first-order theory (like Zermelo–Fraenkel set theory ) or a simpler zeroth-order theory . One of the many properties of S {\displaystyle S} is that it is closed under logical consequences so that, for instance, if a sentence A ∈ S {\displaystyle A\in S} logically implies some sentence B , {\displaystyle B,} which will be written as A ⇒ B {\displaystyle A\Rightarrow B} and also as B ⇐ A , {\displaystyle B\Leftarrow A,} then necessarily B ∈ S {\displaystyle B\in S} (by modus ponens ). The relation ⇐ {\displaystyle \,\Leftarrow \,} is a preorder on S {\displaystyle S} because A ⇐ A {\displaystyle A\Leftarrow A} always holds and whenever A ⇐ B {\displaystyle A\Leftarrow B} and B ⇐ C {\displaystyle B\Leftarrow C} both hold then so does A ⇐ C . {\displaystyle A\Leftarrow C.} Furthermore, for any A , B ∈ S , {\displaystyle A,B\in S,} A ∼ B {\displaystyle A\sim B} if and only if A ⇐ B and B ⇐ A {\displaystyle A\Leftarrow B{\text{ and }}B\Leftarrow A} ; that is, two sentences are equivalent with respect to ⇐ {\displaystyle \,\Leftarrow \,} if and only if they are logically equivalent . This particular equivalence relation A ∼ B {\displaystyle A\sim B} is commonly denoted with its own special symbol A ⟺ B , {\displaystyle A\iff B,} and so this symbol ⟺ {\displaystyle \,\iff \,} may be used instead of ∼ . {\displaystyle \,\sim .} The equivalence class of a sentence A , {\displaystyle A,} denoted by [ A ] , {\displaystyle [A],} consists of all sentences B ∈ S {\displaystyle B\in S} that are logically equivalent to A {\displaystyle A} (that is, all B ∈ S {\displaystyle B\in S} such that A ⟺ B {\displaystyle A\iff B} ). 
The partial order on S/∼ induced by ⇐, which will also be denoted by the same symbol ⇐, is characterized by [A] ⇐ [B] if and only if A ⇐ B, where the right-hand-side condition is independent of the choice of representatives A ∈ [A] and B ∈ [B] of the equivalence classes. All that has been said of ⇐ so far can also be said of its converse relation ⇒. The preordered set (S, ⇐) is a directed set because if A, B ∈ S and C := A ∧ B denotes the sentence formed by logical conjunction ∧, then A ⇐ C and B ⇐ C, where C ∈ S. The partially ordered set (S/∼, ⇐) is consequently also a directed set. See Lindenbaum–Tarski algebra for a related example. If reflexivity is replaced with irreflexivity (while keeping transitivity) then we get the definition of a strict partial order on P. For this reason, the term strict preorder is sometimes used for a strict partial order. That is, this is a binary relation < on P that is irreflexive (a < a holds for no a) and transitive (if a < b and b < c then a < c). Any preorder ≲ gives rise to a strict partial order defined by a < b if and only if a ≲ b and not b ≲ a. Using the equivalence relation ∼ introduced above, a < b if and only if a ≲ b and not a ∼ b; and so the following holds: a ≲ b if and only if a < b or a ∼ b. The relation < is a strict partial order and every strict partial order can be constructed this way. If the preorder ≲ is antisymmetric (and thus a partial order) then the equivalence ∼ is equality (that is, a ∼ b if and only if a = b), and so in this case the definition of < can be restated as: a < b if and only if a ≲ b and a ≠ b (assuming ≲ is antisymmetric). But importantly, this new condition is not used as (nor is it equivalent to) the general definition of the relation < (that is, < is not defined as: a < b if and only if a ≲ b and a ≠ b), because if the preorder ≲ is not antisymmetric then the resulting relation < would not be transitive (consider how equivalent non-equal elements relate).
This is the reason for using the symbol "≲" instead of the "less than or equal to" symbol "≤", which might cause confusion for a preorder that is not antisymmetric, since it might misleadingly suggest that a ≤ b implies a < b or a = b. Using the construction above, multiple non-strict preorders can produce the same strict preorder <, so without more information about how < was constructed (such as knowledge of the equivalence relation ∼), it might not be possible to reconstruct the original non-strict preorder from <. Possible (non-strict) preorders that induce the given strict preorder < include the following: If a ≤ b then a ≲ b. The converse holds (that is, ≲ = ≤) if and only if whenever a ≠ b then a < b or b < a. In computer science, one can find examples of the following preorders. Further examples: Example of a total preorder: Every binary relation R on a set S can be extended to a preorder on S by taking the transitive closure and reflexive closure, R^{+=}. The transitive closure indicates path connection in R: x R^+ y if and only if there is an R-path from x to y. Left residual preorder induced by a binary relation: Given a binary relation R, the complemented composition R∖R = ¬(R^T ∘ ¬R) forms a preorder called the left residual, [ 5 ] where R^T denotes the converse relation of R, ¬R denotes the complement relation of R, and ∘ denotes relation composition. If a preorder is also antisymmetric, that is, a ≲ b and b ≲ a implies a = b, then it is a partial order. On the other hand, if it is symmetric, that is, if a ≲ b implies b ≲ a, then it is an equivalence relation. A preorder is total if a ≲ b or b ≲ a for all a, b ∈ P. A preordered class is a class equipped with a preorder. Every set is a class, and so every preordered set is a preordered class. Preorders play a pivotal role in several situations: Note that S(n, k) refers to Stirling numbers of the second kind. As explained above, there is a one-to-one correspondence between preorders and pairs (partition, partial order). Thus the number of preorders is the sum of the number of partial orders on every partition. For example: For a ≲ b, the interval [a, b] is the set of points x satisfying a ≲ x and x ≲ b, also written a ≲ x ≲ b. It contains at least the points a and b.
One may choose to extend the definition to all pairs (a, b). The extra intervals are all empty. Using the corresponding strict relation "<", one can also define the interval (a, b) as the set of points x satisfying a < x and x < b, also written a < x < b. An open interval may be empty even if a < b. Also [a, b) and (a, b] can be defined similarly.
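To make the quotient construction above concrete, the following short Python sketch (an illustration added here, not part of the cited material; the function names are invented for the example) builds the equivalence classes a ∼ b ⇔ a ≲ b and b ≲ a for the divisibility preorder on a small set of integers and then lists the induced partial order on those classes.

```python
from itertools import product

def divides(a, b):
    """The divisibility preorder: a ≲ b iff a divides b (reflexive and transitive)."""
    return b % a == 0

def quotient(elements, preceq):
    """Group elements into ∼-classes (a ∼ b iff a ≲ b and b ≲ a) and return
    the classes together with the induced partial order on class indices."""
    classes = []
    for x in elements:
        for cls in classes:
            rep = cls[0]
            if preceq(x, rep) and preceq(rep, x):
                cls.append(x)
                break
        else:
            classes.append([x])
    # [x] ≤ [y] iff x ≲ y; as noted above, this is well defined on the classes.
    order = [(i, j) for i, j in product(range(len(classes)), repeat=2)
             if preceq(classes[i][0], classes[j][0])]
    return classes, order

# Divisibility on {1, -1, 2, -2, 6}: note 1 ∼ -1 and 2 ∼ -2, echoing the article's example.
classes, order = quotient([1, -1, 2, -2, 6], divides)
print(classes)  # [[1, -1], [2, -2], [6]]
print(order)    # (i, j) pairs with classes[i] ≤ classes[j]: reflexive, antisymmetric, transitive
```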
https://en.wikipedia.org/wiki/Preorder
In mathematics , a preordered class is a class equipped with a preorder . When dealing with a class C , it is possible to define a class relation on C as a subclass of the power class C × {\displaystyle \times } C . Then, it is convenient to use the language of relations on a set. A preordered class is a class with a preorder on it. Partially ordered class and totally ordered class are defined in a similar way. These concepts generalize respectively those of preordered set , partially ordered set and totally ordered set . However, it is difficult to work with them as in the small case because many constructions common in a set theory are no longer possible in this framework. Equivalently, a preordered class is a thin category , that is, a category with at most one morphism from an object to another.
https://en.wikipedia.org/wiki/Preordered_class
According to EN 13523-0, a prepainted metal (or coil coated metal) is a ‘metal on which a coating material (e.g. paint, film…) has been applied by coil coating’. When applied onto the metallic substrate, the coating material (in liquid, paste or powder form) forms a film possessing protective, decorative and/or other specific properties. Over the past 40 years, European prepainted metal production has increased 18-fold. [ 1 ] The choice of metallic substrate [ 2 ] is determined by the dimensional, mechanical and corrosion-resistance properties required of the coated product in use. The most common metallic substrates that are organically coated are: Coil coating is the continuous and highly automated industrial process for efficiently coating metal coils. Because the metal is treated before it is cut and formed, the entire surface is cleaned and treated, providing tightly bonded finishes. (Formed parts can have many holes, recessed areas, valleys, and hidden areas that make it difficult to clean and uniformly paint.) Coil-coated metal (often called prepainted metal) is often considered more durable and more corrosion-resistant than most post-painted metal. [ 3 ] Annually, 4.5 million tons of coil-coated steel and aluminum are produced and shipped in North America, and 5 million tons in Europe. In almost every five-year period since the early 1980s, the growth rate of coil-coated metal has exceeded the growth rate of steel or aluminum production. [ 4 ] The definition of a coil coating process according to EN 10169:2010 is a ‘process in which an (organic) coating material is applied on rolled metal strip in a continuous process which includes cleaning, if necessary, and chemical pre-treatment of the metal surface and either one-side or two-side, single or multiple application of (liquid) paints or coating powders which are subsequently cured or/and laminating with permanent plastic films’. [ 5 ] The metal substrate (steel or aluminum) is delivered in coil form from the rolling mills. Coil weights vary from 5–6 tons for aluminum up to about 25 tons for steel. The coil is positioned at the beginning of the line, then unwound at a constant speed, passing through the various pre-treatment and coating processes before being recoiled. Two strip accumulators at the beginning and the end of the line enable the work to be continuous, allowing new coils to be added (and finished coils removed) by a metal stitching process without slowing down or stopping the line. The continuous process of applying up to three separate coating layers onto one or both sides of a metal strip substrate occurs on a coil coating line. These lines vary greatly in size, with widths from 18 to 60 inches (46 to 152 cm) and speeds from 100 to 700 feet per minute (0.5 to 3.6 m/s); however, all coil-coating lines share the same basic process steps. [ 6 ] A typical organic coil coating line consists of decoilers, entry strip accumulator, cleaning, chemical pretreatment, primer coat application, curing, final coat application, curing, exit accumulator and recoilers. The following steps take place on a modern coating line: Available coatings include polyesters, plastisols, polyurethanes, polyvinylidene fluorides (PVDF), epoxies, primers, backing coats and laminate films. For each product, the coating is built up in a number of layers. Primer coatings form the essential link between the pretreatment and the finish coating.
Essentially, a primer is required to provide inter-coat adhesion between the pretreatment and the finish coat, and is also required to promote corrosion resistance in the total system. The composition of the primer will vary depending on the type of finish coat used. Primers require compatibility with various pretreatments and top-coat paint systems; therefore, they usually comprise a mixture of resin systems to achieve this end. Backing coats are applied to the underside of the strip, with or without a primer. The coating is generally not as thick as the finish coating used for exterior applications. Backing coats are generally not exposed to corrosive environments and are not visible in the end application. [ 7 ] Prepainted metal is used in a variety of products. It can be formed for many different applications, including those with T bends, without loss of coating quality. Major industries use prepainted metal in products such as building panels, [ 8 ] metal roofs, [ 9 ] wall panels, garage doors, office furniture (desks, cubicle divider panels, file cabinets, and modular cabinets), home appliances (refrigerators, dishwashers, freezers, range hoods, microwave ovens, and washers and dryers), heating and air-conditioning outer panels and ductwork, commercial appliances, vending machines, foodservice equipment and cooking tins, beverage cans, and automotive panels and parts (fuel tanks, body panels, bumpers). The list continues to grow, with new industries making the switch from post-painted to prepainted processes each year. [ 10 ] Some high-tech, complex coatings are applied with the coil coating process. [ 11 ] Coatings for cool metal roofing materials, smog-eating building panels, antimicrobial products, anti-corrosive metal parts, and solar panels use this process. Pretreatments and coatings can be applied with the coil coating process in very precise, thin, uniform layers, which makes some complex coatings feasible and more cost-effective. The largest market for prepainted metal is in both commercial and residential construction. [ 12 ] It is chosen for its quality, low cost, design flexibility, and environmentally beneficial properties. Using prepainted metal can contribute credit toward LEED certification for sustainable design. A wide range of color options is available with prepainted metal, including vibrant colors for modern designs and natural weathered finishes for rustic expressions. Prepainted metal can also be formed, almost like plastic, into fluid shapes. This flexibility allows architects to achieve unique, expressive designs using metal. [ 13 ] The output of the coil coating industry is a prepainted metal strip. This has numerous applications in various industries, including in: In the days of traditional manufacturing, steel and other metals arrived at factories in an untreated and unpainted state. Companies would fabricate and paint or treat the metal components of their product before assembly. This was costly, time-consuming, and environmentally harmful. The coil coating process was pioneered in the 1930s for painting, coating and pre-treating large coils of metal before they arrived at a manufacturing facility. The venetian blind industry was the first to utilize pre-painted metal. [ 15 ]
https://en.wikipedia.org/wiki/Prepainted_metal
In coding theory, the Preparata codes form a class of non-linear double-error-correcting codes. They are named after Franco P. Preparata, who first described them in 1968. Although non-linear over GF(2), the Preparata codes are linear over Z4 with the Lee distance. Let m be an odd number, and n = 2^m − 1. We first describe the extended Preparata code of length 2n + 2 = 2^(m+1): the Preparata code is then derived by deleting one position. The words of the extended code are regarded as pairs (X, Y) of 2^m-tuples, each corresponding to subsets of the finite field GF(2^m) in some fixed way. The extended code contains the words (X, Y) satisfying three conditions. The Preparata code is obtained by deleting the position in X corresponding to 0 in GF(2^m). The Preparata code is of length 2^(m+1) − 1, size 2^k where k = 2^(m+1) − 2m − 2, and minimum distance 5. When m = 3, the Preparata code of length 15 is also called the Nordstrom–Robinson code.
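As a quick numerical check of the parameters just quoted (illustrative only; the helper name is invented for this sketch), the following Python snippet tabulates the length, the exponent k, and the code size for small odd m; m = 3 reproduces the length-15 Nordstrom–Robinson parameters.

```python
def preparata_parameters(m):
    """Parameters of the (punctured) Preparata code for odd m >= 3:
    length 2^(m+1) - 1, size 2^k with k = 2^(m+1) - 2m - 2, minimum distance 5."""
    if m < 3 or m % 2 == 0:
        raise ValueError("m must be odd and at least 3")
    length = 2 ** (m + 1) - 1
    k = 2 ** (m + 1) - 2 * m - 2
    return {"length": length, "k": k, "size": 2 ** k, "min_distance": 5}

for m in (3, 5, 7):
    print(m, preparata_parameters(m))
# m = 3 gives length 15, k = 8, size 256 -- the Nordstrom-Robinson code.
```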
https://en.wikipedia.org/wiki/Preparata_code
Preplasmiviricota is a phylum of viruses . [ 1 ] Its name means "precursor of certain plasmids". [ 2 ] The phylum contains two subphyla that contain five classes. Subphyla are suffixed with - viricotina , and classes are suffixed with - viricetes . This taxonomy is shown hereafter. [ 1 ]
https://en.wikipedia.org/wiki/Preplasmiviricota
In polymer chemistry, the term prepolymer or pre-polymer refers to a monomer or system of monomers that has been reacted to an intermediate-molecular-mass state. This material is capable of further polymerization by reactive groups to a fully cured, high-molecular-mass state. As such, mixtures of reactive polymers with un-reacted monomers may also be referred to as pre-polymers. The terms "pre-polymer" and "polymer precursor" may be used interchangeably. [ 1 ] In polyurethane chemistry, prepolymers and oligomers are frequently produced and then further formulated into CASE applications: Coatings, Adhesives, Sealants, and Elastomers. An isocyanate (usually a diisocyanate) is reacted with a polyol. All types of polyol may in theory be used to produce polyurethane prepolymers. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] These then find use in CASE applications. When polyurethane dispersions are synthesized, a prepolymer is first produced, usually modified with DMPA. In polyurea prepolymer production, a polyamine is used instead of a polyol. [ 7 ] Two molecules of lactic acid can be dehydrated to the cyclic molecule lactide, a lactone. A variety of catalysts can polymerise lactide to either heterotactic or syndiotactic polylactide, which, as biodegradable polyesters with valuable (inter alia) medical properties, are currently attracting much attention. [ 8 ] Today, lactic acid is used as a monomer for producing polylactic acid (PLA), which finds application as a biodegradable plastic. [ 9 ] This kind of plastic is a good substitute for conventional plastics produced from petrochemicals because of its lower carbon dioxide emissions. Lactic acid is commonly produced by fermentation; polymerization of the resulting monomer then yields polylactic acid.
https://en.wikipedia.org/wiki/Prepolymer
A preprohormone is the precursor protein to one or more prohormones , which are in turn precursors to peptide hormones . [ 1 ] In general, the protein consists of the amino acid chain that is created by the hormone -secreting cell , before any changes have been made to it. It contains a signal peptide , the hormone (s) itself (themselves), and intervening amino acids. Before the hormone is released from the cell, the signal peptide and other amino acids are removed. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Preprohormone
Presburger arithmetic is the first-order theory of the natural numbers with addition , named in honor of Mojżesz Presburger , who introduced it in 1929. The signature of Presburger arithmetic contains only the addition operation and equality , omitting the multiplication operation entirely. The theory is computably axiomatizable; the axioms include a schema of induction . Presburger arithmetic is much weaker than Peano arithmetic , which includes both addition and multiplication operations. Unlike Peano arithmetic, Presburger arithmetic is a decidable theory . This means it is possible to algorithmically determine, for any sentence in the language of Presburger arithmetic, whether that sentence is provable from the axioms of Presburger arithmetic. The asymptotic running-time computational complexity of this algorithm is at least doubly exponential , however, as shown by Fischer & Rabin (1974) . The language of Presburger arithmetic contains constants 0 and 1 and a binary function +, interpreted as addition. In this language, the axioms of Presburger arithmetic are the universal closures of the following: (5) is an axiom schema of induction , representing infinitely many axioms. These cannot be replaced by any finite number of axioms; that is, Presburger arithmetic is not finitely axiomatizable in first-order logic. [ 1 ] Presburger arithmetic can be viewed as a first-order theory with equality containing precisely all consequences of the above axioms. Alternatively, it can be defined as the set of those sentences that are true in the intended interpretation : the structure of non-negative integers with constants 0, 1, and the addition of non-negative integers. Presburger arithmetic is designed to be complete and decidable. Therefore, it cannot formalize concepts such as divisibility or primality , or, more generally, any number concept leading to multiplication of variables. However, it can formulate individual instances of divisibility; for example, it proves "for all x , there exists y : ( y + y = x ) ∨ ( y + y + 1 = x )". This states that every number is either even or odd. Presburger (1929) proved Presburger arithmetic to be consistent, complete, and decidable. The decidability of Presburger arithmetic can be shown using quantifier elimination , supplemented by reasoning about arithmetical congruence . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The steps used to justify a quantifier elimination algorithm can be used to define computable axiomatizations that do not necessarily contain the axiom schema of induction. [ 2 ] [ 7 ] In contrast, Peano arithmetic , which is Presburger arithmetic augmented with multiplication, is not decidable, as proved by Church alongside the negative answer to the Entscheidungsproblem . By Gödel's incompleteness theorem , Peano arithmetic is incomplete and its consistency is not internally provable (but see Gentzen's consistency proof ). The decision problem for Presburger arithmetic is an interesting example in computational complexity theory and computation . Let n be the length of a statement in Presburger arithmetic. Then Fischer & Rabin (1974) proved that, in the worst case, the proof of the statement in first-order logic has length at least 2^(2^(cn)) for some constant c > 0. Hence, their decision algorithm for Presburger arithmetic has runtime at least exponential. Fischer and Rabin also proved that for any reasonable axiomatization (defined precisely in their paper), there exist theorems of length n that have doubly exponential length proofs.
Fischer and Rabin's work also implies that Presburger arithmetic can be used to define formulas that correctly calculate any algorithm as long as the inputs are less than relatively large bounds. The bounds can be increased, but only by using new formulas. On the other hand, a triply exponential upper bound on a decision procedure for Presburger arithmetic was proved by Oppen (1978) . A tighter complexity bound was shown using alternating complexity classes by Berman (1980) . The set of true statements in Presburger arithmetic (PA) is complete for TimeAlternations(2^(2^(n^O(1))), n). Thus, its complexity is between double exponential nondeterministic time (2-NEXP) and double exponential space (2-EXPSPACE). Completeness is under Karp reductions . (Also, note that while Presburger arithmetic is commonly abbreviated PA, in mathematics in general PA usually means Peano arithmetic .) For a more fine-grained result, let PA(i) be the set of true Σ_i PA statements, and PA(i, j) the set of true Σ_i PA statements with each quantifier block limited to j variables. '<' is considered to be quantifier-free; here, bounded quantifiers are counted as quantifiers. PA(1, j) is in P, while PA(1) is NP-complete. [ 8 ] For i > 0 and j > 2, PA(i + 1, j) is Σ_i^P-complete . The hardness result only needs j > 2 (as opposed to j = 1) in the last quantifier block. For i > 0, PA(i + 1) is Σ_i^EXP-complete . [ 9 ] Short Σ_n Presburger arithmetic (n > 2) is Σ_{n−2}^P-complete (and thus NP-complete for n = 3). Here, 'short' requires bounded (i.e. O(1)) sentence size except that integer constants are unbounded (but their number of bits in binary counts against input size). Also, Σ_2 two-variable PA (without the restriction of being 'short') is NP-complete. [ 10 ] Short Π_2 (and thus Σ_2 ) PA is in P, and this extends to fixed-dimensional parametric integer linear programming. [ 11 ] Because Presburger arithmetic is decidable, automatic theorem provers for Presburger arithmetic exist. For example, the Coq and Lean proof assistant systems feature the tactic omega for Presburger arithmetic, and the Isabelle proof assistant contains a verified quantifier elimination procedure by Nipkow (2010) . The double exponential complexity of the theory makes it infeasible to use the theorem provers on complicated formulas, but this behavior occurs only in the presence of nested quantifiers: Nelson & Oppen (1978) describe an automatic theorem prover that uses the simplex algorithm on an extended Presburger arithmetic without nested quantifiers to prove some instances of quantifier-free Presburger arithmetic formulas. More recent satisfiability modulo theories solvers use complete integer programming techniques to handle the quantifier-free fragment of Presburger arithmetic. [ 12 ] Presburger arithmetic can be extended to include multiplication by constants, since multiplication is repeated addition. Most array subscript calculations then fall within the region of decidable problems. [ 13 ] This approach is the basis of at least five [ citation needed ] proof-of-correctness systems for computer programs , beginning with the Stanford Pascal Verifier in the late 1970s and continuing through to Microsoft's Spec# system of 2005. Some properties are now given about integer relations definable in Presburger arithmetic.
For the sake of simplicity, all relations considered in this section are over non-negative integers. A relation is Presburger-definable if and only if it is a semilinear set . [ 14 ] A unary integer relation R, that is, a set of non-negative integers, is Presburger-definable if and only if it is ultimately periodic. That is, if there exists a threshold t ∈ ℕ and a positive period p ∈ ℕ^{>0} such that, for all integers n with |n| ≥ t, n ∈ R if and only if n + p ∈ R. By the Cobham–Semenov theorem , a relation is Presburger-definable if and only if it is definable in Büchi arithmetic of base k for all k ≥ 2. [ 15 ] [ 16 ] A relation definable in Büchi arithmetic of base k and k′, for k and k′ multiplicatively independent integers, is Presburger-definable. An integer relation R is Presburger-definable if and only if all sets of integers that are definable in first-order logic with addition and R (that is, Presburger arithmetic plus a predicate for R) are Presburger-definable. [ 17 ] Equivalently, for each relation R that is not Presburger-definable, there exists a first-order formula with addition and R that defines a set of integers that is not definable using only addition. Presburger-definable relations admit another characterization: by Muchnik's theorem. [ 18 ] It is more complicated to state, but led to the proof of the two former characterizations. Before Muchnik's theorem can be stated, some additional definitions must be introduced. Let R ⊆ ℕ^d be a set; the section x_i = j of R, for i < d and j ∈ ℕ, is defined as
Given two sets R, S ⊆ ℕ^d and a d-tuple of integers (p_0, …, p_{d−1}) ∈ ℕ^d, the set R is called (p_0, …, p_{d−1})-periodic in S if, for all (x_0, …, x_{d−1}) ∈ S such that (x_0 + p_0, …, x_{d−1} + p_{d−1}) ∈ S, then (x_0, …, x_{d−1}) ∈ R if and only if (x_0 + p_0, …, x_{d−1} + p_{d−1}) ∈ R. For s ∈ ℕ, the set R is said to be s-periodic in S if it is (p_0, …, p_{d−1})-periodic for some (p_0, …, p_{d−1}) ∈ ℤ^d such that
Finally, for k, x_0, …, x_{d−1} ∈ ℕ, let
denote the cube of size k whose lesser corner is (x_0, …, x_{d−1}).
Muchnik's Theorem — R ⊆ ℕ^d is Presburger-definable if and only if:
Intuitively, the integer s represents the length of a shift, the integer k is the size of the cubes and t is the threshold before the periodicity. This result remains true when the condition is replaced either by min(x_0, …, x_{d−1}) > t or by max(x_0, …, x_{d−1}) > t. This characterization led to the so-called "definable criterion for definability in Presburger arithmetic", that is: there exists a first-order formula with addition and a d-ary predicate R that holds if and only if R is interpreted by a Presburger-definable relation. Muchnik's theorem also allows one to prove that it is decidable whether an automatic sequence accepts a Presburger-definable set.
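The article above does not tie the decidability results to any particular tool, but as an illustrative sketch (assuming the Z3 SMT solver and its Python bindings, the z3-solver package, are available), small Presburger sentences can be checked mechanically, for instance the "every number is even or odd" sentence quoted earlier:

```python
# Hedged sketch: Z3 decides quantified linear integer arithmetic, which covers
# Presburger sentences such as the "every number is even or odd" example above.
from z3 import Ints, Solver, ForAll, Exists, And, Or, Not, Implies, sat, unsat

x, y = Ints("x y")

# For all x >= 0 there exists y >= 0 with y + y = x or y + y + 1 = x
# (the non-negativity constraints model the natural numbers inside Z3's integers).
even_or_odd = ForAll([x], Implies(x >= 0,
                  Exists([y], And(y >= 0, Or(y + y == x, y + y + 1 == x)))))

s = Solver()
s.add(Not(even_or_odd))     # validity check: the negation should be unsatisfiable
print(s.check() == unsat)   # expected: True

# A quantifier-free query of the kind SMT solvers handle routinely;
# multiplication by constants stays within Presburger arithmetic.
s = Solver()
s.add(3 * x + 2 * y == 11, x >= 0, y >= 0)
if s.check() == sat:
    print(s.model())        # e.g. x = 3, y = 1
```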
https://en.wikipedia.org/wiki/Presburger_arithmetic
Presbyosmia is the gradual degeneration of the sense of smell due to the ageing process, occurring especially in people aged 70 or older. It is possibly due to a loss of nerve endings in the nose, as well as reduced mucus production . Presbyosmia is less prevalent among elderly people who are healthy and who lack the risk factors for smell disorders . Other factors among the elderly that can affect the sense of smell are medication use and some neurological disorders; in these cases the loss of smell can be much more noticeable. There is currently no established treatment for this condition. [ 1 ]
https://en.wikipedia.org/wiki/Presbyosmia
In Riemannian geometry , a branch of mathematics , the prescribed Ricci curvature problem is as follows: given a smooth manifold M and a symmetric 2-tensor h , construct a metric on M whose Ricci curvature tensor equals h .
https://en.wikipedia.org/wiki/Prescribed_Ricci_curvature_problem
In Riemannian geometry , a branch of mathematics , the prescribed scalar curvature problem is as follows: given a closed , smooth manifold M and a smooth, real-valued function ƒ on M , construct a Riemannian metric on M whose scalar curvature equals ƒ . Due primarily to the work of Jerry Kazdan and Frank Wilson Warner in the 1970s, this problem is well understood. If the dimension of M is three or greater, then any smooth function ƒ which takes on a negative value somewhere is the scalar curvature of some Riemannian metric. The assumption that ƒ be negative somewhere is needed in general, since not all manifolds admit metrics which have strictly positive scalar curvature. (For example, the three-dimensional torus is such a manifold.) However, Kazdan and Warner proved that if M does admit some metric with strictly positive scalar curvature, then any smooth function ƒ is the scalar curvature of some Riemannian metric.
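Restating the Kazdan–Warner results just described in symbols (nothing beyond the prose above is claimed):

```latex
\text{Let } M \text{ be a closed smooth manifold with } \dim M \ge 3.\\
(1)\quad f \in C^{\infty}(M),\ \min_M f < 0 \;\Longrightarrow\; \exists\, g \text{ Riemannian with } \operatorname{Scal}_g = f.\\
(2)\quad \exists\, g_0 \text{ with } \operatorname{Scal}_{g_0} > 0 \;\Longrightarrow\; \forall\, f \in C^{\infty}(M)\ \exists\, g \text{ with } \operatorname{Scal}_g = f.
```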
https://en.wikipedia.org/wiki/Prescribed_scalar_curvature_problem
Prescription cascade is the process whereby the side effects of drugs are misdiagnosed as symptoms of another problem, resulting in further prescriptions and further side effects and unanticipated drug interactions , which may in turn lead to further symptoms and further misdiagnoses. This is a pharmacological example of a feedback loop . Such cascades can be reversed through deprescribing . Over the past 20 years, spending on prescription drugs has increased drastically. This can be attributed to several factors: increased diagnosis of chronic conditions; the use of numerous medications by the older population; and a rise in the incidence of obesity, which has meant an increase in chronic conditions such as diabetes and hypertension. As each condition is treated with a specific drug, a correlating side-effect of each drug comes into play. If a doctor fails to acknowledge all the drugs that a patient is taking, an adverse drug reaction may be misinterpreted as a new medical condition. Another drug is prescribed to treat the new condition, and an adverse drug side-effect occurs that is again mistakenly diagnosed as a new medical condition. Thus the patient is at risk of developing additional adverse effects. [ 1 ] The most frequent medical intervention performed by a doctor is the writing of a prescription. Because chronic illness increases with advancing age, older people are more likely to have conditions that require drug treatment, and they are more likely to suffer the effects of a prescription cascade. [ 2 ] A prescriber can do little to modify age-related physiological changes when trying to minimize the likelihood that an older person will develop an adverse drug reaction. However, when assessing a patient who is already taking drugs, a doctor should always consider the development of any new signs and symptoms as a possible consequence of the patient's drug treatment. Polypharmacy is the use of numerous medications at the same time (from the root "multiple pharmacies"). As people age, various health conditions may arise and must be treated. Suffering a range of issues from short-term medical conditions to chronic conditions like diabetes or high blood pressure, the older patient may be taking a variety of drugs at one time. A review in 2010 found that the average 81-year-old takes 15 different medications at the same time, ranging from 6 to 28 medications. It also found approximately 8.9 drug-related problems per patient in the study, ranging from 3 to 19 problems. [ 3 ] The review found that patients were commonly taking medications that they no longer needed. More specifically, work from Australia has identified that 16% of older people use medicines that are part of a prescribing cascade. [ 4 ]
https://en.wikipedia.org/wiki/Prescription_cascade
A preselector is a name for an electronic device that connects between a radio antenna and a radio receiver . The preselector is a band-pass filter that blocks troublesome out-of-tune frequencies from passing through from the antenna into the radio receiver (or preamplifier ) that otherwise would be directly connected to the antenna. A preselector improves the performance of nearly any receiver, but is especially helpful [ a ] to receivers with broadband front-ends that are prone to overload , such as scanners , wideband software-defined radio receivers, ordinary consumer-market shortwave and AM broadcast receivers – particularly with receivers operating below 10~20 MHz where static is pervasive. Sometimes faint signals that occupy a very narrow frequency span (such as radiotelegraph or 'CW' ) can be heard more clearly if the receiving bandwidth is made narrower than the narrowest that a general-purpose receiver may be able to tune; likewise, signals which individually use a fairly wide span of frequencies, such as broadcast AM, can be made less noisy by narrowing the bandwidth of the signal, even though making the span of received frequencies narrower than was transmitted will sacrifice some audio fidelity . A good preselector often can reduce a radio's receive bandwidth to a narrower frequency span than many general-purpose radios can manage on their own. A preselector typically is tuned to have a narrow bandwidth , centered on the receiver's operating frequency. The preselector passes through unchanged the signal on its tuned frequency (or only slightly diminished) but it reduces or removes off-frequency signals, cutting down or eliminating unwanted interference . [ b ] Extra filtering can be useful because the first input stage ("front end") of receivers contains at least one RF amplifier , which has power limits (" dynamic range "). Most radios' front ends amplify all radio frequencies delivered to the antenna connection. So off-frequency signals constitute a load on the RF amplifier, wasting part of its dynamic range on unwanted and unused signals. "Limited dynamic range " means that the amplifier circuits have a limit to the total amount of incoming RF signal they can amplify without overloading ; symptoms of overload are nonlinearity (" distorted timbre ") and ultimately clipping (" buzz "). When the front-end overloads the performance of the receiver is severely reduced, and in extreme cases can damage the receiver. [ 2 ] In situations with noisy and crowded bands, or where there is loud interference from nearby, high-power stations, the dynamic range of the receiver can quickly be exceeded. Extra filtering by the preselector limits frequency range and power demands that are applied to all later stages of the receiver, only loading it with the desired signals within the preselector's pass-band. Similar to conventional radios, spectrum analyzers , heavy-duty network analyzers , and other RF measuring equipment can incorporate switchable banks of preselector circuits to reject out-of-band signals that could result in spurious signals at the frequencies being analyzed. [ 3 ] Automatically switched filter banks can likewise be incorporated into various broadband, general purpose receivers . 
A preselector may be engineered with extra features, so that in addition to attenuating interference from unwanted frequencies it can provide additional services which may be helpful for a receiver: None of these extra conveniences are necessary for the function of preselection, and in particular, for the typical noisy frequency bands where a preselector is needed, an amplifier in the preselector has no useful function. On the other hand, when an antenna preamplifier (preamp) is actually needed, it can be made "tunable" by incorporating a front-end preselector circuit to improve its performance. The integrated device is both a preamplifier and a preselector, and either name is correct. This ambiguity sometimes leads to confusion – conflating preselection with amplification . Ordinary, regular preselectors (that are just preselectors) contain no amplifier: They are entirely passive devices. A standard, ordinary preselector sometimes has the word "passive" prefixed – hence "passive preselector" means "standard preselector" . The adjective is redundant, but emphasizes to those only familiar with tunable preamplifiers that the preselector is ordinary, with no internal amplifier, and does not require a power source. Since all standard preselectors are "passive" , adding the redundant word is pedantic; in the noisy longwave , mediumwave , and shortwave bands where preselectors are typically used, they function with "modern" (post-1940) receivers with no noticeable loss of signal strength. With all preselectors there is some very small loss at the tuned frequency; usually, most of the loss is in the inductor (the tuning coil). Turning up the inductance gives the preselector a narrower bandwidth (or higher Q , or greater selectivity) and slightly raises the loss, which nonetheless remains very small. Most preselectors have separate settings for one inductor and one capacitor (at least). [ c ] So with at least two adjustments available to tune to just one frequency, there are often a variety of possible settings that will tune the preselector to frequencies in its middle-range. For the narrowest bandwidth (highest Q ), the preselector is tuned using the highest inductance and lowest capacitance for the desired frequency, but this produces the greatest loss. It also requires retuning the preselector more often while searching for faint signals, to keep the preselector's pass band overlapping the radio's receiving frequency. For the lowest loss (and widest bandwidth), the preselector is tuned using the lowest viable inductance and highest capacitance (and the lowest Q , or least selectivity) for the desired frequency range. The wider bandwidth allows more interference through from nearby frequencies, but reduces the need to retune the preselector while tuning the receiver, since any one low-inductance setting for the preselector will pass a broader span of nearby frequencies. Although a preselector is placed between the radio and the antenna, in the same electrical location as a feedline matching unit , it serves a different purpose: A transmatch or "antenna" tuner connects two transmission lines with different impedances and only incidentally blocks out-of-tune frequencies (if it blocks any at all).
A transmatch matches transmitter impedance to feedline impedance and phase, so that signal power from the radio transmitter smoothly transfers into the antenna's feed cable; a properly adjusted transmatch prevents transmitted power from being reflected back into the transmitter ( "backlash current" ). Some antenna tuner circuits can both impedance match and preselect, [ 4 ] for example the Series Parallel Capacitor (SPC) tuner , and many 'tuned-transformer'-type matching circuits used in many balanced line tuners (BLT) can be adjusted to also function as band-pass filters . [ d ]
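To put numbers on the inductance/capacitance trade-off described above, here is a small Python sketch using the standard resonance relations f0 = 1/(2π√(LC)) and Q = (1/r)√(L/C) for a resonator whose only modelled loss is the coil's series resistance r; all component values are purely illustrative.

```python
import math

def lc_preselector(L_henry, C_farad, r_ohm):
    """Centre frequency, coil-limited Q, and 3 dB bandwidth of a simple LC
    resonator whose only loss is the coil's series resistance r (toy model)."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))
    q = math.sqrt(L_henry / C_farad) / r_ohm      # Q = (1/r) * sqrt(L/C)
    return f0, q, f0 / q

# Two ways to tune roughly the same frequency (~7.1 MHz):
# high L / low C gives a higher Q and narrower bandwidth (in practice the
# larger coil also has more series resistance, raising the loss slightly,
# as noted above); low L / high C gives a lower Q and wider bandwidth.
for L, C in [(10e-6, 50e-12), (1e-6, 500e-12)]:
    f0, q, bw = lc_preselector(L, C, r_ohm=5.0)
    print(f"L={L*1e6:.0f} uH  C={C*1e12:.0f} pF  "
          f"f0={f0/1e6:.2f} MHz  Q={q:.0f}  BW={bw/1e3:.0f} kHz")
```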
https://en.wikipedia.org/wiki/Preselector
In computer and telecommunications networks , presence information is a status indicator that conveys ability and willingness of a potential communication partner—for example a user —to communicate . A user's client provides presence information (presence state) via a network connection to a presence service , which is stored in what constitutes his personal availability record (called a presentity ) and can be made available for distribution to other users (called watchers ) to convey their availability for communication. Presence information has wide application in many communication services and is one of the innovations driving the popularity of instant messaging or recent implementations of voice over IP clients. A user client may publish a presence state to indicate its current communication status. This published state informs others that wish to contact the user of his availability and willingness to communicate. The most common use of presence today is to display an indicator icon on instant messaging clients, typically from a choice of graphic symbols with easy-to-convey meanings, and a list of corresponding text descriptions of each of the states. Even when technically not the same, the "on-hook" or "off-hook" state of called telephone is an analogy, as long as the caller receives a distinctive tone indicating unavailability or availability. Common states on the user's availability are "free for chat", "busy", "away", " do not disturb ", "out to lunch". Such states exist in many variations across different modern instant messaging clients. Current standards support a rich choice of additional presence attributes that can be used for presence information, such as user mood, location, or free text status. The analogy with free/busy tone on PSTN is inexact, as the "on-hook" telephone status reflects the ability of the network to reach the recipient after the requester has initiated the conversation. The requester must commit to the connection method before discovering the recipient's availability state. Conversely, Presence shows the availability state before a conversation is initiated. A similar comparison might be the requester needing to know if the recipient is at work. The most straightforward way of checking if the recipient is available is to walk to the desk, which requires the commitment of the walk regardless of the outcome and usually requires some interaction if the recipient is at the desk. The requester can call first to save the walk, but now must commit to an interaction via phone. Presence gives the state of the recipient to the requester and the requester has the choice to interact with the recipient or use that information for non-interactive purposes (such as taking roll). Presence becomes interesting for communication systems when it spans a number of different communication channels. The idea that multiple communication devices can combine state, to provide an aggregated view of a user's presence has been termed Multiple Points of Presence (MPOP). MPOP becomes even more powerful when it is automatically inferred from passive observation of a user's actions. This idea is already familiar to instant messaging users who have their status set to "Away" (or equivalent) if their computer keyboard is inactive for some time. Extension to other devices could include whether the user's cell phone is on, whether they are logged into their computer, or perhaps checking their electronic calendar to see if they are in a meeting or on vacation. 
For example, if a user's calendar was marked as out of office and their cell phone was on, they might be considered in a "Roaming" state. MPOP status can then be used to automatically direct incoming messages across all contributing devices. For example, "Out of office" might translate to a system directing all messages and calls to the user's cell phone. The status "Do not disturb" might automatically save all messages for later and send all phone calls to voicemail. XMPP, discussed below, allows for MPOP by assigning each client a "resource" (a specific identifier) and a priority number for each resource. A message directed to the user's ID would go to the resource with highest priority, although messaging a specific resource is possible by using the form user@domain/resource. Presence is highly sensitive information and in non-trivial systems a presentity may define limits to which its presence information may be revealed to different watchers. For example, a worker may only want colleagues to see detailed presence information during office hours. Basic versions of this idea are already common in instant messaging clients as a "Blocking" facility, where users can appear as unavailable to selected watchers. Presence, particularly MPOP, requires collaboration between a number of electronic devices (for example IM client, home phone, cell phone, and electronic calendar) and the presence services each of them are connected with. To date, the most common and wide-scale implementations use closed systems, with a SPOP (Single Point of Presence, where a single device publishes state). Some vendors have upgraded their services to automatically log out connected clients when a new login request reaches the server from a newly connecting different device. For presence to universally work with MPOP support, multiple devices must be able to not only intercommunicate among each other, the status information must also be appropriately handled by all other interoperable, connected presence services and the MPOP scheme for their clients. 2.5G and, even more so, 3G cell phone networks can support management and access of presence information services for mobile users cell phone handsets. In the workplace, private messaging servers offer the possibility of MPOP within a company or work team. Presence information is a growing tool towards more effective and efficient communication within a business setting. Presence information allows you to instantly see who is available in your corporate network, giving more flexibility to set up short-term meetings and conference calls. The result is precise communication that all but eliminates the inefficiency of phone tag or email messaging. An example of the time-saving aspect of presence information is a driver with a GPS; he/she can be tracked and sent messages on upcoming traffic patterns that, in return, save time and money. According to IDC surveys, employees "often feel that IM gives their workdays the kind of 'flow' that they feel when sitting directly among their colleagues, being able to ask questions of them, and getting the kind of quick responses that allow them to drive on to the next task". This phenomenon has been called the "Presence Effect" [ 1 ] in contrast to its predecessor the " water cooler " effect, whereby this level of flow was only thought to be achieved in person. With presence information, privacy of the users can become an issue. 
For example, when an employee is on his/her day off they are still connected to the network and have greater ability to be tracked down. Therefore, a concern of presence information is to determine how far the companies want to go with staying connected. There was, and still is, significant work done in several working groups on achieving a standardization for presence-related protocols. In 1999, a group called the Instant Message and Presence Protocol (IMPP) working group (WG), was formed within the Internet Engineering Task Force organization (IETF) in order to develop protocols and data formats for simple presence and instant messaging services. Unfortunately, IMPP WG was not able to come to consensus on a single protocol for presence. Instead it issued a common profile for presence and instant messaging (CPP) which defined semantics for common services of presence to facilitate the creation of gateways between presence services. Thus any two CPP-compatible presence protocol suites are automatically interoperable. In 2001, the SIMPLE working group was formed within IETF to develop a suite of CPP-compliant standards for presence and instant messaging applications over the Session Initiation Protocol ( SIP ). The SIMPLE activity specifies extensions to the SIP protocol which deal with a publish and subscribe mechanism for presence information and sending instant messages. These extensions include rich presence document formats, privacy control, "partial publications" and notifications, past and future presence, watcher information and more. Despite its name, SIMPLE is far from simple. It is described in about 30 documents on more than 1,000 pages. This is in addition to the complexity of the SIP protocol stack on which SIMPLE is based. At the end of 2001, Nokia, Motorola, and Ericsson formed the Wireless Village (WV) initiative to define a set of universal specifications for mobile Instant Messaging and Presence Services (IMPS) and presence services for wireless networks. In October 2002, Wireless Village was consolidated into the Open Mobile Alliance (OMA) and a month later released the first version of the XML -based OMA Instant Messaging and Presence Service (IMPS). IMPS defines a system architecture, syntax, and semantics for representation of presence information and a set of protocols for the four primary features: presence, IM, groups, and shared content. Presence is the key, enabling technology for the IMPS. The XML-based XMPP or Extensible Messaging and Presence Protocol was designed and is currently maintained by the XMPP Standards Foundation . This IM protocol, which is a robust and widely extended protocol, is also the protocol used in the commercial implementation of Google Talk and Facebook Chat . In October 2004, the XMPP working group at IETF published the documents RFC 3920, RFC 3921, RFC 3922 and RFC 3923, to standardize the core XMPP protocol.
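As a toy illustration of the XMPP resource-and-priority mechanism mentioned above (all names, priorities, and states here are hypothetical, not taken from any specification), the following Python sketch routes a message addressed to a bare user ID to the connected resource with the highest priority, or queues it when no resource is online.

```python
def route_message(bare_jid, presence, message):
    """Deliver a message addressed to user@domain to the available resource
    with the highest priority; otherwise queue it for later (toy MPOP model)."""
    online = [r for r in presence.get(bare_jid, []) if r["show"] != "offline"]
    if not online:
        return ("store-offline", message)
    best = max(online, key=lambda r: r["priority"])
    return (f"{bare_jid}/{best['resource']}", message)

# Hypothetical presence data published by two of the user's clients.
presence = {
    "alice@example.org": [
        {"resource": "desktop", "priority": 10, "show": "away"},
        {"resource": "mobile",  "priority": 5,  "show": "available"},
    ]
}

print(route_message("alice@example.org", presence, "Lunch?"))
# ('alice@example.org/desktop', 'Lunch?') -- the highest-priority resource wins
```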
https://en.wikipedia.org/wiki/Presence_information
A presence sensing device (PSD) is a safety device for press brakes and similar metal-bending machines. The device operator often holds the sheet metal work-piece in one place while another portion of the piece is being formed in the die . If a foreign object is detected, the PSD immediately retracts the die or stops the motion of the ram. PSDs protect the operator and other employees in the area. [ 1 ] One category of presence sensing devices is Photoelectric Sensors . Light Curtains also fall into this category. Light curtains use many infrared light beams to form a perimeter around machinery. When two or more consecutively adjacent beams are interrupted, a kill-switch stops the machine until the boundary is reset. Light curtains must be placed in front of the work area. This makes it difficult for press brake operators to work on small parts. One cannot help but disrupt the beam. [ 2 ] The operator might "mute" the light curtain in order to get the job done. Certain parts of the beam can be muted. For example, muting the front and rear of the beam allows the middle to offer continued protection for the operator. [ 3 ] Additionally, it may be necessary to use auxiliary light beams if the operator will reach between the main light beams and the edge of certain machines. [ 4 ] Electronic safety devices use lasers or cameras to sense a foreign object in the vicinity of the press brake. They are less obtrusive than other safety options, which means operators are less opposed to using them. After some contention by the Occupational Safety and Health Administration (OSHA) , an electronic safety device can fall under the PSD umbrella. One such device is the Laser Sentry press brake safety device designed by Glen Koedding in 2003. The concept was challenged with OSHA by a competitor almost immediately. OSHA responded by issuing a letter of disapproval stating that the Laser Sentry did not meet the “safe distance” rule. The rule states that a presence sensing protective device must be a minimum of 6 inches away from the nearest pinch point. However, after further observation, Laser Sentry was deemed a PSD in 2004, when used in conjunction with hydraulic press brakes. [ 5 ] Cameras are another electronic safety device used for press brake safety. The camera can detect an intrusion between the upper and lower dies. [ 6 ] If an intrusion is detected, a signal will stop the downward movement of the ram. A camera safety system uses a linear scale to calculate the upper beam's position, velocity, and the stopping distance. A complete lack of machine guarding or improperly installed safety devices are the main causes of machining accidents. However, proper installation can greatly reduce this risk. Stop-time measurements can remove the guesswork from machine safety. The results of the test are applied to OSHA and American National Standards Institute (ANSI) formulas to ensure the proper installation distance of safety devices. [ 7 ] Proper installation is a must and can be ensured by following the manuals provided by the manufacturer. [ 8 ] Any machine safety device should be designed and built to the highest safety standards defined for machinery safety, EN 13849-1 Category 4, and meet the control reliability requirements of ANSI B11.19 [ 9 ] and OSHA 1910.217. Original equipment manufacturers (OEM's) often consider point-of-operation safety to be the user's responsibility. The best safety equipment can only go so far in protecting an operator from injury. 
Proper training is also imperative for keeping the press brake operator safe. Certification as a press brake operator is available. [ 10 ]
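The stop-time measurements mentioned above feed a simple safe-distance calculation. The sketch below follows the commonly cited OSHA/ANSI approach with the 63 in/s hand-speed constant; the function name, default values and the worked numbers are illustrative assumptions rather than a quotation of the standards themselves.

```python
# Minimal sketch of the safety-distance calculation that stop-time
# measurements feed into. The 63 in/s hand-speed constant and the form
# Ds = K * T_stop + Dpf follow the commonly cited OSHA/ANSI approach;
# exact terms and penetration factors should be taken from the current
# standards, not from this illustration.

HAND_SPEED_CONSTANT = 63.0  # inches per second, the usual K value

def minimum_safety_distance(stop_time_s, response_time_s=0.0,
                            penetration_factor_in=0.0):
    """Return the minimum mounting distance (inches) for a light curtain.

    stop_time_s           -- measured machine stopping time (seconds)
    response_time_s       -- response time of the safety device (seconds)
    penetration_factor_in -- depth-penetration allowance (inches)
    """
    total_time = stop_time_s + response_time_s
    return HAND_SPEED_CONSTANT * total_time + penetration_factor_in

# Example: a press with a measured 0.190 s stop time, a 0.025 s
# light-curtain response time, and a 1 in penetration factor.
print(round(minimum_safety_distance(0.190, 0.025, 1.0), 1), "inches")
```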
https://en.wikipedia.org/wiki/Presence_sensing_device
The term presentity is a combination of two words: "presence" and "entity". [ 1 ] It refers to an entity that has presence information associated with it, such as status, reachability, and willingness to communicate. [ citation needed ] The term presentity is often used to refer to users who post and update their presence information through presence applications on their devices. In this case, presence information describes the availability and willingness of the user to communicate via a set of communication services. For example, users of an instant messaging service (such as ICQ or MSN Messenger) are presentities, and their presence information is their user status (online, offline, away, etc.). Presentity can also refer to a resource or role such as a conference room or help desk. [ citation needed ] A presentity can also refer to a group of users, for example a collection of customer service agents in a call center. This presentity may be considered available if there is at least one agent ready to accept a call. [ citation needed ]
https://en.wikipedia.org/wiki/Presentity
Presepsin (soluble CD14 subtype, sCD14-ST) is a 13 kDa cleavage product of the CD14 receptor. [ 1 ] Presepsin is a soluble pattern recognition receptor (PRR). Presepsin in the circulation is an indicator of monocyte-macrophage activation in response to pathogens. [ 1 ] Several clinical studies have demonstrated that presepsin is a specific and sensitive marker for the diagnosis, severity assessment and outcome prediction of sepsis. [ 2 ] [ 3 ] [ 4 ] In addition, presepsin can be used for diagnosing infections in patients with a chronic inflammatory condition, such as liver cirrhosis. [ 5 ]
https://en.wikipedia.org/wiki/Presepsin
Preservation breeding is an attempt by many plant and animal breeders to preserve bloodlines of species, either of a rare breed, or of rare pedigrees within a breed. [ 1 ] [ 2 ] [ 3 ] Preservation breeding can have several purposes and can take several forms. The term preservation breeding was first used by American Kennel Club judges Douglas Johnson and Bill Shelton in seminars for dog breeders in the early 2000s. The preservation of dog breeds and the conservation of canine genetics started gaining more traction in the mid-2010s.
https://en.wikipedia.org/wiki/Preservation_breeding
A preservative is a substance or a chemical that is added to products such as food products, beverages, pharmaceutical drugs, paints, biological samples, cosmetics, wood, and many other products to prevent decomposition by microbial growth or by undesirable chemical changes. In general, preservation is implemented in two modes, chemical and physical. Chemical preservation entails adding chemical compounds to the product. Physical preservation entails processes such as refrigeration or drying. [ 1 ] Preservative food additives reduce the risk of foodborne infections, decrease microbial spoilage, and preserve fresh attributes and nutritional quality. Some physical techniques for food preservation include dehydration, UV-C radiation, freeze-drying, and refrigeration. Chemical preservation and physical preservation techniques are sometimes combined. Preservatives have been used since prehistoric times. Smoked meat, for example, contains phenols and other chemicals that delay spoilage. The preservation of foods has evolved greatly over the centuries and has been instrumental in increasing food security. The use of preservatives other than traditional oils, salts, paints, [ clarification needed ] etc. in food began in the late 19th century, but was not widespread until the 20th century. [ 2 ] The use of food preservatives varies greatly depending on the country. Many developing countries that do not have strong governments to regulate food additives face either harmful levels of preservatives in foods or a complete avoidance of foods that are considered unnatural or foreign. These countries have also proven useful in case studies surrounding chemical preservatives, as the additives have been only recently introduced there. [ 3 ] In urban slums of highly populated countries, knowledge about the contents of food tends to be extremely low, despite consumption of these imported foods. [ 4 ] Antimicrobial preservatives prevent degradation by bacteria. This is the most traditional and ancient type of preservation; ancient methods such as pickling and adding honey prevent microorganism growth by modifying the pH level. The most commonly used antimicrobial preservative is lactic acid. Common antimicrobial preservatives are presented in the table. [ 5 ] [ 6 ] [ 7 ] Nitrates and nitrites are also antimicrobial. [ 8 ] The detailed mechanisms of these chemical compounds range from inhibiting the growth of bacteria to inhibiting specific enzymes. The oxidation process spoils most food, especially foods with a high fat content. Fats quickly turn rancid when exposed to oxygen. Antioxidants prevent or inhibit the oxidation process. The most common antioxidant additives are ascorbic acid ( vitamin C ) and ascorbates. [ 11 ] Thus, antioxidants are commonly added to oils, cheese, and chips. [ 5 ] Other antioxidants include the phenol derivatives BHA, BHT, TBHQ and propyl gallate. These agents suppress the formation of hydroperoxides. [ 6 ] A variety of agents are added to sequester (deactivate) metal ions that otherwise catalyze the oxidation of fats. Common sequestering agents are disodium EDTA, citric acid (and citrates), tartaric acid, and lecithin. [ 1 ] Citric and ascorbic acids target enzymes that degrade fruits and vegetables, e.g., mono/polyphenol oxidase, which turns surfaces of cut apples and potatoes brown. Ascorbic acid and tocopherol, which are vitamins, are common preservatives. Smoking entails exposing food to a variety of phenols, which are antioxidants.
Natural preservatives include rosemary and oregano extract, [ 12 ] hops, salt, sugar, vinegar, alcohol, diatomaceous earth and castor oil. Traditional preservatives, such as sodium benzoate, have raised health concerns in the past. Benzoate was shown in a study to cause hypersensitivity in some asthma sufferers. This has caused reexamination of natural preservatives, which occur in vegetables. [ 13 ] Public awareness of food preservatives is uneven. [ 14 ] Americans have a perception that food-borne illnesses happen more often in other countries. This may be true, but the occurrence of illnesses, hospitalizations, and deaths is still high. It is estimated by the Centers for Disease Control (CDC) that each year there are 76 million illnesses, 325,000 hospitalizations, and 5,000 deaths linked to food-borne illness. [ 15 ] Food suppliers are facing difficulties with regard to the safety and quality of their products as a result of the rising demand for ready-to-eat fresh food products. Artificial preservatives meet some of these challenges by preserving freshness for longer periods of time, but these preservatives can cause negative side-effects as well. Water-based home and personal care products use broad-spectrum preservatives, such as isothiazolinones and formaldehyde releasers, which may cause sensitization, leading to allergic skin reactions. [ 19 ]
https://en.wikipedia.org/wiki/Preservative
The Presidential Young Investigator Award (PYI) was awarded by the National Science Foundation of the United States Federal Government . The program operated from 1984 to 1991, and was replaced by the NSF Young Investigator (NYI) Awards and Presidential Faculty Fellows (PFF) program. [ 1 ] In 1995, the NSF Young Investigator program was subsumed into the NSF CAREER Awards program, and in 1996, the Presidential Faculty Fellows program was replaced by the PECASE program. [ 2 ] Applicants could not directly apply for the award, but were nominated by others including their own institutions based on their previous record of scientific achievement. The award, a certificate from the White House signed by the President of the United States, included a minimum grant of $25,000 a year for five years from NSF to be used for any scientific research project the awardee wished to pursue, with the possibility of additional funding up to $100,000 annually if the PYI obtained matching funds from industry. Considered to be one of the highest honors granted by the National Science Foundation, the award program was criticized in 1990 as not being the best use of NSF funds in an era of tight budgets. [ 3 ] [ 4 ] At least one awardee has also won a Nobel Prize . For example, Frances Arnold , winner of this award in 1989, won the Nobel Prize in Chemistry in 2018. [ 5 ] PYI award recipients include: In 1991, the NSF renamed the Presidential Young Investigator Program as the NSF Young Investigator Program, to reflect more accurately the level of prestige of the award—the term "Presidential" should be reserved for awards more prestigious. [ 39 ] The NSF Presidential Faculty Fellowship (PFF) program was launched by President George H.W. Bush to honor 30 young engineering and science professors. The awards were up to $100,000 per year for 5 years. [ 39 ] Here are some recipients of the Presidential Faculty Fellowship. [ 39 ]
https://en.wikipedia.org/wiki/Presidential_Young_Investigator_Award
Presidents of the American Chemical Society : [ 1 ]
https://en.wikipedia.org/wiki/Presidents_of_the_American_Chemical_Society
Oshibana ( 押し花 ) is the art of using pressed flowers and other botanical materials to create an entire picture from these natural elements. [ 1 ] Such pressed flower art consists of drying flower petals and leaves in a flower press to flatten them, exclude light and press out moisture. These elements are then used to "paint" an artistic composition. The origin of this art form has been traced to 16th century Japan, but it is now practiced worldwide. The resulting artwork is referred to as an oshibana. [ 2 ] As early as the 16th century, samurai were said to have created oshibana as one of their disciplines to promote patience, harmony with nature and powers of concentration. [ citation needed ] Similarly, as botanists in Europe began systematic collection and preservation of specimens, art forms using the pressed plant materials developed, particularly during the Victorian era. [ citation needed ] The art form became popular in the Holy Land in the late 1890s and into the 20th century, when elaborate souvenir books were produced combining photographs of the holy sites with pressed flowers gathered at these sites. These photographs and pressed, dried flowers were artistically formatted and bound between olive wood covers to be sold to visitors. [ 3 ] American actress Grace Kelly, during her years as Princess Grace of Monaco, practiced oshibana and helped promote the art of pressed flowers worldwide, employing pressed botanical materials sent to her from abroad. My Book of Flowers, published in 1980, includes chapters on her art. [ 4 ] [ 5 ] Outside of Asia, the art gained popularity in Britain during the Victorian era and experienced a revival from the 1970s to the early 2000s. Some artists outside of Asia have continued to use it. [ 6 ] Pressing flowers makes them appear flat, and there is often a change in color, ranging from faded colors to a greater intensity of vibrant colors. The pressed flowers and leaves can be used in a variety of craft projects. They are often mounted on special paper, such as handmade paper, Ingres paper, Japanese paper, or paper decorated by marbling. Each leaf and flower is glued onto a precise location. With a creative approach to the use of materials, a leaf becomes a tree and petals form mountains. [ citation needed ] Washes of watercolor painting are sometimes applied to the backing paper before the pressed material is attached. Pressed material may also be mounted on fabrics, such as velvet, silk, linen or cotton. [ 7 ] Petals and leaves can be applied to wood furnishings using the technique of decoupage. [ 8 ] Oshibana artists are employing various new technologies in pressing methods, framing techniques and color enhancement to help the pressed materials keep their beauty through the years. Nobuo Sugino, a pioneering figure in contemporary oshibana, and his father used desiccant papers to press flowers, helping hold color. [ 9 ] A method of vacuum-sealing frames to lock in color, texture and clarity of the petals and leaves and help prevent moisture and fungi intrusion was also developed in Japan and is now practiced by many oshibana artists worldwide. [ 10 ] The IPFAS is an international pressed flower organization that promotes pressed flower art, offers education and holds competitions. It has members from over 20 nations (as of 2010) including Japan, the United Kingdom, the United States, France, Germany, Mexico, and Australia.
It was founded in 1999 by Nobuo Sugino, a Japanese pressed flower artist and president of the Japan Wonderful Oshibana Club. [ 6 ] In the UK, the Pressed Flower Craft Guild was established in 1983 by Joyce Fenton (a pressed flower artist) and Bill Edwardes (who devised the method of framing pressed flower pictures adopted by the guild). It claims to have an international membership. [ 11 ] The WWPFG was established in July 2001. In November 2008, the guild was incorporated in North Carolina as a public educational non-profit organisation. [ 12 ] In recent decades, the emergence of several international art associations and schools has helped popularize and increase the recognition of oshibana as a unique art form through classes, conferences, international exhibitions and competitions.
https://en.wikipedia.org/wiki/Pressed_flower_craft
Pressure (symbol: p or P) is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. [ 1 ] : 445 Gauge pressure (also spelled gage pressure) [ a ] is the pressure relative to the ambient pressure. Various units are used to express pressure. Some of these derive from a unit of force divided by a unit of area; the SI unit of pressure, the pascal (Pa), for example, is one newton per square metre (N/m²); similarly, the pound-force per square inch ( psi, symbol lbf/in²) is the traditional unit of pressure in the imperial and US customary systems. Pressure may also be expressed in terms of standard atmospheric pressure; the unit atmosphere (atm) is equal to this pressure, and the torr is defined as 1/760 of this. Manometric units such as the centimetre of water, millimetre of mercury, and inch of mercury are used to express pressures in terms of the height of a column of a particular fluid in a manometer. Pressure is the amount of force applied perpendicular to the surface of an object per unit area. The symbol for it is "p" or P. [ 2 ] The IUPAC recommendation for pressure is a lower-case p. [ 3 ] However, upper-case P is widely used. The usage of P vs p depends upon the field in which one is working, on the nearby presence of other symbols for quantities such as power and momentum, and on writing style. Mathematically: [ 4 ] p = F/A, where F is the magnitude of the normal force and A is the area of the surface in contact. Pressure is a scalar quantity. It relates the vector area element (a vector normal to the surface) with the normal force acting on it. The pressure is the scalar proportionality constant that relates these two normal vectors: dF_n = −p dA = −p n dA, where dF_n is the force on a surface element, dA = n dA is the vector area element, and n is the outward unit normal. The minus sign comes from the convention that the force is considered towards the surface element, while the normal vector points outward. The equation has meaning in that, for any surface S in contact with the fluid, the total force exerted by the fluid on that surface is the surface integral over S of the right-hand side of the above equation. It is incorrect (although rather usual) to say "the pressure is directed in such or such direction". The pressure, as a scalar, has no direction. The force given by the previous relationship to the quantity has a direction, but the pressure does not. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. [ citation needed ] Pressure is distributed to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. [ 5 ] It is defined as a derivative of the internal energy of the system with respect to volume at constant entropy: [ 6 ] p = −(∂U/∂V)_S, where U is the internal energy, V the volume, and S the entropy. The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N/m², or kg·m⁻¹·s⁻²). This name for the unit was added in 1971; [ 7 ] before that, pressure in SI was expressed in newtons per square metre. Other units of pressure, such as pounds per square inch (lbf/in²) and bar, are also in common use. The CGS unit of pressure is the barye (Ba), equal to 1 dyn·cm⁻², or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre ("g/cm²" or "kg/cm²") and the like without properly identifying the force units.
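The fixed conversion factors among these units, and the gauge/absolute distinction defined above, are easy to verify in a few lines. The following is a minimal sketch; the factors used for the pascal, bar, standard atmosphere, torr, psi and millimetre of mercury are the standard values, and the helper names are illustrative.

```python
# Conversion factors to pascals for units discussed in this article.
# 1 atm = 101325 Pa exactly; 1 torr = 101325/760 Pa; 1 psi = 6894.757 Pa;
# 1 mmHg = 133.322 Pa (conventional value); 1 bar = 100000 Pa.
TO_PASCAL = {
    "Pa":   1.0,
    "bar":  1.0e5,
    "atm":  101325.0,
    "torr": 101325.0 / 760.0,
    "psi":  6894.757,
    "mmHg": 133.322,
}

def convert_pressure(value, from_unit, to_unit):
    """Convert a pressure reading between any two of the units above."""
    return value * TO_PASCAL[from_unit] / TO_PASCAL[to_unit]

def absolute_from_gauge(gauge, ambient):
    """Gauge pressure is measured relative to the ambient pressure."""
    return gauge + ambient

# A standard atmosphere expressed in the other units:
for unit in ("bar", "torr", "psi", "mmHg"):
    print(f"1 atm = {convert_pressure(1.0, 'atm', unit):.3f} {unit}")

# A 32 psi gauge reading at sea level, as an absolute pressure in kPa:
tire_abs_pa = absolute_from_gauge(convert_pressure(32.0, "psi", "Pa"), 101325.0)
print(f"{tire_abs_pa / 1000:.0f} kPa absolute")
```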
Using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as units of force is, however, deprecated in SI. The technical atmosphere (symbol: at) is 1 kgf/cm² (98.0665 kPa, or 14.223 psi). Pressure is related to energy density and may be expressed in units such as joules per cubic metre (J/m³, which is equal to Pa). Mathematically: p = (F · distance)/(A · distance) = work/volume = energy/volume. Some meteorologists prefer the hectopascal (hPa) for atmospheric air pressure, which is equivalent to the older unit millibar (mbar). Similar pressures are given in kilopascals (kPa) in most other fields, except aviation, where the hecto- prefix is commonly used. The inch of mercury is still used in the United States. Oceanographers usually measure underwater pressure in decibars (dbar) because pressure in the ocean increases by approximately one decibar per metre depth. The standard atmosphere (atm) is an established constant. It is approximately equal to typical air pressure at Earth mean sea level and is defined as 101 325 Pa (for standard pressure, IUPAC recommends the value 100 000 Pa, but prior to 1982 the value 101 325 Pa (= 1 atm) was usually used). [ 8 ] Because pressure is commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., centimetres of water, millimetres of mercury or inches of mercury). The most common choices are mercury (Hg) and water; water is nontoxic and readily available, while mercury's high density allows a shorter column (and so a smaller manometer) to be used to measure a given pressure. The pressure exerted by a column of liquid of height h and density ρ is given by the hydrostatic pressure equation p = ρgh, where g is the gravitational acceleration. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. When millimetres of mercury (or inches of mercury) are quoted today, these units are not based on a physical column of mercury; rather, they have been given precise definitions that can be expressed in terms of SI units. [ 9 ] One millimetre of mercury is approximately equal to one torr. The water-based units still depend on the density of water, a measured, rather than defined, quantity. These manometric units are still encountered in many fields. Blood pressure is measured in millimetres (or centimetres) of mercury in most of the world, and lung pressures in centimetres of water are still common. [ citation needed ] Underwater divers use the metre sea water (msw or MSW) and foot sea water (fsw or FSW) units of pressure, and these are the units for pressure gauges used to measure pressure exposure in diving chambers and personal decompression computers. A msw is defined as 0.1 bar (= 10,000 Pa) and is not the same as a linear metre of depth. 33.066 fsw = 1 atm, [ citation needed ] so 1 fsw = 101,325 Pa / 33.066 ≈ 3,064.3 Pa. The pressure conversion from msw to fsw is different from the length conversion: 10 msw = 32.6336 fsw, while 10 m = 32.8083 ft. [ citation needed ] Gauge pressure is often given in units with "g" appended, e.g.
"kPag", "barg" or "psig", and units for measurements of absolute pressure are sometimes given a suffix of "a", to avoid confusion, for example "kPaa", "psia". However, the US National Institute of Standards and Technology recommends that, to avoid confusion, any modifiers be instead applied to the quantity being measured rather than the unit of measure. [ 10 ] For example, " p g = 100 psi" rather than " p = 100 psig" . Differential pressure is expressed in units with "d" appended; this type of measurement is useful when considering sealing performance or whether a valve will open or close. Presently or formerly popular pressure units include the following: As an example of varying pressures, a finger can be pressed against a wall without making any lasting impression; however, the same finger pushing a thumbtack can easily damage the wall. Although the force applied to the surface is the same, the thumbtack applies more pressure because the point concentrates that force into a smaller area. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. Unlike stress , pressure is defined as a scalar quantity . The negative gradient of pressure is called the force density . [ 11 ] Another example is a knife. If the flat edge is used, force is distributed over a larger surface area resulting in less pressure, and it will not cut. Whereas using the sharp edge, which has less surface area, results in greater pressure, and so the knife cuts smoothly. This is one example of a practical application of pressure. [ 12 ] For gases, pressure is sometimes measured not as an absolute pressure , but relative to atmospheric pressure ; such measurements are called gauge pressure . An example of this is the air pressure in an automobile tire , which might be said to be "220 kPa (32 psi)", but is actually 220 kPa (32 psi) above atmospheric pressure. Since atmospheric pressure at sea level is about 100 kPa (14.7 psi), the absolute pressure in the tire is therefore about 320 kPa (46 psi). In technical work, this is written "a gauge pressure of 220 kPa (32 psi)". Where space is limited, such as on pressure gauges , name plates , graph labels, and table headings, the use of a modifier in parentheses, such as "kPa (gauge)" or "kPa (absolute)", is permitted. [ 13 ] In non- SI technical work, a gauge pressure of 32 psi (220 kPa) is sometimes written as "32 psig", and an absolute pressure as "32 psia", though the other methods explained above that avoid attaching characters to the unit of pressure are preferred. [ 10 ] Gauge pressure is the relevant measure of pressure wherever one is interested in the stress on storage vessels and the plumbing components of fluidics systems. However, whenever equation-of-state properties, such as densities or changes in densities, must be calculated, pressures must be expressed in terms of their absolute values. For instance, if the atmospheric pressure is 100 kPa (15 psi), a gas (such as helium) at 200 kPa (29 psi) (gauge) (300 kPa or 44 psi [absolute]) is 50% denser than the same gas at 100 kPa (15 psi) (gauge) (200 kPa or 29 psi [absolute]). Focusing on gauge values, one might erroneously conclude the first sample had twice the density of the second one. [ citation needed ] In a static gas , the gas as a whole does not appear to move. The individual molecules of the gas, however, are in constant random motion . 
There are an extremely large number of molecules in a gas, and because their motion is random in every direction, no net motion is detected. When the gas is at least partially confined (that is, not free to expand rapidly), the gas will exhibit a hydrostatic pressure. This confinement can be achieved with either a physical container, or in the gravitational well of a large mass, such as a planet, otherwise known as atmospheric pressure. In the case of planetary atmospheres, the pressure-gradient force of the gas pushing outwards from higher pressure, lower altitudes to lower pressure, higher altitudes is balanced by the gravitational force, preventing the gas from diffusing into outer space and maintaining hydrostatic equilibrium. In a physical container, the pressure of the gas originates from the molecules colliding with the walls of the container. The walls of the container can be anywhere inside the gas, and the force per unit area (the pressure) is the same. If the "container" is shrunk down to a very small point (becoming less true as the atomic scale is approached), the pressure will still have a single value at that point. Therefore, pressure is a scalar quantity, not a vector quantity. It has magnitude but no direction sense associated with it. Pressure force acts in all directions at a point inside a gas. At the surface of a gas, the pressure force acts perpendicular (at right angles) to the surface. [ 14 ] A closely related quantity is the stress tensor σ, which relates the vector force F to the vector area A via the linear relation F = σA. This tensor may be expressed as the sum of the viscous stress tensor minus the hydrostatic pressure. The negative of the stress tensor is sometimes called the pressure tensor, but in the following, the term "pressure" will refer only to the scalar pressure. [ 15 ] According to the theory of general relativity, pressure increases the strength of a gravitational field (see stress–energy tensor) and so adds to the mass-energy cause of gravity. This effect is unnoticeable at everyday pressures but is significant in neutron stars, although it has not been experimentally tested. [ 16 ] Fluid pressure is most often the compressive stress at some point within a fluid. (The term fluid refers to both liquids and gases; for more information specifically about liquid pressure, see the section below.) Fluid pressure occurs in one of two situations: open conditions or closed conditions. Pressure in open conditions usually can be approximated as the pressure in "static" or non-moving conditions (even in the ocean where there are waves and currents), because the motions create only negligible changes in the pressure. Such conditions conform with principles of fluid statics. The pressure at any given point of a non-moving (static) fluid is called the hydrostatic pressure. Closed bodies of fluid are either "static", when the fluid is not moving, or "dynamic", when the fluid can move as in either a pipe or by compressing an air gap in a closed container. The pressure in closed conditions conforms with the principles of fluid dynamics. The concepts of fluid pressure are predominantly attributed to the discoveries of Blaise Pascal and Daniel Bernoulli. Bernoulli's equation can be used in almost any situation to determine the pressure at any point in a fluid.
The equation makes some assumptions about the fluid, such as the fluid being ideal [ 17 ] and incompressible. [ 17 ] An ideal fluid is a fluid in which there is no friction; it is inviscid [ 17 ] (has zero viscosity). [ 17 ] The equation for all points of a system filled with a constant-density fluid is [ 18 ] p/γ + v²/(2g) + z = const, where p is the pressure, γ is the specific weight of the fluid, v is the flow speed, g is the gravitational acceleration, and z is the elevation. Explosion or deflagration pressures are the result of the ignition of explosive gases, mists, or dust/air suspensions in unconfined and confined spaces. While pressures are, in general, positive, there are several situations in which negative pressures may be encountered. Stagnation pressure is the pressure a fluid exerts when it is forced to stop moving. Consequently, although a fluid moving at higher speed will have a lower static pressure, it may have a higher stagnation pressure when forced to a standstill. Static pressure and stagnation pressure are related by p₀ = ½ρv² + p, where p₀ is the stagnation pressure, ρ is the density, v is the flow speed, and p is the static pressure. The pressure of a moving fluid can be measured using a Pitot tube, or one of its variations such as a Kiel probe or Cobra probe, connected to a manometer. Depending on where the inlet holes are located on the probe, it can measure static pressures or stagnation pressures. There is a two-dimensional analog of pressure: the lateral force per unit length applied on a line perpendicular to the force. Surface pressure is denoted by π: π = F/l, and it shares many similar properties with three-dimensional pressure. Properties of surface chemicals can be investigated by measuring pressure/area isotherms, as the two-dimensional analog of Boyle's law, πA = k, at constant temperature. Surface tension is another example of surface pressure, but with a reversed sign, because "tension" is the opposite of "pressure". In an ideal gas, molecules have no volume and do not interact. According to the ideal gas law, pressure varies linearly with temperature and quantity, and inversely with volume: p = nRT/V, where n is the amount of substance (in moles), R is the gas constant, T is the absolute temperature, and V is the volume. Real gases exhibit a more complex dependence on the variables of state. [ 23 ] Vapour pressure is the pressure of a vapour in thermodynamic equilibrium with its condensed phases in a closed system. All liquids and solids have a tendency to evaporate into a gaseous form, and all gases have a tendency to condense back to their liquid or solid form. The atmospheric pressure boiling point of a liquid (also known as the normal boiling point) is the temperature at which the vapor pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and lift the liquid to form vapour bubbles inside the bulk of the substance. Bubble formation deeper in the liquid requires a higher pressure, and therefore higher temperature, because the fluid pressure increases above the atmospheric pressure as the depth increases. The vapor pressure that a single component in a mixture contributes to the total pressure in the system is called partial vapor pressure. When a person swims under the water, water pressure is felt acting on the person's eardrums. The deeper that person swims, the greater the pressure. The pressure felt is due to the weight of the water above the person. As someone swims deeper, there is more water above the person and therefore greater pressure.
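The stagnation-pressure and ideal-gas relations above lend themselves to a quick numerical check. The sketch below is illustrative only and works in SI units; the chosen temperature, volume and airspeed are arbitrary example values.

```python
# Quick numerical checks of two relations quoted above, in SI units.
R = 8.314  # J/(mol*K), molar gas constant

def ideal_gas_pressure(n_mol, T_kelvin, V_m3):
    """p = nRT/V for an ideal gas."""
    return n_mol * R * T_kelvin / V_m3

def stagnation_pressure(static_p, density, speed):
    """p0 = p + (1/2) * rho * v**2 for incompressible flow."""
    return static_p + 0.5 * density * speed**2

# One mole of an ideal gas at 273.15 K in 22.4 litres is close to 1 atm:
print(ideal_gas_pressure(1.0, 273.15, 0.0224))                 # ~101,400 Pa

# Air (rho ~ 1.225 kg/m^3) at 50 m/s brought to rest adds ~1.5 kPa:
print(stagnation_pressure(101325.0, 1.225, 50.0) - 101325.0)   # ~1531 Pa
```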
The pressure a liquid exerts depends on its depth. Liquid pressure also depends on the density of the liquid. If someone were submerged in a liquid more dense than water, the pressure would be correspondingly greater. Thus, liquid pressure is directly proportional to both the depth and the density of the liquid. The pressure due to a liquid in liquid columns of constant density, under constant gravity, at a depth within a substance is represented by the following formula: p = ρgh, where ρ is the density of the liquid, g is the gravitational acceleration, and h is the depth. Another way of stating the same formula is the following: p = weight density × depth. Since weight density = weight/volume, the weight of the liquid column can be expressed as weight = weight density × volume, where the volume of the column is simply the area multiplied by the depth. Then we have pressure = force/area = weight/area = (weight density × volume)/area = (weight density × (area × depth))/area. With the "area" in the numerator and the "area" in the denominator canceling each other out, we are left with pressure = weight density × depth. Written with symbols, this is our original equation: p = ρgh. The pressure a liquid exerts against the sides and bottom of a container depends on the density and the depth of the liquid. If atmospheric pressure is neglected, liquid pressure against the bottom is twice as great at twice the depth; at three times the depth, the liquid pressure is threefold; etc. Or, if the liquid is two or three times as dense, the liquid pressure is correspondingly two or three times as great for any given depth. Liquids are practically incompressible – that is, their volume can hardly be changed by pressure (water volume decreases by only 50 millionths of its original volume for each atmospheric increase in pressure). Thus, except for small changes produced by temperature, the density of a particular liquid is practically the same at all depths. Atmospheric pressure pressing on the surface of a liquid must be taken into account when trying to discover the total pressure acting on a liquid. The total pressure of a liquid, then, is ρgh plus the pressure of the atmosphere. When this distinction is important, the term total pressure is used. Otherwise, discussions of liquid pressure refer to pressure without regard to the normally ever-present atmospheric pressure. The pressure does not depend on the amount of liquid present. Volume is not the important factor – depth is. The average water pressure acting against a dam depends on the average depth of the water and not on the volume of water held back. For example, a wide but shallow lake with a depth of 3 m (10 ft) exerts only half the average pressure that a small 6 m (20 ft) deep pond does. (The total force applied to the longer dam will be greater, due to the greater total surface area for the pressure to act upon.
But for a given 5-foot (1.5 m)-wide section of each dam, the 10 ft (3.0 m) deep water will apply one quarter the force of 20 ft (6.1 m) deep water). A person will feel the same pressure whether their head is dunked a metre beneath the surface of the water in a small pool or to the same depth in the middle of a large lake. If four interconnected vases contain different amounts of water but are all filled to equal depths, then a fish with its head dunked a few centimetres under the surface will be acted on by water pressure that is the same in any of the vases. If the fish swims a few centimetres deeper, the pressure on the fish will increase with depth and be the same no matter which vase the fish is in. If the fish swims to the bottom, the pressure will be greater, but it makes no difference which vase it is in. All vases are filled to equal depths, so the water pressure is the same at the bottom of each vase, regardless of its shape or volume. If water pressure at the bottom of a vase were greater than water pressure at the bottom of a neighboring vase, the greater pressure would force water sideways and then up the neighboring vase to a higher level until the pressures at the bottom were equalized. Pressure is depth dependent, not volume dependent, so there is a reason that water seeks its own level. Restating this as an energy equation, the energy per unit volume in an ideal, incompressible liquid is constant throughout its vessel. At the surface, gravitational potential energy is large but liquid pressure energy is low. At the bottom of the vessel, all the gravitational potential energy is converted to pressure energy. The sum of pressure energy and gravitational potential energy per unit volume is constant throughout the volume of the fluid and the two energy components change linearly with the depth. [ 24 ] Mathematically, it is described by Bernoulli's equation , where velocity head is zero and comparisons per unit volume in the vessel are p γ + z = c o n s t . {\displaystyle {\frac {p}{\gamma }}+z=\mathrm {const} .} Terms have the same meaning as in section Fluid pressure . An experimentally determined fact about liquid pressure is that it is exerted equally in all directions. [ 25 ] If someone is submerged in water, no matter which way that person tilts their head, the person will feel the same amount of water pressure on their ears. Because a liquid can flow, this pressure is not only downward. Pressure is seen acting sideways when water spurts sideways from a leak in the side of an upright can. Pressure also acts upward, as demonstrated when someone tries to push a beach ball beneath the surface of the water. The bottom of a ball is pushed upward by water pressure ( buoyancy ). When a liquid presses against a surface, there is a net force that is perpendicular to the surface. Although pressure does not have a specific direction, force does. A submerged triangular block has water forced against each point from many directions, but components of the force that are not perpendicular to the surface cancel each other out, leaving only a net perpendicular point. [ 25 ] This is why liquid particles' velocity only alters in a normal component after they are collided to the container's wall. Likewise, if the collision site is a hole, water spurting from the hole in a bucket initially exits the bucket in a direction at right angles to the surface of the bucket in which the hole is located. Then it curves downward due to gravity. 
If there are three holes in a bucket (top, bottom, and middle), then the force vectors perpendicular to the inner container surface will increase with increasing depth – that is, a greater pressure at the bottom makes it so that the bottom hole will shoot water out the farthest. The force exerted by a fluid on a smooth surface is always at right angles to the surface. The speed of liquid out of the hole is √(2gh), where h is the depth below the free surface. [ 25 ] As predicted by Torricelli's law, this is the same speed the water (or anything else) would have if freely falling the same vertical distance h. P = p/ρ₀ is the kinematic pressure, where p is the pressure and ρ₀ is the constant mass density. The SI unit of P is m²/s². Kinematic pressure is used in the same manner as kinematic viscosity ν in order to write the Navier–Stokes equation without explicitly showing the density ρ₀: ∂u/∂t + (u·∇)u = −∇P + ν∇²u.
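A short numerical sketch ties together the hydrostatic formula p = ρgh, the dam comparison and Torricelli's efflux speed discussed above. The rounded water density and g values, and the example depths, are illustrative.

```python
import math

RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def hydrostatic_pressure(depth_m, rho=RHO_WATER):
    """Gauge pressure p = rho * g * h at a given depth."""
    return rho * G * depth_m

def efflux_speed(depth_m):
    """Torricelli's law: speed of liquid leaving a hole at depth h."""
    return math.sqrt(2 * G * depth_m)

# Average pressure on a dam depends on average depth, not volume:
shallow_lake = hydrostatic_pressure(3.0 / 2)   # average depth of a 3 m lake
deep_pond = hydrostatic_pressure(6.0 / 2)      # average depth of a 6 m pond
print(shallow_lake / deep_pond)                # 0.5 -> half the average pressure

print(efflux_speed(6.0))                       # ~10.8 m/s from a hole 6 m deep
```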
https://en.wikipedia.org/wiki/Pressure
Pressure-correction method is a class of methods used in computational fluid dynamics for numerically solving the Navier–Stokes equations, normally for incompressible flows. The equations solved in this approach arise from the implicit time integration of the incompressible Navier–Stokes equations. Due to the non-linearity of the convective term in the momentum equation, this problem is solved with a nested-loop approach. So-called global or inner iterations represent the real time-steps and are used to update the variables v and p based on a linearized system and the boundary conditions, while an outer loop updates the coefficients of the linearized system. The outer iterations comprise two steps: a momentum (predictor) step, which yields a preliminary velocity field, and a pressure-correction step. The correction for the velocity, which is obtained from the second step together with the incompressibility condition (the non-divergence criterion, or continuity equation), is computed by first calculating a residual value ṁ resulting from the spurious mass flux, and then using this mass imbalance to get a new pressure value. The pressure value that one attempts to compute is such that, when plugged into the momentum equations, a divergence-free velocity field results. The mass imbalance is often also used for control of the outer loop. The name of this class of methods stems from the fact that the correction of the velocity field is computed through the pressure field. The discretization is typically done with either the finite element method or the finite volume method. With the latter, one might also encounter the dual mesh, i.e. the computation grid obtained from connecting the centers of the cells that the initial subdivision of the computation domain into finite elements yielded. Another approach, which is typically used in FEM, is the following. The aim of the correction step is to ensure conservation of mass. In continuous form, for compressible substances, conservation of mass is expressed by ∂ρ/∂t + ∇·(ρv) = 0, which, using ∂ρ/∂t = (1/c²) ∂p/∂t, can be written as (1/c²) ∂p/∂t + ∇·(ρv) = 0, where c² is the square of the "speed of sound". For low Mach numbers and incompressible media, c is assumed to be infinite, which is the reason the above continuity equation reduces to ∇·v = 0. The way to obtain a velocity field satisfying this condition is to compute a pressure which, when substituted into the momentum equation, leads to the desired correction of a preliminarily computed intermediate velocity. Applying the divergence operator to the momentum equation then provides the governing (Poisson-type) equation for the pressure computation. The idea of pressure-correction also exists in the case of variable density and high Mach numbers, although in this case there is a real physical meaning behind the coupling of dynamic pressure and velocity, as arising from the continuity equation. With compressibility, p is still an additional variable that can be eliminated with algebraic operations, but its variability is not a pure artifice as in the incompressible case, and the methods for its computation differ significantly from those with ρ = constant.
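A minimal sketch of a single pressure-correction (projection) step is given below. It is not the SIMPLE algorithm used in production finite-volume codes; it only illustrates the core idea that the pressure is computed so that the corrected velocity field is divergence-free. The periodic spectral grid, time step, viscosity and initial field are illustrative choices.

```python
import numpy as np

N = 64                              # grid points per direction, periodic box of length 2*pi
dx = 2 * np.pi / N
dt, nu, rho = 0.01, 0.01, 1.0       # time step, kinematic viscosity, density

x = np.arange(N) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y) + 0.1 * np.cos(3 * X)   # deliberately not divergence-free
v = -np.cos(X) * np.sin(Y)

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)           # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2_inv = np.where(K2 == 0, 0.0, 1.0 / K2)         # drop the undetermined mean mode

def ddx(f): return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(f)))
def ddy(f): return np.real(np.fft.ifft2(1j * KY * np.fft.fft2(f)))
def lap(f): return np.real(np.fft.ifft2(-K2 * np.fft.fft2(f)))

# 1) Predictor (momentum step without the pressure gradient) -> intermediate velocity
u_star = u + dt * (-u * ddx(u) - v * ddy(u) + nu * lap(u))
v_star = v + dt * (-u * ddx(v) - v * ddy(v) + nu * lap(v))

# 2) Pressure correction: solve the Poisson equation lap(p) = (rho/dt) * div(u*)
div_star = ddx(u_star) + ddy(v_star)
p = np.real(np.fft.ifft2(-K2_inv * np.fft.fft2(rho / dt * div_star)))

# 3) Corrector: subtract the pressure gradient so the new field is divergence-free
u_new = u_star - dt / rho * ddx(p)
v_new = v_star - dt / rho * ddy(p)

print("max |div| before correction:", np.abs(div_star).max())
print("max |div| after  correction:", np.abs(ddx(u_new) + ddy(v_new)).max())
```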
https://en.wikipedia.org/wiki/Pressure-correction_method
Pressure-driven flow is a method to displace liquids in a capillary or microfluidic channel with pressure. The pressure is typically generated pneumatically by compressed air or other gases ( nitrogen, carbon dioxide, etc.), or by electrical and magnetic fields, or by gravitation. It is known from thermodynamics that conjugate quantities scale in a different manner. Two classes can be distinguished: intensive quantities, such as temperature T, pressure P and chemical potential μ, and extensive quantities, such as entropy S, volume V and amount of substance N. Extensive quantities scale with system size, whereas the intensive quantities do not. The quantity pressure, for example, is defined as the (differential) quotient of two extensive variables, p = −dE/dV (energy E and volume V), and is therefore scale-independent, as the same scaling factors appear in the numerator and the denominator and cancel. In microsystems the problem arises that the extremely small volumes are difficult to control. The reason is the predominance of surface effects such as surface charges, van der Waals forces and entropic effects (e.g. dewetting due to rough surfaces: the restriction in degrees of freedom of molecules penetrating such a surface is entropically more expensive than staying in the bulk). Furthermore, the microsystem has to be controlled from a macroscopic human scale.
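Although the article above does not give a flow-rate formula, the classic Hagen–Poiseuille relation is the usual first estimate for pressure-driven flow in a circular microchannel. The sketch below uses it with illustrative channel dimensions and the viscosity of water; it is an aid to intuition, not part of the article's text.

```python
import math

def poiseuille_flow_rate(delta_p, radius, length, viscosity):
    """Volumetric flow rate Q = pi * r^4 * dP / (8 * mu * L) for laminar,
    pressure-driven flow in a circular channel (Hagen-Poiseuille)."""
    return math.pi * radius**4 * delta_p / (8 * viscosity * length)

# Water (mu ~ 1e-3 Pa*s) driven through a 50 um radius, 10 mm long
# capillary by 10 kPa of applied gas pressure:
q = poiseuille_flow_rate(delta_p=10e3, radius=50e-6, length=10e-3,
                         viscosity=1.0e-3)
print(f"{q * 1e9 * 60:.1f} microlitres per minute")  # q in m^3/s -> uL/min
```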
https://en.wikipedia.org/wiki/Pressure-driven_flow
The pressure-fed engine is a class of rocket engine designs. A separate gas supply, usually helium, pressurizes the propellant tanks to force fuel and oxidizer to the combustion chamber. To maintain adequate flow, the tank pressures must exceed the combustion chamber pressure. Pressure-fed engines have simple plumbing and have no need for complex and occasionally unreliable turbopumps. A typical startup procedure begins with opening a valve, often a one-shot pyrotechnic device, to allow the pressurizing gas to flow through check valves into the propellant tanks. Then the propellant valves in the engine itself are opened. If the fuel and oxidizer are hypergolic, they burn on contact; non-hypergolic fuels require an igniter. Multiple burns can be conducted by merely opening and closing the propellant valves as needed. If the pressurization system also has activating valves, they can be operated electrically, or by gas pressure controlled by smaller electrically operated valves. Care must be taken, especially during long burns, to avoid excessive cooling of the pressurizing gas due to adiabatic expansion. Cold helium will not liquefy, but it could freeze a propellant, decrease tank pressures, or damage components not designed for low temperatures. The Apollo Lunar Module Descent Propulsion System was unusual in storing its helium in a supercritical but very cold state. It was warmed as it was withdrawn, through a heat exchanger, by the ambient-temperature fuel. [ 1 ] Spacecraft attitude control and orbital maneuvering thrusters are almost universally pressure-fed designs. [ 2 ] Examples include the Reaction Control (RCS) and the Orbital Maneuvering (OMS) engines of the Space Shuttle orbiter; the RCS and Service Propulsion System (SPS) engines on the Apollo Command/Service Module; the SuperDraco (in-flight abort) and Draco (RCS) engines on the SpaceX Dragon 2; and the RCS, ascent and descent engines on the Apollo Lunar Module. [ 1 ] Some launcher upper stages also use pressure-fed engines. These include the Aerojet AJ10 and TRW TR-201 used in the second stage of the Delta II launch vehicle, and the Kestrel engine of the Falcon 1 by SpaceX. [ 3 ] The 1960s Sea Dragon concept by Robert Truax for a big dumb booster would have used pressure-fed engines. Pressure-fed engines have practical limits on propellant pressure, which in turn limits combustion chamber pressure. High-pressure propellant tanks require thicker walls and stronger materials, which make the vehicle tanks heavier, thereby reducing performance and payload capacity. The lower stages of launch vehicles often use either solid fuel or pump-fed liquid fuel engines instead, where high pressure ratio nozzles are considered desirable. [ 2 ] Other vehicles and companies have also used pressure-fed engines.
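Because the tanks must stay above chamber pressure while the propellant is expelled, a rough sizing question is how much pressurant gas the system needs. The sketch below is a very crude ideal-gas estimate under the assumption that the helium ends up filling the emptied propellant volume at tank pressure and a chosen temperature; real systems must account for blowdown behaviour, heat transfer and regulator losses, and the example numbers are purely illustrative.

```python
R_UNIVERSAL = 8.314      # J/(mol*K)
M_HELIUM = 4.0026e-3     # kg/mol

def pressurant_mass_estimate(tank_pressure_pa, propellant_volume_m3,
                             final_temperature_k):
    """Rough ideal-gas estimate of the helium mass needed to fill the
    emptied propellant volume at tank pressure and a given temperature."""
    moles = tank_pressure_pa * propellant_volume_m3 / (R_UNIVERSAL * final_temperature_k)
    return moles * M_HELIUM

# Example: 1.5 m^3 of propellant expelled at a 2.0 MPa tank pressure,
# with the helium ending up at 250 K after expansion cooling.
print(f"{pressurant_mass_estimate(2.0e6, 1.5, 250.0):.1f} kg of helium")
```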
https://en.wikipedia.org/wiki/Pressure-fed_engine
In fluid mechanics, the pressure-gradient force is the force that results when there is a difference in pressure across a surface. In general, a pressure is a force per unit area across a surface. A difference in pressure across a surface then implies a difference in force, which can result in an acceleration according to Newton's second law of motion, if there is no additional force to balance it. The resulting force is always directed from the region of higher pressure to the region of lower pressure. When a fluid is in an equilibrium state (i.e. there are no net forces, and no acceleration), the system is referred to as being in hydrostatic equilibrium. In the case of atmospheres, the pressure-gradient force is balanced by the gravitational force, maintaining hydrostatic equilibrium. In Earth's atmosphere, for example, air pressure decreases with altitude above Earth's surface, thus providing a pressure-gradient force which counteracts the force of gravity on the atmosphere. The Magnus effect is an observable phenomenon that is commonly associated with a spinning object moving through a fluid. The path of the spinning object is deflected in a manner that is not present when the object is not spinning. The deflection can be explained by the difference in pressure of the fluid on opposite sides of the spinning object. The Magnus effect is dependent on the speed of rotation. Consider a cubic parcel of fluid with a density ρ, a height dz, and a surface area dA. The mass of the parcel can be expressed as m = ρ dA dz. Using Newton's second law, F = ma, we can then examine a pressure difference dP (assumed to be only in the z-direction) to find the resulting force, F = −dP dA = ρ a dA dz. The acceleration resulting from the pressure gradient is then a = −(1/ρ) dP/dz. The effects of the pressure gradient are usually expressed in this way, in terms of an acceleration, instead of in terms of a force. We can express the acceleration more precisely, for a general pressure P, as a = −(1/ρ) ∇P. The direction of the resulting force (acceleration) is thus in the opposite direction of the most rapid increase of pressure.
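To put numbers on the final expression, the sketch below evaluates a = −(1/ρ) · dP/dx for a horizontal gradient; the 1 hPa per 100 km figure and the air density are illustrative round values of the kind seen on synoptic weather maps.

```python
def pressure_gradient_acceleration(dp_pa, dx_m, density):
    """Acceleration a = -(1/rho) * dP/dx produced by a pressure gradient."""
    return -(1.0 / density) * (dp_pa / dx_m)

# A horizontal gradient of 1 hPa per 100 km in air of density 1.2 kg/m^3:
a = pressure_gradient_acceleration(dp_pa=100.0, dx_m=100e3, density=1.2)
print(f"{a:.2e} m/s^2")   # about -8.3e-04 m/s^2, directed toward low pressure
```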
https://en.wikipedia.org/wiki/Pressure-gradient_force
Pressure-induced hydration ( PIH ), also known as "super-hydration", is a special case of pressure-induced insertion whereby water molecules are injected into the pores of microporous materials. In PIH, a microporous material is placed under pressure in the presence of water in the pressure-transmitting fluid of a diamond anvil cell. [ 1 ] [ 2 ] Early physical characterization [ 3 ] and initial diffraction experiments [ 4 ] in zeolites were followed by the first unequivocal structural characterization of PIH in the small-pore zeolite natrolite (Na₁₆Al₁₆Si₂₄O₈₀·16H₂O), which in its fully super-hydrated form, Na₁₆Al₁₆Si₂₄O₈₀·32H₂O, doubles [ 5 ] the amount of water it contains in its pores. PIH has now been demonstrated in natrolites containing Li, K, Rb and Ag as monovalent cations [ 6 ] [ 7 ] as well as in large-pore zeolites, [ 8 ] pyrochlores, [ 9 ] clays [ 10 ] and graphite oxide. [ 11 ] Using the noble gases Ar, Kr, and Xe as well as CO₂ as pressure-transmitting fluids, researchers have prepared and structurally characterized the products of reversible, pressure-induced insertion of Ar, [ 12 ] Kr, [ 13 ] and CO₂ [ 14 ] as well as the irreversible insertion of Xe [ 13 ] and water. [ 15 ]
https://en.wikipedia.org/wiki/Pressure-induced_hydration
Pressure retarded osmosis ( PRO ) is a technique to separate a solvent (for example, fresh water) from a solution that is more concentrated (e.g. sea water) and also pressurized. A semipermeable membrane allows the solvent to pass to the concentrated solution side by osmosis. [ 1 ] The technique can be used to generate power from the salinity gradient energy resulting from the difference in the salt concentration between sea and river water. This method of generating power was invented by Prof. Sidney Loeb in 1973 at the Ben-Gurion University of the Negev, Beersheba, Israel. [ 2 ] [ 3 ] Richard Norman submitted a manuscript describing the concept to Science in May 1974. [ 4 ] In that manuscript, Norman clearly indicated that he was unaware of any prior work on the topic. Loeb submitted a comment on Norman's cost analysis to Science in January 1975. [ 5 ] In that publication, Loeb proposed the term "pressure retarded osmosis". He further wrote "To facilitate examination of the concept in some detail, the United States-Israel Binational Science Foundation awarded a grant (No. 337) to our Research Authority in May 1974." The ideal power production formula, which applies to an idealized situation, predicts that the optimal hydraulic pressure difference ΔP is one-half the osmotic pressure difference between the saline and pure water streams, ΔΠ/2. [ 4 ] [ 6 ] For a seawater to fresh water PRO system, the ideal case corresponds to an optimal power pressure of 26 bars. This pressure is equivalent to a column of water ( hydraulic head ) 270 meters high. [ 7 ] In a real-world system, both the hydraulic pressure and the osmotic pressure will vary through the PRO system as a result of friction, water removal, and salt build-up near the membranes. These factors reduce the achievable power below the ideal limit. The amount of membrane area that can be used is limited by cost and other practical considerations, and this factor limits achievable power production. [ 8 ] A significant portion of the electrical power generated by PRO must be used by the pumps that circulate water through the plant. [ 9 ] This power demand can be reduced with designs that use pressure exchangers. Appropriate membranes are also necessary. A main consideration governing the performance of PRO is the degree of concentration polarization within the membrane, which is characterized in PRO by the "structural parameter" S. [ 10 ] Lower values of S indicate less concentration polarization within the membrane, improving performance. The water and salt permeability of the membrane also influences its performance in PRO. [ 11 ] All these factors have limited the economic viability of PRO. [ 12 ] Although it can make seawater desalination modestly less energy intensive, PRO is economical only where the cost of electrical energy is quite high. [ 13 ] PRO may be more competitive in regions where electricity prices vary dramatically, where reverse osmosis systems could be operated in a PRO mode during price spikes. [ 14 ] PRO has the potential to extract osmotic power from waste streams, such as desalination plant brine discharge or treated wastewater effluent. [ 15 ] The potential power output is proportional to the salinity difference between the fresh and saline water streams. Desalination yields very salty brine, while treated municipal wastewater has relatively little salt. Combining those streams could produce energy to power both facilities.
However, powering an existing wastewater treatment plant by mixing treated wastewater with seawater in a mid-size city could require a membrane area of 2.5 million square meters. [ 16 ] PRO uses a water-permeable membrane with an osmotic pressure difference to drive water flux from a low-concentration "diluate" stream into a slightly pressurized, higher-concentration draw stream. An energy recovery device on this stream provides the energy output, which must exceed the pumping energy input for net power production. The world's first osmotic plant, with a capacity of 10 kW, was opened by Statkraft, a state-owned hydropower company, on 24 November 2009 in Tofte, Norway. [ 18 ] It had been estimated that PRO could generate 12 TWh annually in Norway, sufficient to meet 10% of Norway's total demand for electricity. [ 19 ] In January 2014, Statkraft terminated its pressure-retarded osmosis pilot project [ 20 ] due to economic feasibility concerns. Starting in 2021, SaltPower is building another commercial osmotic power plant in Denmark using very high salinity brine from a geothermal power plant. [ 21 ]
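The ideal-case rule quoted above (optimal ΔP = ΔΠ/2) is easy to evaluate numerically. The sketch below estimates the osmotic pressure of a salt solution with the van 't Hoff relation and the corresponding ideal power density; the membrane permeability value is illustrative, the van 't Hoff estimate is only an approximation for real seawater, and the result is not meant to reproduce the specific figures cited in the article.

```python
R = 8.314          # J/(mol*K)

def vant_hoff_osmotic_pressure(molar_conc_mol_m3, temp_k, ions_per_formula=2):
    """Osmotic pressure pi = i * c * R * T (a rough estimate for a salt solution)."""
    return ions_per_formula * molar_conc_mol_m3 * R * temp_k

def ideal_pro_power_density(delta_pi_pa, membrane_permeability_m_per_s_pa):
    """Ideal PRO power density W = A * (delta_pi - dP) * dP, maximised at
    dP = delta_pi / 2, giving W_max = A * delta_pi**2 / 4."""
    optimal_dp = delta_pi_pa / 2
    w_max = membrane_permeability_m_per_s_pa * delta_pi_pa**2 / 4
    return optimal_dp, w_max

# Seawater against fresh water, roughly 0.6 mol/L NaCl at 25 C:
delta_pi = vant_hoff_osmotic_pressure(600.0, 298.0)
# Membrane water permeability of 1 L/(m^2*h*bar), converted to SI (illustrative):
A = 1e-3 / (3600.0 * 1e5)
dp_opt, w_max = ideal_pro_power_density(delta_pi, A)
print(f"osmotic pressure ~ {delta_pi / 1e5:.0f} bar")
print(f"optimal hydraulic pressure ~ {dp_opt / 1e5:.0f} bar, "
      f"ideal power density ~ {w_max:.1f} W/m^2")
```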
https://en.wikipedia.org/wiki/Pressure-retarded_osmosis
There are several different methods to derive pressure from wind speed and vice versa in tropical cyclones. Both pieces of information, minimum pressure and wind speed, have their uses. Wind speed can describe the destructive potential of a tropical cyclone. [ 1 ] A tropical cyclone's maximum sustained wind and minimum central air pressure are interlinked and can be used to describe a tropical cyclone's intensity. [ 2 ] [ 3 ] While the maximum winds are more closely related to the destructive potential of a tropical cyclone, they are harder to measure reliably. [ 1 ] These winds can be estimated from both the radius of maximum winds and the pressure gradient, but this gradient is also difficult to measure. Over water, reconnaissance flights can sample a tropical cyclone's central pressure, [ 4 ] and reliable pressure observations over land from within the eye are more likely to be retrieved than wind observations from the eyewall. [ 5 ] According to Christopher Burt from Weather Underground, the most reliable method of estimating pressure from wind involves using the Dvorak technique with an image, which shows how cold cloud tops are. [ 6 ] Joe Courtney and John Knaff noted that, as several models are based on Atlantic data, this can lead to biases in other parts of the world. [ 7 ] Most pressure-wind models are of the form v_m = a (Δp)^x, [ 8 ] where v_m is the maximum wind speed, Δp is the change in pressure from an external point to the center, and a and x are constants. [ 8 ] Ted Fujita was the first to modify the exponent; before then, it mostly stood at 0.5. [ 8 ] The efficacy of wind-pressure relationships is affected by other factors such as the storm's latitude and size, as well as the local atmospheric environment. [ 9 ] Knaff and Zehr (2007) came up with the following formula to relate wind and pressure, taking into account movement, size, and latitude: [ 10 ] MSLP = 23.286 − 0.483 V_srm − (V_srm/24.254)² − 12.587 S − 0.483 φ + P_env, where V_srm is the maximum wind speed corrected for storm motion, φ is the latitude, and S is the size parameter. [ 10 ] S is more specifically defined as the ratio of the tangential wind at a radius of 500 kilometres (310 mi) to its value under a Rankine vortex model. [ 11 ] In 2008, Greg Holland published his model in the Monthly Weather Review. [ 8 ] Joe Courtney and John A. Knaff published in 2009 a correction to the previous Knaff–Zehr model. They noted that the Knaff–Zehr model had issues when calculating pressures for storms at low latitudes. The equation derived is: [ 7 ] P_c = 23.286 − 0.483 V_srm1 − (V_srm1/24.254)² − 12.587 S − 0.483 Φ + P_e. The interchangeability of pressure and wind allows the two to be used to give equivalencies for the public. [ 3 ] Pressure-wind relations can be used when information is incomplete, forcing forecasters to rely on the Dvorak technique. [ 11 ] Some storms may have particularly high or low pressures that do not match their wind speed. For example, Hurricane Sandy had a lower pressure than expected for its associated wind speed. [ 3 ]
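The Courtney and Knaff (2009) relation quoted above is straightforward to evaluate. In the sketch below the wind is taken in knots and pressures in hPa, and the storm-motion-corrected wind and size parameter are supplied directly as inputs; those conventions and the example values are assumptions made for illustration, not part of the quoted formula.

```python
def courtney_knaff_pressure(v_srm1_kt, size_param, latitude_deg, env_pressure_hpa):
    """Minimum central pressure from the Courtney-Knaff (2009) relation
    quoted above: Pc = 23.286 - 0.483*V - (V/24.254)^2 - 12.587*S
                       - 0.483*lat + Penv."""
    v = v_srm1_kt
    return (23.286 - 0.483 * v - (v / 24.254) ** 2
            - 12.587 * size_param - 0.483 * latitude_deg + env_pressure_hpa)

# Illustrative storm: 100 kt motion-corrected winds, size parameter S = 1,
# at 20 degrees latitude in a 1010 hPa environment.
print(round(courtney_knaff_pressure(100.0, 1.0, 20.0, 1010.0), 1), "hPa")
```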
https://en.wikipedia.org/wiki/Pressure-wind_relationship_calculations_for_tropical_cyclones
The Pressure Equipment Directive (PED) 2014/68/EU (formerly 97/23/EC) [ 1 ] of the EU sets out the standards for the design and fabrication of pressure equipment ("pressure equipment" means steam boilers, pressure vessels, piping, safety valves and other components and assemblies subject to pressure loading) generally over one liter in volume and having a maximum pressure of more than 0.5 bar gauge. It also sets the administrative requirements for the "conformity assessment" of pressure equipment, allowing free placement on the European market without local legislative barriers. It has been mandatory throughout the EU since 30 May 2002, with the 2014 revision fully effective as of 19 July 2016. [ 2 ] The standards and regulations regarding pressure vessel and boiler safety are also very close to the US standards defined by the American Society of Mechanical Engineers (ASME). This enables most international inspection agencies to provide both verification and certification services to assess compliance with the different pressure equipment directives. [ 3 ] Unlike ASME, the PED does not generally require pressure vessel manufacturers to obtain a prior manufacturing permit, certificate, or stamp. Directive 97/23/EC was fully superseded by Directive 2014/68/EU from 20 July 2016 onwards. Article 13 of the new directive (classification of pressure equipment) became effective on 1 June 2015, replacing Article 9 of Directive 97/23/EC. [ 4 ] In the UK, the Pressure Equipment (Safety) Regulations 2016 (PE(S)R, formerly the Pressure Equipment Regulations 1999 (PER)) and the Pressure Systems Safety Regulations 2000 apply: [ 5 ] see Health and safety regulations in the United Kingdom. The Health and Safety Executive and the Health and Safety Executive for Northern Ireland are the named enforcement authorities. [ 5 ] : Regulation 2 Some of the provisions included in PE(S)R 2016 apply differently in Northern Ireland for as long as the Northern Ireland Protocol remains in force following the UK's exit from the EU.
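As a rough illustration of the scope threshold mentioned above (volume over one liter and maximum pressure above 0.5 bar gauge), a coarse screening check might look like the sketch below. This reflects only the headline criterion quoted in this article; the directive's actual classification also depends on fluid group, equipment type and pressure-volume tables, which are not modelled here.

def ped_scope_screen(volume_litres, max_pressure_bar_gauge):
    # Coarse screen only: equipment generally falls under the PED when it exceeds
    # one litre in volume and 0.5 bar gauge in maximum allowable pressure.
    return volume_litres > 1.0 and max_pressure_bar_gauge > 0.5

print(ped_scope_screen(volume_litres=50.0, max_pressure_bar_gauge=10.0))  # True
print(ped_scope_screen(volume_litres=0.5, max_pressure_bar_gauge=10.0))   # False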
https://en.wikipedia.org/wiki/Pressure_Equipment_Directive_(EU)
In fluid dynamics, the pressure coefficient is a dimensionless number which describes the relative pressures throughout a flow field. The pressure coefficient is used in aerodynamics and hydrodynamics. Every point in a fluid flow field has its own unique pressure coefficient, C_p. In many situations in aerodynamics and hydrodynamics, the pressure coefficient at a point near a body is independent of body size. Consequently, an engineering model can be tested in a wind tunnel or water tunnel, pressure coefficients can be determined at critical locations around the model, and these pressure coefficients can be used with confidence to predict the fluid pressure at those critical locations around a full-size aircraft or boat. The pressure coefficient is a parameter for studying both incompressible and compressible fluids such as water and air. The relationship between the dimensionless coefficient and the dimensional quantities is C_p = (p − p_∞) / (½ ρ_∞ V_∞²), [ 1 ] [ 2 ] where p is the static pressure at the point of interest, p_∞ is the freestream static pressure, ρ_∞ is the freestream fluid density, and V_∞ is the freestream velocity. Using Bernoulli's equation, the pressure coefficient can be further simplified for potential flows (inviscid and steady): [ 3 ] C_p = 1 − (u / u_∞)², where u is the local flow speed and u_∞ is the freestream flow speed. This relationship is valid for the flow of incompressible fluids where variations in speed and pressure are sufficiently small that variations in fluid density can be neglected. This assumption is commonly made in engineering practice when the Mach number is less than about 0.3. Locations where C_p = −1 are significant in the design of gliders because this indicates a suitable location for a "total energy" port supplying signal pressure to the variometer, a special vertical speed indicator which reacts to vertical movements of the atmosphere but does not react to vertical maneuvering of the glider. In an incompressible fluid flow field around a body, there will be points having positive pressure coefficients up to one, and negative pressure coefficients including coefficients less than minus one. In the flow of compressible fluids such as air, and particularly the high-speed flow of compressible fluids, ½ρv² (the dynamic pressure) is no longer an accurate measure of the difference between stagnation pressure and static pressure. Also, the familiar relationship that stagnation pressure is equal to total pressure does not always hold true. (It is always true in isentropic flow, but the presence of shock waves can cause the flow to depart from isentropic.) As a result, pressure coefficients can be greater than one in compressible flow. [ 4 ] The pressure coefficient C_p can be estimated for irrotational and isentropic flow by introducing the potential Φ and the perturbation potential φ, normalized by the free-stream velocity u_∞. Using Bernoulli's equation, rewritten in terms of the local sound speed a, the pressure coefficient can then be expressed as a function of the perturbation potential and the far-field sound speed a_∞. The classical piston theory is a powerful aerodynamic tool. From the momentum equation and the assumption of isentropic perturbations, one obtains the basic piston theory formula relating the surface pressure to the downwash speed w and the sound speed a.
The surface geometry, the slip velocity boundary condition, and an approximation for the downwash speed w complete the piston theory formulation. In hypersonic flow, the pressure coefficient can be accurately calculated for a vehicle using Newton's corpuscular theory of fluid motion, which is inaccurate for low-speed flow and relies on three simplifying assumptions. [ 5 ] For a freestream velocity V_∞ impacting a surface of area A, which is inclined at an angle θ relative to the freestream, the change in normal velocity is V_∞ sin θ and the mass flux incident on the surface is ρ_∞ V_∞ A sin θ, with ρ_∞ being the freestream air density. The momentum flux, equal by Newton's second law to the force F exerted on the surface, is then F = ρ_∞ V_∞² A sin² θ. Dividing by the surface area, the force per unit area equals the difference between the surface pressure p and the freestream pressure p_∞, leading to the relation p − p_∞ = ρ_∞ V_∞² sin² θ. The last equation may be identified with the pressure coefficient, meaning that Newtonian theory predicts that the pressure coefficient in hypersonic flow is C_p = 2 sin² θ. For very high speed flows and vehicles with sharp surfaces, the Newtonian theory works very well. A modification to the Newtonian theory, specifically for blunt bodies, was proposed by Lester Lees: C_p = C_p,max sin² θ, [ 6 ] where C_p,max is the maximum value of the pressure coefficient, attained at the stagnation point behind a normal shock wave and expressed in terms of the stagnation pressure p_o and the ratio of specific heats γ. The latter relation is obtained from the ideal gas law p = ρRT, the Mach number M = V/a, and the speed of sound a = √(γRT). The Rayleigh pitot tube formula for a calorically perfect normal shock gives the ratio of the stagnation and freestream pressures, from which the maximum pressure coefficient for the modified Newtonian law follows. In the limit when M_∞ → ∞ the maximum pressure coefficient approaches a finite value, and as γ → 1, C_p,max = 2, recovering the pressure coefficient from Newtonian theory at very high speeds. The modified Newtonian theory is substantially more accurate than the Newtonian model for calculating the pressure distribution over blunt bodies. [ 5 ] An airfoil at a given angle of attack will have what is called a pressure distribution. This pressure distribution is simply the pressure at all points around the airfoil. Typically, graphs of these distributions are drawn so that negative numbers are higher on the graph, as the C_p for the upper surface of the airfoil will usually be farther below zero and will hence be the top line on the graph. All three aerodynamic coefficients are integrals of the pressure coefficient curve along the chord. The coefficient of lift for a two-dimensional airfoil section with strictly horizontal surfaces can be calculated from the pressure coefficient distribution by integration, or by calculating the area between the curves on the distribution.
This expression is not suitable for direct numeric integration using the panel method of lift approximation, as it does not take into account the direction of pressure-induced lift, and it is true only for zero angle of attack. When the lower surface C_p is higher (more negative) on the distribution, it counts as negative area, as this produces downforce rather than lift.
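A short sketch may help tie together the incompressible definitions and the chordwise integration described above. The helper functions below use the standard definitions C_p = (p − p_∞)/(½ρ_∞V_∞²) and C_p = 1 − (u/u_∞)²; the sample pressure distribution is invented purely for illustration and is not measured data.

def cp_from_pressure(p, p_inf, rho_inf, v_inf):
    # Cp = (p - p_inf) / (0.5 * rho_inf * v_inf**2)
    return (p - p_inf) / (0.5 * rho_inf * v_inf ** 2)

def cp_from_velocity(u_local, u_inf):
    # Bernoulli form for incompressible potential flow: Cp = 1 - (u / u_inf)**2
    return 1.0 - (u_local / u_inf) ** 2

def section_cl(x_over_c, cp_lower, cp_upper):
    # Trapezoidal integration of (Cp_lower - Cp_upper) over the chord: the
    # zero-angle-of-attack approximation described in the article.
    total = 0.0
    for i in range(len(x_over_c) - 1):
        dx = x_over_c[i + 1] - x_over_c[i]
        f0 = cp_lower[i] - cp_upper[i]
        f1 = cp_lower[i + 1] - cp_upper[i + 1]
        total += 0.5 * (f0 + f1) * dx
    return total

n = 51
x = [i / (n - 1) for i in range(n)]
cp_upper = [-1.2 * (1.0 - xi) for xi in x]   # suction side, invented shape
cp_lower = [0.4 * (1.0 - xi) for xi in x]    # pressure side, invented shape
print(round(cp_from_velocity(0.0, 50.0), 2))          # 1.0 at a stagnation point
print(round(section_cl(x, cp_lower, cp_upper), 3))    # 0.8 for this made-up distribution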
https://en.wikipedia.org/wiki/Pressure_coefficient
Pressure drop (often abbreviated as "dP" or "ΔP") [ 1 ] is defined as the difference in total pressure between two points of a fluid-carrying network. A pressure drop occurs when frictional forces, caused by the resistance to flow, act on a fluid as it flows through a conduit (such as a channel, pipe, or tube). This friction converts some of the fluid's hydraulic energy to thermal energy (i.e., internal energy). Since the thermal energy cannot be converted back to hydraulic energy, the fluid experiences a drop in pressure, as required by conservation of energy. [ 2 ] The main determinants of resistance to fluid flow are fluid velocity through the pipe and fluid viscosity. Pressure drop increases proportionally to the frictional shear forces within the piping network. A piping network with a high relative roughness as well as many pipe fittings and joints, tube convergence, divergence, turns, and other physical features will have a higher pressure drop. High flow velocities or high fluid viscosities result in a larger pressure drop across a pipe section, valve, or elbow joint, while low velocity will result in less (or no) pressure drop. The fluid may also be biphasic, as in pneumatic conveying with a gas and a solid; in this case, the friction of the solid must also be taken into consideration when calculating the pressure drop. [ 3 ] Fluid in a system will always flow from a region of higher pressure to a region of lower pressure, assuming it has a path to do so. [ 4 ] All things being equal, a higher pressure drop will lead to a higher flow (except in cases of choked flow). [ 5 ] The pressure drop of a given system will determine the amount of energy needed to convey fluid through that system. For example, a larger pump could be required to move a set amount of water through smaller-diameter pipes (with higher velocity and thus higher pressure drop) as compared to a system with larger-diameter pipes (with lower velocity and thus lower pressure drop). [ 6 ] Pressure drop is inversely related to pipe diameter to the fifth power. [ 7 ] For example, halving a pipe's diameter would increase the pressure drop by a factor of 2^5 = 32 (e.g. from 2 psi to 64 psi), assuming no change in flow. Pressure drop in piping is directly proportional to the length of the piping; for example, a pipe with twice the length will have twice the pressure drop, given the same flow rate. [ 8 ] Piping fittings (such as elbow and tee joints) generally lead to greater pressure drop than straight pipe, so a number of correlations have been developed to calculate the equivalent length of fittings. [ 9 ] Certain valves are provided with an associated flow coefficient, commonly known as C_v or K_v. The flow coefficient relates pressure drop, flow rate, and specific gravity for a given valve. [ 10 ] Many empirical correlations exist for calculating pressure drop.
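The two scaling rules quoted above (pressure drop proportional to pipe length, and inversely proportional to diameter to the fifth power at fixed flow) can be combined in a small sketch. The reference drop of 2 psi is the article's own example; the scaling function itself is an illustration of those proportionalities, not a substitute for a full correlation such as Darcy-Weisbach.

def scaled_pressure_drop(dp_ref, length_ratio, diameter_ratio):
    # Scale a known pressure drop when length and diameter change, with flow held constant:
    # dP is proportional to L and to 1/D**5.
    return dp_ref * length_ratio / diameter_ratio ** 5

# Halving the diameter of a pipe that originally dropped 2 psi:
print(scaled_pressure_drop(dp_ref=2.0, length_ratio=1.0, diameter_ratio=0.5))  # 64.0 psi
# Doubling the length at the original diameter:
print(scaled_pressure_drop(dp_ref=2.0, length_ratio=2.0, diameter_ratio=1.0))  # 4.0 psi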
https://en.wikipedia.org/wiki/Pressure_drop
A pressure exchanger transfers pressure energy from a high-pressure fluid stream to a low-pressure fluid stream. Many industrial processes operate at elevated pressures and have high-pressure waste streams. One way of providing a high-pressure fluid to such a process is to transfer the waste pressure to a low-pressure stream using a pressure exchanger. One particularly efficient type of pressure exchanger is a rotary pressure exchanger. This device uses a cylindrical rotor with longitudinal ducts parallel to its rotational axis. The rotor spins inside a sleeve between two end covers. Pressure energy is transferred directly from the high-pressure stream to the low-pressure stream in the ducts of the rotor. Some fluid that remains in the ducts serves as a barrier that inhibits mixing between the streams. The rotational action is similar to that of an old-fashioned machine gun firing high-pressure bullets while being continuously refilled with new fluid cartridges: the ducts of the rotor charge and discharge as the pressure transfer process repeats itself. The performance of a pressure exchanger is measured by the efficiency of the energy transfer process and by the degree of mixing between the streams. The energy of the streams is the product of their flow volumes and pressures. Efficiency is a function of the pressure differentials and the volumetric losses (leakage) through the device, computed with the following equation: η = Σ(energy out) / Σ(energy in) = [(Q_G − L) × (P_G − HDP) + (Q_B + L) × (P_B − LDP)] / (Q_G × P_G + Q_B × P_B)   (1), where Q is flow, P is pressure, L is leakage flow, HDP is the high-pressure differential, LDP is the low-pressure differential, the subscript B refers to the low-pressure feed to the device, and the subscript G refers to the high-pressure feed to the device. Mixing is a function of the concentrations of the species in the inlet streams and the ratio of flow volumes to the device. One application in which pressure exchangers are widely used is reverse osmosis (RO). In an RO system, pressure exchangers are used as energy recovery devices (ERDs). As illustrated, high-pressure concentrate from the membranes [C] is directed [3] to the ERD [D]. The ERD uses this high-pressure concentrate stream to pressurize the low-pressure seawater stream (stream [1] becomes stream [4]), which it then merges (with the aid of a circulation pump [B]) into the highest-pressure seawater stream created by the high-pressure pump [A]. This combined stream feeds the membranes [C]. The concentrate leaves the ERD at low pressure [5], expelled by the incoming feedwater flow [1]. Pressure exchangers save energy in these systems by reducing the load on the high-pressure pump. In a seawater RO system operating at a 40% membrane water recovery rate, the ERD supplies 60% of the membrane feed flow. Energy is consumed by the circulation pump; however, because this pump merely circulates and does not pressurize water, its energy consumption is almost negligible: less than 3% of the energy consumed by the high-pressure pump. Therefore, nearly 60% of the membrane feed flow is pressurized with almost no energy input. Seawater desalination plants have produced potable water for many years.
However, until recently desalination had been used only in special circumstances because of the high energy consumption of the process. [ citation needed ] Early designs for desalination plants made use of various evaporation technologies. The most advanced are the multi-stage flash distillation seawater evaporation desalinators, which make use of multiple stages and have an energy consumption of over 9 kWh per cubic meter of potable water produced. For this reason, large seawater desalinators were initially constructed in locations with low energy costs, such as the Middle East, or next to process plants with available waste heat. In the 1970s the seawater reverse osmosis (SWRO) process was developed, which made potable water from seawater by forcing it under high pressure through a tight membrane, thus filtering out salts and impurities. These salts and impurities are discharged from the SWRO device as a concentrated brine solution in a continuous stream, which contains a large amount of high-pressure energy. Most of this energy can be recovered with a suitable device. Many early SWRO plants built in the 1970s and early 1980s had an energy consumption of over 6.0 kWh per cubic meter of potable water produced, due to low membrane performance, pressure drop limitations and the absence of energy recovery devices. An example where a pressure exchange engine finds application is in the production of potable water using the reverse osmosis membrane process. In this process, a feed saline solution is pumped into a membrane array at high pressure. The input saline solution is then divided by the membrane array into a supersaline solution (brine) at high pressure and potable water at low pressure. While the high-pressure brine is no longer useful in this process as a fluid, the pressure energy that it contains has high value. A pressure exchange engine is employed to recover the pressure energy in the brine and transfer it to the feed saline solution. After transfer of the pressure energy, the brine is expelled at low pressure to drain. Nearly all reverse osmosis plants operated for industrial-scale desalination of seawater to produce drinking water are equipped with an energy recovery system based on turbines. These are driven by the concentrate (brine) leaving the plant and transfer the energy contained in the high pressure of this concentrate, usually mechanically, to the high-pressure pump. In the pressure exchanger the energy contained in the brine is transferred hydraulically [ 1 ] [ 2 ] to the feed, with an efficiency of approximately 98%. [ 3 ] This reduces the energy demand for the desalination process significantly, and thus the operating costs. The result is economical energy recovery, with amortization times for such systems varying between 2 and 4 years depending on the place of operation. Reduced energy and capital costs mean that for the first time it is possible to produce potable water from seawater at a cost below $1 per cubic meter in many locations worldwide. Although the cost may be somewhat higher on islands with high power costs, the pressure exchanger has the potential to rapidly expand the market for seawater desalination. By applying a pressure exchange system, which is already used in other domains, a considerably higher efficiency of energy recovery in reverse osmosis systems can be achieved than with reverse-running pumps or turbines. The pressure exchange system is suited, above all, to bigger plants, i.e. approximately ≥ 2,000 m³/d of permeate production.
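Equation (1) above translates directly into a small calculation. The sketch below implements it as written; the example flows and pressures are invented for illustration and do not describe any real plant.

def pressure_exchanger_efficiency(q_g, p_g, q_b, p_b, leakage, hdp, ldp):
    # eta = [(Q_G - L)*(P_G - HDP) + (Q_B + L)*(P_B - LDP)] / (Q_G*P_G + Q_B*P_B)
    energy_out = (q_g - leakage) * (p_g - hdp) + (q_b + leakage) * (p_b - ldp)
    energy_in = q_g * p_g + q_b * p_b
    return energy_out / energy_in

eta = pressure_exchanger_efficiency(q_g=100.0, p_g=60.0, q_b=100.0, p_b=2.0,
                                    leakage=1.0, hdp=0.8, ldp=0.5)
print(f"{eta:.3f}")  # roughly 0.97 for these made-up values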
https://en.wikipedia.org/wiki/Pressure_exchanger
Pressure gain combustion (PGC) is an unsteady combustion process used in gas turbines in which the gas expansion caused by heat release is constrained. First developed in the early 20th century as one of the earliest gas turbine designs, the concept was mostly abandoned following the advent of isobaric jet engines in WWII. [ 1 ] As an alternative to conventional gas turbines, pressure gain combustion prevents the expansion of gas by holding it at constant volume during the reaction, causing an increase in stagnation pressure. The resulting combustion produces a detonation, rather than the deflagration used in most turbines. Doing so allows for extra work extraction rather than a loss of energy due to pressure loss across the turbine. Several different variations of turbines use this process, the most prominent being the pulse detonation engine and the rotating detonation engine. In recent years, pressure gain combustion has once again gained relevance and is currently being researched for use in propulsion systems and power generation due to its potential for improved efficiency and performance over conventional turbines. [ 2 ] [ 3 ] [ 4 ] Gas-powered turbines have been researched since the late 18th century, starting with John Barber's 1791 patent. Over a century later, Ægidius Elling built a turbine in 1903 which generated 11 bhp (8.2 kW), the first gas turbine to produce net positive work. In 1909, the first pressure gain combustion turbine was built by Hans Holzwarth. It initially produced 200 bhp (147 kW), and subsequent improvements increased its power output to 5,000 bhp (3,728 kW) by 1939. However, the aptly named Explosion Turbine would lose popularity among engineers and inventors as continuous combustion designs gained traction due to their use in jet engine prototypes. [ 5 ] [ 1 ] The concept of pulsed propulsion is neither new nor exclusive to pressure gain combustion; the German V1 missile used a pulse jet operating at 45 Hz. During the space race, NASA's Project Orion concept used the force of nuclear explosions ignited behind the spacecraft to generate thrust. This process is known as nuclear pulse propulsion and is stylistically similar to the pulse detonation engine. [ 6 ] In the mid-20th century, US aeronautical scientists and engineers studying the properties of detonation waves created a primitive rotating detonation chamber. This development became the basis for the rotating detonation engine, one of the leading PGC engine concepts, although it was largely ignored at the time due to its instability. [ 7 ] However, as gas turbines become more and more optimized, PGC research is now gaining traction in aircraft propulsion, power generation, and even rocket propulsion. In January 2008, a pulse detonation-powered plane completed its first flight as a cooperative project between the Air Force Research Laboratory and Innovative Scientific Solutions, a research and product development company. Currently, various organizations have developed working PGC engines (mostly RDEs), but none have been put to commercial use due to developmental challenges. [ 7 ] [ 8 ] [ 9 ] The majority of gas turbines consist of an intake through which atmospheric air enters the turbine. The air is then pressurized through a compressor before mixing with fuel.
The air-fuel mixture, also known as the working fluid, is combusted in a deflagration (a combustion reaction propagating at subsonic speed), which causes the mixture to expand in volume while maintaining constant pressure. Finally, the combustion product is ejected out of the exhaust to produce thrust. This process is known as the Brayton cycle and has been used as the standard method of jet propulsion and turbine design for about a century. [ 10 ] [ 11 ] Contrasting with the Brayton cycle used in most turbines, pressure gain combustion is based on the Humphrey cycle . Instead of an isobaric system in which gas volume expands as heat is added to the combustion chamber, the volume of working fluid stays constant as its pressure increases during combustion. [ 12 ] While the Brayton cycle describes a subsonic deflagration, the Humphrey cycle occurs in a detonation (a combustion reaction propagating at supersonic speed). [ 13 ] The reaction occurs so quickly that the mixture doesn't have time to expand, causing a pressure gain, before being ejected through the exhaust to produce thrust. The whole process occurs rapidly, and turbines will produce anywhere from 20 to 200 detonations per second. [ 14 ] [ 15 ] Because the working fluid is combusting at a constant volume, there is no pressure loss across the turbine, which increases the net work generated by each cycle. However, since work is done by a series of detonations, rather than a constant reaction generating thrust, the process is naturally more unsteady compared to a conventional turbine. [ 3 ] [ 16 ] The simplest modern PGC turbine is the Pulse detonation engine . Consisting of almost no moving parts, the PDE is externally similar to a ramjet , a type of jet engine without compressor fans that is viable only at supersonic speeds. First, air enters the intake nozzle and travels directly to the combustion chamber to be mixed with injected fuel. There, the mixture is ignited while the front of the chamber closes, producing a detonation wave which both compresses and combusts the mixture, before the working fluid is ejected at supersonic speeds through the exhaust. [ 17 ] [ 6 ] Because of the engine's simplicity and anatomical similarity to ramjets and scramjets , pulse detonation engines can be implemented as a combined-cycle engine, which can improve the performance and reliability of ramjets. Conventional combined-cycle engines have complex moving parts that are essentially rendered useless at high speeds, an issue that PDE/ramjet drives will not have. [ 17 ] [ 6 ] Apart from PDEs, there exist multiple other PGC engine concepts, including resonant pulse combustors and internal combustion wave rotors. However, the majority of modern PGC research is concentrated around the rotating detonation engine (RDE), which aims to solve many of the issues encountered by PDEs. [ 3 ] The main drawback of pulse detonation is the intermittent nature of the combustions. Not only is the reaction hard to control, but the intermittent combustion also loses power due to the time it takes to refuel the combustion chamber after purging, during which no thrust is produced. [ 17 ] [ 6 ] The rotating detonation engine aims to address both these problems. While PDEs involve a series of detonations to ignite batches of air that enter the combustion chamber, RDEs can circumvent this by utilizing a single detonation wave that rotates around the space in between concentric cylinders. 
A continuous air intake flows through the cylinders, which compresses and combusts as it passes through the rotating detonation wave. This eliminates the need to constantly produce detonations since it only uses a single cyclic detonation, and it allows for a steadier constant flow, instead of the pulsing thrust produced by PDEs. [ 18 ] [ 19 ] Modern chemical rockets still utilize deflagration reactions to generate thrust, which have been optimized to their limits. As a result, pressure gain combustion engines, mostly RDEs, have garnered attention as a possible method of improving rocket performance. Currently, pressure gain rocket engines are being researched by space agencies in multiple countries, including NASA and JAXA , as well as numerous universities and private companies. [ 9 ] Detonation propulsion, which is more energy efficient than conventional deflagration reactions, may increase efficiency by 5-10%, which can both reduce rocket mass and increase payload size. [ 20 ] As mentioned previously, pressure gain turbines have also been researched and developed extensively for use in aircraft propulsion. Pressure gain combustion engines can both improve the performance and reduce the complexity of combined ramjet/scramjet engines through their shared design similarities. This may even allow PDE/RDE combined ramjets to be utilized at conditions unsuitable for conventional ramjets. In addition, pressure gain turbojets require significantly less complexity, especially in the compressors, compared to regular turbines. This will not only save resources in manufacturing but also allow for designs to produce higher thrust in smaller engines. [ 6 ] [ 19 ] Apart from nuclear fission , natural gas contains the highest energy density of widely used fuels. [ 21 ] As such, to reduce carbon emissions , electricity-generating plants are increasingly turning to gas turbines from crude oil and coal . While conventional turbines generate large amounts of energy more efficiently than other fossil fuels, just as in aerospace, they are beginning to reach their limits. [ 22 ] Similar to its potential use in propulsion, pressure gain combustion turbines can offer an improvement to gas power plants. In addition to better efficiency, RDEs can operate at much higher hydrogen concentrations, further improving performance because of hydrogen's higher energy density compared to petrochemicals . The relative simplicity of RDEs can also improve reliability and ease of maintenance, though that may be counterbalanced by the increased stress put on the engine by the process itself. [ 22 ] While PGC offers improved performance and efficiency, there are serious flaws and challenges that researchers were initially unable to solve, preventing the technology from being widely used. Since PDEs are effectively intermittent explosion drives, the cycle they run on is far more unsteady and harder to control than conventional turbines. This makes PDEs very difficult to integrate into airframes , as the high energy pulsing of the engine can cause the inlet to unstart and stop the reaction, in addition to putting high stress on the nacelle or any other adjacent parts. The noise from the exhaust is also a concern. In testing, the highly energetic detonations produced up to 122 dB at a distance of 3 m in a 20 Hz PDE. For scaled-up commercial units operating at higher power and frequency, noise pollution will be a serious issue if effective damping measures are not implemented. 
[ 6 ] Moreover, due to the high energy required to initiate detonations, PDEs with shorter combustion chambers will need to utilize deflagration combustion at initial ignition and accelerate pressure waves through a process called deflagration to detonation transition (DDT). This requires placing obstacles in the path of the deflagration wave to induce turbulent flow , which speeds up the wave but requires more complexity in the engine structure. [ 17 ] While RDEs solve many of the problems encountered in PDEs they aren't without their flaws. The constant flow of the engine, coupled with the need to sustain the detonation, requires a tremendous intake of air to be rapidly mixed with the fuel in a shorter distance than most PDEs, which are normally quite elongated. In addition, the stress placed on the engine by the detonation process was simply too much for the engine to withstand during the early years of development. However, advancements in materials science and manufacturing processes have improved the feasibility of RDEs to the point where research and development is believed to be worthwhile by many organizations. [ 9 ] [ 18 ] [ 19 ]
https://en.wikipedia.org/wiki/Pressure_gain_combustion
In fluid mechanics, pressure head is the height of a liquid column that corresponds to a particular pressure exerted by the liquid column on the base of its container. It may also be called static pressure head or simply static head (but not static head pressure). Mathematically this is expressed as h_p = p / γ = p / (ρ g), where h_p is the pressure head (a length), p is the pressure, γ is the specific weight of the liquid, ρ is its density, and g is the acceleration due to gravity. Note that in this equation, the pressure term may be gauge pressure or absolute pressure, depending on the design of the container and whether it is open to the ambient air or sealed without air. Pressure head is a component of hydraulic head, in which it is combined with elevation head. When considering dynamic (flowing) systems, a third term is needed: velocity head. Thus, the three terms of velocity head, elevation head, and pressure head appear in the head equation derived from the Bernoulli equation for incompressible fluids, h = v² / (2g) + z + p / (ρ g), where h is the total head, v is the flow velocity, and z is the elevation. Fluid flow is measured with a wide variety of instruments. The venturi meter in the diagram on the left shows two columns of a measurement fluid at different heights. The height of each column of fluid is proportional to the pressure of the fluid. To demonstrate a classical measurement of pressure head, we could hypothetically replace the working fluid with another fluid having different physical properties. For example, if the original fluid was water and we replaced it with mercury at the same pressure, we would expect to see a rather different value for pressure head. In fact the specific weight of water is 9.8 kN/m³ and the specific weight of mercury is 133 kN/m³, so for any particular measurement of pressure head, a column of water will be about 133/9.8 ≈ 13.6 times taller than the corresponding column of mercury. So if a water column meter reads "13.6 cm H₂O", then an equivalent measurement is "1.00 cm Hg". This example demonstrates why there is some confusion surrounding pressure head and its relationship to pressure. Scientists frequently use columns of water (or mercury) to measure pressure (manometric pressure measurement), since for a given fluid, pressure head is proportional to pressure. Measuring pressure in units of "mm of mercury" or "inches of water" makes sense for instrumentation, but these raw measurements of head must frequently be converted to more convenient pressure units using the equations above. In summary, pressure head is a measurement of length, which can be converted to units of pressure (force per unit area), as long as strict attention is paid to the density of the measurement fluid and the local value of g. We would normally use pressure head calculations in areas in which g is constant; however, if the gravitational field fluctuates, the pressure head fluctuates with it. A mercury barometer is one of the classic uses of static pressure head. Such barometers are an enclosed column of mercury standing vertically with gradations on the tube. The lower end of the tube is bathed in a pool of mercury open to the ambient air to measure the local atmospheric pressure. The reading of a mercury barometer (in mm of Hg, for example) can be converted into an absolute pressure using the above equations. If we had a column of mercury 767 mm high, we could calculate the atmospheric pressure as (0.767 m)·(133 kN/m³) ≈ 102 kPa. See the torr, millimeter of mercury, and pascal (unit) articles for barometric pressure measurements at standard conditions.
The venturi meter with a manometer is a common type of flow meter which can be used in many fluid applications to convert differential pressure heads into volumetric flow rate, linear fluid speed, or mass flow rate using Bernoulli's principle. The reading of these meters (in inches of water, for example) can be converted into a differential, or gauge, pressure using the above equations. The pressure of a fluid is different when it flows than when it is not flowing; this is why static pressure and dynamic pressure are never the same in a system in which the fluid is in motion. This pressure difference arises from a change in fluid velocity that produces velocity head, a term of the Bernoulli equation that is zero when there is no bulk motion of the fluid. In the picture on the right, the pressure differential is entirely due to the change in velocity head of the fluid, but it can be measured as a pressure head because of the Bernoulli principle. If, on the other hand, we could measure the velocity of the fluid, the pressure head could be calculated from the velocity head. See the derivations of the Bernoulli equation.
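The head-to-pressure conversion used in the examples above (pressure equals specific weight times column height) is easy to check numerically. The sketch below reuses the article's specific weights for water and mercury and reproduces the 767 mm Hg barometer example; the numbers are taken from the article itself.

GAMMA_WATER = 9.8e3     # N/m^3, specific weight of water as quoted above
GAMMA_MERCURY = 133e3   # N/m^3, specific weight of mercury as quoted above

def pressure_from_head(head_m, gamma):
    # Pressure in pascals for a column of height head_m of a fluid with specific weight gamma.
    return gamma * head_m

# The article's barometer example: 767 mm of mercury.
print(round(pressure_from_head(0.767, GAMMA_MERCURY) / 1000.0, 1), "kPa")  # about 102.0 kPa

# Equivalent water column for the same pressure is ~13.6 times taller:
print(round(GAMMA_MERCURY / GAMMA_WATER, 1))  # 13.6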
https://en.wikipedia.org/wiki/Pressure_head
Pressure injection cells, sometimes referred to as "bomb-loading devices", are used in proteomic research to enable controlled dispensing of small-volume liquid samples. Using high pressure, they serve two applications: densely packing nanobore capillary columns (micro-columns) with solid-phase particles for use in LC/MS analysis; and precisely infusing microliter samples directly from microcentrifuge tubes into mass spectrometers without additional transfers, wasted sample, or contact with metallic surfaces, which adsorb some negatively charged molecules such as phosphopeptides. [ 1 ] A typical pressure injection cell holds a micro-tube or a vial in its central chamber. [ 2 ] A small magnetic stir bar can be used to keep the particles in suspension. The pressure cell is connected to a source of compressed gas (such as argon, helium or nitrogen). A capillary is placed through a ferrule in the cap so that one end is in contact with the liquid in the tube or vial. The distal end of the capillary is fritted to retain the particles while packing. The pressure of the compressed gas can be regulated to adjust the flow rate of the sample into the capillary. [ 3 ]
https://en.wikipedia.org/wiki/Pressure_injection_cell
Pressure jump is a technique used in the study of chemical kinetics. It involves making rapid changes to the pressure of an experimental system and observing the return to equilibrium or steady state. This allows the study of shifts in the equilibrium of reactions that equilibrate over periods from milliseconds to hours (or longer), [ 1 ] with the changes often observed using absorption spectroscopy or fluorescence spectroscopy, though other spectroscopic techniques such as CD, [ 2 ] FTIR [ 3 ] or NMR [ 4 ] can also be used. Historically, pressure jumps were limited to one direction. Most commonly, fast drops in pressure were achieved by using a quick-release valve or a fast burst membrane. [ 5 ] Modern equipment can achieve pressure changes in both directions using either double-reservoir arrangements [ 6 ] (good for large changes in pressure) or pistons operated by piezoelectric actuators [ 7 ] (often faster than valve-based approaches). Ultrafast pressure drops can be achieved using electrically disintegrated burst membranes. [ 8 ] The ability to automatically repeat measurements and average the results is useful since the reaction amplitudes are often small. The fractional extent of the reaction (i.e. the percentage change in concentration of a measurable species) depends on the molar volume change (ΔV°) between the reactants and products and the equilibrium position. If K is the equilibrium constant and P is the pressure, then the volume change is given by ΔV° = −RT (∂ln K / ∂P)_T, where R is the universal gas constant and T is the absolute temperature. The volume change can thus be understood as the pressure dependency of the change in Gibbs free energy associated with the reaction. When a single step in a reaction is perturbed in a pressure jump experiment, the reaction follows a single exponential decay function with the reciprocal time constant (1/τ) equal to the sum of the forward and reverse intrinsic rate constants. In more complex reaction networks, when multiple reaction steps are perturbed, the reciprocal time constants are given by the eigenvalues of the characteristic rate equations. The ability to observe intermediate steps in a reaction pathway is one of the attractive features of this technology. [ 9 ]
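Two of the relations above lend themselves to a short numerical sketch: the reaction volume change obtained from the pressure dependence of ln K, and the single-exponential relaxation whose reciprocal time constant is the sum of the forward and reverse rate constants. The numbers used below (a 5% change in K over a 100 bar jump at 298 K) are illustrative assumptions only, not data from the article.

import math

R = 8.314  # J mol^-1 K^-1

def reaction_volume_change(k1, k2, p1_pa, p2_pa, temperature_k):
    # Finite-difference estimate of dV (m^3/mol) from K at two pressures:
    # dV = -R*T * d(ln K)/dP
    dlnk_dp = (math.log(k2) - math.log(k1)) / (p2_pa - p1_pa)
    return -R * temperature_k * dlnk_dp

def relaxation_signal(t, k_forward, k_reverse, amplitude):
    # Observed deviation from the new equilibrium at time t after the jump, with 1/tau = kf + kr.
    tau = 1.0 / (k_forward + k_reverse)
    return amplitude * math.exp(-t / tau)

dv = reaction_volume_change(k1=1.00, k2=0.95, p1_pa=1e5, p2_pa=1e7 + 1e5, temperature_k=298.0)
print(round(dv * 1e6, 2), "mL/mol")  # about +12.7 mL/mol for these illustrative numbers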
https://en.wikipedia.org/wiki/Pressure_jump
Pressure oxidation is a process for extracting gold from refractory ore. The most common refractory ores are pyrite and arsenopyrite, which are sulfide ores that trap the gold within them. Refractory ores require pre-treatment before the gold can be adequately extracted. [ 1 ] The pressure oxidation process is used to prepare such ores for conventional gold extraction processes such as cyanidation. It is performed in an autoclave at high pressure and temperature, where high-purity oxygen mixes with a slurry of ore. [ 2 ] Oxidizing the original sulfide minerals at high temperature and pressure completely liberates the trapped gold. Pressure oxidation has a very high gold recovery rate, normally at least 10% higher than roasting. [ 1 ] The oxidation of the iron sulfide minerals produces sulfuric acid, soluble compounds such as ferric sulfate, and solids such as iron sulfate or jarosite. The iron-based solids pose an environmental challenge, as they can release acid and heavy metals to the environment. They can also make later precious metal recovery more difficult. Arsenic in the ore is converted to solid scorodite inside the autoclave, allowing it to be easily disposed of. This is an advantage over processes such as roasting, where these toxic products are released as gases. [ 1 ] A disadvantage of pressure oxidation is that any silver in the feed material will often react to form silver jarosite inside the autoclave, making it difficult and expensive to recover the silver. [ 1 ] An example of a mine utilizing this technology is the Pueblo Viejo mine in the Dominican Republic. At Pueblo Viejo, the process is performed by injecting high-purity oxygen into autoclaves operating at 230 °C and 40 bar of pressure. The resulting chemical reactions oxidize the sulfide minerals that the gold is trapped within. [ 3 ] The oxidation of pyrite is highly exothermic, allowing the autoclave to operate at this temperature without an external heat source. [ 2 ]
https://en.wikipedia.org/wiki/Pressure_oxidation
Pressure piling is a phenomenon related to the combustion of gases in a tube or long vessel. When a flame front propagates along a tube, the unburned gases ahead of the front are compressed, and hence heated. The amount of compression varies depending on the geometry and can range from twice to eight times the initial pressure. Where multiple vessels are connected by piping, ignition of gases in one vessel and pressure piling may result in a deflagration to detonation transition and a very large explosion pressure. [ 1 ] In electrical equipment in hazardous areas, if two electrical enclosures are connected by a conduit, an explosion of gas in one of the compartments travels through the conduit into the next enclosure. [ 2 ] The pressure of the 'primary' explosion, together with the pressure from the 'secondary' explosion in the other compartment, produces a combined explosion that the equipment cannot contain. Heat, arcs or sparks escape from the equipment and can ignite any gas or vapour that may be around. Operators avoid this by not using conduits to join classified equipment together and by using barrier glands on cables entering the enclosure, ensuring that compartments remain separate at all times.
https://en.wikipedia.org/wiki/Pressure_piling
A pressure reactor, sometimes referred to as a pressure tube or a sealed tube, is a chemical reaction vessel which can conduct a reaction under pressure. A pressure reactor is a special application of a pressure vessel. The pressure can be generated by the reaction itself or created by an external source, such as hydrogen in catalytic transfer hydrogenation. A pressure reactor can offer several advantages over the conventional round-bottom flask. Firstly, it can conduct a reaction above the boiling point of a solvent. Secondly, the pressure can reduce the reaction volume, including the liquid phase, and in turn increase concentration and collision frequency, and so accelerate a reaction. An increase in temperature can speed up the desired reaction, but it also speeds up the decomposition of reagents and starting materials. Pressure, by contrast, can speed up the desired reaction while only affecting decomposition when the decomposition involves the release of a gas or a reaction with a gas in the vessel. When the desired reaction is accelerated, competing reactions are minimized, so pressure generally enables faster reactions with cleaner reaction profiles. These benefits of a pressure reactor have been demonstrated in microwave chemistry: for example, a Suzuki coupling that takes 8 hours at 80 °C takes only 8 minutes at 140 °C in a microwave synthesizer. The microwave effect is a controversial topic; later experiments showed some of the early reports to be artifacts, with the rate enhancement strictly due to thermal effects. [ 1 ] [ 2 ] [ 3 ] If a pressure reactor is engineered properly, it can meet 4 out of the 12 green chemistry principles. Glass pressure reactors are typically used when an operator needs to observe how a reaction takes place. Although the pressure ratings on these systems are lower than those of most metal pressure reactors, they are still an efficient setup for reaching reasonable pressure limits. The rating of a glass vessel is directly related to the diameter of the vessel: the larger the diameter, the lower the allowable pressure. Integrated bottom valves can also affect the pressure rating; a bottom valve on a glass vessel typically means a lower allowable working pressure. These are all variables determined by the process and parameters of each individual reaction. Glass pressure vessels can also be used in inert applications. These vessels are used in reactions including, but not limited to, hydrogenations, polymerizations, synthesis, catalysis, petrochemical processing, and crystallization. One of the drawbacks of a standard glass pressure reactor is the potential for explosions due to hard-to-predict excessive internal pressure and the lack of a relief mechanism. However, with proper safety measures provided by the manufacturer, the operator can perform most reactions in a safe manner. A Fisher-Porter tube or Fisher-Porter vessel is a glass pressure reactor used in the chemical laboratory, manufactured by Andrews Glass Co. of Vineland, NJ. Metal pressure reactors are typically used for high-pressure reactions. They have a much higher pressure rating than glass reactors, but they have their own distinct flaws; one is that metal vessels are more susceptible to corrosion. The material of construction (MOC) is particularly important during the design phase of a metal pressure reactor. The correct MOC can reduce or even eliminate the corrosion seen in the vessel but, depending on the material chosen, could increase the price of a system.
Metal vessels are also much heavier and should be handled carefully when performing maintenance. Metal high-pressure reactors are used in reactions including, but not limited to, hydrogenation, polymerization, synthesis, catalysis and petrochemical processing. They are also used in research areas such as upstream processing, biomass, biopolymers and zeolites. The drawbacks of a metal pressure reactor (bomb) are set-up, maintenance, and corrosion; the main drawback of a microwave synthesizer is solvent limitation. See also: pressure cooking, pressure vessel.
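The temperature-acceleration example given earlier (a Suzuki coupling taking 8 hours at 80 °C but about 8 minutes at 140 °C) can be rationalized with the common rule of thumb that reaction rate roughly doubles for every 10 °C increase. That rule of thumb is an assumption introduced here for illustration, not a claim made in the article.

def estimated_time_at_temperature(time_at_ref_h, t_ref_c, t_new_c, doubling_per_c=10.0):
    # Rule-of-thumb speedup: rate doubles for every doubling_per_c degrees of temperature increase.
    speedup = 2.0 ** ((t_new_c - t_ref_c) / doubling_per_c)
    return time_at_ref_h / speedup

hours = estimated_time_at_temperature(time_at_ref_h=8.0, t_ref_c=80.0, t_new_c=140.0)
print(round(hours * 60.0, 1), "minutes")  # 7.5 minutes, close to the quoted 8 minutes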
https://en.wikipedia.org/wiki/Pressure_reactor
A pressure regulator is a valve that controls the pressure of a fluid to a desired value, using negative feedback from the controlled pressure. Regulators are used for gases and liquids, and can be an integral device with a pressure setting, a restrictor and a sensor all in one body, or consist of a separate pressure sensor, controller and flow valve. Two types are found: the pressure-reducing regulator and the back-pressure regulator. Both types of regulator use feedback of the regulated pressure as input to the control mechanism, and are commonly actuated by a spring-loaded diaphragm or piston reacting to changes in the feedback pressure to control the valve opening; in both cases the valve should be opened only enough to maintain the set regulated pressure. The actual mechanism may be very similar in all respects except the placing of the feedback pressure tap. [ 2 ] As in other feedback control mechanisms, the level of damping is important to achieve a balance between fast response to a change in the measured pressure, and stability of output. Insufficient damping may lead to hunting oscillation of the controlled pressure, while excessive friction of moving parts may cause hysteresis. A pressure-reducing regulator's primary function is to match the flow of gas through the regulator to the demand for fluid placed upon it, whilst maintaining a sufficiently constant output pressure. If the load flow decreases, then the regulator flow must decrease as well. If the load flow increases, then the regulator flow must increase in order to keep the controlled pressure from decreasing because of a shortage of fluid in the pressure system. It is desirable that the controlled pressure does not vary greatly from the set point for a wide range of flow rates, but it is also desirable that flow through the regulator is stable and the regulated pressure is not subject to excessive oscillation. [ citation needed ] A pressure regulator includes a restricting element, a loading element, and a measuring element. In the pictured single-stage regulator, a force balance is used on the diaphragm to control a poppet valve in order to regulate pressure. With no inlet pressure, the spring above the diaphragm pushes it down on the poppet valve, holding it open. Once inlet pressure is introduced, the open poppet allows flow to the diaphragm and pressure in the upper chamber increases, until the diaphragm is pushed upward against the spring, causing the poppet to reduce flow, finally stopping further increase of pressure. By adjusting the top screw, the downward pressure on the diaphragm can be increased, requiring more pressure in the upper chamber to maintain equilibrium. In this way, the outlet pressure of the regulator is controlled. [ citation needed ] The quantities involved in the force balance are f (the poppet spring force), P_i (the inlet pressure), P_o (the outlet pressure) and s (the poppet area). High-pressure gas from the supply enters the regulator through the inlet port. The inlet pressure gauge will indicate this pressure. The gas then passes through the normally open pressure control valve orifice and the downstream pressure rises until the valve actuating diaphragm is deflected sufficiently to close the valve, preventing any more gas from entering the low-pressure side until the pressure drops again. The outlet pressure gauge will indicate this pressure.
https://en.wikipedia.org/wiki/Pressure_reduction_regulator
A pressure reference system (PRS) is an enhancement of the inertial reference system and the attitude and heading reference system designed to provide position angle measurements that are stable over time and do not suffer from the long-term drift caused by sensor imperfections. [ 1 ] The measurement principle exploits the behaviour of the International Standard Atmosphere, in which atmospheric pressure decreases with increasing altitude, together with two pairs of measurement units. Each pair measures pressure at two positions that are mechanically connected at a known distance, for example at the two wing tips. In level flight the system measures no pressure difference, so the position angle is zero. When the aircraft banks to turn, the wing tips change their relative positions, one rising and the other descending, and the pressure sensors in the two units measure different values, which are converted into a position angle. A strapdown inertial navigation system (INS) uses double integration of the accelerations measured by an inertial measurement unit (IMU). [ 2 ] This process accumulates the sensor outputs together with all sensor and measurement errors, so the precision and long-term stability of an INS depend on the quality of the sensors in the IMU; sensor quality can be evaluated with the Allan variance technique. A precise IMU uses laser gyroscopes and precise accelerometers, which are expensive. The INS is a standalone system with no other inputs. The current trend in navigation is to integrate [ 3 ] signals from the IMU with data provided by the Global Positioning System (GPS). This approach gives long-term stability to the INS output by suppressing the influence of sensor errors on the computed aircraft position. The measurement system then becomes an attitude and heading reference system (AHRS), which relaxes the requirements on sensor precision because long-term stability is assured by GPS. The sensors in an AHRS are used only to determine the position angles, so only a single numerical integration of the angular rate measurements is required. An AHRS is cheaper, and many universities and companies are developing AHRS units based on microelectromechanical systems (MEMS) sensors. MEMS sensors do not have the performance required for navigation purposes; an experimental research report [ 4 ] shows the output of the navigation solution drifting away after 2 seconds. AHRS units based on MEMS inertial sensors therefore usually also use a vector magnetometer, a GPS receiver, and a data fusion algorithm to cope with the errors of the MEMS inertial sensors. Besides sensor imperfections, environmental parameters also influence the computed position angles. All of these influences cause drifts in the computed output data, which can confuse the pilot performing the flight. The concept of the PRS was defined by Pavel Paces in his PhD thesis, [ 6 ] where results measured under laboratory conditions were also published. Three arrangements of the PRS were evaluated. While the first method gives only ambiguous results, the second method works well, as it can be replaced by two altimeters; its disadvantage is the high measurement uncertainty of both values. This is addressed by extending the reference volumes used, even in absolute pressure sensors. [ 8 ]
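The wing-tip measurement principle described above can be illustrated with a short calculation. The following Python sketch is a simplified model and not taken from the cited work: it assumes constant air density over the small height difference between the tips and sensors located exactly at the tips, and every name and value in it is invented purely for the example.

    import math

    # Illustrative constants (ISA sea-level values); assumed, not from the article.
    RHO0 = 1.225      # air density, kg/m^3 (treated as constant over the small height difference)
    G = 9.80665       # gravitational acceleration, m/s^2

    def bank_angle_deg(p_left, p_right, span):
        """Estimate the bank angle from static pressures (Pa) measured at the two
        wing tips, separated by a known distance `span` (m).

        A pressure difference implies a height difference dh = dp / (rho * g) for
        small separations; the bank angle then follows from sin(phi) = dh / span.
        """
        dp = p_left - p_right                 # Pa; positive when the left tip is lower
        dh = dp / (RHO0 * G)                  # height difference between the tips, m
        return math.degrees(math.asin(max(-1.0, min(1.0, dh / span))))

    # Example: sensors 10 m apart, 12 Pa difference -> roughly 5.7 degrees of bank.
    print(round(bank_angle_deg(101325.0, 101313.0, 10.0), 1))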
https://en.wikipedia.org/wiki/Pressure_reference_system
A pressure regulator is a valve that controls the pressure of a fluid to a desired value, using negative feedback from the controlled pressure. Regulators are used for gases and liquids, and can be an integral device with a pressure setting, a restrictor and a sensor all in one body, or consist of a separate pressure sensor, controller and flow valve. Two types are found: the pressure reduction regulator and the back-pressure regulator. Both types use feedback of the regulated pressure as input to the control mechanism, and are commonly actuated by a spring-loaded diaphragm or piston reacting to changes in the feedback pressure to control the valve opening; in both cases the valve should open only enough to maintain the set regulated pressure. The actual mechanism may be very similar in all respects except the placing of the feedback pressure tap. [ 2 ] As in other feedback control mechanisms, the level of damping is important to achieve a balance between fast response to a change in the measured pressure and stability of output. Insufficient damping may lead to hunting oscillation of the controlled pressure, while excessive friction of moving parts may cause hysteresis. A pressure reducing regulator's primary function is to match the flow of gas through the regulator to the demand for fluid placed upon it, while maintaining a sufficiently constant output pressure. If the load flow decreases, then the regulator flow must decrease as well. If the load flow increases, then the regulator flow must increase in order to keep the controlled pressure from decreasing because of a shortage of fluid in the pressure system. It is desirable that the controlled pressure does not vary greatly from the set point over a wide range of flow rates, but it is also desirable that flow through the regulator is stable and the regulated pressure is not subject to excessive oscillation. [ citation needed ] A pressure regulator includes a restricting element, a loading element, and a measuring element. In the pictured single-stage regulator, a force balance on the diaphragm is used to control a poppet valve in order to regulate pressure. With no inlet pressure, the spring above the diaphragm pushes it down on the poppet valve, holding it open. Once inlet pressure is introduced, the open poppet allows flow to the diaphragm and pressure in the upper chamber increases, until the diaphragm is pushed upward against the spring, causing the poppet to reduce flow, finally stopping further increase of pressure. By adjusting the top screw, the downward force on the diaphragm can be increased, requiring more pressure in the upper chamber to maintain equilibrium. In this way, the outlet pressure of the regulator is controlled. [ citation needed ] The quantities involved in the force balance are f, the poppet spring force; P_i, the inlet pressure; P_o, the outlet pressure; and s, the poppet area. High pressure gas from the supply enters the regulator through the inlet port. The inlet pressure gauge will indicate this pressure. The gas then passes through the normally open pressure control valve orifice and the downstream pressure rises until the valve actuating diaphragm is deflected sufficiently to close the valve, preventing any more gas from entering the low pressure side until the pressure drops again. The outlet pressure gauge will indicate this pressure.
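The force balance just described (and continued in the next paragraph) can be made concrete with a small numerical sketch. The model below is deliberately simplified and is not taken from the article: the diaphragm area, loading-spring force and every numerical value are invented for illustration only.

    # A simplified static force balance for a single-stage regulator.
    # Assumed model (not the article's own equation): the diaphragm loading spring
    # (force F_load on diaphragm area A_d) acts to open the poppet; outlet pressure
    # P_o on the diaphragm, inlet pressure P_i on the poppet area s, and the poppet
    # spring force f all act to close it. At equilibrium:
    #     F_load = P_o * A_d + P_i * s + f   =>   P_o = (F_load - P_i * s - f) / A_d

    def outlet_pressure(f_load, a_d, p_i, s, f_poppet):
        """Equilibrium outlet pressure (Pa) for the simplified model above."""
        return (f_load - p_i * s - f_poppet) / a_d

    A_D = 2.0e-3        # diaphragm area, m^2 (illustrative value)
    S = 5.0e-6          # poppet area, m^2 (illustrative value)
    F_POPPET = 2.0      # poppet spring force, N (illustrative value)
    F_LOAD = 600.0      # loading-spring force set by the adjusting knob, N

    for p_i_bar in (200, 100, 50, 20):          # falling supply pressure
        p_o = outlet_pressure(F_LOAD, A_D, p_i_bar * 1e5, S, F_POPPET)
        print(f"inlet {p_i_bar:3d} bar -> outlet {p_o / 1e5:.2f} bar")
    # The outlet pressure creeps upward as the supply falls, which is the droop and
    # end-of-tank dump behaviour described in the surrounding paragraphs.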
[ citation needed ] The outlet pressure on the diaphragm and the inlet pressure and poppet spring force on the upstream part of the valve hold the diaphragm/poppet assembly in the closed position against the force of the diaphragm loading spring. If the supply pressure falls, the closing force due to supply pressure is reduced, and downstream pressure will rise slightly to compensate. Thus, if the supply pressure falls, the outlet pressure will increase, provided the outlet pressure remains below the falling supply pressure. This is the cause of end-of-tank dump where the supply is provided by a pressurized gas tank. [ citation needed ] The operator can compensate for this effect by adjusting the spring load by turning the knob to restore outlet pressure to the desired level. With a single stage regulator, when the supply pressure gets low, the lower inlet pressure causes the outlet pressure to climb. If the diaphragm loading spring compression is not adjusted to compensate, the poppet can remain open and allow the tank to rapidly dump its remaining contents. [ citation needed ] Two stage regulators are two regulators in series in the same housing that operate to reduce the pressure progressively in two steps instead of one. The first stage, which is preset, reduces the pressure of the supply gas to an intermediate stage; gas at that pressure passes into the second stage. The gas emerges from the second stage at a pressure (working pressure) set by user by adjusting the pressure control knob at the diaphragm loading spring. Two stage regulators may have two safety valves, so that if there is any excess pressure between stages due to a leak at the first stage valve seat the rising pressure will not overload the structure and cause an explosion. [ citation needed ] An unbalanced single stage regulator may need frequent adjustment. As the supply pressure falls, the outlet pressure may change, necessitating adjustment. In the two stage regulator, there is improved compensation for any drop in the supply pressure. [ citation needed ] Air compressors are used in industrial, commercial, and home workshop environments to perform an assortment of jobs including blowing things clean; running air powered tools; and inflating things like tires, balls, etc. Regulators are often used to adjust the pressure coming out of an air receiver (tank) to match what is needed for the task. Often, when one large compressor is used to supply compressed air for multiple uses (often referred to as "shop air" if built as a permanent installation of pipes throughout a building), additional regulators will be used to ensure that each separate tool or function receives the pressure it needs. This is important because some air tools, or uses for compressed air, require pressures that may cause damage to other tools or materials. [ citation needed ] Pressure regulators are found in aircraft cabin pressurization, canopy seal pressure control, potable water systems, and waveguide pressurization. [ 3 ] Aerospace pressure regulators have applications in propulsion pressurant control for reaction control systems (RCS) and Attitude Control Systems (ACS), where high vibration, large temperature extremes and corrosive fluids are present. [ 4 ] Pressurized vessels can be used to cook food much more rapidly than at atmospheric pressure, as the higher pressure raises the boiling point of the contents. 
All modern pressure cookers will have a pressure regulator valve and a pressure relief valve as a safety mechanism to prevent explosion in the event that the pressure regulator valve fails to adequately release pressure. Some older models lack a safety release valve [ citation needed ] . Most home cooking models are built to maintain a low and high pressure setting. These settings are usually 7 to 15 pounds per square inch (0.48 to 1.03 bar). Almost all home cooking units will employ a very simple single-stage pressure regulator. Older models will simply use a small weight on top of an opening that will be lifted by excessive pressure to allow excess steam to escape. Newer models usually incorporate a spring-loaded valve that lifts and allows pressure to escape as pressure in the vessel rises. Some pressure cookers will have a quick release setting on the pressure regulator valve that will, essentially, lower the spring tension to allow the pressure to escape at a quick, but still safe rate. Commercial kitchens also use pressure cookers, in some cases using oil based pressure cookers to quickly deep fry fast food. Pressure vessels of this sort can also be used as autoclaves to sterilize small batches of equipment and in home canning operations. [ citation needed ] A water pressure regulating valve limits inflow by dynamically changing the valve opening so that when less pressure is on the outside, the valve opens up fully, and too much pressure on the outside causes the valve to shut. In a no pressure situation, where water could flow backwards, it won't be impeded. A water pressure regulating valve does not function as a check valve. [ citation needed ] [ clarification needed ] They are used in applications where the water pressure is too high at the end of the line to avoid damage to appliances or pipes. Oxy-fuel welding and cutting processes require gases at specific pressures, and regulators will generally be used to reduce the high pressures of storage cylinders to those usable for cutting and welding. Oxygen and fuel gas regulators usually have two stages: The first stage of the regulator releases the gas at a constant pressure from the cylinder despite the pressure in the cylinder becoming less as the gas is released. The second stage of the regulator controls the pressure reduction from the intermediate pressure to low pressure. The final flow rate may be adjusted at the torch. The regulator assembly usually has two pressure gauges, one indicating cylinder pressure, the other indicating delivery pressure. Inert gas shielded arc welding also uses gas stored at high pressure provided through a regulator. There may be a flow gauge calibrated to the specific gas. [ citation needed ] All propane and LP gas applications require the use of a regulator. Because pressures in propane tanks can fluctuate significantly with temperature, regulators must be present to deliver a steady pressure to downstream appliances. These regulators normally compensate for tank pressures between 30–200 pounds per square inch (2.1–13.8 bar) and commonly deliver 11 inches water column 0.4 pounds per square inch (28 mbar) for residential applications and 35 inches of water column 1.3 pounds per square inch (90 mbar) for industrial applications. Propane regulators differ in size and shape, delivery pressure and adjustability, but are uniform in their purpose to deliver a constant outlet pressure for downstream requirements. 
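The residential and industrial LP gas delivery pressures quoted above are stated in three different unit systems; a quick conversion check, using standard conversion factors and rounded values, is shown below.

    # Consistency check of the LP gas delivery pressures quoted above, converting
    # inches of water column to psi and mbar (only the conversion factors are added;
    # the pressures themselves are those stated in the text).
    INWC_TO_PA = 249.089          # 1 inch of water column in pascals (approx.)
    PSI_TO_PA = 6894.76
    MBAR_TO_PA = 100.0

    for inwc in (11.0, 35.0):
        pa = inwc * INWC_TO_PA
        print(f"{inwc:>4} inWC  =  {pa / PSI_TO_PA:.2f} psi  =  {pa / MBAR_TO_PA:.0f} mbar")
    # 11 inWC is roughly 0.40 psi / 27 mbar and 35 inWC roughly 1.26 psi / 87 mbar,
    # in line with the rounded figures of 0.4 psi (28 mbar) and 1.3 psi (90 mbar).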
Common international settings for domestic LP gas regulators are 28 mbar for butane and 37 mbar for propane. All vehicular motors that run on compressed gas as a fuel (internal combustion engine or fuel cell electric power train) require a pressure regulator to reduce the stored gas ( CNG or hydrogen ) pressure from 700, 500, 350 or 200 bar (70, 50, 35 or 20 MPa) to operating pressure. [ citation needed ] For recreational vehicles with plumbing, a pressure regulator is required to reduce the pressure of an external water supply connected to the vehicle plumbing, as the supply may be at a much higher elevation than the campground, and water pressure depends on the height of the water column. Without a pressure regulator, the intense pressure encountered at some campgrounds in mountainous areas may be enough to burst the camper's water pipes or unseat the plumbing joints, causing flooding. Pressure regulators for this purpose are typically sold as small screw-on accessories that fit inline with the hoses used to connect an RV to the water supply, which are almost always screw-thread-compatible with the common garden hose . [ citation needed ] Pressure regulators are used with diving cylinders for scuba diving . The tank may contain pressures in excess of 3,000 pounds per square inch (210 bar), which could cause a fatal barotrauma injury to a person breathing it directly. A demand controlled regulator provides a flow of breathing gas at the ambient pressure (which varies with depth in the water). Pressure reducing regulators are also used to supply breathing gas to surface-supplied divers, [ 5 ] and to people who use self-contained breathing apparatus (SCBA) for rescue and hazmat work on land. The interstage pressure for SCBA at normal atmospheric pressure can generally be left constant at a factory setting, but for surface-supplied divers it is controlled by the gas panel operator , depending on the diver's depth and flow rate requirements. Supplementary oxygen for high altitude flight in unpressurised aircraft and medical gases are also commonly dispensed through pressure reducing regulators from high-pressure storage. [ 6 ] [ 7 ] Supplementary oxygen may also be dispensed through a regulator which both reduces the pressure and supplies the gas at a metered flow rate, to be mixed with ambient air. [ 8 ] One way of producing a constant mass flow at variable ambient pressure is to use a choked flow , where the flow through the metering orifice is sonic. For a given gas in choked flow, the mass flow rate may be controlled by setting the orifice size or the upstream pressure. To produce a choked flow in oxygen, the absolute pressure ratio of upstream and downstream gas must exceed 1.893 at 20 °C. At normal atmospheric pressure this requires an upstream pressure of more than 1.013 × 1.893 = 1.918 bar. A typical nominal regulated gauge pressure from a medical oxygen regulator is 3.4 bars (50 psi), giving an absolute pressure of approximately 4.4 bar and a pressure ratio of about 4.4 without back pressure, so the metering orifices will remain choked for a downstream (outlet) pressure of up to about 2.3 bar absolute. This type of regulator commonly uses a rotor plate with calibrated orifices and detents to hold it in place when the orifice corresponding to the desired flow rate is selected.
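The choked-flow figures quoted above can be verified with a short calculation; the only inputs are the critical pressure ratio, the atmospheric pressure and the regulated gauge pressure stated in the text.

    # Worked check of the choked-flow figures for a medical oxygen regulator
    # (critical pressure ratio for oxygen at 20 degrees C taken from the text).
    CRITICAL_RATIO = 1.893
    P_ATM = 1.013                      # bar, normal atmospheric pressure

    # Minimum upstream pressure for choking against an atmospheric outlet:
    print(round(P_ATM * CRITICAL_RATIO, 3))        # 1.918 bar

    # Regulated gauge pressure of 3.4 bar -> absolute upstream pressure:
    p_up_abs = 3.4 + P_ATM                         # about 4.4 bar absolute
    print(round(p_up_abs, 2))

    # Highest outlet (downstream) pressure for which the orifice stays choked:
    print(round(p_up_abs / CRITICAL_RATIO, 2))     # about 2.33 bar absolute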
This type of regulator may also have one or two uncalibrated takeoff connections from the intermediate pressure chamber with diameter index safety system (DISS) or similar connectors to supply gas to other equipment, and the high pressure connection is commonly a pin index safety system (PISS) yoke clamp. [ 9 ] Similar mechanisms can be used for flow rate control for aviation and mountaineering regulators. As the pressure in water pipes builds rapidly with depth, underground mining operations require a fairly complex water system with pressure reducing valves. These devices must be installed at a certain vertical interval, usually 600 feet (180 m). [ citation needed ] Without such valves, pipes could burst and the pressure would be too great for equipment operation. Pressure regulators, also known as gas governors, are used extensively in the natural gas supply industry to control pressure. Natural gas is compressed to high pressures in order to be transmitted and distributed throughout the country through large transmission pipelines, up to 42 inches (1.07 m) in diameter. [ 10 ] The transmission pressure can be over 1,000 pounds per square inch (69 bar) and must be reduced through several stages to a usable pressure for industrial, commercial, and residential applications. A distribution system may comprise three main pressure reduction locations. The first reduction station is located at the outskirts of an urban area, where the pressure is reduced to a distribution pressure to be fed throughout the supply area. [ 10 ] It is undesirable to locate high pressure pipelines in urban areas due to the risk of damage and release of high pressure flammable gas. Industrial users may take a supply at this reduced pressure. This may be the location where the odorless natural gas is odorized with mercaptan ; in the United Kingdom this is done at the high pressure supply terminal. The distribution pressure is further reduced at a district regulator station, located at various points in the supply area, to below 60 psig (4.13 barg ). The final reduction occurs at the end user's location. Generally, the end user reduction is taken to low pressures ranging from 0.25 psig to 5 psig (0.02 to 0.34 barg). [ 10 ] Some industrial applications can require a higher pressure. [ citation needed ] Where the pressure drop in the exhaust system of a built-in breathing system is too great, typically in saturation systems, a back-pressure regulator may be used to reduce the exhaust pressure drop to a safer and more manageable pressure. [ 11 ] [ 13 ] Most heliox breathing mixtures in surface-supplied diving are used at depths where the ambient pressure is at least 5 bar above surface atmospheric pressure, and the exhaust gas from the diver must pass through a reclaim valve , which is a back-pressure valve activated by the increase in pressure in the diver's helmet above ambient pressure caused by the diver's exhalation. [ 14 ] [ 15 ] The reclaim gas hose which carries the exhaled gas back to the surface for recycling must not be at too great a pressure difference from the ambient pressure at the diver. An additional back-pressure regulator in this line allows finer setting of the reclaim valve for lower work of breathing at variable depths. [ 16 ]
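As a rough check on the vertical interval quoted above for mine water systems, the static head over a 180 m water column can be computed from the standard hydrostatic relation; fresh water density is assumed.

    # Static head developed over the 180 m (600 ft) vertical interval mentioned
    # above for mine water systems, using the hydrostatic relation p = rho * g * h.
    RHO_WATER = 1000.0     # kg/m^3, fresh water (assumed)
    G = 9.80665            # m/s^2

    def head_pressure_bar(height_m):
        return RHO_WATER * G * height_m / 1e5

    print(round(head_pressure_bar(180.0), 1))   # about 17.7 bar (roughly 256 psi)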
https://en.wikipedia.org/wiki/Pressure_regulator
Pressure swing adsorption ( PSA ) is a technique used to separate some gas species from a mixture of gases (typically air) under pressure according to the species' molecular characteristics and affinity for an adsorbent material. It operates at near-ambient temperature and significantly differs from the cryogenic distillation commonly used to separate gases. Selective adsorbent materials (e.g., zeolites , (aka molecular sieves ), activated carbon , etc.) are used as trapping material, preferentially adsorbing the target gas species at high pressure. The process then swings to low pressure to desorb the adsorbed gas. The pressure swing adsorption (PSA) process is based on the phenomenon that under high pressure, gases tend to be trapped onto solid surfaces, i.e. to be "adsorbed". The higher the pressure, the more gas is adsorbed. When the pressure is dropped, the gas is released, or desorbed. PSA can be used to separate gases in a mixture because different gases are adsorbed onto a given solid surface more or less strongly. For example, if a gas mixture such as air is passed under pressure through a vessel containing an adsorbent bed of zeolite that attracts nitrogen more strongly than oxygen , a fraction of nitrogen will stay in the bed, and the gas exiting the vessel will be richer in oxygen than the mixture entering. When the bed reaches the limit of its capacity to adsorb nitrogen, it can be regenerated by decreasing the pressure, thus releasing the adsorbed nitrogen. It is then ready for another cycle of producing oxygen-enriched air. Using two adsorbent vessels allows for near-continuous production of the target gas. It also allows a pressure equalisation, where the gas leaving the vessel being depressurised is used to partially pressurise the second vessel. This results in significant energy savings, and is a common industrial practice. Aside from their ability to discriminate between different gases, adsorbents for PSA systems are usually very porous materials chosen because of their large specific surface areas . Typical adsorbents are zeolite , activated carbon , silica gel , alumina , or synthetic resins . Though the gas adsorbed on these surfaces may consist of a layer only one or at most a few molecules thickness, surface areas of several hundred square meters per gram enable the adsorption of a large portion of the adsorbent's weight in gas. In addition to their affinity for different gases, zeolites and some types of activated carbon may utilize their molecular sieve characteristics to exclude some gas molecules from their structure based on the size and shape of the molecules, thereby restricting the ability of the larger molecules to be adsorbed. Aside from its use to supply medical oxygen, or as a substitute for bulk cryogenic or compressed-cylinder storage, which is the primary oxygen source for any hospital, PSA has numerous other uses. One of the primary applications of PSA is in the removal of carbon dioxide (CO 2 ) as the final step in the large-scale commercial synthesis of hydrogen (H 2 ) for use in oil refineries and in the production of ammonia (NH 3 ). Refineries often use PSA technology in the removal of hydrogen sulfide (H 2 S) from hydrogen feed and recycle streams of hydrotreating and hydrocracking units. Another application of PSA is the separation of carbon dioxide from biogas to increase the methane (CH 4 ) ratio. Through PSA the biogas can be upgraded to a quality similar to natural gas . 
This includes a process in landfill gas utilization in which landfill gas is upgraded to utility-grade, high-purity methane to be sold as natural gas. [ 1 ] PSA is also used in several other processes. In the frame of carbon capture and storage (CCS), research is currently underway to capture CO 2 in large quantities from coal-fired power plants prior to geosequestration , in order to reduce greenhouse gas production from these plants. [ 4 ] [ 5 ] PSA has also been discussed as a future alternative to the non-regenerable sorbent technology used in space suit primary life support systems , in order to save weight and to extend the operating time of the suit. [ 6 ] This is the process used in medical oxygen concentrators used by emphysema and COVID-19 patients and others requiring oxygen-enriched air for breathing. [ citation needed ] In one variant of PSA (DS-PSA, sometimes also referred to as dual-step PSA), developed for use in laboratory nitrogen generators, nitrogen gas is produced in two steps: in the first step, the compressed air is forced to pass through a carbon molecular sieve to produce nitrogen at a purity of approximately 98%; in the second step this nitrogen is forced to pass through a second carbon molecular sieve, and the nitrogen gas reaches a final purity of up to 99.999%. The purge gas from the second step is recycled and partially used as feed gas in the first step. In addition, the purge process is supported by active evacuation for better performance in the next cycle. The goal of both of these changes is to improve efficiency over a conventional PSA process. DS-PSA can also be applied to increase the oxygen concentration. In this case, an aluminum-silica-based zeolite adsorbs nitrogen in the first stage, reaching 95% oxygen at the outlet, and in the second stage a carbon-based molecular sieve adsorbs the residual nitrogen in a reverse cycle, concentrating the oxygen up to 99%. Rapid pressure swing adsorption, or RPSA, is frequently used in portable oxygen concentrators . It allows a large reduction in the size of the adsorbent bed when high purity is not essential and when the feed gas (air) can be discarded. [ 7 ] It works by quickly cycling the pressure while alternately venting opposite ends of the column at the same rate. This means that non-adsorbed gases progress along the column much faster and are vented at the distal end, while adsorbed gases do not get the chance to progress and are vented at the proximal end. [ 8 ] Vacuum swing adsorption (VSA) segregates certain gases from a gaseous mixture at near ambient pressure; the process then swings to a vacuum to regenerate the adsorbent material. VSA differs from other PSA techniques because it operates at near-ambient temperatures and pressures. VSA typically draws the gas through the separation process with a vacuum. For oxygen and nitrogen VSA systems, the vacuum is typically generated by a blower. Hybrid vacuum pressure swing adsorption (VPSA) systems also exist. VPSA systems apply pressurized gas to the separation process and also apply a vacuum to the purge gas. VPSA systems, such as some portable oxygen concentrators, are among the most efficient systems measured on customary industry indices, such as recovery (product gas out/product gas in) and productivity (product gas out/mass of sieve material). Generally, higher recovery leads to a smaller compressor, blower, or other compressed gas or vacuum source and lower power consumption. Higher productivity leads to smaller sieve beds.
In practice, the consumer is more likely to consider indices that make a directly measurable difference to the overall system, such as the amount of product gas relative to the system's weight and size, the initial and maintenance costs, the power consumption or other operational costs, and the reliability.
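The recovery and productivity indices defined in the passage above reduce to two one-line functions; the following Python sketch uses invented example figures purely for illustration.

    # The two customary PSA performance indices mentioned above, written out as
    # functions. The example numbers are invented purely for illustration.
    def recovery(product_gas_out, product_gas_in):
        """Fraction of the target gas fed to the unit that leaves as product."""
        return product_gas_out / product_gas_in

    def productivity(product_gas_out, sieve_mass):
        """Product gas delivered per unit mass of adsorbent (e.g. Nm^3/h per kg)."""
        return product_gas_out / sieve_mass

    print(f"recovery     {recovery(35.0, 60.0):.0%}")                    # 35 of 60 Nm^3/h recovered
    print(f"productivity {productivity(35.0, 120.0):.3f} Nm^3/h per kg sieve")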
https://en.wikipedia.org/wiki/Pressure_swing_adsorption
A pressure tank or pressurizer is a type of hydraulic accumulator used in a piping system to maintain a desired pressure . Applications include buffering water pressure in homes. [ 1 ] Referring to the figure on the left, a submersible water pump is installed in a well . The pressure switch turns the water pump on when it senses a pressure less than P_lo and turns it off when it senses a pressure greater than P_hi. While the pump is on, the pressure tank fills up. The pressure tank is then depleted as it supplies water in the specified pressure range, which prevents "short-cycling", in which the pump tries to establish the proper pressure by rapidly cycling between P_lo and P_hi. A simple pressure tank would be just a tank holding water with an air space above the water which compresses as more water enters the tank. Modern systems isolate the water from the pressurized air using a flexible rubber or plastic diaphragm or bladder, because otherwise the air will dissolve in the water and be removed from the tank by usage. Eventually there will be little or no air, the tank will become "waterlogged", causing short-cycling, and it will need to be drained to restore operation. The diaphragm or bladder may itself exert a pressure on the water, but it is usually small and will be neglected in the following discussion. Referring to the diagram on the right, a pressure tank is generally pressurized when empty with a "charging pressure" P_c, which is usually about 2 psi below the turn-on pressure P_lo (Case 1). The total volume of the tank is V_t. When in use, the air in the tank will be compressed to pressure P and there will be a volume V of water in the tank (Case 2). In the following development, all pressures are gauge pressures, which are the pressures above atmospheric pressure (P_a, which is altitude dependent). The ideal gas law may be written for both cases, and the amount of air in each case is equal: (P_c + P_a) V_t = N k T for Case 1 and (P + P_a)(V_t − V) = N k T for Case 2, where N is the number of molecules of gas (equal in both cases), k is the Boltzmann constant and T is the temperature . Assuming that the temperature is equal for both cases, the above equations can be solved for the water pressure/volume relationship in the tank: V = V_t [1 − (P_c + P_a)/(P + P_a)]. Tanks are generally specified by their total volume V_t and the "drawdown" ΔV, which is the amount of water the tank will eject as the tank pressure goes from P_hi to P_lo, these limits being established by the pressure switch: [ 2 ] [ 3 ] ΔV = V_t (P_c + P_a) [1/(P_lo + P_a) − 1/(P_hi + P_a)]. The reason for the charging pressure can now be seen: the larger the charging pressure, the larger the drawdown. However, a charging pressure above P_lo will not allow the pump to turn on when the water pressure is below P_lo, so it is kept a bit below P_lo. Another important parameter is the drawdown factor f_ΔV, which is the ratio of the drawdown to the total tank volume: f_ΔV = ΔV / V_t = (P_c + P_a) [1/(P_lo + P_a) − 1/(P_hi + P_a)]. This factor is independent of the tank size, so the drawdown can be calculated for any tank, given its total volume, atmospheric pressure, charging pressure, and the limiting pressures established by the pressure switch.
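A worked example of the drawdown relation above, in Python; the tank volume and switch settings are typical illustrative values rather than figures from the article.

    # Drawdown of a diaphragm pressure tank using the relation given above.
    # The switch settings and tank size are typical illustrative values only.
    P_ATM = 14.7        # psi, atmospheric pressure (gauge-to-absolute offset)

    def drawdown(v_tank, p_charge, p_lo, p_hi, p_atm=P_ATM):
        """Volume of water delivered as the tank falls from p_hi to p_lo (gauge psi)."""
        return v_tank * (p_charge + p_atm) * (1.0 / (p_lo + p_atm) - 1.0 / (p_hi + p_atm))

    v_t = 20.0                                        # gallons, total tank volume
    dv = drawdown(v_t, p_charge=38.0, p_lo=40.0, p_hi=60.0)
    print(f"drawdown {dv:.1f} gal, drawdown factor {dv / v_t:.2f}")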
https://en.wikipedia.org/wiki/Pressure_tank
A pressure vessel is a container designed to hold gases or liquids at a pressure substantially different from the ambient pressure . [ 1 ] Construction methods and materials may be chosen to suit the pressure application, and will depend on the size of the vessel, the contents, working pressure, mass constraints, and the number of items required. Pressure vessels can be dangerous, and fatal accidents have occurred in the history of their development and operation. Consequently, pressure vessel design, manufacture, and operation are regulated by engineering authorities backed by legislation. For these reasons, the definition of a pressure vessel varies from country to country. [ citation needed ] The design involves parameters such as maximum safe operating pressure and temperature, safety factor , corrosion allowance and minimum design temperature (for brittle fracture). Construction is tested using nondestructive testing , such as ultrasonic testing , radiography , and pressure tests. Hydrostatic pressure tests usually use water, but pneumatic tests use air or another gas. Hydrostatic testing is preferred, because it is a safer method, as much less energy is released if a fracture occurs during the test (water does not greatly increase its volume when rapid depressurisation occurs, unlike gases, which expand explosively). Mass or batch production products will often have a representative sample tested to destruction in controlled conditions for quality assurance. Pressure relief devices may be fitted if the overall safety of the system is sufficiently enhanced. In most countries, vessels over a certain size and pressure must be built to a formal code. In the United States that code is the ASME Boiler and Pressure Vessel Code (BPVC) . In Europe the code is the Pressure Equipment Directive . These vessels also require an authorised inspector to sign off on every new vessel constructed and each vessel has a nameplate with pertinent information about the vessel, such as maximum allowable working pressure, maximum temperature, minimum design metal temperature , what company manufactured it, the date, its registration number (through the National Board), and American Society of Mechanical Engineers 's official stamp for pressure vessels (U-stamp). The nameplate makes the vessel traceable and officially an ASME Code vessel. A special application is pressure vessels for human occupancy , for which more stringent safety rules apply. The ASME definition of a pressure vessel is a container designed to hold gases or liquids at a pressure substantially different from the ambient pressure . [ 2 ] The Australian and New Zealand standard "AS/NZS 1200:2000 Pressure equipment" defines a pressure vessel as a vessel subject to internal or external pressure, including connected components and accessories up to the connection to external piping. [ 3 ] This article may include information on pressure vessels in the broad sense, and is not restricted to any single definition. A pressure vessel comprises a shell, and usually one or more other components needed to pressurise, retain the pressure, depressurise, and provide access for maintenance and inspection. 
There may be other components and equipment provided to facilitate the intended use, and some of these may be considered parts of the pressure vessel. Shell penetrations and their closures, and viewports and airlocks on a pressure vessel for human occupancy, affect the integrity and strength of the shell and are part of the structure retaining the pressure. Pressure gauges and safety devices like pressure relief valves may also be deemed part of the pressure vessel. [ 3 ] There may also be structural components permanently attached to the vessel for lifting, moving, or mounting it, like a foot ring, skids, handles, lugs, or mounting brackets. Pressure vessels are used in a variety of applications in both industry and the private sector. They appear in these sectors as industrial compressed air receivers, boilers and domestic hot water storage tanks . Other examples of pressure vessels are diving cylinders , recompression chambers , distillation towers , pressure reactors , autoclaves , and many other vessels in mining operations, oil refineries and petrochemical plants, nuclear reactor vessels, submarine and space ship habitats, atmospheric diving suits , pneumatic reservoirs, hydraulic reservoirs under pressure, rail vehicle air brake reservoirs , road vehicle air brake reservoirs , and storage vessels for high pressure permanent gases and liquified gases such as ammonia , chlorine , and LPG ( propane , butane ). A pressure vessel may also support structural loads: the outer skin of an airliner's passenger cabin carries both the structural and maneuvering loads of the aircraft and the cabin pressurization loads, and the pressure hull of a submarine also carries the hull structural and maneuvering loads. The working pressure, i.e. the pressure difference between the interior of the pressure vessel and the surroundings, is the primary characteristic considered for design and construction. The concepts of high pressure and low pressure are somewhat flexible, and may be defined differently depending on context. There is also the matter of whether the internal pressure is greater or less than the external pressure, and its magnitude relative to normal atmospheric pressure. A vessel with internal pressure lower than atmospheric may also be called a hypobaric vessel or a vacuum vessel . A pressure vessel with high internal pressure can easily be made to be structurally stable, and will usually fail in tension, but failure due to excessive external pressure is usually by buckling instability and collapse. Pressure vessels can theoretically be almost any shape, but shapes made of sections of spheres, cylinders, ellipsoids of revolution, and cones with circular sections are usually employed, though some other surfaces of revolution are also inherently stable. A common design is a cylinder with end caps called heads . Head shapes are frequently either hemispherical or dished ( torispherical ). More complicated shapes have historically been much harder to analyze for safe operation and are usually far more difficult to construct. Theoretically, a spherical pressure vessel has approximately twice the strength of a cylindrical pressure vessel with the same wall thickness, [ 4 ] and is the ideal shape to hold internal pressure. [ 5 ] However, a spherical shape is difficult to manufacture, and therefore more expensive, so most pressure vessels are cylindrical with 2:1 semi-elliptical heads or end caps on each end. Smaller pressure vessels are assembled from a pipe and two covers.
For cylindrical vessels with a diameter up to 600 mm (NPS of 24 in), it is possible to use seamless pipe for the shell, thus avoiding many inspection and testing issues, mainly the nondestructive examination of radiography for the long seam if required. A disadvantage of these vessels is that greater diameters are more expensive, so that for example the most economic shape of a 1,000 litres (35 cu ft), 250 bars (3,600 psi ) pressure vessel might be a diameter of 91.44 centimetres (36 in) and a length of 1.7018 metres (67 in) including the 2:1 semi-elliptical domed end caps. No matter what shape it takes, the minimum mass of a pressure vessel scales with the pressure and volume it contains and is inversely proportional to the strength to weight ratio of the construction material (minimum mass decreases as strength increases [ 6 ] ). Pressure vessels are held together against the gas pressure due to tensile forces within the walls of the container. The normal (tensile) stress in the walls of the container is proportional to the pressure and radius of the vessel and inversely proportional to the thickness of the walls. [ 7 ] Therefore, pressure vessels are designed to have a thickness proportional to the radius of the tank and the pressure of the tank and inversely proportional to the maximum allowed normal stress of the particular material used in the walls of the container. Because (for a given pressure) the thickness of the walls scales with the radius of the tank, the mass of a tank (which scales as the length times radius times thickness of the wall for a cylindrical tank) scales with the volume of the gas held (which scales as length times radius squared). The exact formula varies with the tank shape but depends on the density, ρ, and maximum allowable stress σ of the material in addition to the pressure P and volume V of the vessel. (See below for the exact equations for the stress in the walls.) For a sphere , the minimum mass of a pressure vessel is M = (3/2) P V ρ/σ, where M is the mass, P is the pressure difference from ambient (the gauge pressure), V is the volume, ρ is the density of the pressure vessel material, and σ is the maximum working stress the material can tolerate. Other shapes besides a sphere have constants larger than 3/2 (infinite cylinders take 2), although some tanks, such as non-spherical wound composite tanks, can approach this. A cylindrical vessel with hemispherical ends is sometimes called a "bullet" [ citation needed ] for its shape, although in geometric terms it is a capsule . For a cylinder with hemispherical ends, M = 2 π R² (R + W) P ρ/σ, where R is the radius and W is the width of the middle cylinder only, the overall width being W + 2R. In a vessel with an aspect ratio of middle cylinder width to radius of 2:1 (W = 2R), M = 6 π R³ P ρ/σ. Looking at the first equation, the factor PV, in SI units, is in units of (pressurization) energy. For a stored gas, PV is proportional to the mass of gas at a given temperature, thus M = (3/2) n R T ρ/σ, where n is the number of moles of gas, R is the gas constant and T is the absolute temperature. The other factors are constant for a given vessel shape and material, so we can see that there is no theoretical "efficiency of scale", in terms of the ratio of pressure vessel mass to pressurization energy, or of pressure vessel mass to stored gas mass. For storing gases, "tankage efficiency" is independent of pressure, at least for the same temperature. So, for example, a typical design for a minimum mass tank to hold helium (as a pressurant gas) on a rocket would use a spherical chamber for a minimum shape constant, carbon fiber for the best possible ρ/σ, and very cold helium for the best possible M/pV.
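The spherical minimum-mass relation above lends itself to a quick comparison of construction materials. The sketch below uses rough, illustrative material figures rather than design data, and ignores safety factors, heads and penetrations.

    # Minimum theoretical mass of a spherical vessel, M = (3/2) * P * V * rho / sigma,
    # as given above. Material figures are rough illustrative values, not design data.
    def min_sphere_mass(p_pa, v_m3, rho, sigma):
        return 1.5 * p_pa * v_m3 * rho / sigma

    P = 250e5            # 250 bar in Pa
    V = 1.0              # 1000 litres = 1 m^3
    materials = {
        "steel (rough figures)":     (7850.0, 500e6),    # density kg/m^3, allowable stress Pa
        "carbon composite (rough)":  (1600.0, 800e6),
    }
    for name, (rho, sigma) in materials.items():
        print(f"{name:28s} {min_sphere_mass(P, V, rho, sigma):7.0f} kg")
    # The lighter, stronger material gives a far lower theoretical minimum mass,
    # which is the rho/sigma argument made in the paragraph above.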
Stress in a thin-walled pressure vessel in the shape of a sphere is σ_θ = σ_long = p r / (2 t), where σ_θ is the hoop stress, or stress in the circumferential direction, σ_long is the stress in the longitudinal direction, p is the internal gauge pressure, r is the inner radius of the sphere, and t is the thickness of the sphere wall. A vessel can be considered "thin-walled" if the diameter is at least 10 times (sometimes cited as 20 times) greater than the wall thickness. [ 10 ] Stress in a thin-walled pressure vessel in the shape of a cylinder is σ_θ = p r / t in the hoop (circumferential) direction and σ_long = p r / (2 t) in the longitudinal direction, where p is the internal gauge pressure, r is the inner radius and t is the wall thickness of the cylinder. Almost all pressure vessel design standards contain variations of these two formulas with additional empirical terms to account for the variation of stresses across the thickness, quality control of welds and in-service corrosion allowances. All the formulae mentioned above assume a uniform distribution of membrane stresses across the thickness of the shell, but in reality that is not the case. A deeper analysis is given by Lamé's theorem , which gives the distribution of stress in the walls of a thick-walled cylinder of a homogeneous and isotropic material. The formulae of pressure vessel design standards are an extension of Lamé's theorem obtained by putting a limit on the ratio of inner radius to thickness. For example, the ASME Boiler and Pressure Vessel Code (BPVC) (UG-27) formulas are: [ 11 ] for spherical shells, t = P R / (2 S E − 0.2 P), valid for a thickness of less than 0.356 times the inner radius; and for cylindrical shells, t = P R / (S E − 0.6 P), valid for a thickness of less than 0.5 times the inner radius; where t is the minimum required thickness, P is the internal design pressure, R is the inner radius, S is the maximum allowable stress and E is the joint efficiency. The factor of safety is often included in these formulas as well; in the case of the ASME BPVC this term is included in the material stress value when solving for pressure or thickness. Also sometimes called hull penetrations, depending on context, shell penetrations are intentional breaks in the structural integrity of the shell, and are usually significant local stress-raisers, so they must be accounted for in the design so they do not become failure points. It is usually necessary to reinforce the shell in the immediate vicinity of such penetrations. Shell penetrations are necessary to provide a variety of functions, including passage of the contents from the outside to the inside and back out, and in special applications for transmission of electricity, light, and other services through the shell. The simplest case is gas cylinders, which need only a neck penetration threaded to fit a valve, while a submarine or spacecraft may have a large number of penetrations for a large number of functions. The screw thread used for high pressure vessel shell penetrations is subject to high loads and must not leak. High pressure cylinders are produced with conical (tapered) threads and parallel threads. Two sizes of tapered threads have dominated the full metal cylinders in industrial use from 0.2 to 50 litres (0.0071 to 1.7657 cu ft) in volume. [ 12 ] For smaller fittings, taper thread standard 17E is used, [ 13 ] with a 12% taper right hand thread, standard Whitworth 55° form with a pitch of 14 threads per inch (5.5 threads per cm) and a pitch diameter at the top thread of the cylinder of 18.036 millimetres (0.71 in). These connections are sealed using thread tape and torqued to between 120 and 150 newton-metres (89 and 111 lbf⋅ft) on steel cylinders, and between 75 and 140 N⋅m (55 and 103 lbf⋅ft) on aluminium cylinders. [ 14 ] For larger fittings, taper thread standard 25E is used.
To screw in the valve, a higher torque of typically about 200 N⋅m (150 lbf⋅ft) is necessary, [ 15 ] Until around 1950, hemp was used as a sealant. Later, a thin sheet of lead pressed to a hat form which closely fitted the external threads, with a hole on top was used. The fitter would squeeze the soft lead shim to conform better with the grooves and ridges of the fitting before screwing it into the hole. The lead would deform to form a thin layer between the internal and external thread, and thereby fill the gaps to create the seal. Since 2005, PTFE -tape has been used to avoid using lead. [ clarification needed ] A tapered thread provides simple assembly, but requires high torque for connecting and leads to high radial forces in the vessel neck, and has a limited number of times it can be used before it is excessively deformed. This could be extended a bit by always returning the same fitting to the same hole, and avoiding over-tightening. All cylinders built for 300 bar (4,400 psi) working pressure, all diving cylinders, [ clarification needed ] and all composite cylinders use parallel threads. [ citation needed ] Parallel threads for cylinder necks and similar penetrations of pressure vessels are made to several standards: The 3/4"NGS and 3/4"BSP are very similar, having the same pitch and a pitch diameter that only differs by about 0.2 mm (0.008 in), but they are not compatible, as the thread forms are different. All parallel thread valves are sealed using an elastomer O-ring at top of the neck thread which seals in a chamfer or step in the cylinder neck and against the flange of the valve. Pressure vessel closures are pressure retaining structures designed to provide quick access to pipelines, pressure vessels, pig traps, filters and filtration systems. Typically pressure vessel closures allow access by maintenance personnel. A commonly used maintenance access hole shape is elliptical, which allows the closure to be passed through the opening, and rotated into the working position, and is held in place by a bar on the outside, secured by a central bolt. The internal pressure prevents it from being inadvertently opened under load. Placing the closure on the high pressure side of the opening uses the pressure difference to lock the closure when at service pressure. Where this is impracticable a safety interlock may be mandated. An airlock [ a ] is a room or compartment which permits passage between environments of differing atmospheric pressure or composition, while minimizing the changing of pressure or composition between the differing environments. It consists of a chamber with two airtight doors or hatches arranged in series, which are not opened simultaneously. Airlocks can be small or large enough for one or more people to pass through, which may take the form of an antechamber . An airlock may also be used underwater to allow passage between the air environment in a pressure vessel, such as a submarine or diving bell , and the water environment outside. In such cases the airlock can contain air or water . This is called a floodable airlock or underwater airlock, and is used to prevent water from entering a submersible vessel or underwater habitat . A similar arrangement is used on spacecraft to facilitate extravehicular activity . Many pressure vessels are made of steel. To manufacture a cylindrical or spherical pressure vessel, rolled and possibly forged parts would have to be welded together. 
Some mechanical properties of steel, achieved by rolling or forging, could be adversely affected by welding, unless special precautions are taken. In addition to adequate mechanical strength, current standards dictate the use of steel with a high impact resistance, especially for vessels used in low temperatures. In applications where carbon steel would suffer corrosion, special corrosion resistant material should also be used. Some pressure vessels are made of composite materials , such as filament wound composite using carbon fibre held in place with a polymer. Due to the very high tensile strength of carbon fibre these vessels can be very light, but are much more difficult to manufacture. The composite material may be wound around a metal liner, forming a composite overwrapped pressure vessel . Other very common materials include polymers such as PET in carbonated beverage containers and copper in plumbing. Pressure vessels may be lined with various metals, ceramics, or polymers to prevent leaking and protect the structure of the vessel from the contained medium. This liner may also carry a significant portion of the pressure load. [ 20 ] [ 21 ] Pressure Vessels may also be constructed from concrete (PCV) or other materials which are weak in tension. Cabling, wrapped around the vessel or within the wall or the vessel itself, provides the necessary tension to resist the internal pressure. A "leakproof steel thin membrane" lines the internal wall of the vessel. Such vessels can be assembled from modular pieces and so have "no inherent size limitations". [ 22 ] There is also a high order of redundancy thanks to the large number of individual cables resisting the internal pressure. The very small vessels used to make liquid butane fueled cigarette lighters are subjected to about 2 bar pressure, depending on ambient temperature. These vessels are often oval (1 x 2 cm ... 1.3 x 2.5 cm) in cross section but sometimes circular. The oval versions generally include one or two internal tension struts which appear to be baffles but which also provide additional cylinder strength. The standard method of construction for boilers, compressed air receivers and other pressure vessels of iron or steel before gas and electrical welding of reliable quality became widespread was riveted sheets which had been rolled and forged into shape, then riveted together, often using butt straps along the joints, and caulked along the riveted seams by deforming the edges of the overlap with a blunt chisel to create a continuous line of high contact pressure along the joint. Hot riveting caused the rivets to contract on cooling, forming a tighter joint. [ 23 ] Large and low pressure vessels are commonly manufactured from formed plates welded together. Weld quality is critical to safety in pressure vessels for human occupancy . The typical circular-cylindrical high pressure gas cylinders for permanent gases (that do not liquify at storing pressure, like air, oxygen, nitrogen, hydrogen, argon, helium) have been manufactured by hot forging by pressing and rolling to get a seamless vessel of consistent material characteristics and minimised stress concentrations. Working pressure of cylinders for use in industry, skilled craft, diving and medicine had a standardized working pressure (WP) of about 150 bars (2,200 psi) in Europe until about 1950. From about 1975, the standard pressure rose to about 200 bars (2,900 psi). 
Firefighters need slim, lightweight cylinders to move in confined spaces; since about 1995, cylinders for 300 bar (4,400 psi) WP have been used (at first in pure steel). [ citation needed ] A demand for reduced weight led to different generations of composite (fiber and matrix, over a liner) cylinders that are more vulnerable to impact damage. Composite cylinders for breathing gas are usually built for a working pressure of 300 bar (4,400 psi). Manufacturing methods for seamless metal pressure vessels are commonly used for relatively small diameter cylinders where large numbers will be produced, as the machinery and tooling require large capital outlay. The methods are well suited to high pressure gas transport and storage applications, and provide consistently high quality products. Backward extrusion is a process by which the material is forced to flow back along the mandrel between the mandrel and die. Cold extrusion (aluminium): Seamless aluminium cylinders may be manufactured by cold backward extrusion of aluminium billets in a process which first presses the walls and base, then trims the top edge of the cylinder walls, followed by press forming the shoulder and neck. [ 24 ] Hot extrusion (steel): In the hot extrusion process a billet of steel is cut to size, induction heated to the correct temperature for the alloy, descaled and placed in the die. The metal is backward extruded by forcing the mandrel into it, causing it to flow through the annular gap until a deep cup is formed. This cup is then drawn further to reduce its diameter and wall thickness, and the bottom is formed. After inspection and trimming of the open end, the cylinder is hot spun to close the end and form the neck. [ 25 ] Seamless cylinders may also be cold drawn from steel plate discs to a cylindrical cup form, in two to four stages, depending on the final ratio of diameter to cylinder length. After forming the base and side walls, the top of the cylinder is trimmed to length, heated and hot spun to form the shoulder and close the neck. The spinning process thickens the material of the shoulder. The cylinder is heat-treated by quenching and tempering to provide the best strength and toughness. [ 26 ] A seamless steel cylinder can also be formed by hot spinning a closure at both ends. The base is first closed completely, and trimmed to form a smooth internal surface, before the shoulder and neck are formed. [ 27 ] Regardless of the method used to form the cylinder, it will be machined to finish the neck and cut the neck threads, heat treated, cleaned, surface finished, stamp marked, tested, and inspected for quality assurance. [ 26 ] [ 25 ] [ 24 ] [ 27 ] Composite pressure vessels are generally laid up from filament-wound rovings in a thermosetting polymer matrix. The mandrel may be removable after cure, or may remain a part of the finished product, often providing a more reliable gas- or liquid-tight liner, or better chemical resistance to the intended contents, than the resin matrix. Metallic inserts may be provided for attaching threaded accessories, such as valves and pipes. [ 28 ] To classify the different structural principles of gas storage cylinders, four types are defined. [ 27 ] Type 2 and 3 cylinders have been in production since around 1995. Type 4 cylinders have been commercially available since at least 2016. [ citation needed ] For a filament-wound cylinder (ignoring end effects), the optimal winding angle is about 54.7 degrees to the cylindrical axis, as this provides twice the strength in the circumferential (hoop) direction as in the longitudinal direction, matching the 2:1 ratio of hoop to axial stress in a closed cylinder under internal pressure. [ 29 ]
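The 54.7° figure follows from simple netting analysis: a fibre laid at angle θ to the cylinder axis contributes to hoop and axial strength in the ratio tan²θ, so meeting the 2:1 hoop-to-axial requirement gives tan²θ = 2. A quick check (a minimal sketch, not tied to any particular design code):

    import math

    # Netting analysis: the hoop-to-axial capacity ratio of a helical winding at
    # angle theta (measured from the cylinder axis) is tan^2(theta). Setting it to 2:
    theta = math.degrees(math.atan(math.sqrt(2)))
    print(f"Optimal winding angle ≈ {theta:.1f}°")   # ≈ 54.7°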
Hoop-wound fibre reinforcement is wound at an angle of nearly 90° to the cylinder axis. As the pressure vessel is designed to a pressure, there is typically a safety valve or relief valve to ensure that this pressure is not exceeded in operation. There may be a rupture disc fitted to the vessel or the cylinder valve, or a fusible plug to protect against overheating. Leak before burst describes a pressure vessel designed such that a crack in the vessel will grow through the wall, allowing the contained fluid to escape and reducing the pressure, prior to growing so large as to cause catastrophic fracture at the operating pressure. Many pressure vessel standards, including the ASME Boiler and Pressure Vessel Code [ 30 ] and the AIAA metallic pressure vessel standard, either require pressure vessel designs to be leak before burst, or require pressure vessels to meet more stringent requirements for fatigue and fracture if they are not shown to be leak before burst. [ 31 ] Hydrostatic test (filled with water) pressure is usually 1.5 times working pressure, but DOT test pressure for scuba cylinders is 5/3 (about 1.67) times working pressure. Pressure vessels are designed to operate safely at a specific pressure and temperature, technically referred to as the "Design Pressure" and "Design Temperature". A vessel that is inadequately designed to handle a high pressure constitutes a very significant safety hazard. Because of that, the design and certification of pressure vessels is governed by design codes such as the ASME Boiler and Pressure Vessel Code in North America, the Pressure Equipment Directive of the EU (PED), the Japanese Industrial Standard (JIS), CSA B51 in Canada , Australian Standards in Australia, and other international standards like Lloyd's , Germanischer Lloyd , Det Norske Veritas , Société Générale de Surveillance (SGS S.A.), Lloyd's Register Energy Nederland (formerly known as Stoomwezen), etc. Note that where the pressure-volume product is part of a safety standard, any incompressible liquid in the vessel can be excluded, as it does not contribute to the potential energy stored in the vessel; only the volume of the compressible part, such as gas, is used. The earliest documented design of pressure vessels was described in 1495 in the book by Leonardo da Vinci , the Codex Madrid I , in which containers of pressurized air were theorized to lift heavy weights underwater. [ 5 ] However, vessels resembling those used today did not come about until the 1800s, when steam was generated in boilers, helping to spur the Industrial Revolution . [ 5 ] With poor material quality and manufacturing techniques, along with inadequate knowledge of design, operation and maintenance, there were a large number of damaging and often deadly explosions associated with these boilers and pressure vessels, with a death occurring on a nearly daily basis in the United States. [ 5 ] Local provinces and states in the US began enacting rules for constructing these vessels after some particularly devastating vessel failures occurred, killing dozens of people at a time; the varied rules from one location to another made it difficult for manufacturers to keep up. The first pressure vessel code was developed starting in 1911 and released in 1914, starting the ASME Boiler and Pressure Vessel Code (BPVC) . [ 5 ]
In an early effort to design a tank capable of withstanding pressures up to 10,000 psi (69 MPa), a 6-inch (150 mm) diameter tank developed in 1919 was spirally wound with two layers of high-tensile steel wire to prevent sidewall rupture, and the end caps were longitudinally reinforced with lengthwise high-tensile rods. [ 36 ] The need for high pressure and temperature vessels for petroleum refineries and chemical plants gave rise to vessels joined by welding instead of rivets (which were unsuitable for the pressures and temperatures required), and in the 1920s and 1930s the BPVC included welding as an acceptable means of construction; welding is the main means of joining metal vessels today. [ 5 ] There have been many advancements in the field of pressure vessel engineering: advanced non-destructive examination such as phased array ultrasonic testing and radiography ; new material grades with increased corrosion resistance and higher strength; new ways to join materials such as explosion welding and friction stir welding ; and advanced theories and means of more accurately assessing the stresses encountered in vessels, such as the use of Finite Element Analysis , allowing vessels to be built more safely and efficiently. Pressure vessels in the USA require BPVC stamping, but the BPVC is not just a domestic code; many other countries have adopted the BPVC as their official code. [ citation needed ] There are, however, other official codes in some countries, such as Japan, Australia, Canada, Britain, and other countries in the European Union. Nearly all recognize the inherent potential hazards of pressure vessels and the need for standards and codes regulating their design and construction. [ citation needed ] [ clarification needed ] Depending on the application and local circumstances, alternatives to pressure vessels exist. Examples can be seen in domestic water collection systems, where the following may be used:
https://en.wikipedia.org/wiki/Pressure_vessel
A pressure–volume diagram (or PV diagram , or volume–pressure loop ) [ 1 ] is used to describe corresponding changes in volume and pressure in a system. It is commonly used in thermodynamics , cardiovascular physiology , and respiratory physiology . PV diagrams, originally called indicator diagrams , were developed in the 18th century as tools for understanding the efficiency of steam engines . A PV diagram plots the change in pressure P with respect to volume V for some process or processes. Typically in thermodynamics, the set of processes forms a cycle , so that upon completion of the cycle there has been no net change in state of the system; i.e. the device returns to the starting pressure and volume. [ citation needed ] The figure shows the features of an idealized PV diagram. It shows a series of numbered states (1 through 4). The path between each state consists of some process (A through D) which alters the pressure or volume of the system (or both). A key feature of the diagram is that the amount of energy expended or received by the system as work can be measured because the net work is represented by the area enclosed by the four lines. In the figure, the processes 1-2-3 produce a work output, but processes from 3-4-1 require a smaller energy input to return to the starting position / state; so the net work is the difference between the two. This figure is highly idealized, in so far as all the lines are straight and the corners are right angles. A diagram showing the changes in pressure and volume in a real device will show a more complex shape enclosing the work cycle. [ citation needed ] ( § Applications ). The PV diagram, then called an indicator diagram, was developed in 1796 by James Watt and his employee John Southern . [ 2 ] Volume was traced by a plate moving with the piston, while pressure was traced by a pressure gauge whose indicator moved at right angles to the piston. A pencil was used to draw the diagram. [ citation needed ] Watt used the diagram to make radical improvements to steam engine performance. [ 3 ] Specifically, the diagram records the pressure of steam versus the volume of steam in a cylinder , throughout a piston 's cycle of motion in a steam engine. The diagram enables calculation of the work performed and thus can provide a measure of the power produced by the engine. [ 4 ] To exactly calculate the work done by the system it is necessary to calculate the integral of the pressure with respect to volume. One can often quickly calculate this using the PV diagram as it is simply the area enclosed by the cycle. [ citation needed ] Note that in some cases specific volume will be plotted on the x-axis instead of volume, in which case the area under the curve represents work per unit mass of the working fluid (i.e. J/kg). [ citation needed ] In cardiovascular physiology , the diagram is often applied to the left ventricle , and it can be mapped to specific events of the cardiac cycle . PV loop studies are widely used in basic research and preclinical testing , to characterize the intact heart's performance under various situations (effect of drugs, disease, characterization of mouse strains ) [ citation needed ] The sequence of events occurring in every heart cycle is as follows. The left figure shows a PV loop from a real experiment; letters refer to points. As it can be seen, the PV loop forms a roughly rectangular shape and each loop is formed in an anti-clockwise direction. 
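Because the net work of a cycle (or, for the cardiac loop, the stroke work) equals the area enclosed on the PV plane, it can be estimated numerically from sampled (volume, pressure) points taken around the loop. The sketch below integrates ∮ P dV with the trapezoidal rule around an assumed, idealized rectangular cycle; the state values are purely illustrative:

    # Net work of a closed cycle ≈ ∮ P dV, evaluated by trapezoidal integration.
    # Points are (volume in m^3, pressure in Pa) in the order the cycle is traversed.
    cycle = [
        (0.001, 4e5),   # state 1 (assumed values)
        (0.003, 4e5),   # state 2: expansion at high pressure
        (0.003, 1e5),   # state 3: pressure drop at constant volume
        (0.001, 1e5),   # state 4: compression at low pressure
    ]

    def net_work(points):
        """Signed enclosed area; positive when the loop is traversed clockwise on a PV plot."""
        w = 0.0
        for (v1, p1), (v2, p2) in zip(points, points[1:] + points[:1]):
            w += (p1 + p2) / 2 * (v2 - v1)   # trapezoid contribution of each leg
        return w

    print(f"Net work per cycle ≈ {net_work(cycle):.0f} J")   # ≈ 600 J for these numbers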
Very useful information can be derived by examination and analysis of individual loops or series of loops, for example: See external links for a much more precise representation.
https://en.wikipedia.org/wiki/Pressure–volume_diagram
A pressurizer is a component of a pressurized water reactor . The basic design of the pressurized water reactor includes a requirement that the coolant (water) in the reactor coolant system must not boil. Put another way, the coolant must remain in the liquid state at all times, especially in the reactor vessel. To achieve this, the coolant in the reactor coolant system is maintained at a pressure sufficiently high that boiling does not occur at the coolant temperatures experienced while the plant is operating or in any analyzed possible transient state. To pressurize the coolant system to a higher pressure than the vapor pressure of the coolant at operating temperatures , a separate pressurizing system is required. This is in the form of the pressurizer. In a pressurized water reactor plant, the pressurizer is basically a cylindrical pressure vessel with hemispherical ends, mounted with the long axis vertical and directly connected by a single run of piping to the reactor coolant system. It is located inside the reactor containment building . Although the water in the pressurizer is the same reactor coolant as in the rest of the reactor coolant system, it is basically stagnant, i.e. reactor coolant does not flow through the pressurizer continuously as it does in the other parts of the reactor coolant system. Because of its innate incompressibility, water in a connected piping system adjusts equally to pressure changes anywhere in the connected system. The water in the system may not be at the same pressure at all points in the system due to differences in elevation but the pressure at all points responds equally to a pressure change in any one part of the system. From this phenomenon, it was recognized early on that the pressure in the entire reactor coolant system, including the reactor itself, could be controlled by controlling pressure in a small interconnected area of the system and this led to the design of the pressurizer. The pressurizer is a small vessel compared to the other two major vessels of the reactor coolant system, the reactor vessel itself and the steam generator (s). Pressure in the pressurizer is controlled by varying the temperature of the coolant in the pressurizer. Water pressure in a closed system tracks water temperature directly; as the temperature goes up, pressure goes up and vice versa. To increase the pressure in the reactor coolant system, large electric heaters in the pressurizer are turned on, raising the coolant temperature in the pressurizer and thereby raising the pressure. To decrease pressure in the reactor coolant system, sprays of relatively cool water are turned on inside the pressurizer, lowering the coolant temperature in the pressurizer and thereby lowering the pressure. The pressurizer has two secondary functions. One is providing a place to monitor water level in the reactor coolant system. Since the reactor coolant system is completely flooded during normal operations, there is no point in monitoring coolant level in any of the other vessels. But early awareness of a reduction of coolant level (or a loss of coolant ) is important to the safety of the reactor core . The pressurizer is deliberately located high in the reactor containment building such that, if the pressurizer has sufficient coolant in it, one can be reasonably certain that all the other vessels of the reactor coolant system (which are below it) are fully flooded with coolant. 
There is, therefore, a coolant level monitoring system on the pressurizer, and it is the one reactor coolant system vessel that is normally not full of coolant. The other secondary function is to provide a "cushion" for sudden pressure changes in the reactor coolant system. The upper portion of the pressurizer is specifically designed not to contain liquid coolant; even a reading of full on the level instrumentation corresponds to a level that leaves that upper portion free of liquid coolant. Because the coolant in the pressurizer is quite hot during normal operations, the space above the liquid coolant is vaporized coolant ( steam ). This steam bubble provides a cushion for pressure changes in the reactor coolant system, and the operators ensure that the pressurizer maintains this steam bubble at all times during operations. Allowing liquid coolant to completely fill the pressurizer eliminates this steam bubble, and is referred to in industry as letting the pressurizer "go hard". In that condition, a sudden pressure change can transmit a hammer effect to the entire reactor coolant system. Some facilities also call this letting the pressurizer "go solid," although solid simply refers to being completely full of liquid and without a "steam bubble." Part of the pressurizer system is an over-pressure relief system. In the event that pressurizer pressure exceeds a certain maximum, a relief valve called the pilot-operated relief valve (PORV) on top of the pressurizer opens to allow steam from the steam bubble to leave the pressurizer and thereby reduce the pressure. This steam is routed to a large tank (or tanks) in the reactor containment building, where it is cooled back into liquid (condensed) and stored for later disposition. These tanks have a finite volume, and if events deteriorate to the point where they fill up, a secondary pressure relief device on the tank(s), often a rupture disc , allows the condensed reactor coolant to spill out onto the floor of the reactor containment building, where it pools in sumps for later disposition.
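The heater-and-spray control described above amounts to driving pressure back toward a setpoint. The following is a deliberately simplified, hypothetical sketch of that logic; the setpoint and deadband values are assumed for illustration and do not represent any actual plant's control system:

    # Simplified, hypothetical pressurizer pressure control (illustrative values only).
    SETPOINT_MPA = 15.5     # assumed nominal reactor coolant system pressure
    DEADBAND_MPA = 0.1      # assumed tolerance before heaters or spray act

    def control_action(pressure_mpa):
        """Return which device a simple controller would actuate at the given pressure."""
        if pressure_mpa < SETPOINT_MPA - DEADBAND_MPA:
            return "energize heaters"      # raise pressurizer temperature -> raise pressure
        if pressure_mpa > SETPOINT_MPA + DEADBAND_MPA:
            return "open spray valves"     # cool the steam space -> lower pressure
        return "hold"

    for p in (15.2, 15.5, 15.8):
        print(p, "MPa ->", control_action(p))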
https://en.wikipedia.org/wiki/Pressurizer_(nuclear_power)
The Press–Schechter formalism is a mathematical model for predicting the number of objects (such as galaxies , galaxy clusters or dark matter halos [ 1 ] ) of a certain mass within a given volume of the Universe. It was described in an academic paper by William H. Press and Paul Schechter in 1974. [ 2 ] In the context of cold dark matter cosmological models, perturbations on all scales are imprinted on the universe at very early times, for example by quantum fluctuations during an inflationary era . Later, as radiation redshifts away, these become mass perturbations, and they start to grow linearly. Only long after that, starting with small mass scales and advancing over time to larger mass scales, do the perturbations actually collapse to form (for example) galaxies or clusters of galaxies, in so-called hierarchical structure formation (see Physical cosmology ). Press and Schechter observed that the fraction of mass in collapsed objects more massive than some mass M is related to the fraction of volume samples in which the smoothed initial density fluctuations are above some density threshold. This yields a formula for the mass function (distribution of masses) of objects at any given time.

The Press–Schechter formalism predicts that the number of objects with mass between $M$ and $M+dM$ is:

$$dn \equiv N(M)\,dM = \frac{1}{\sqrt{\pi}}\left(1+\frac{n}{3}\right)\frac{\bar{\rho}}{M^{2}}\left(\frac{M}{M^{*}}\right)^{(3+n)/6}\exp\left(-\left(\frac{M}{M^{*}}\right)^{(3+n)/3}\right)dM$$

where $n$ is the index of the power spectrum of the fluctuations in the early universe, $P(k)\propto k^{n}$, $\bar{\rho}$ is the mean (baryonic and dark) matter density of the universe at the time the fluctuation from which the object was formed had gravitationally collapsed, and $M^{*}$ is a cut-off mass below which structures will form. Its value is:

$$M^{*}=\left(\frac{\bar{\rho}^{\,1-\frac{n}{3}}}{2\sigma^{2}}\right)^{\frac{3}{3+n}}=\left(\frac{\bar{\rho}_{0}^{\,1-\frac{n}{3}}}{2\sigma_{0}^{2}}\right)^{\frac{3}{3+n}}\cdot\frac{R_{0}^{2}}{R^{2}}$$

Here $\sigma$ is the standard deviation per unit volume of the fluctuation from which the object was formed, evaluated at the time of the gravitational collapse, and $R$ is the scale of the universe at that time. Parameters with subscript 0 are evaluated at the time of the initial creation of the fluctuations (or any later time before the gravitational collapse). Qualitatively, the prediction is that the mass distribution is a power law for small masses, with an exponential cutoff above some characteristic mass that increases with time. Such functions had previously been noted by Schechter as observed luminosity functions , and are now known as Schechter luminosity functions . The Press–Schechter formalism provided the first quantitative model for how such functions might arise. The case of a scale-free power spectrum, $n=0$ (or, equivalently, a scalar spectral index of 1), is very close to the spectrum of the current standard cosmological model . In this case, $dn$ has a simpler form. Written in mass-free units:

$$M\,\frac{dn}{dM}=\frac{1}{\sqrt{\pi}}\,\frac{\bar{\rho}}{M}\left(\frac{M}{M^{*}}\right)^{1/2}e^{-M/M^{*}}$$

The Press–Schechter formalism is derived through three key assumptions: [ 3 ] in essence, fluctuations are small at some early cosmological time and grow until they cross a threshold, ending in gravitational collapse into a halo; these perturbations are modeled linearly, even though the eventual collapse is itself a non-linear process. We introduce the smoothed density field $\delta_{M}(\vec{x})$, given by $\delta(\vec{x})$ averaged over a sphere with center $\vec{x}$ and mass $M$ contained inside (i.e., $\delta$ is convolved with a top-hat window function). The sphere radius is of order $R$, where $M\sim\bar{\rho}R^{3}$. [ 4 ] Then if $\delta_{M}(\vec{x})\geq\delta_{c}$, a halo exists at $\vec{x}$ with mass at least $M$. Since perturbations $\delta_{M}$ are Gaussian distributed with mean 0 and standard deviation $\sigma(M)$, we can directly compute the probability of halos forming with masses at least $M$ as

$$f(\delta_{M}>\delta_{c})=\int_{\delta_{c}}^{\infty}d\delta_{M}\,\frac{1}{\sqrt{2\pi}\,\sigma(M)}\exp\left(-\frac{\delta_{M}^{2}}{2\sigma^{2}(M)}\right)=\frac{1}{2}\operatorname{erfc}\left(\frac{\delta_{c}}{\sqrt{2}\,\sigma(M)}\right).$$

Implicitly, $\sigma(R)$ and $\delta_{M}$ depend on redshift, so the above probability does as well. The variance given in the 1974 paper is

$$\sigma(M)^{2}=\frac{\Sigma^{2}}{M^{2}}=\frac{V\cdot\sigma^{2}}{M^{2}}=\frac{\sigma^{2}}{M\cdot\rho}$$

where $\Sigma$ is the mass standard deviation in the volume of the fluctuation. Note that in the limit of large perturbations, $\sigma(M)\gg\delta_{M}$, we expect all matter to be contained in halos, so that $f(\delta_{M}>\delta_{c})=1$. However, the above equation gives the limit $f(\delta_{M}>\delta_{c})=\tfrac{1}{2}$. One can make an ad hoc argument that negative perturbations do not contribute in this scheme, so that half of the mass is mistakenly being left out. The Press–Schechter ansatz is therefore

$$F(>M)=\operatorname{erfc}\left(\frac{\delta_{M}}{\sqrt{2}\,\sigma(M)}\right),$$

the fraction of matter contained in halos of mass $>M$. A fractional fluctuation $\delta$ at some cosmological time reaches gravitational collapse after the universe has expanded by a factor of $1/\delta$ since that time. Using this, the normal distribution of the fluctuations, written in terms of $M$, $\rho$ and $\sigma$, gives the Press–Schechter formula. A number of generalizations of the Press–Schechter formula exist, such as the Sheth–Tormen approximation . [ 5 ]
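As a numerical illustration of the collapsed-fraction formula above, the short sketch below evaluates F(>M) = erfc(δ_c / (√2 σ(M))) for a toy power-law σ(M). The normalization mass, slope and the spherical-collapse threshold δ_c ≈ 1.686 are assumed here purely for illustration, not taken from the 1974 paper:

    import math

    DELTA_C = 1.686          # assumed linear-theory collapse threshold (spherical collapse)

    def sigma(mass, m_star=1e13, slope=-0.3):
        """Toy power-law fluctuation amplitude, normalized so sigma = DELTA_C at m_star."""
        return DELTA_C * (mass / m_star) ** slope

    def collapsed_fraction(mass):
        """Press-Schechter fraction of matter in halos more massive than `mass`."""
        return math.erfc(DELTA_C / (math.sqrt(2) * sigma(mass)))

    for m in (1e12, 1e13, 1e14):   # masses in solar masses, illustrative only
        print(f"M > {m:.0e}: F = {collapsed_fraction(m):.3f}")

With these assumed numbers the collapsed fraction falls from roughly 0.6 at the smallest mass to under 0.05 at the largest, reproducing the qualitative power-law-plus-exponential-cutoff behaviour described above.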
https://en.wikipedia.org/wiki/Press–Schechter_formalism
Prestressed concrete is a form of concrete used in construction. It is substantially prestressed ( compressed ) during production, in a manner that strengthens it against tensile forces which will exist when in service. [ 1 ] [ 2 ] : 3–5 [ 3 ] It was patented by Eugène Freyssinet in 1928. [ 4 ] This compression is produced by the tensioning of high-strength tendons located within or adjacent to the concrete and is done to improve the performance of the concrete in service. [ 5 ] Tendons may consist of single wires , multi-wire strands or threaded bars that are most commonly made from high-tensile steels , carbon fiber or aramid fiber . [ 1 ] : 52–59 The essence of prestressed concrete is that once the initial compression has been applied, the resulting material has the characteristics of high-strength concrete when subject to any subsequent compression forces and of ductile high-strength steel when subject to tension forces . This can result in improved structural capacity or serviceability , or both, compared with conventionally reinforced concrete in many situations. [ 6 ] [ 2 ] : 6 In a prestressed concrete member, the internal stresses are introduced in a planned manner so that the stresses resulting from the imposed loads are counteracted to the desired degree. Prestressed concrete is used in a wide range of building and civil structures where its improved performance can allow for longer spans , reduced structural thicknesses, and material savings compared with simple reinforced concrete. Typical applications include high-rise buildings , residential concrete slabs , foundation systems , bridge and dam structures, silos and tanks , industrial pavements and nuclear containment structures . [ 7 ] First used in the late nineteenth century, [ 1 ] prestressed concrete has developed beyond pre-tensioning to include post-tensioning , which occurs after the concrete is cast. Tensioning systems may be classed as either 'monostrand', where each tendon's strand or wire is stressed individually, or 'multi-strand', where all strands or wires in a tendon are stressed simultaneously. [ 6 ] Tendons may be located either within the concrete volume (internal prestressing) or wholly outside of it (external prestressing). While pre-tensioned concrete uses tendons directly bonded to the concrete, post-tensioned concrete can use either bonded or unbonded tendons. Pre-tensioned concrete is a variant of prestressed concrete where the tendons are tensioned prior to the concrete being cast. [ 1 ] : 25 The concrete bonds to the tendons as it cures , following which the end-anchoring of the tendons is released, and the tendon tension forces are transferred to the concrete as compression by static friction . [ 6 ] : 7 Pre-tensioning is a common prefabrication technique, where the resulting concrete element is manufactured off-site from the final structure location and transported to site once cured. It requires strong, stable end-anchorage points between which the tendons are stretched. These anchorages form the ends of a casting bed which may be many times the length of the concrete element being fabricated. This allows multiple elements to be constructed end-to-end in the one pre-tensioning operation, allowing significant productivity benefits and economies of scale to be realized. [ 6 ] [ 8 ] The amount of bond (or adhesion ) achievable between the freshly set concrete and the surface of the tendons is critical to the pre-tensioning process, as it determines when the tendon anchorages can be safely released. 
Higher bond strength in early-age concrete will speed production and allow more economical fabrication. To promote this, pre-tensioned tendons are usually composed of isolated single wires or strands, which provides a greater surface area for bonding than bundled-strand tendons. [ 6 ] Unlike those of post-tensioned concrete (see below), the tendons of pre-tensioned concrete elements generally form straight lines between end anchorages. Where profiled or harped tendons [ 9 ] are required, one or more intermediate deviators are located between the ends of the tendon to hold the tendon to the desired non-linear alignment during tensioning. [ 1 ] : 68–73 [ 6 ] : 11 Such deviators usually act against substantial forces, and hence require a robust casting-bed foundation system. Straight tendons are typically used in linear precast concrete elements, such as shallow beams and hollow-core slabs ; whereas profiled tendons are more commonly found in deeper precast bridge beams and girders. Pre-tensioned concrete is most commonly used for the fabrication of structural beams , floor slabs , hollow-core slabs, balconies , lintels , driven piles , water tanks and concrete pipes . Post-tensioned concrete is a variant of prestressed concrete where the tendons are tensioned after the surrounding concrete structure has been cast. [ 1 ] : 25 The tendons are not placed in direct contact with the concrete, but are encapsulated within a protective sleeve or duct which is either cast into the concrete structure or placed adjacent to it. At each end of a tendon is an anchorage assembly firmly fixed to the surrounding concrete. Once the concrete has been cast and set, the tendons are tensioned (stressed) by pulling the tendon ends through the anchorages while pressing against the concrete. The large forces required to tension the tendons result in a significant permanent compression being applied to the concrete once the tendon is locked off at the anchorage. [ 1 ] : 25 [ 6 ] : 7 The method of locking the tendon ends to the anchorage is dependent upon the tendon composition, with the most common systems being button-head anchoring (for wire tendons), split-wedge anchoring (for strand tendons), and threaded anchoring (for bar tendons). [ 1 ] : 79–84 Tendon encapsulation systems are constructed from plastic or galvanised steel materials, and are classified into two main types: those where the tendon element is subsequently bonded to the surrounding concrete by internal grouting of the duct after stressing ( bonded post-tensioning); and those where the tendon element is permanently de bonded from the surrounding concrete, usually by means of a greased sheath over the tendon strands ( unbonded post-tensioning). [ 1 ] : 26 [ 6 ] : 10 Casting the tendon ducts or sleeves into the concrete before any tensioning occurs allows them to be readily profiled to any desired shape including incorporating vertical or horizontal curvature or both. When the tendons are tensioned, this profiling results in reaction forces being imparted onto the hardened concrete, and these can be beneficially used to counter any loadings subsequently applied to the structure. [ 2 ] : 5–6 [ 6 ] : 48 : 9–10 In bonded post-tensioning, tendons are permanently bonded to the surrounding concrete by the in situ grouting of their encapsulating ducting (after tendon tensioning). 
This grouting is undertaken for three main purposes: to protect the tendons against corrosion ; to permanently lock in the tendon pre-tension, thereby removing the long-term reliance upon the end-anchorage systems; and to improve certain structural behaviors of the final concrete structure. [ 10 ] Bonded post-tensioning characteristically uses tendons each comprising bundles of elements (e.g., strands or wires) placed inside a single tendon duct, with the exception of bars which are mostly used unbundled. This bundling makes for more efficient tendon installation and grouting processes, since each complete tendon requires only one set of end anchorages and one grouting operation. Ducting is fabricated from a durable and corrosion-resistant material such as plastic (e.g., polyethylene ) or galvanised steel, and can be either round or rectangular/oval in cross-section. [ 2 ] : 7 The tendon sizes used are highly dependent upon the application, ranging from building works typically using between 2 and 6 strands per tendon, to specialized dam works using up to 91 strands per tendon. Fabrication of bonded tendons is generally undertaken on-site, commencing with the fitting of end anchorages to formwork , placing the tendon ducting to the required curvature profiles, and reeving (or threading) the strands or wires through the ducting. Following concreting and tensioning, the ducts are pressure-grouted and the tendon-stressing ends sealed against corrosion . [ 6 ] : 2 Unbonded post-tensioning differs from bonded post-tensioning by allowing the tendons permanent freedom of longitudinal movement relative to the concrete. This is most commonly achieved by encasing each individual tendon element within a plastic sheathing filled with a corrosion -inhibiting grease , usually lithium -based. Anchorages at each end of the tendon transfer the tensioning force to the concrete, and are required to reliably perform this role for the life of the structure. [ 10 ] : 1 Unbonded post-tensioning can take the form of: For individual strand tendons, no additional tendon ducting is used and no post-stressing grouting operation is required, unlike for bonded post-tensioning. Permanent corrosion protection of the strands is provided by the combined layers of grease, plastic sheathing, and surrounding concrete. Where strands are bundled to form a single unbonded tendon, an enveloping duct of plastic or galvanised steel is used and its interior free spaces grouted after stressing. In this way, additional corrosion protection is provided via the grease, plastic sheathing, grout, external sheathing, and surrounding concrete layers. [ 10 ] : 1 Individually greased-and-sheathed tendons are usually fabricated off-site by an extrusion process. The bare steel strand is fed into a greasing chamber and then passed to an extrusion unit where molten plastic forms a continuous outer coating. Finished strands can be cut to length and fitted with dead-end anchor assemblies as required for the project. Both bonded and unbonded post-tensioning technologies are widely used around the world, and the choice of system is often dictated by regional preferences, contractor experience, or the availability of alternative systems. Either one is capable of delivering code-compliant, durable structures meeting the structural strength and serviceability requirements of the designer. 
[ 10 ] : 2 The benefits that bonded post-tensioning can offer over unbonded systems are: The benefits that unbonded post-tensioning can offer over bonded systems are: Long-term durability is an essential requirement for prestressed concrete given its widespread use. Research on the durability performance of in-service prestressed structures has been undertaken since the 1960s, [ 14 ] and anti-corrosion technologies for tendon protection have been continually improved since the earliest systems were developed. [ 15 ] The durability of prestressed concrete is principally determined by the level of corrosion protection provided to any high-strength steel elements within the prestressing tendons. Also critical is the protection afforded to the end-anchorage assemblies of unbonded tendons or cable-stay systems, as the anchorages of both of these are required to retain the prestressing forces. Failure of any of these components can result in the release of prestressing forces, or the physical rupture of stressing tendons. Modern prestressing systems deliver long-term durability by addressing the following areas: Several durability-related events are listed below: Prestressed concrete is a highly versatile construction material as a result of it being an almost ideal combination of its two main constituents: high-strength steel, pre-stretched to allow its full strength to be easily realised; and modern concrete, pre-compressed to minimise cracking under tensile forces. [ 1 ] : 12 Its wide range of application is reflected in its incorporation into the major design codes covering most areas of structural and civil engineering, including buildings, bridges, dams, foundations, pavements, piles, stadiums, silos, and tanks. [ 7 ] Building structures are typically required to satisfy a broad range of structural, aesthetic and economic requirements. Significant among these include: a minimum number of (intrusive) supporting walls or columns; low structural thickness (depth), allowing space for services, or for additional floors in high-rise construction; fast construction cycles, especially for multi-storey buildings; and a low cost-per-unit-area, to maximise the building owner's return on investment. The prestressing of concrete allows load-balancing forces to be introduced into the structure to counter in-service loadings. This provides many benefits to building structures: Some notable building structures constructed from prestressed concrete include: Sydney Opera House [ 24 ] and World Tower , Sydney; [ 25 ] St George Wharf Tower , London; [ 26 ] CN Tower , Toronto; [ 27 ] Kai Tak Cruise Terminal [ 28 ] and International Commerce Centre , Hong Kong; [ 29 ] Ocean Heights 2 , Dubai; [ 30 ] Eureka Tower , Melbourne; [ 31 ] Torre Espacio , Madrid; [ 32 ] Guoco Tower (Tanjong Pagar Centre), Singapore; [ 33 ] Zagreb International Airport , Croatia; [ 34 ] and Capital Gate , Abu Dhabi UAE. [ 35 ] Concrete is the most popular structural material for bridges, and prestressed concrete is frequently adopted. [ 36 ] [ 37 ] When investigated in the 1940s for use on heavy-duty bridges, the advantages of this type of bridge over more traditional designs was that it is quicker to install, more economical and longer-lasting with the bridge being less lively. [ 38 ] [ 39 ] One of the first bridges built in this way is the Adam Viaduct , a railway bridge constructed 1946 in the UK . 
[ 40 ] By the 1960s, prestressed concrete largely superseded reinforced concrete bridges in the UK, with box girders being the dominant form. [ 41 ] In short-span bridges of around 10 to 40 metres (30 to 130 ft), prestressing is commonly employed in the form of precast pre-tensioned girders or planks. [ 42 ] Medium-length structures of around 40 to 200 metres (150 to 650 ft), typically use precast-segmental, in-situ balanced-cantilever and incrementally-launched designs . [ 43 ] For the longest bridges, prestressed concrete deck structures often form an integral part of cable-stayed designs . [ 44 ] Concrete dams have used prestressing to counter uplift and increase their overall stability since the mid-1930s. [ 45 ] [ 46 ] Prestressing is also frequently retro-fitted as part of dam remediation works, such as for structural strengthening, or when raising crest or spillway heights. [ 47 ] [ 48 ] Most commonly, dam prestressing takes the form of post-tensioned anchors drilled into the dam's concrete structure, the underlying rock strata, or both. Such anchors typically comprise tendons of high-tensile bundled steel strands or individual threaded bars. A tendon is grouted to the concrete or rock at its far (internal) end and has a significant de-bonded free length at its external end which allows the tendon to stretch during tensioning. Tendons may be full-length-bonded to the surrounding concrete or rock once tensioned, or (more commonly) have strands permanently encapsulated in corrosion-inhibiting grease over the free length to permit long-term load monitoring and re-stressability. [ 49 ] Circular storage structures such as silos and tanks can use prestressing forces to directly resist the outward pressures generated by stored liquids or bulk solids. Horizontally curved tendons are installed within the concrete wall to form a series of hoops, spaced vertically up the structure. When tensioned, these tendons exert both axial (compressive) and radial (inward) forces onto the structure, which can directly oppose the subsequent storage loadings. If the magnitude of the prestress is designed to always exceed the tensile stresses produced by the loadings, a permanent residual compression will exist in the wall concrete, assisting in maintaining a watertight crack-free structure. [ 50 ] [ 51 ] [ 52 ] : 61 Prestressed concrete has been established as a reliable construction material for high-pressure containment structures such as nuclear reactor vessels and containment buildings, and petrochemical tank blast-containment walls. Using pre-stressing to place such structures into an initial state of bi-axial or tri-axial compression increases their resistance to concrete cracking and leakage, while providing a proof-loaded, redundant and monitorable pressure-containment system. [ 53 ] [ 54 ] [ 55 ] : 585–594 Nuclear reactor and containment vessels will commonly employ separate sets of post-tensioned tendons curved horizontally or vertically to completely envelop the reactor core. Blast containment walls, such as for liquid natural gas (LNG) tanks, will normally utilize layers of horizontally-curved hoop tendons for containment in combination with vertically looped tendons for axial wall pre-stressing. Heavily loaded concrete ground slabs and pavements can be sensitive to cracking and subsequent traffic-driven deterioration. 
As a result, prestressed concrete is regularly used in such structures as its pre-compression provides the concrete with the ability to resist the crack-inducing tensile stresses generated by in-service loading. This crack resistance also allows individual slab sections to be constructed in larger pours than for conventionally reinforced concrete, resulting in wider joint spacings, reduced jointing costs and less long-term joint maintenance issues. [ 55 ] : 594–598 [ 56 ] Initial works have also been successfully conducted on the use of precast prestressed concrete for road pavements, where the speed and quality of the construction has been noted as being beneficial for this technique. [ 57 ] Some notable civil structures constructed using prestressed concrete include: Gateway Bridge , Brisbane Australia; [ 58 ] Incheon Bridge , South Korea; [ 59 ] Roseires Dam , Sudan; [ 60 ] Wanapum Dam , Washington, US; [ 61 ] LNG tanks , South Hook, Wales; Cement silos , Brevik Norway; Autobahn A73 bridge , Itz Valley, Germany; Ostankino Tower , Moscow, Russia; CN Tower , Toronto, Canada; and Ringhals nuclear reactor , Videbergshamn Sweden. [ 53 ] : 37 Worldwide, many professional organizations exist to promote best practices in the design and construction of prestressed concrete structures. In the United States, such organizations include the Post-Tensioning Institute (PTI) and the Precast/Prestressed Concrete Institute (PCI). [ 62 ] Similar bodies include the Canadian Precast/Prestressed Concrete Institute (CPCI), [ 63 ] the UK's Post-Tensioning Association, [ 64 ] the Post Tensioning Institute of Australia [ 65 ] and the South African Post Tensioning Association. [ 66 ] Europe has similar country-based associations and institutions. These organizations are not the authorities of building codes or standards, but rather exist to promote the understanding and development of prestressed concrete design, codes and best practices. Rules and requirements for the detailing of reinforcement and prestressing tendons are specified by individual national codes and standards such as:
https://en.wikipedia.org/wiki/Prestressed_concrete
In structural engineering , a prestressed structure is a load-bearing structure whose overall integrity , stability and security depend, primarily, on prestressing : the intentional creation of permanent stresses in the structure for the purpose of improving its performance under various service conditions. [ 1 ] The basic types of prestressing are: Today, the concept of a prestressed structure is widely employed in the design of buildings , underground structures, TV towers, power stations, floating storage and offshore facilities, nuclear reactor vessels, and numerous bridge systems. [ 2 ] It is especially prominent in construction using concrete (see pre-stressed concrete ). The idea of precompression was apparently familiar to ancient Roman architects . The tall attic wall of the Colosseum works as a stabilizing device for the wall piers beneath it.
https://en.wikipedia.org/wiki/Prestressed_structure
Presumptive tests , in medical and forensic science , analyze a sample and establish one of the following: that the sample is definitely not a certain substance, or that the sample probably is that substance. For example, the Kastle–Meyer test will show either that a sample is not blood or that the sample is probably blood but may be a less common substance. Further chemical tests are needed to prove that the substance is blood. Confirmatory tests are the tests required to confirm the analysis. Confirmatory tests cost more than simpler presumptive tests, so presumptive tests are often done to see if confirmatory tests are necessary. Similarly, in medicine, a presumptive diagnosis identifies the likely condition of a patient, and a confirmatory diagnosis is needed to confirm the condition. The US Food and Drug Administration issued Premarket Submission and Labeling Recommendations for Drugs of Abuse Screening Tests . Its availability was announced in the Federal Register, Vol. 68, No. 231, on December 2, 2003, and it is listed under "Notices." Presumptive testing has found widespread use by employers and public entities. Most people who take a drug test take a presumptive test, which is cheaper and faster than other methods of testing. However, it is less accurate and can render false results. The FDA recommends that confirmatory testing be conducted and that a warning label be placed on the presumptive drug test: "This assay provides only a preliminary result. Clinical consideration and professional judgment should be applied to any drug of abuse test result, in evaluating a preliminary positive result. To obtain a confirmed analytical result, a more specific alternate chemical method is needed. Gas chromatography/mass spectrometry (GC/MS) is the recommended confirmatory method."
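The need for confirmation after a positive presumptive result can be made concrete with a hedged, purely illustrative calculation: with the assumed (not sourced) sensitivity, specificity and prevalence below, most positive screens would still be false positives.

    # Illustrative only: assumed performance figures, not from any real assay.
    sensitivity = 0.95    # probability the screen is positive when the substance is present
    specificity = 0.90    # probability the screen is negative when the substance is absent
    prevalence  = 0.02    # assumed fraction of tested samples that truly contain the substance

    true_pos  = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)

    ppv = true_pos / (true_pos + false_pos)   # chance a positive screen is a true positive
    print(f"Positive predictive value ≈ {ppv:.0%}")   # ≈ 16% with these assumed numbers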
https://en.wikipedia.org/wiki/Presumptive_and_confirmatory_tests
In geometric mechanics a presymplectic form is a closed differential 2-form of constant rank on a manifold . [ 1 ] However, some authors use different definitions . Recently, Hajduk and Walczak defined a presymplectic form as a closed differential 2-form of maximal rank on a manifold of odd dimension. [ 2 ] A symplectic form is a presymplectic form that is also nondegenerate . [ 3 ] Lack of nondegeneracy, leading to presymplectic forms, occurs in dynamical systems with singular Lagrangians , Hamiltonian systems with constraints and control theory . [ 4 ]
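Restated in standard notation (nothing here beyond the definitions above):

$$\omega \in \Omega^{2}(M), \qquad d\omega = 0, \qquad \operatorname{rank}\,\omega_{p}\ \text{constant for all } p \in M \quad \text{(presymplectic)};$$
$$\omega\ \text{presymplectic and}\ \left(\iota_{X}\omega = 0 \ \Rightarrow\ X = 0\right) \quad \text{(symplectic, i.e. nondegenerate)}.$$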
https://en.wikipedia.org/wiki/Presymplectic_form
Prevention through design (PtD), also called safety by design in Europe, is the concept of applying methods to minimize occupational hazards early in the design process, with an emphasis on optimizing employee health and safety throughout the life cycle of materials and processes. [ 1 ] It is a concept and movement that encourages construction or product designers to "design out" health and safety risks during design development. The process also encourages the various stakeholders within a construction project to be collaborative and share the responsibilities of workers' safety evenly. The concept supports the view that along with quality, programme and cost; safety is determined during the design stage. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ excessive citations ] It increases the cost-effectiveness of enhancements to occupational safety and health . [ 1 ] Compared to traditional forms of hazard control, PtD possesses a proactive nature whereas other safety measures are reactive to incidences that occur within construction projects. This method for reducing workplace safety risks lessens workers' reliance on personal protective equipment , which is the least effective of the hierarchy of hazard control . [ 9 ] In the domain of process safety , safety by design is usually referred to as inherent safety or inherently safer design (ISD). Each year in the U.S., 55,000 people die from work-related injuries and diseases, 294,000 are made sick, and 3.8 million are injured. The annual direct and indirect costs have been estimated to range from $128 billion to $155 billion. [ citation needed ] For U.S. industries such as construction, even though construction personnel account for only 5% of the total U.S. workforce, they are responsible for nearly 20% of all workplace fatalities. [ 10 ] Recent studies in Australia indicate that design is a significant contributor to 37% of work-related fatalities; therefore, the successful implementation of prevention through design concepts can have substantial impacts on worker health and safety. [ 11 ] A safer workplace can be created by removing hazards and reducing worker risks to an appropriate level "at the source," or as early in the life cycle of products or workplaces as possible. [ citation needed ] Designing, redesigning and retrofitting new and current work environments, systems, tools, facilities, equipment, machinery, goods, chemicals, work processes, and work organization. Improving the working climate by incorporating preventive approaches into all designs that have an effect on employees and those on the premises. [ citation needed ] The strategic plan lays out the objectives for implementing the PtD Plan for the National Initiative successfully. [ citation needed ] The National Institute for Occupational Safety and Health (NIOSH) in the United States is a major contributor and promoter of PtD policy and guidelines. NIOSH considers PtD to be "the most effective and reliable type" of prevention of occupational injuries . [ 12 ] A core tenet of PtD philosophy is the concept of addressing workplace hazards using methods at the top of the hierarchy of hazard controls , namely elimination and substitution. [ citation needed ] Within Europe, construction designers are legally bound to design out risks during design development to reduce hazards in the construction and end use phases via the Mobile Worksite Directive (also known as CDM regulations in the UK). The concept supports this legal requirement. 
[ 13 ] Some Notified Bodies provide testing and design verification services to ensure compliance with the safety standards defined in regulation codes such as the American Society of Mechanical Engineers . [ 14 ] Many non-governmental organizations have been established to support this aim, principally in the UK, Australia and the United States. [ 15 ] [ 16 ] [ 17 ] While engineering, as a rule, factors human safety into the design process, a modern appraisal of specific links to design and workers' safety can be seen in efforts beginning in the 1800s. Trends included the widespread implementation of guards for machinery, controls for elevators , and boiler safety practices. This was followed by enhanced design for ventilation , enclosures, system monitors, lockout/tagout controls, and hearing protectors. More recently, there has been the development of chemical process safety , ergonomically engineered tools, chairs, and workstations, lifting devices, retractable needles, latex-free gloves, and a parade of other safety devices and processes. [ 18 ] In 2007, NIOSH began its National Initiative on Prevention through Design [ 19 ] with the goal of promoting prevention through design philosophy, practice, and policy. [ citation needed ] The PtD National Initiative's goal is to avoid or mitigate occupational accidents, diseases, deaths, and exposures by incorporating prevention factors into all designs that impact people in the workplace. This is accomplished by eliminating hazards and reducing worker risks to an acceptable level "at the source," or as early in the life cycle of items or workplaces as possible. Designing, redesigning, and retrofitting new and existing work premises, structures, tools, facilities, equipment, machinery, products, substances, work processes, and work organization. [ citation needed ] Prevention through design represents a shift in approach for on-the-job safety. It involves evaluating potential risks associated with processes, structures, equipment, and tools. It takes into consideration the construction, maintenance, decommissioning, and disposal or recycling of waste material. [ 18 ] The idea of redesigning job tasks and work environments has begun to gain momentum in business and government as a cost-effective means to enhance occupational safety and health. Many U.S. companies openly support PtD concepts and have developed management practices to implement them. Other countries are actively promoting PtD concepts as well. The United Kingdom began requiring construction companies, project owners, and architects to address safety and health during the design phase of projects in 1994. Australia developed the Australian National OHS Strategy 2002–2012, which set "eliminating hazards at the design stage" as one of five national priorities. As a result, the Australian Safety and Compensation Council (ASCC) developed the Safe Design National Strategy and Action Plans for Australia encompassing a wide range of design areas. [ 9 ] In Australia, the Work Health and Safety Act of 2011 was passed which included elements that laid out the legal responsibilities of employers, designers, and other stakeholders within construction projects to take the necessary steps to ensure that safety is prioritized through all phases of the construction process. 
[ 20 ] In practice, what this has looked like is Australian state governments such as Queensland, South Australia, and Western Australia mandating design professionals to create a strategy for safety considerations throughout the construction process. The plan has to include pre-construction considerations, how safety can be evaluated, and providing details of how safety will be controlled once the physical construction process begins. Even before the Work Health and Safety Act of 2011, since 1998, any construction project that was valued over AU$3 million was subject to this requirement. [ citation needed ] Within the United Kingdom (U.K.), PtD has been legally required for those in the construction industry since March 31, 1995. [ 21 ] At the time of implementation, the fatality rate within the U.K. construction industry was 10 fatalities per 100,000 workers. [ 22 ] In 2021, the fatality rate has been reduced to 1.62 fatalities per 100,000 workers. [ 23 ] Although it cannot be established that PtD is the sole facilitator of this reduction in construction fatalities, it does show that since its enactment, fatalities have dropped substantially. Since its establishment in 1995, the UK government has periodically updated the legislation with the 2015 version of The Construction (Design and Management) Regulations placing even greater emphasis on the role that principal designers should play in injury and fatality prevention during the design phase of a project. [ 24 ] The National Institute for Occupational Safety and Health (NIOSH) is a contributor to prevention through design efforts in the United States. Several NIOSH initiatives and guidelines directly or indirectly advocate for PtD practices. Through NIOSH efforts, the U.S. Green Building Council posted new PtD credits [ 25 ] available for Leadership in Energy and Environmental Design (LEED) certification for construction. Additionally, they provide a wide variety of educational and guidance materials [ 26 ] on the topic of PtD. The NIOSH " Buy Quiet " initiative uses elements of prevention through design to encourage companies to buy quieter machinery, thereby reducing occupational hearing loss for their workers. [ 27 ] The Prevention through Design (PtD) Initiative of NIOSH collaborates with business, labor, trade unions, professional organizations, and academia. The curriculum focuses on “designing out” workplace hazards and threats in order to avoid sickness, injury, and death. It encourages technical accreditation bodies to include PtD in their evaluations to educate and encourage others to use PtD goals and processes in collaborative design and renovation of facilities, work processes, equipment, and resources. [ 28 ] Priorities of this initiative include: In Singapore, the government's Workplace Safety and Health Council pioneered a Design for Safety (Dfs) mark which would allow the Singaporean government to recognize construction projects that were completed with safety in mind. Receiving the Dfs mark for safety considerations is analogous to a building receiving a LEED certification for featuring aspects of sustainability and carbon footprint reduction. [ 29 ] Even though PtD is not a new concept and has shown to be associated with reductions in injuries and fatalities across various construction industries on the international stage, it is still not a core feature of various engineering and architectural schools' curriculum. 
[ 30 ] This can compromise designers' ability to consider safety in real-world applications, since they have had limited education on the concept of safety, let alone PtD.
https://en.wikipedia.org/wiki/Prevention_through_design
A preventive action is a change implemented to address a weakness in a management system that is not yet responsible for causing nonconforming product or service. Candidates for preventive action generally result from suggestions from customers or participants in the process, but preventive action is a proactive process to identify opportunities for improvement rather than a simple reaction to identified problems or complaints. Apart from the review of the operational procedures, the preventive action might involve analysis of data, including trend and risk analyses and proficiency-testing results. The focus of preventive actions is to avoid creating nonconformances, but they also commonly include improvements in efficiency. [ 1 ] Preventive actions can address technical requirements related to the product or service supplied or to the internal management system. Many organizations require that when opportunities to improve are identified or if preventive action is required, action plans are developed, implemented and monitored to reduce the likelihood of nonconformities and to take advantage of the opportunities for improvement. Additionally, a thorough preventive action process will include the application of controls to ensure that the preventive actions are effective. In some settings, corrective action is used as an encompassing term that includes remedial actions , corrective actions and preventive actions. Preventive actions rely upon the consequences of change. Once a change is made, the risks it introduces should be taken into consideration. In this case preventive actions aim to minimize or, where possible, eliminate the risks. Risks arise when little is known and understood about a particular situation. The chances of risk are minimized when one has better knowledge of the opportunities and consequences that could follow a situation. In order to reduce risk, a full analysis of potential best and worst results is required. Before taking into consideration any plan, people should be aware of the consequences of both success and failure. Not only the internal aspects of an organisation (capability, expertise and willingness of staff) but also the external aspects (stakeholders, customers, clients) should be assessed. [ 2 ] Strategic risk management involves defining an organisation's approach to risk in terms of condition, attitudes and expertise. It identifies the possible areas of risk and assures that the proper approach is used. Then operational risk management will ensure that steps for minimizing or eliminating the risk are followed. A strategic approach to risk management includes studying the environment and being aware of the issues that must be considered in any situation. [ 2 ] Risks can occur due to a range of unexpected possible and potential events outside of the organisation's control, such as political instability, currency changes, or changes in the weather that could lead to a change in customer behavior. [ 2 ] Therefore, in an organisation it is important to know and understand what events could take place, where and why. Managers should therefore prioritize preventive actions in order to anticipate these kinds of issues, focusing especially on: "Patterns of behavior" relates to the morale and motivation of people. The effects of human behavior (such as victimization, bullying, harassment and discrimination) could affect confidence, weakening the relationships meant to lead to performance. Accidents could happen anytime and anywhere.
Thus, an organisation has to ensure that accidents are kept to a minimum. In this situation, preventive actions should focus more on the nature and quality of the working environment, safety aspects and technology. Single events and errors are very hard to manage and impossible to eliminate entirely. The risk should be kept at a minimum through supervision systems, regular inspections and procedures. In order to perform a change, an organisation has to forecast, understanding in depth where that change could lead and what its consequences are. Thus, the risk of a particular event and its probability of occurring should be clear. Using this information, one can better understand and make future decisions, proposals and initiatives. [ 2 ] Preventive actions differ from one organisation to another. [ 3 ] [ 4 ] They are numerous, and include: Nowadays, due to fast changes in engineering, there is a large emphasis on enhancing the safety and security of technology. However, in order to avoid such issues, more powerful safety analysis techniques are constantly being developed. As safety and security issues can occur at any time, whether intentionally caused or not, preventive strategies against loss or hacking are continually strengthened. These actions aim to focus on the possible causes of the problem, rather than solving an already critical situation. [ 5 ] Computer security tries to defend computers by ensuring that their networks are not accessed or disrupted. Different tactics are used to protect against attackers, creating barriers or lines of defense through firewalls or encryption . However, losses also result from actions not executed properly (such as human errors) or from system errors among components. Losses can be prevented through preventive strategies and tactics. Security analysts can identify possible attackers, highlighting their motives, capabilities and purposes. With proper knowledge, security experts can assess their own system and identify the most suitable defense strategy. Tracing is one of the methods used to find issues or deficiencies in a system. Focusing first on strategy rather than tactics [ 6 ] can be achieved by adopting a new system-theoretic causality model recently developed to provide a more powerful approach to engineering for safety. [ 5 ] Causality models used in accident analysis are either traditional, attributing accidents to human error, or more complex, attributing them to faulty interactions between components and system errors. STAMP (System-Theoretic Accident Model and Processes) is a model of accident causality used in investigating potential accidents that can occur. In this case, issues are seen as results of inadequate control of the safety components used. [ 7 ] More powerful systems that analyse safety have since been created. STPA (System-Theoretic Process Analysis) uses such techniques, being based on the STAMP model of causality. [ 8 ] Once the cause is identified, STPA examines the system, creating scenarios that could address the issue. Regarding technology, not only the safety and security of computers and isolated devices can be threatened, but also that of entire complex information systems . As not all decisions made in an organisation are based on known rules, the analytical manager examines the situation in detail and anticipates potential issues that can occur. However, many decisions could have a great impact on some aspect of the organisation and cannot be easily reversed.
[ 9 ] Thus, modelling and simulation play the role of preventive actions, being applied early in the design of a process, where real factual data is not yet available. A model is an abstract representation that includes all aspects of a process so that its potential impact can be better analysed. Such a representation can be produced before implementation through business process modeling (BPM). On one hand, there are deterministic systems that rely on the input data and are capable of predicting accurate output. On the other hand, there are probabilistic systems [ 10 ] as well, which do not forecast with complete accuracy. However, both deterministic and probabilistic systems require earlier actions that can prevent issues. [ 9 ] Analysis and design count among the most important activities done before starting up a business. During analysis, one gets a better understanding of the potential of the business, with a diagrammatic model ensuring agreement between IT professionals and system users. System design specifies the way in which the system will work, and is eventually followed by system building. [ 9 ] Preventive healthcare or preventive medicine refers to the measures taken in order to prevent and treat diseases. As there is a wide range of diseases in the world, there is also a wide variety of factors that influence those health disorders, such as environment, genetics and lifestyle. Preventive healthcare relies on anticipating diseases before they occur. Among these preventive methods, [ 11 ] there are: However, these traditional healthcare strategies [ 12 ] are not the only actions that can prevent disease. A very important step is recognizing and being aware of certain health changes that can turn into real health threats. Examples of minor problems that people usually do not take seriously are numerous, such as involuntary weight loss, persistent coughs, body changes and other aches and pains. Upon noticing a disorder, people can take action by consulting a specialist in order to avoid the situation getting worse. Crime prevention relies on actions that defend against and fight criminals and crimes, such as murder, robbery, burglary, blackmail, hijacking or smuggling. Criminologists focus on preventing the risks that can cause crime rather than reacting to crimes that have already occurred. There is a great number of techniques used to reduce crime. These can be split into large-scale ones, such as strategies implemented by a society or community, and smaller-scale ones, such as personal security. Examples of collective strategies preventing criminality: [ 13 ] However, in most cases people tend to rely on their own personal skills and capabilities to help them in preventing and defending against criminal attacks. For example: Preventive actions taken against acts of terrorism can be either a preventive lockdown (a preemptive lockdown to mitigate the risk) or an emergency lockdown (during or after the occurrence of the risk). The August 2019 clampdown in Jammu and Kashmir [ 14 ] is an example of a preventive lockdown intended to eliminate the risk to the lives of civilians from militants, violent protesters and stone-pelters.
https://en.wikipedia.org/wiki/Preventive_action
Previtamin D 3 is an intermediate in the production of cholecalciferol ( vitamin D 3 ). It is formed by the action of UV light , most specifically UVB light of wavelengths between 295 and 300 nm, acting on 7-dehydrocholesterol in the epidermal layers of the skin. [ 1 ] [ 2 ] [ 3 ] The B ring of the steroid nucleus structure is broken open, making a secosteroid . This then undergoes spontaneous isomerization into cholecalciferol, the prohormone of the active form of vitamin D, calcitriol . The synthesis of previtamin D 3 is blocked effectively by sunscreens . [ 4 ]
https://en.wikipedia.org/wiki/Previtamin_D3
All definitions tacitly require the homogeneous relation R {\displaystyle R} be transitive : for all a , b , c , {\displaystyle a,b,c,} if a R b {\displaystyle aRb} and b R c {\displaystyle bRc} then a R c . {\displaystyle aRc.} A term's definition may require additional properties that are not listed in this table. In set theory , a prewellordering on a set X {\displaystyle X} is a preorder ≤ {\displaystyle \leq } on X {\displaystyle X} (a transitive and reflexive relation on X {\displaystyle X} ) that is strongly connected (meaning that any two points are comparable) and well-founded in the sense that the induced relation x < y {\displaystyle x<y} defined by x ≤ y and y ≰ x {\displaystyle x\leq y{\text{ and }}y\nleq x} is a well-founded relation . A prewellordering on a set X {\displaystyle X} is a homogeneous binary relation ≤ {\displaystyle \,\leq \,} on X {\displaystyle X} that satisfies the following conditions: [ 1 ] A homogeneous binary relation ≤ {\displaystyle \,\leq \,} on X {\displaystyle X} is a prewellordering if and only if there exists a surjection π : X → Y {\displaystyle \pi :X\to Y} into a well-ordered set ( Y , ≲ ) {\displaystyle (Y,\lesssim )} such that for all x , y ∈ X , {\displaystyle x,y\in X,} x ≤ y {\textstyle x\leq y} if and only if π ( x ) ≲ π ( y ) . {\displaystyle \pi (x)\lesssim \pi (y).} [ 1 ] Given a set A , {\displaystyle A,} the binary relation on the set X := Finite ⁡ ( A ) {\displaystyle X:=\operatorname {Finite} (A)} of all finite subsets of A {\displaystyle A} defined by S ≤ T {\displaystyle S\leq T} if and only if | S | ≤ | T | {\displaystyle |S|\leq |T|} (where | ⋅ | {\displaystyle |\cdot |} denotes the set's cardinality ) is a prewellordering. [ 1 ] If ≤ {\displaystyle \leq } is a prewellordering on X , {\displaystyle X,} then the relation ∼ {\displaystyle \sim } defined by x ∼ y if and only if x ≤ y ∧ y ≤ x {\displaystyle x\sim y{\text{ if and only if }}x\leq y\land y\leq x} is an equivalence relation on X , {\displaystyle X,} and ≤ {\displaystyle \leq } induces a wellordering on the quotient X / ∼ . {\displaystyle X/{\sim }.} The order-type of this induced wellordering is an ordinal , referred to as the length of the prewellordering. A norm on a set X {\displaystyle X} is a map from X {\displaystyle X} into the ordinals. Every norm induces a prewellordering; if ϕ : X → O r d {\displaystyle \phi :X\to Ord} is a norm, the associated prewellordering is given by x ≤ y if and only if ϕ ( x ) ≤ ϕ ( y ) {\displaystyle x\leq y{\text{ if and only if }}\phi (x)\leq \phi (y)} Conversely, every prewellordering is induced by a unique regular norm (a norm ϕ : X → O r d {\displaystyle \phi :X\to Ord} is regular if, for any x ∈ X {\displaystyle x\in X} and any α < ϕ ( x ) , {\displaystyle \alpha <\phi (x),} there is y ∈ X {\displaystyle y\in X} such that ϕ ( y ) = α {\displaystyle \phi (y)=\alpha } ). 
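The cardinality example above can be made concrete. Below is a small illustrative Python sketch (not from the article; the function names are ad hoc) that builds the prewellordering S ≤ T iff |S| ≤ |T| on the subsets of a three-element set, checks that it is reflexive, transitive and strongly connected, and recovers the associated regular norm |S|; well-foundedness is automatic here because the ground set is finite.

```python
from itertools import combinations

def finite_subsets(a):
    """All subsets of the finite set a (every subset of a finite set is finite)."""
    a = list(a)
    return [frozenset(c) for r in range(len(a) + 1) for c in combinations(a, r)]

def leq(s, t):
    """The prewellordering: S <= T  iff  |S| <= |T|."""
    return len(s) <= len(t)

X = finite_subsets({"a", "b", "c"})

# Reflexive and transitive (a preorder) ...
assert all(leq(s, s) for s in X)
assert all(leq(s, u) for s in X for t in X for u in X if leq(s, t) and leq(t, u))
# ... and strongly connected: any two points are comparable.
assert all(leq(s, t) or leq(t, s) for s in X for t in X)

# The associated norm sends S to |S|; it induces the same relation,
# and it is regular because every ordinal below |S| is attained by a smaller subset.
norm = {s: len(s) for s in X}
assert all(leq(s, t) == (norm[s] <= norm[t]) for s in X for t in X)

# The strict part s < t strictly decreases the norm, so descending chains terminate.
print(sorted(set(norm.values())))  # the induced wellordering on X/~ has order type 4: [0, 1, 2, 3]
```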
If Γ {\displaystyle {\boldsymbol {\Gamma }}} is a pointclass of subsets of some collection F {\displaystyle {\mathcal {F}}} of Polish spaces , F {\displaystyle {\mathcal {F}}} closed under Cartesian product , and if ≤ {\displaystyle \leq } is a prewellordering of some subset P {\displaystyle P} of some element X {\displaystyle X} of F , {\displaystyle {\mathcal {F}},} then ≤ {\displaystyle \leq } is said to be a Γ {\displaystyle {\boldsymbol {\Gamma }}} - prewellordering of P {\displaystyle P} if the relations < ∗ {\displaystyle <^{*}} and ≤ ∗ {\displaystyle \leq ^{*}} are elements of Γ , {\displaystyle {\boldsymbol {\Gamma }},} where for x , y ∈ X , {\displaystyle x,y\in X,} Γ {\displaystyle {\boldsymbol {\Gamma }}} is said to have the prewellordering property if every set in Γ {\displaystyle {\boldsymbol {\Gamma }}} admits a Γ {\displaystyle {\boldsymbol {\Gamma }}} -prewellordering. The prewellordering property is related to the stronger scale property ; in practice, many pointclasses having the prewellordering property also have the scale property, which allows drawing stronger conclusions. Π 1 1 {\displaystyle {\boldsymbol {\Pi }}_{1}^{1}} and Σ 2 1 {\displaystyle {\boldsymbol {\Sigma }}_{2}^{1}} both have the prewellordering property; this is provable in ZFC alone. Assuming sufficient large cardinals , for every n ∈ ω , {\displaystyle n\in \omega ,} Π 2 n + 1 1 {\displaystyle {\boldsymbol {\Pi }}_{2n+1}^{1}} and Σ 2 n + 2 1 {\displaystyle {\boldsymbol {\Sigma }}_{2n+2}^{1}} have the prewellordering property. If Γ {\displaystyle {\boldsymbol {\Gamma }}} is an adequate pointclass with the prewellordering property, then it also has the reduction property : For any space X ∈ F {\displaystyle X\in {\mathcal {F}}} and any sets A , B ⊆ X , {\displaystyle A,B\subseteq X,} A {\displaystyle A} and B {\displaystyle B} both in Γ , {\displaystyle {\boldsymbol {\Gamma }},} the union A ∪ B {\displaystyle A\cup B} may be partitioned into sets A ∗ , B ∗ , {\displaystyle A^{*},B^{*},} both in Γ , {\displaystyle {\boldsymbol {\Gamma }},} such that A ∗ ⊆ A {\displaystyle A^{*}\subseteq A} and B ∗ ⊆ B . {\displaystyle B^{*}\subseteq B.} If Γ {\displaystyle {\boldsymbol {\Gamma }}} is an adequate pointclass whose dual pointclass has the prewellordering property, then Γ {\displaystyle {\boldsymbol {\Gamma }}} has the separation property : For any space X ∈ F {\displaystyle X\in {\mathcal {F}}} and any sets A , B ⊆ X , {\displaystyle A,B\subseteq X,} A {\displaystyle A} and B {\displaystyle B} disjoint sets both in Γ , {\displaystyle {\boldsymbol {\Gamma }},} there is a set C ⊆ X {\displaystyle C\subseteq X} such that both C {\displaystyle C} and its complement X ∖ C {\displaystyle X\setminus C} are in Γ , {\displaystyle {\boldsymbol {\Gamma }},} with A ⊆ C {\displaystyle A\subseteq C} and B ∩ C = ∅ . {\displaystyle B\cap C=\varnothing .} For example, Π 1 1 {\displaystyle {\boldsymbol {\Pi }}_{1}^{1}} has the prewellordering property, so Σ 1 1 {\displaystyle {\boldsymbol {\Sigma }}_{1}^{1}} has the separation property. This means that if A {\displaystyle A} and B {\displaystyle B} are disjoint analytic subsets of some Polish space X , {\displaystyle X,} then there is a Borel subset C {\displaystyle C} of X {\displaystyle X} such that C {\displaystyle C} includes A {\displaystyle A} and is disjoint from B . {\displaystyle B.}
https://en.wikipedia.org/wiki/Prewellordering
Prey naïveté hypothesis is a theory that suggests that native prey often struggle to recognize or avoid an introduced predator because they lack a coevolutionary history with it. Prey naïveté is believed to intensify the effects of non-native predators, which can contribute significantly to the risks of extinction and endangerment of prey populations. The prey naïveté hypothesis suggests that ineffective antipredator defenses result from a lack of evolutionary exposure to specific predators. [ 1 ] This naiveté towards non-native predators is likely influenced by eco-evolutionary factors such as biogeographic isolation and prey adaptation. [ 2 ] A prey species' ability to detect and evade predators can be shaped by the life history, ecology , and evolutionary context of both predator and prey. While some predator-prey systems display species-specific avoidance behaviors, many taxa require learned olfactory recognition of predators. Certain antipredator behaviors that develop in response to coevolved predators may persist over time, even in their absence, particularly when other predators are present, as suggested by the " multipredator hypothesis ." [ 1 ] For instance, rats introduced to oceanic islands have been implicated in the extinction of many mammals , birds , and reptiles that lack evolutionary experience with generalist mammalian nest predators. However, the negative effects of rats are lessened on islands with native rats or functionally similar land crabs , as the fauna on these islands appear to be less naïve to the threats posed by introduced omnivores . [ 2 ] Prey are generally naïve towards non-native predators in marine and freshwater environments, but not in terrestrial ones. The naïveté was most significant towards non-native predators lacking native relatives in the community. Time since introduction plays a role, with prey naïveté diminishing over generations; approximately 200 generations may be needed for prey to sufficiently develop antipredator behaviors towards these non-native threats. [ 2 ] The occurrence and intensity of prey naiveté are hypothesized to arise from several interrelated factors, categorized into four themes: Prey naiveté was initially conceptualized as a straightforward phenomenon in which native fauna become vulnerable to non-native predators due to naive behavioral responses. It is now understood to be a multifaceted issue, and is classified into four distinct levels: In addition to behavioral inadequacies, prey species lacking evolutionary exposure to non-native predation may possess morphological or physiological traits that render them more susceptible to such threats, including insufficient defensive structures, flightlessness , conspicuous odors , or inadequate camouflage . Although prey naiveté is widely recognized in ecological studies, its variability under the influence of eco-evolutionary factors is not yet fully quantified. [ 2 ] Prey naïveté contributes significantly to the extinction and endangerment of prey species globally, as well as to the failure of wildlife reintroductions . [ 2 ] While excluding novel predators from conservation areas has had mixed results, the absence of any predators can worsen prey naiveté. Reintroducing native predators has been proposed as a potential solution to enhance prey behavioral responses. 
A study published in 2024 assessed the behavioral reactions of two prey species—the burrowing bettong ( Bettongia lesueur ) and spinifex hopping mouse ( Notomys alexis )—to the reintroduction of a native predator, the western quoll ( Dasyurus geoffroii ), and its impact on their responses to feral cats ( Felis catus ). Results indicated that quoll-exposed bettongs engaged in less inattentive foraging compared to controls but did not differentiate between predator and non-predator cues. In contrast, quoll-exposed hopping mice adjusted their foraging behaviors in open areas and increased their wariness in response to quoll stimuli, while cat-exposed hopping mice only heightened their caution in the presence of cat stimuli. Although reintroducing native predators improved general antipredator responses among naïve prey populations, evidence for enhanced discrimination towards introduced predators was limited; the findings nonetheless suggest that exposure to native predators may better prepare naïve prey for environments where novel predators are present. [ 3 ] A 2019 study explored whether exposing predator-naïve prey, specifically the greater bilby ( Macrotis lagotis ), to controlled numbers of introduced predators ( feral cats , Felis catus ) can enhance their survival upon reintroduction. Over two years, bilbies were exposed to feral cats in a fenced area, and their behaviors were assessed. Results showed that predator-exposed bilbies exhibited increased wariness—spending less time moving and more time in cover—compared to naïve bilbies. Following translocation , the predator-exposed group had higher survival rates and was less likely to be predated upon than their naïve counterparts. The study suggests that training naïve prey in the presence of predators may improve their survival in reintroduction efforts. [ 4 ]
https://en.wikipedia.org/wiki/Prey_naiveté
In the theory of evolution and natural selection , the Price equation (also known as Price's equation or Price's theorem ) describes how a trait or allele changes in frequency over time. The equation uses a covariance between a trait and fitness, to give a mathematical description of evolution and natural selection. It provides a way to understand the effects that gene transmission and natural selection have on the frequency of alleles within each new generation of a population. The Price equation was derived by George R. Price , working in London to re-derive W.D. Hamilton 's work on kin selection . Examples of the Price equation have been constructed for various evolutionary cases. The Price equation also has applications in economics . [ 1 ] The Price equation is a mathematical relationship between various statistical descriptors of population dynamics, rather than a physical or biological law, and as such is not subject to experimental verification. In simple terms, it is a mathematical statement of the expression " survival of the fittest ". The Price equation shows that a change in the average amount z {\displaystyle z} of a trait in a population from one generation to the next ( Δ z {\displaystyle \Delta z} ) is determined by the covariance between the amounts z i {\displaystyle z_{i}} of the trait for subpopulation i {\displaystyle i} and the fitnesses w i {\displaystyle w_{i}} of the subpopulations, together with the expected change in the amount of the trait value due to fitness, namely E ( w i Δ z i ) {\displaystyle \mathrm {E} (w_{i}\Delta z_{i})} : Here w {\displaystyle w} is the average fitness over the population, and E {\displaystyle \operatorname {E} } and cov {\displaystyle \operatorname {cov} } represent the population mean and covariance respectively. 'Fitness' w {\displaystyle w} is the ratio of the average number of offspring for the whole population per the number of adult individuals in the population, and w i {\displaystyle w_{i}} is that same ratio only for subpopulation i {\displaystyle i} . If the covariance between fitness ( w i {\displaystyle w_{i}} ) and trait value ( z i {\displaystyle z_{i}} ) is positive, the trait value is expected to rise on average across population i {\displaystyle i} . If the covariance is negative, the characteristic is harmful, and its frequency is expected to drop. The second term, E ( w i Δ z i ) {\displaystyle \mathrm {E} (w_{i}\Delta z_{i})} , represents the portion of Δ z {\displaystyle \Delta z} due to all factors other than direct selection which can affect trait evolution. This term can encompass genetic drift , mutation bias, or meiotic drive . Additionally, this term can encompass the effects of multi-level selection or group selection . Price (1972) referred to this as the "environment change" term, and denoted both terms using partial derivative notation (∂ NS and ∂ EC ). This concept of environment includes interspecies and ecological effects. Price describes this as follows: Fisher adopted the somewhat unusual point of view of regarding dominance and epistasis as being environment effects. 
For example, he writes (1941): ‘A change in the proportion of any pair of genes itself constitutes a change in the environment in which individuals of the species find themselves.’ Hence he regarded the natural selection effect on M as being limited to the additive or linear effects of changes in gene frequencies, while everything else – dominance, epistasis, population pressure, climate, and interactions with other species – he regarded as a matter of the environment. Suppose we are given four equal-length lists of real numbers [ 3 ] n i {\displaystyle n_{i}} , z i {\displaystyle z_{i}} , n i ′ {\displaystyle n_{i}'} , z i ′ {\displaystyle z_{i}'} from which we may define w i = n i ′ / n i {\displaystyle w_{i}=n_{i}'/n_{i}} . n i {\displaystyle n_{i}} and z i {\displaystyle z_{i}} will be called the parent population numbers and characteristics associated with each index i . Likewise n i ′ {\displaystyle n_{i}'} and z i ′ {\displaystyle z_{i}'} will be called the child population numbers and characteristics, and w i ′ {\displaystyle w_{i}'} will be called the fitness associated with index i . (Equivalently, we could have been given n i {\displaystyle n_{i}} , z i {\displaystyle z_{i}} , w i {\displaystyle w_{i}} , z i ′ {\displaystyle z_{i}'} with n i ′ = w i n i {\displaystyle n_{i}'=w_{i}n_{i}} .) Define the parent and child population totals: and the probabilities (or frequencies): [ 4 ] Note that these are of the form of probability mass functions in that ∑ i q i = ∑ i q i ′ = 1 {\displaystyle \sum _{i}q_{i}=\sum _{i}q_{i}'=1} and are in fact the probabilities that a random individual drawn from the parent or child population has a characteristic z i {\displaystyle z_{i}} . Define the fitnesses: The average of any list x i {\displaystyle x_{i}} is given by: so the average characteristics are defined as: and the average fitness is: A simple theorem can be proved: q i w i = ( n i n ) ( n i ′ n i ) = ( n i ′ n ′ ) ( n ′ n ) = q i ′ ( n ′ n ) {\displaystyle q_{i}w_{i}=\left({\frac {n_{i}}{n}}\right)\left({\frac {n_{i}'}{n_{i}}}\right)=\left({\frac {n_{i}'}{n'}}\right)\left({\frac {n'}{n}}\right)=q_{i}'\left({\frac {n'}{n}}\right)} so that: and The covariance of w i {\displaystyle w_{i}} and z i {\displaystyle z_{i}} is defined by: Defining Δ z i = d e f z i ′ − z i {\displaystyle \Delta z_{i}\;{\stackrel {\mathrm {def} }{=}}\;z_{i}'-z_{i}} , the expectation value of w i Δ z i {\displaystyle w_{i}\Delta z_{i}} is The sum of the two terms is: Using the above mentioned simple theorem, the sum becomes where Δ z = d e f z ′ − z {\displaystyle \Delta z\;{\stackrel {\mathrm {def} }{=}}\;z'-z} . Consider a set of groups with i = 1 , . . . , n {\displaystyle i=1,...,n} that are characterized by a particular trait, denoted by x i {\displaystyle x_{i}} . The number n i {\displaystyle n_{i}} of individuals belonging to group i {\displaystyle i} experiences exponential growth: d n i d t = f i n i {\displaystyle {dn_{i} \over {dt}}=f_{i}n_{i}} where f i {\displaystyle f_{i}} corresponds to the fitness of the group. 
We want to derive an equation describing the time-evolution of the expected value of the trait: E ( x ) = ∑ i p i x i ≡ μ , p i = n i ∑ i n i {\displaystyle \mathbb {E} (x)=\sum _{i}p_{i}x_{i}\equiv \mu ,\quad p_{i}={n_{i} \over {\sum _{i}n_{i}}}} Based on the chain rule , we may derive an ordinary differential equation : d μ d t = ∑ i ∂ μ ∂ p i d p i d t + ∑ i ∂ μ ∂ x i d x i d t = ∑ i x i d p i d t + ∑ i p i d x i d t = ∑ i x i d p i d t + E ( d x d t ) {\displaystyle {\begin{aligned}{d\mu \over {dt}}&=\sum _{i}{\partial \mu \over {\partial p_{i}}}{dp_{i} \over {dt}}+\sum _{i}{\partial \mu \over {\partial x_{i}}}{dx_{i} \over {dt}}\\&=\sum _{i}x_{i}{dp_{i} \over {dt}}+\sum _{i}p_{i}{dx_{i} \over {dt}}\\&=\sum _{i}x_{i}{dp_{i} \over {dt}}+\mathbb {E} \left({dx \over {dt}}\right)\end{aligned}}} A further application of the chain rule for d p i / d t {\displaystyle dp_{i}/dt} gives us: d p i d t = ∑ j ∂ p i ∂ n j d n j d t , ∂ p i ∂ n j = { − p i / N , i ≠ j ( 1 − p i ) / N , i = j {\displaystyle {dp_{i} \over {dt}}=\sum _{j}{\partial p_{i} \over {\partial n_{j}}}{dn_{j} \over {dt}},\quad {\partial p_{i} \over {\partial n_{j}}}={\begin{cases}-p_{i}/N,\quad &i\neq j\\(1-p_{i})/N,\quad &i=j\end{cases}}} Summing up the components gives us that: d p i d t = p i ( f i − ∑ j p j f j ) = p i [ f i − E ( f ) ] {\displaystyle {\begin{aligned}{dp_{i} \over {dt}}&=p_{i}\left(f_{i}-\sum _{j}p_{j}f_{j}\right)\\&=p_{i}\left[f_{i}-\mathbb {E} (f)\right]\end{aligned}}} which is also known as the replicator equation . Now, note that: ∑ i x i d p i d t = ∑ i p i x i [ f i − E ( f ) ] = E { x i [ f i − E ( f ) ] } = Cov ( x , f ) {\displaystyle {\begin{aligned}\sum _{i}x_{i}{dp_{i} \over {dt}}&=\sum _{i}p_{i}x_{i}\left[f_{i}-\mathbb {E} (f)\right]\\&=\mathbb {E} \left\{x_{i}\left[f_{i}-\mathbb {E} (f)\right]\right\}\\&={\text{Cov}}(x,f)\end{aligned}}} Therefore, putting all of these components together, we arrive at the continuous-time Price equation: d d t E ( x ) = Cov ( x , f ) ⏟ Selection effect + E ( x ˙ ) ⏟ Dynamic effect {\displaystyle {d \over {dt}}\mathbb {E} (x)=\underbrace {{\text{Cov}}(x,f)} _{\text{Selection effect}}+\underbrace {\mathbb {E} ({\dot {x}})} _{\text{Dynamic effect}}} When the characteristic values z i {\displaystyle z_{i}} do not change from the parent to the child generation, the second term in the Price equation becomes zero resulting in a simplified version of the Price equation: which can be restated as: where v i {\displaystyle v_{i}} is the fractional fitness: v i = w i / w {\displaystyle v_{i}=w_{i}/w} . This simple Price equation can be proven using the definition in Equation (2) above. It makes this fundamental statement about evolution: "If a certain inheritable characteristic is correlated with an increase in fractional fitness, the average value of that characteristic in the child population will be increased over that in the parent population." The Price equation can describe any system that changes over time, but is most often applied in evolutionary biology. The evolution of sight provides an example of simple directional selection. The evolution of sickle cell anemia shows how a heterozygote advantage can affect trait evolution. The Price equation can also be applied to population context dependent traits such as the evolution of sex ratios. Additionally, the Price equation is flexible enough to model second order traits such as the evolution of mutability. 
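As a quick numerical illustration of the discrete form derived above, the following Python sketch (not part of the original article; the subpopulation numbers and trait values are invented) computes both sides of the full Price equation, w̄ Δz̄ = cov(w, z) + E(w Δz), with means and covariances taken over the parent frequencies, and checks that they agree.

```python
import numpy as np

# Parent subpopulation sizes and trait values, and the corresponding
# child sizes and trait values (made-up numbers for illustration).
n  = np.array([10.0, 20.0, 30.0])   # n_i
z  = np.array([1.0, 2.0, 3.0])      # z_i
n2 = np.array([30.0, 20.0, 10.0])   # n_i'
z2 = np.array([1.1, 2.0, 2.8])      # z_i'

q  = n / n.sum()            # parent frequencies q_i
q2 = n2 / n2.sum()          # child frequencies q_i'
w  = n2 / n                 # fitness w_i = n_i' / n_i
w_bar = np.dot(q, w)        # average fitness (= n'/n)

z_bar  = np.dot(q, z)       # average parent trait
z2_bar = np.dot(q2, z2)     # average child trait
delta_z_bar = z2_bar - z_bar

cov_wz = np.dot(q, w * z) - w_bar * z_bar   # cov(w_i, z_i) over parent frequencies
e_w_dz = np.dot(q, w * (z2 - z))            # E(w_i * delta z_i)

# Full Price equation:  w_bar * delta(z_bar) = cov(w, z) + E(w * delta z)
lhs = w_bar * delta_z_bar
rhs = cov_wz + e_w_dz
print(lhs, rhs)             # both sides give the same number
assert np.isclose(lhs, rhs)
```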
The Price equation also provides an extension to Founder effect which shows change in population traits in different settlements Sometimes the genetic model being used encodes enough information into the parameters used by the Price equation to allow the calculation of the parameters for all subsequent generations. This property is referred to as dynamical sufficiency. For simplicity, the following looks at dynamical sufficiency for the simple Price equation, but is also valid for the full Price equation. Referring to the definition in Equation (2), the simple Price equation for the character z {\displaystyle z} can be written: For the second generation: The simple Price equation for z {\displaystyle z} only gives us the value of z ′ {\displaystyle z'} for the first generation, but does not give us the value of w ′ {\displaystyle w'} and ⟨ w i z i ⟩ {\displaystyle \langle w_{i}z_{i}\rangle } , which are needed to calculate z ″ {\displaystyle z''} for the second generation. The variables w i {\displaystyle w_{i}} and ⟨ w i z i ⟩ {\displaystyle \langle w_{i}z_{i}\rangle } can both be thought of as characteristics of the first generation, so the Price equation can be used to calculate them as well: The five 0-generation variables w {\displaystyle w} , z {\displaystyle z} , ⟨ w i z i ⟩ {\displaystyle \langle w_{i}z_{i}\rangle } , ⟨ w i 2 ⟩ {\displaystyle \langle w_{i}^{2}\rangle } , and ⟨ w i 2 z i {\displaystyle \langle w_{i}^{2}z_{i}} must be known before proceeding to calculate the three first generation variables w ′ {\displaystyle w'} , z ′ {\displaystyle z'} , and ⟨ w i ′ z i ′ ⟩ {\displaystyle \langle w'_{i}z'_{i}\rangle } , which are needed to calculate z ″ {\displaystyle z''} for the second generation. It can be seen that in general the Price equation cannot be used to propagate forward in time unless there is a way of calculating the higher moments ⟨ w i n ⟩ {\displaystyle \langle w_{i}^{n}\rangle } and ⟨ w i n z i ⟩ {\displaystyle \langle w_{i}^{n}z_{i}\rangle } from the lower moments in a way that is independent of the generation. Dynamical sufficiency means that such equations can be found in the genetic model, allowing the Price equation to be used alone as a propagator of the dynamics of the model forward in time. The simple Price equation was based on the assumption that the characters z i {\displaystyle z_{i}} do not change over one generation. If it is assumed that they do change, with z i {\displaystyle z_{i}} being the value of the character in the child population, then the full Price equation must be used. A change in character can come about in a number of ways. The following two examples illustrate two such possibilities, each of which introduces new insight into the Price equation. We focus on the idea of the fitness of the genotype. The index i {\displaystyle i} indicates the genotype and the number of type i {\displaystyle i} genotypes in the child population is: which gives fitness: Since the individual mutability z i {\displaystyle z_{i}} does not change, the average mutabilities will be: with these definitions, the simple Price equation now applies. In this case we want to look at the idea that fitness is measured by the number of children an organism has, regardless of their genotype. Note that we now have two methods of grouping, by lineage, and by genotype. It is this complication that will introduce the need for the full Price equation. 
The number of children an i {\displaystyle i} -type organism has is: which gives fitness: We now have characters in the child population which are the average character of the i {\displaystyle i} -th parent. with global characters: with these definitions, the full Price equation now applies. The use of the change in average characteristic ( z ′ − z {\displaystyle z'-z} ) per generation as a measure of evolutionary progress is not always appropriate. There may be cases where the average remains unchanged (and the covariance between fitness and characteristic is zero) while evolution is nevertheless in progress. For example, if we have z i = ( 1 , 2 , 3 ) {\displaystyle z_{i}=(1,2,3)} , n i = ( 1 , 1 , 1 ) {\displaystyle n_{i}=(1,1,1)} , and w i = ( 1 , 4 , 1 ) {\displaystyle w_{i}=(1,4,1)} , then for the child population, n i ′ = ( 1 , 4 , 1 ) {\displaystyle n_{i}'=(1,4,1)} showing that the peak fitness at w 2 = 4 {\displaystyle w_{2}=4} is in fact fractionally increasing the population of individuals with z i = 2 {\displaystyle z_{i}=2} . However, the average characteristics are z=2 and z'=2 so that Δ z = 0 {\displaystyle \Delta z=0} . The covariance c o v ( z i , w i ) {\displaystyle \mathrm {cov} (z_{i},w_{i})} is also zero. The simple Price equation is required here, and it yields 0=0 . In other words, it yields no information regarding the progress of evolution in this system. A critical discussion of the use of the Price equation can be found in van Veelen (2005), [ 5 ] van Veelen et al . (2012), [ 6 ] and van Veelen (2020). [ 7 ] Frank (2012) discusses the criticism in van Veelen et al . (2012). [ 8 ] Price's equation features in the plot and title of the 2008 thriller film WΔZ . The Price equation also features in posters in the computer game BioShock 2 , in which a consumer of a "Brain Boost" tonic is seen deriving the Price equation while simultaneously reading a book. The game is set in the 1950s, substantially before Price's work.
https://en.wikipedia.org/wiki/Price_equation
This is a list of prices of chemical elements . Listed here are mainly average market prices for bulk trade of commodities. Data on elements' abundance in Earth's crust is added for comparison. As of 2020, the most expensive non- synthetic element by both mass and volume is rhodium . It is followed by caesium , iridium and palladium by mass and iridium, gold and platinum by volume . Carbon in the form of diamond can be more expensive than rhodium. Per-kilogram prices of some synthetic radioisotopes range to trillions of dollars. While the difficulty of obtaining macroscopic samples of synthetic elements in part explains their high value, there has been interest in converting base metals to gold ( Chrysopoeia ) since ancient times, but only deeper understanding of nuclear physics has allowed the actual production of a tiny amount of gold from other elements for research purposes, as demonstrated by Glenn Seaborg . [ 1 ] [ 2 ] However, both this and other routes of synthesis of precious metals via nuclear reactions are orders of magnitude removed from economic viability. Chlorine , sulfur and carbon (as coal) are cheapest by mass. Hydrogen , nitrogen , oxygen and chlorine are cheapest by volume at atmospheric pressure. When there is no public data on the element in its pure form, the price of a compound is used, per mass of element contained. This implicitly puts the value of compounds' other constituents, and the cost of extraction of the element, at zero. For elements whose radiological properties are important, individual isotopes and isomers are listed. The price listing for radioisotopes is not exhaustive.
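To illustrate the compound-price convention described above, here is a small Python sketch (the compound and the price are chosen only as placeholders, not taken from the list) that derives a per-kilogram element price from a compound price by dividing by the mass fraction of the element in the compound.

```python
# Example: estimating a nitrogen price from a urea price.
# The price below is an illustrative placeholder, not a quoted market price.

# Approximate molar masses in g/mol.
M_UREA = 60.06           # CO(NH2)2
M_N = 14.007
N_ATOMS_PER_UREA = 2

mass_fraction_N = N_ATOMS_PER_UREA * M_N / M_UREA   # ~0.466

urea_price_per_kg = 0.30   # assumed compound price in $/kg (placeholder)

# Price per kg of contained element, treating the other constituents
# and the cost of extraction as worth zero, as the list's convention does.
nitrogen_price_per_kg = urea_price_per_kg / mass_fraction_N
print(f"mass fraction of N in urea: {mass_fraction_N:.3f}")
print(f"implied nitrogen price: ${nitrogen_price_per_kg:.2f}/kg")
```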
https://en.wikipedia.org/wiki/Prices_of_chemical_elements
In economics , engineering , business management and marketing , the price–performance ratio , often written as cost–performance , cost–benefit or capability/price ( C/P ), refers to a product's ability to deliver performance, of any sort, for its price. Generally speaking, products with a lower price/performance ratio are more desirable on the demand curve , excluding other factors. Even though the term would seem to denote a straightforward ratio, when price–performance is said to be improved, better, or increased, it actually refers to the performance divided by the price, in other words exactly the opposite (inverse) ratio. According to futurist Raymond Kurzweil , products start out as highly ineffective and highly expensive. [ 1 ] Gradually, products become more effective and cheaper until they are highly effective and almost free to buy. [ 1 ] Some of the products that have followed this example include AIDS medications (which are now affordable compared to initial pricing), text-to-speech programs, and digital cameras . [ 1 ] However, products that rely primarily on paper (e.g., newspapers and toilet paper) and/or fossil fuels (e.g., electricity in most countries and petroleum gasoline for automobiles) have only increased in price. This contrasts with the trend for electronic gadgets like netbooks , desktop computers , and laptop computers , which have been decreasing in price. However, the prevailing inflation rate of a country or province/state may, along with certain governmental policies, negate the plummeting costs of software, AIDS medications, and/or digital cameras in certain regions. This has the effect of keeping costs high in certain areas while they are dramatically reduced in others. In theory, this means that wealthy people have earlier access to highly inefficient technologies, medical treatments, and therapies (that are prototypical in nature) while the poor get access to these same products when they become more efficient and easier to manufacture several years down the road. [ 1 ] During the latter 1990s, the cost–performance ratios of the larger mainframe systems fell tremendously in comparison to a number of smaller microcomputers handling the same load. As a result, many of the older computer companies were shut down and people were put out of work . However, most of them were re-hired at newer corporations after retraining in the newer technologies. In the business world, there is usually a value associated with a typical cost–performance ratio analysis. This value can be positive, neutral, or negative depending on the amount of money spent versus the results achieved by the spending of the available capital . A cost-performance ratio with a positive value (i.e. greater than 1) indicates that costs are running under budget. [ 2 ] A negative value (i.e. less than 1) indicates that costs are running over budget. [ 2 ] However, a neutral cost-performance ratio (between 1.0 and 1.9) could suggest a certain degree of stagnation in the budget. Business trips can also be factored into the cost–performance ratio: spending $50 to make a journey spanning 100 miles (160 km) in two hours is a better cost–performance ratio than spending $105 to make the journey in one hour. The term tends to be used quite a bit when comparing computer hardware.
During the latter 1990s, the price–performance ratios of midrange and large mainframe systems fell tremendously in comparison to a number of smaller microcomputers handling the same load. Many companies were forced out of the industry as this happened, including DEC , Data General and many multiprocessor vendors such as Sequent Computer Systems and Pyramid Technology .
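The budget-oriented reading of the ratio described above can be made concrete with a short sketch. The Python example below assumes the ratio is computed as the value of work performed divided by the actual cost (the cost performance index of earned-value management); the threshold interpretation follows the article's description, and the figures are invented.

```python
def cost_performance_index(earned_value: float, actual_cost: float) -> float:
    """Ratio of the value of work performed to the money actually spent."""
    return earned_value / actual_cost

def interpret(cpi: float) -> str:
    # Following the interpretation given above: >1 under budget, <1 over budget.
    if cpi > 1.0:
        return "costs are running under budget"
    if cpi < 1.0:
        return "costs are running over budget"
    return "costs are exactly on budget"

# Invented example: $90,000 worth of planned work delivered for $100,000 spent.
cpi = cost_performance_index(earned_value=90_000, actual_cost=100_000)
print(f"CPI = {cpi:.2f}: {interpret(cpi)}")   # CPI = 0.90: costs are running over budget
```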
https://en.wikipedia.org/wiki/Price–performance_ratio
Prickle is also known as REST/NRSF-interacting LIM domain protein, which is a putative nuclear translocation receptor. [ 1 ] Prickle is part of the non-canonical Wnt signaling pathway that establishes planar cell polarity. [ 2 ] A gain or loss of function of Prickle1 causes defects in the convergent extension movements of gastrulation. [ 3 ] In epithelial cells, Prickle2 establishes and maintains cell apical/basal polarity. [ 4 ] Prickle1 plays an important role in the development of the nervous system by regulating the movement of nerve cells. [ 5 ] The first prickle protein was identified in Drosophila as a planar cell polarity protein. Vertebrate prickle-1 was first found as a rat protein that binds to a transcription factor , neuron-restrictive silencer factor (NRSF). It was then recognized that other vertebrates including mice and humans have two genes that are related to Drosophila prickle. [ 6 ] Mouse prickle-2 was found to be expressed in mature neurons of the brain along with mouse homologs of Drosophila planar polarity genes flamingo and dishevelled. [ 7 ] Prickle interacts with flamingo to regulate sensory axon advance at the transition between the peripheral nervous system and the central nervous system. [ 8 ] Also, Prickle1 interacts with RE1-silencing transcription factor (REST) by transporting REST out of the nucleus. [ 1 ] REST turns off several critical genes in neurons by binding to particular regions of DNA in the nucleus. [ 1 ] Prickle is recruited to the cell surface membrane by strabismus , another planar cell polarity protein. [ 9 ] In the developing Drosophila wing, prickle becomes concentrated at the proximal side of cells. [ 9 ] Prickle can compete with the ankyrin-repeat protein Diego for a binding site on Dishevelled. [ 10 ] In Drosophila , prickle is present inside cells in multiple forms due to alternative splicing of the prickle mRNA. [ 11 ] The relative levels of the alternate forms may be regulated and involved in the normal control of planar cell polarity. [ 11 ] Mutations in Prickle genes can cause epilepsy in humans by perturbing Prickle function. [ 12 ] One mutation in the Prickle1 gene can result in Prickle1-Related Progressive Myoclonus Epilepsy-Ataxia Syndrome. [ 2 ] This mutation disrupts the interaction between prickle-like 1 and REST, which results in the inability to suppress REST. [ 2 ] Gene knockdown of Prickle1 by shRNA or dominant-negative constructs results in decreased axonal and dendritic extension in neurons in the hippocampus. [ 5 ] Prickle1 gene knockdown in neonatal retina causes defects in axon terminals of photoreceptors and in inner and outer segments. [ 5 ]
https://en.wikipedia.org/wiki/Prickle_(protein)
In mathematics , a Priestley space is an ordered topological space with special properties. Priestley spaces are named after Hilary Priestley who introduced and investigated them. [ 1 ] Priestley spaces play a fundamental role in the study of distributive lattices . In particular, there is a duality (" Priestley duality " [ 2 ] ) between the category of Priestley spaces and the category of bounded distributive lattices. [ 3 ] [ 4 ] A Priestley space is an ordered topological space ( X , τ ,≤) , i.e. a set X equipped with a partial order ≤ and a topology τ , satisfying the following two conditions: It follows that for each Priestley space ( X , τ ,≤) , the topological space ( X , τ ) is a Stone space ; that is, it is a compact Hausdorff zero-dimensional space. Some further useful properties of Priestley spaces are listed below. Let ( X , τ ,≤) be a Priestley space. A Priestley morphism from a Priestley space ( X , τ ,≤) to another Priestley space ( X ′, τ ′,≤′) is a map f  : X → X ′ which is continuous and order-preserving . Let Pries denote the category of Priestley spaces and Priestley morphisms. Priestley spaces are closely related to spectral spaces . For a Priestley space ( X , τ ,≤) , let τ u denote the collection of all open up-sets of X . Similarly, let τ d denote the collection of all open down-sets of X . Theorem: [ 5 ] If ( X , τ ,≤) is a Priestley space, then both ( X , τ u ) and ( X , τ d ) are spectral spaces. Conversely, given a spectral space ( X , τ ) , let τ # denote the patch topology on X ; that is, the topology generated by the subbasis consisting of compact open subsets of ( X , τ ) and their complements . Let also ≤ denote the specialization order of ( X , τ ) . Theorem: [ 6 ] If ( X , τ ) is a spectral space, then ( X , τ # ,≤) is a Priestley space. In fact, this correspondence between Priestley spaces and spectral spaces is functorial and yields an isomorphism between Pries and the category Spec of spectral spaces and spectral maps . Priestley spaces are also closely related to bitopological spaces . Theorem: [ 7 ] If ( X , τ ,≤) is a Priestley space, then ( X , τ u , τ d ) is a pairwise Stone space . Conversely, if ( X , τ 1 , τ 2 ) is a pairwise Stone space, then ( X , τ ,≤) is a Priestley space, where τ is the join of τ 1 and τ 2 and ≤ is the specialization order of ( X , τ 1 ) . The correspondence between Priestley spaces and pairwise Stone spaces is functorial and yields an isomorphism between the category Pries of Priestley spaces and Priestley morphisms and the category PStone of pairwise Stone spaces and bi-continuous maps . Thus, one has the following isomorphisms of categories: S p e c ≅ P r i e s ≅ P S t o n e {\displaystyle \mathbf {Spec} \cong \mathbf {Pries} \cong \mathbf {PStone} } One of the main consequences of the duality theory for distributive lattices is that each of these categories is dually equivalent to the category of bounded distributive lattices .
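In the finite case the duality with bounded distributive lattices is easy to see directly: a finite poset with the discrete topology is a Priestley space, and its clopen up-sets (here, simply all up-sets) form a bounded distributive lattice under union and intersection. The following Python sketch, not taken from the article and using an arbitrarily chosen small poset, enumerates those up-sets and checks the lattice properties.

```python
from itertools import combinations

# A small poset on {a, b, c, d}: a <= c, a <= d, b <= d (plus reflexivity).
points = {"a", "b", "c", "d"}
leq = {(x, x) for x in points} | {("a", "c"), ("a", "d"), ("b", "d")}

def is_up_set(s):
    """s is an up-set if x in s and x <= y together imply y in s."""
    return all(y in s for x in s for (u, y) in leq if u == x)

def subsets(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

up_sets = {s for s in subsets(points) if is_up_set(s)}

# The clopen up-sets of a finite (discrete) Priestley space form a
# bounded distributive lattice: closed under union and intersection,
# with least element {} and greatest element the whole space.
assert frozenset() in up_sets and frozenset(points) in up_sets
assert all(frozenset(s | t) in up_sets for s in up_sets for t in up_sets)
assert all(frozenset(s & t) in up_sets for s in up_sets for t in up_sets)
# Distributivity is inherited from union and intersection of sets.
print(f"{len(up_sets)} clopen up-sets, so the dual bounded distributive lattice has {len(up_sets)} elements")
```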
https://en.wikipedia.org/wiki/Priestley_space
Prigogine's theorem is a theorem of non-equilibrium thermodynamics , originally formulated by Ilya Prigogine . The formulation of Prigogine's theorem is: In a stationary state, the production of entropy inside a thermodynamic system with constant external parameters is minimal and constant. If the system is not in a stationary state, then it will change until the entropy production rate, or, in other words, the dissipative function of the system, takes the smallest value. According to this theorem, the stationary state of a linear non-equilibrium system (under conditions that prevent the achievement of an equilibrium state) corresponds to the minimum entropy production. [ 1 ] If there are no such obstacles, then the production of entropy reaches its absolute minimum: zero. A linear system is one in which linear phenomenological relationships hold between thermodynamic flows and driving forces. The coefficients of proportionality in the relationships between flows and driving forces are called phenomenological coefficients. The theorem was proved by Prigogine in 1947 from the Onsager relations . [ 2 ] Prigogine's theorem is valid if the kinetic coefficients in the Onsager relations are constant (do not depend on driving forces and flows); for real systems, it is valid only approximately, so the minimum entropy production for a stationary state is not such a general principle as the maximum entropy for an equilibrium state. It has been experimentally established that Onsager's linear relations are valid in a fairly wide range of parameters for heat conduction and diffusion processes (for example, Fourier's law , Fick's law ). For chemical reactions, the linear assumption is valid in a narrow region near the state of chemical equilibrium . The principle is also violated for systems odd with respect to time reversal.
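A minimal symbolic illustration of the theorem in the linear regime: take two coupled flows J_i = Σ_j L_ij X_j with symmetric, constant Onsager coefficients, hold the force X_1 fixed (the constraint that keeps the system away from equilibrium), and minimize the entropy production σ = Σ_i J_i X_i over the free force X_2. The Python sketch below, using SymPy and invented coefficient values (chosen so that σ is positive definite), shows that the minimum is reached exactly where the conjugate flow J_2 vanishes, i.e. in the stationary state. It is an illustration of the textbook two-force case, not a general proof.

```python
import sympy as sp

X1, X2 = sp.symbols("X1 X2", real=True)
L11, L12, L22 = sp.symbols("L11 L12 L22", positive=True)

# Linear phenomenological relations with Onsager symmetry L21 = L12.
J1 = L11 * X1 + L12 * X2
J2 = L12 * X1 + L22 * X2

# Entropy production (dissipative function) in the linear regime.
sigma = sp.expand(J1 * X1 + J2 * X2)

# Minimize sigma over the unconstrained force X2 (X1 is held fixed).
X2_min = sp.solve(sp.diff(sigma, X2), X2)[0]      # -> -L12*X1/L22

# At the minimum, the flow conjugate to the free force vanishes:
# this is precisely the stationary-state condition.
print(sp.simplify(J2.subs(X2, X2_min)))           # prints 0

# Numerical sanity check with invented coefficients (L11*L22 > L12**2).
vals = {L11: 2.0, L12: 0.5, L22: 1.0, X1: 3.0}
print(float(X2_min.subs(vals)), float(sigma.subs(X2, X2_min).subs(vals)))
```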
https://en.wikipedia.org/wiki/Prigogine's_theorem
The Prilezhaev reaction , also known as the Prileschajew reaction or Prilezhaev epoxidation , is the chemical reaction of an alkene with a peroxy acid to form epoxides . [ 1 ] It is named after Nikolai Prilezhaev , who first reported this reaction in 1909. [ 2 ] A widely used peroxy acid for this reaction is meta -chloroperoxybenzoic acid ( m -CPBA), due to its stability and good solubility in most organic solvents. [ 1 ] [ 3 ] The reaction is performed in inert solvents ( C 6 H 14 , C 6 H 6 , CH 2 Cl 2 , CHCl 3 , CCl 4 ) between −10 and 60 °C with yields of 60–80%. An illustrative example is the epoxidation of trans -2-butene with m -CPBA to give trans -2,3-epoxybutane : [ 4 ] The oxygen atom that adds across the double bond of the alkene is taken from the peroxy acid, generating a molecule of the corresponding carboxylic acid as a byproduct. The reaction is highly stereospecific in the sense that the double bond stereochemistry is generally transferred to the relative configuration of the epoxide with essentially perfect fidelity, so that a trans -olefin leads to the stereoselective formation of the trans -2,3-substituted epoxide only, as illustrated by the example above, while a cis -olefin would only give the cis -epoxide. This stereochemical outcome is a consequence of the accepted mechanism, discussed below. In general, the Prilezhaev reaction epoxidizes the most substituted double bond. [ 1 ] The reaction proceeds through what is commonly known as the "butterfly mechanism", first proposed by Bartlett, wherein the peracid is intramolecularly hydrogen-bonded at the transition state. [ 5 ] Although there are frontier orbital interactions in both directions, the peracid is generally viewed as the electrophile and the alkene as the nucleophile . In support of this notion, more electron-rich alkenes undergo epoxidation at a faster rate. For example, the relative rates of epoxidation increase upon methyl substitution of the alkene (the methyl groups increase the electron density of the double bond by hyperconjugation ): ethylene (1, no methyl groups), propene (24, one methyl group), cis -2-butene (500, two methyl groups), 2-methyl-2-butene (6500, three methyl groups), 2,3-dimethyl-2-butene (>6500, four methyl groups). The reaction is believed to be concerted, with a transition state that is synchronous or nearly so. [ 6 ] The "butterfly mechanism" takes place via a transition state geometry in which the plane of the peracid bisects that of the alkene, with the O–O bond aligned perpendicular to it. This conformation allows the key frontier orbital interactions to occur. The primary interaction is between the occupied π C=C orbital (HOMO) of the alkene and the low-lying unoccupied σ* O–O orbital (LUMO) of the peracid. This interaction accounts for the observed overall nucleophilic character and electrophilic character of the alkene and peracid, respectively. There is also a secondary interaction between a lone pair orbital perpendicular to the plane of the peracid, n O (p) (HOMO), and the unoccupied π* C=C orbital (LUMO). [ 7 ] [ 8 ] Using the approach of Anslyn and Dougherty (2006, p. 556), the mechanism can be represented as follows: [ 9 ] There is a very large dependence of the reaction rate on the choice of solvent. [ 10 ]
https://en.wikipedia.org/wiki/Prilezhaev_reaction
A prill is a small aggregate or globule of a material, most often a dry sphere, formed from a melted liquid through spray crystallization. [ 1 ] Prilled is a term used in mining and manufacturing to refer to a product that has been pelletized . ANFO explosive typically comprises ammonium nitrate prills mixed with #2 fuel oil . [ 2 ] The pellets are a neater, simpler form for handling, with reduced dust. The material to be prilled must be in a solid state at room temperature and a low-viscosity liquid when melted. Prills are formed by allowing drops of the melted prill substance to congeal or freeze in mid-air after being dripped from the top of a tall prilling tower . Certain agrochemicals such as urea are often supplied in prilled form. Fertilizers (ammonium nitrate, urea, NPK fertilizer ) and some detergent powders are commonly manufactured as prills. [ 3 ] However prilling of ammonium nitrate and urea has in recent years been replaced by fluid bed granulation as this gives strong and more abrasion -resistant granules. [ 1 ] Melted material may also be atomized and then allowed to form smaller prills that are useful in cosmetics , food , and animal feed .
https://en.wikipedia.org/wiki/Prill
In computer science , Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for a weighted undirected graph . This means it finds a subset of the edges that forms a tree that includes every vertex , where the total weight of all the edges in the tree is minimized. The algorithm operates by building this tree one vertex at a time, from an arbitrary starting vertex, at each step adding the cheapest possible connection from the tree to another vertex. The algorithm was developed in 1930 by Czech mathematician Vojtěch Jarník [ 1 ] and later rediscovered and republished by computer scientists Robert C. Prim in 1957 [ 2 ] and Edsger W. Dijkstra in 1959. [ 3 ] Therefore, it is also sometimes called Jarník's algorithm , [ 4 ] the Prim–Jarník algorithm , [ 5 ] the Prim–Dijkstra algorithm [ 6 ] or the DJP algorithm . [ 7 ] Other well-known algorithms for this problem include Kruskal's algorithm and Borůvka's algorithm . [ 8 ] These algorithms find the minimum spanning forest in a possibly disconnected graph; in contrast, the most basic form of Prim's algorithm only finds minimum spanning trees in connected graphs. However, by running Prim's algorithm separately for each connected component of the graph, it can also be used to find the minimum spanning forest. [ 9 ] In terms of their asymptotic time complexity , these three algorithms are equally fast for sparse graphs , but slower than other more sophisticated algorithms. [ 7 ] [ 6 ] However, for graphs that are sufficiently dense, Prim's algorithm can be made to run in linear time , meeting or improving the time bounds for other algorithms. [ 10 ] The algorithm may informally be described as performing the following steps: In more detail, it may be implemented following the pseudocode below. As described above, the starting vertex for the algorithm will be chosen arbitrarily, because the first iteration of the main loop of the algorithm will have a set of vertices in Q that all have equal weights, and the algorithm will automatically start a new tree in F when it completes a spanning tree of each connected component of the input graph. The algorithm may be modified to start with any particular vertex s by setting C [ s ] to be a number smaller than the other values of C (for instance, zero), and it may be modified to only find a single spanning tree rather than an entire spanning forest (matching more closely the informal description) by stopping whenever it encounters another vertex flagged as having no associated edge. Different variations of the algorithm differ from each other in how the set Q is implemented: as a simple linked list or array of vertices, or as a more complicated priority queue data structure. This choice leads to differences in the time complexity of the algorithm. In general, a priority queue will be quicker at finding the vertex v with minimum cost, but will entail more expensive updates when the value of C [ w ] changes. The time complexity of Prim's algorithm depends on the data structures used for the graph and for ordering the edges by weight, which can be done using a priority queue . The following table shows the typical choices: A simple implementation of Prim's, using an adjacency matrix or an adjacency list graph representation and linearly searching an array of weights to find the minimum weight edge to add, requires O (|V| 2 ) running time. However, this running time can be greatly improved by using heaps to implement finding minimum weight edges in the algorithm's inner loop.
A first improved version uses a heap to store all edges of the input graph, ordered by their weight. This leads to an O(|E| log |E|) worst-case running time. But storing vertices instead of edges can improve it still further. The heap should order the vertices by the smallest edge-weight that connects them to any vertex in the partially constructed minimum spanning tree (MST) (or infinity if no such edge exists). Every time a vertex v is chosen and added to the MST, a decrease-key operation is performed on all vertices w outside the partial MST such that v is connected to w, setting the key to the minimum of its previous value and the edge cost of (v, w). Using a simple binary heap data structure, Prim's algorithm can now be shown to run in time O(|E| log |V|) where |E| is the number of edges and |V| is the number of vertices. Using a more sophisticated Fibonacci heap, this can be brought down to O(|E| + |V| log |V|), which is asymptotically faster when the graph is dense enough that |E| is ω(|V|), and linear time when |E| is at least |V| log |V|. For graphs of even greater density (having at least |V|^c edges for some c > 1), Prim's algorithm can be made to run in linear time even more simply, by using a d-ary heap in place of a Fibonacci heap. [ 10 ] [ 11 ]

Let P be a connected, weighted graph. At every iteration of Prim's algorithm, an edge must be found that connects a vertex in a subgraph to a vertex outside the subgraph. Since P is connected, there will always be a path to every vertex. The output Y of Prim's algorithm is a tree, because the edge and vertex added to tree Y are connected. Let Y₁ be a minimum spanning tree of graph P. If Y₁ = Y then Y is a minimum spanning tree. Otherwise, let e be the first edge added during the construction of tree Y that is not in tree Y₁, and V be the set of vertices connected by the edges added before edge e. Then one endpoint of edge e is in set V and the other is not. Since tree Y₁ is a spanning tree of graph P, there is a path in tree Y₁ joining the two endpoints. As one travels along the path, one must encounter an edge f joining a vertex in set V to one that is not in set V. Now, at the iteration when edge e was added to tree Y, edge f could also have been added, and it would have been added instead of edge e if its weight had been less than that of e; since edge f was not added, we conclude that w(f) ≥ w(e). Let tree Y₂ be the graph obtained by removing edge f from, and adding edge e to, tree Y₁. It is easy to show that tree Y₂ is connected, has the same number of edges as tree Y₁, and the total weight of its edges is not larger than that of tree Y₁; therefore it is also a minimum spanning tree of graph P, and it contains edge e and all the edges added before it during the construction of set V. Repeating the steps above, we will eventually obtain a minimum spanning tree of graph P that is identical to tree Y. This shows Y is a minimum spanning tree. The minimum spanning tree allows for the first subset of the sub-region to be expanded into a larger subset X, which we assume to be the minimum.

The main loop of Prim's algorithm is inherently sequential and thus not parallelizable. However, the inner loop, which determines the next edge of minimum weight that does not form a cycle, can be parallelized by dividing the vertices and edges between the available processors. [ 12 ] In outline, each processor is assigned a subset of the vertices; in every iteration each processor finds the cheapest candidate vertex within its own subset, a reduction determines the global minimum, and the chosen vertex is broadcast so that every processor can update the connection costs of its vertices. This algorithm can generally be implemented on distributed machines [ 12 ] as well as on shared memory machines.
[ 13 ] The running time is O(|V|²/|P|) + O(|V| log |P|), where |P| is the number of processors, assuming that the reduce and broadcast operations can be performed in O(log |P|) time. [ 12 ] A variant of Prim's algorithm for shared memory machines, in which Prim's sequential algorithm is run in parallel starting from different vertices, has also been explored. [ 14 ] More sophisticated algorithms exist, however, for solving the distributed minimum spanning tree problem more efficiently.
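As a concrete complement to the binary-heap variant discussed earlier, here is a minimal Python sketch using the standard heapq module; because heapq offers no decrease-key operation, the sketch stores candidate edges and skips stale heap entries ("lazy deletion"), a common substitute that keeps the O(|E| log |V|) bound. The function name prim_heap and the dictionary-of-adjacency-lists graph representation are assumptions made for this example, not part of the article.

# Minimal sketch of a heap-based Prim's algorithm (assumption: the graph is a
# dict mapping each vertex to a list of (neighbour, weight) pairs). Stale
# entries are left in the heap and skipped when popped ("lazy deletion").
import heapq

def prim_heap(adj, start):
    in_tree = {start}
    tree_edges = []
    # Heap entries are (weight, u, v): the candidate edge (u, v) with its weight.
    heap = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue                      # stale entry, skip it
        in_tree.add(v)
        tree_edges.append((u, v, w))
        for x, wx in adj[v]:
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return tree_edges                     # spanning tree of start's component

# Example usage.
graph = {
    'a': [('b', 2), ('d', 6)],
    'b': [('a', 2), ('c', 3), ('d', 8)],
    'c': [('b', 3), ('d', 5)],
    'd': [('a', 6), ('b', 8), ('c', 5)],
}
print(prim_heap(graph, 'a'))  # [('a', 'b', 2), ('b', 'c', 3), ('c', 'd', 5)]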
https://en.wikipedia.org/wiki/Prim's_algorithm
Prima facie ( / ˌ p r aɪ m ə ˈ f eɪ ʃ i , - ʃ ə , - ʃ i iː / ; from Latin prīmā faciē ) is a Latin expression meaning "at first sight", [ 1 ] or "based on first impression". [ 2 ] The literal translation would be "at first face" or "at first appearance", from the feminine forms of primus ("first") and facies ("face"), both in the ablative case . In modern, colloquial, and conversational English, a common translation would be "on the face of it". The term prima facie is used in modern legal English (including both civil law and criminal law ) to signify that upon initial examination, sufficient corroborating evidence appears to exist to support a case. In common law jurisdictions, a reference to prima facie evidence denotes evidence that, unless rebutted , would be sufficient to prove a particular proposition or fact. [ 3 ] The term is used similarly in academic philosophy . [ 2 ] Most legal proceedings, in most jurisdictions , require a prima facie case to exist, following which proceedings may then commence to test it, and create a ruling. [ 3 ] The similar ex facie , Latin for "on the face [of it]," is a legal term typically used to note that a document's explicit terms are defective without further investigation. For example, a contract between two parties would be void ex facie if, under a legal system where it was a binding requirement for validity, the document did not require party A to give consideration to party B for services rendered. In most legal proceedings, one party has a burden of proof, which requires it to present prima facie evidence for all of the essential facts in its case. If it cannot, its claim may be dismissed without any need for a response by other parties. [ 4 ] A prima facie case might not stand or fall on its own; if an opposing party introduces other evidence or asserts an affirmative defense , it can be reconciled only with a full trial . Sometimes the introduction of prima facie evidence is informally called making a case or building a case . For example, in a trial under criminal law , the prosecution has the burden of presenting prima facie evidence of each element of the crime charged against the defendant . In a murder case, this would include evidence that the victim was in fact dead, that the defendant's act caused the death, and that the defendant acted with malice aforethought . If no party introduces new evidence, the case stands or falls just by the prima facie evidence or lack thereof, respectively. Prima facie evidence does not need to be conclusive or irrefutable: at this stage, evidence rebutting the case is not considered, only whether any party's case has enough merit to take it to a full trial. In common law jurisdictions such as the United Kingdom and the United States, the prosecution in a criminal trial must disclose all evidence to the defense. This includes the prima facie evidence. An aim of the doctrine of prima facie is to prevent litigants from bringing spurious charges which simply waste all other parties' time. Prima facie is often confused with res ipsa loquitur ('the thing speaks for itself', or literally 'the thing itself speaks'), the common law doctrine that when the facts make it self-evident that negligence or other responsibility lies with a party, it is not necessary to provide extraneous details, since any reasonable person would immediately find the facts of the case. 
The difference between the two is that prima facie is a term meaning there is enough evidence for there to be a case to answer, while res ipsa loquitur means that the facts are so obvious a party does not need to explain any more. For example: "There is a prima facie case that the defendant is liable. They controlled the pump. The pump was left on and flooded the plaintiff's house. The plaintiff was away and had left the house in the control of the defendant. Res ipsa loquitur." In Canadian tort law, this doctrine has been subsumed by general negligence law. [ 5 ] The phrase is also used in academic philosophy. Among its most notable uses is in the theory of ethics first proposed by W. D. Ross in his 1930 book The Right and the Good, often called the Ethic of Prima Facie Duties, as well as in epistemology, as used, for example, by Robert Audi. It is generally used in reference to an obligation. "I have a prima facie obligation to keep my promise and meet my friend" means that I am under an obligation, but this may yield to a more pressing duty. A more modern usage prefers the title pro tanto obligation: an obligation that may later be overruled by another, more pressing one; it exists only pro tempore. The phrase prima facie is sometimes misspelled prima facia in the mistaken belief that facia is the actual Latin word; however, faciē is in fact the ablative case of faciēs, a fifth declension Latin noun. In policy debate theory, prima facie is used to describe the mandates or planks of an affirmative case, or, in some rare cases, a negative counterplan. When the negative team appeals to prima facie, it appeals to the fact that the affirmative team cannot add or amend anything in its plan after it has been stated in the first affirmative constructive. A common usage of the phrase is the concept of a "prima facie speed limit", which has been used in Australia and the United States. A prima facie speed limit is a default speed limit that applies when no other specific speed limit is posted, and may be exceeded by a driver; [ 6 ] however, if the driver is detected and cited by police for exceeding the limit, the onus of proof is on the driver to show that the speed at which they were travelling was safe under the circumstances. In most jurisdictions, this type of speed limit has been replaced by absolute speed limits.
https://en.wikipedia.org/wiki/Prima_facie
In alchemy and philosophy, prima materia, materia prima or first matter (for a philosophical exposition refer to: Prime Matter), is the ubiquitous starting material required for the alchemical magnum opus and the creation of the philosopher's stone. It is the primitive formless base of all matter similar to chaos, the quintessence or aether. Esoteric alchemists describe the prima materia using simile, and compare it to concepts like the anima mundi. The concept of prima materia is sometimes attributed to Aristotle. [ 2 ] The earliest roots of the idea can be found in the philosophy of Anaxagoras, who described the nous in relation to chaos. Empedocles' cosmogony is also relevant. [ 3 ] When alchemy developed in Greco-Roman Egypt on the foundations of Greek philosophy, it included the concept of prima materia as a central tenet. Mary Anne Atwood uses words attributed to Arnaldus de Villa Nova to describe the role of prima materia in the fundamental theory of alchemy: "That there abides in nature a certain pure matter, which, being discovered and brought by art to perfection, converts to itself proportionally all imperfect bodies that it touches." [ 4 ] Although descriptions of the prima materia have changed throughout history, the concept has remained central to alchemical thought. Alchemical authors used similes to describe the universal nature of the prima materia. Arthur Edward Waite states that all alchemical writers concealed its "true name". Since the prima materia has all the qualities and properties of elementary things, the names of all kinds of things were assigned to it. [ 5 ] A similar account can be found in the Theatrum Chemicum: "They have compared the 'prima materia' to everything, to male and female, to the hermaphroditic monster, to heaven and earth, to body and spirit, chaos, microcosm, and the confused mass; it contains in itself all colors and potentially all metals; there is nothing more wonderful in the world, for it begets itself, conceives itself, and gives birth to itself." [ 6 ] Comparisons have been made to Hyle, the primal fire, Proteus, Light, and Mercury. [ 7 ] Martin Ruland the Younger lists more than fifty synonyms for the prima materia in his 1612 alchemical dictionary. His text includes justifications for the names and comparisons. He repeats that, "the philosophers have so greatly admired the Creature of God which is called the Primal Matter, especially concerning its efficacy and mystery, that they have given to it many names, and almost every possible description, for they have not known how to sufficiently praise it." [ 8 ] Waite lists an additional eighty-four names. Ruland's 1612 alchemical dictionary, the Lexicon alchemiae sive dictionarium alchemistarum, tabulates the names assigned to the prima materia. [ 9 ]
https://en.wikipedia.org/wiki/Prima_materia
In particle physics, the Primakoff effect, named after Henry Primakoff, is the resonant production of neutral pseudoscalar mesons by high-energy photons interacting with an atomic nucleus. It can be viewed as the reverse process of the decay of the meson into two photons and has been used for the measurement of the decay width of neutral mesons. [ 1 ] It could also take place in stars and be a production mechanism of certain hypothetical particles, such as the axion. [ 2 ] In this context, the Primakoff effect is the conversion of axions into photons in the presence of a very strong electromagnetic field. [ 3 ] The effect is predicted to lead to optical properties of the vacuum state in the presence of a strong magnetic field. [ 4 ]
https://en.wikipedia.org/wiki/Primakoff_effect
Primal Pictures is a business that provides 3D graphic renderings of human anatomy. The company was founded in 1991. [ 1 ] In 2012, Informa, which had previously owned 10 percent of the company, acquired it outright. [ 2 ] In 2022, as part of Informa's sale of its Pharma Intelligence business, Primal Pictures became part of private equity firm Warburg Pincus. [ 3 ] Later that year, Pharma Intelligence changed its name to Citeline and merged with pharmaceutical technology company Norstella. [ 4 ] Primal Pictures provides 3D graphic renderings of human anatomy, built using real scan data from the Visible Human Project, for use by healthcare students, educators, and medical professionals. It operates the Anatomy.tv online platform. [ 5 ] In one study, Anatomy.tv was deemed the greatest value in undergraduate anatomy education "since it had highest scores for effectiveness as well as the lowest scores for cost." [ 6 ] The representation of the body in Primal's software is derived from medical scan data that has been interpreted by a team of Primal anatomists and translated into three-dimensional images by graphics specialists. The interactive anatomy visuals are accompanied by demonstration animations. Primal Pictures won the Queen's Award for Enterprise in 2012. [ 11 ]
https://en.wikipedia.org/wiki/Primal_Pictures