Persephin is a neurotrophic factor in the glial cell line-derived neurotrophic factor (GDNF) family. Persephin shares roughly 40% amino acid sequence similarity with GDNF and neurturin, two other members of the GDNF family. [1] Persephin has been found to be less potent than other members of the GDNF family: it supports the survival and morphological differentiation of tyrosine hydroxylase-immunoreactive neurons, though less effectively than either GDNF or neurturin. [2] Persephin mRNA levels in developing neurons are low compared to other neurotrophic factors, but relatively higher levels of persephin mRNA have been found in embryonic neurons. [1] Like the other members of the GDNF family of ligands, persephin signals through a receptor complex consisting of the tyrosine kinase signaling component Ret and a glycosylphosphatidylinositol (GPI)-anchored co-receptor (GFRα); persephin binds specifically to GFRα4. [3] Persephin acts on neurons in both the CNS and the PNS, and also acts as a renal ramogen. [1] Unlike the other GDNF family ligands, persephin contains only one RXXR cleavage site rather than several, indicating that it can produce only one length of functional peptide. [1] Persephin has the potential to be used as a therapeutic treatment for neurodegenerative diseases, such as Parkinson's disease and other diseases that affect motor neurons. Because persephin acts more selectively than other GFLs, such as GDNF, it may produce fewer mechanism-based complications, making it a stronger therapeutic candidate. [1]
https://en.wikipedia.org/wiki/Persephin
The Perseus-Taurus Shell (also shortened to the Per-Tau Shell) is a near-spherical cavity in the interstellar medium, about 500 light-years wide, located in the constellations Perseus and Taurus. [1] [2] A team from the Harvard–Smithsonian Center for Astrophysics led by Catherine Zucker and Shmuel Bialy discovered the structure in 2021. [2] Scientists believe that it formed following the explosions of ancient supernovae. [3] [4] [5] Molecular clouds surround the sphere-shaped cavity. [6]
https://en.wikipedia.org/wiki/Perseus-Taurus_Shell
The Perseus Digital Library , formerly known as the Perseus Project , is a free-access digital library founded by Gregory Crane in 1987 and hosted by the Department of Classical Studies of Tufts University . One of the pioneers of digital libraries, its self-proclaimed mission is to make the full record of humanity available to everyone. While originally focused on the ancient Greco -Roman world, it has since diversified and offers materials in Arabic , Germanic , English Renaissance literature, 19th century American documents and Italian poetry in Latin , and has sprouted several child projects and international cooperation. The current version, Perseus 4.0, is also known as the Perseus Hopper, and is mirrored by the University of Chicago . The Perseus Digital Library was created to provide access to materials of the history of humanity to everyone, with Gregory Crane, the editor-in-chief of the library, stating that "access to the cultural heritage of humanity is a right, not a privilege". [ 1 ] This notably means that the Perseus Digital Library tries not to be exclusive to academics but aims to be accessible to everyone. [ 1 ] [ 2 ] To reflect this, the library supports open-source content and has published its code on SourceForge . The website is written in Java , uses sustainable formats such as XML and JPEG , [ 3 ] and includes native support for the Greek , Latin and Arabic alphabets. It allows users to download all materials that belong to the public domain along with the Creative Commons rights information that specify their conditions of use. While automated downloading is not authorized, in order to protect items subject to intellectual property, the library offers download packages to the public. [ 1 ] The Perseus Digital library also adheres to sets of standards edified by other projects. It follows the norms of the Text Encoding Initiative for its XML mark-up. In the same vein, the library has applied the Canonical Text Services (CTS) protocol regarding citations to its classical Greek-Latin corpus. [ 1 ] [ 3 ] Following this philosophy, Perseus chooses to use copyright -free texts, be it in the primary readings or in their translations and commentaries. For these reasons, the texts hosted necessarily date at the latest from the 19th and early 20th century, and must be divided into books, chapters and sections to be displayed individually. As such, those translations and commentaries can be outdated compared to the current state of the research, which can prove problematic when most of the now canonically accepted versions of ancient texts were established and sectioned later, during the 20th century. [ 1 ] Perseus however tries to make rare and out-of-print materials accessible, [ 4 ] and, for some texts, the material one can find on the website is the only one that was produced, which makes it especially valuable to scholars. [ 1 ] Some content is restricted by intellectual property license agreements with the holders of the rights to that material. This is notably the case for the pictures of artifacts that come from partnership with museums. [ 1 ] The Perseus Library is one of the first digital libraries to have been created, and is widely regarded as a pioneer in the field and a role model of other similar initiatives. 
[4] [5] [6] [7] [8] The Perseus Library originated as a branch of the Thesaurus Linguae Graecae, growing out of a full-text retrieval tool for Ancient Greek materials built by Gregory Crane, who has been the editor-in-chief of the project ever since it was created. [2] The goal of the library was to provide wider access to knowledge, beyond the academic field; to quote the mission statement, "to make a full record of humanity, as intellectually accessible as possible, to every human being, regardless of linguistic or cultural background". [9] The planning period ran from 1985 to 1988, with development of the Ancient Greek collection starting in 1987 thanks to funding from the Annenberg-CPB Project, which allowed the Perseus Project to be developed. [3] Perseus 1.0, or HyperCard Perseus, was a CD-ROM released in 1992 by Yale University, using Apple's HyperCard for Macintosh. [2] [3] [7] For practical reasons, it was limited to ancient Greek materials, and contained the texts of nine major Greek authors along with English translations and commentary. [2] The collection was enriched through hyperlink technology and contextual material such as pictures of artifacts, an atlas, a historical timeline, and an encyclopedia of places, people and terminology, in an attempt to help non-academic users gain access to the material. [2] [3] Perseus 1.0 was nonetheless criticized for its "difficulty of use and odd content, both specialised and lacking". [10] Furthermore, it was not a true digital library, but rather a CD-ROM of primary readings published with various additional information. [3] A second version of the CD-ROM followed in 1996 in the form of Perseus 2.0, which mainly expanded the collection of pictures. It was still limited to Macintosh computers until a platform-independent version was released in 2000. [2] [3] Hardware limitations raised costs and limited the scope of the project, which ultimately led to the CD-ROM versions of Perseus covering only Greek material. Moreover, they were very expensive: even though the price was set to make only minimal profit, the CDs cost between $150 and $350 depending on the amount of material included, and were only released in North America, which severely limited worldwide accessibility. [2] After the project moved to Tufts University in 1993, [3] the Perseus Library switched to a website version in 1995, written in Perl. [2] [3] [8] Thanks to this new interface, Perseus-Online could reach a wider audience. However, Perseus was still bound by copyright agreements made with the CD-ROM company, which limited the reuse of material. [2] Perseus 2.0 Online expanded the collection in 1997, adding Roman materials as well as Renaissance texts by Shakespeare and Marlowe. This version also introduced a search bar on the website, as well as articles presenting information on Heracles and the Olympic Games, which were quite successful. [2] In 1999, a grant from the Digital Library Initiative Phase 2 allowed Perseus to expand into other areas of the Humanities and to create collections on the History of London and the American Civil War. [3] [8] Perseus 3.0 was released in 2000, directly on the web. [3] This version expanded and revised the website, adding new collections, but it suffered from issues with keeping links to material stable and consistent.
[ 2 ] The current version of Perseus, Perseus 4.0, also known as the Perseus Hopper, was released in 2005, with Perseus 3.0 coexisting alongside and slowly fading out, until it got taken down in 2009. [ 2 ] [ 3 ] This time, the website was based on Java , written in the open-sourced language Hopper and TEI-compliant XML. The shift allowed Perseus to produce its own XML-encoded texts, which were not bound by copyright agreements. The Greek, Latin and English collections were released in 2006 under a Creative Commons License. [ 2 ] The source code got subsequently released in 2007. [ 2 ] Perseus has nowadays branched into other projects: the Scaife Viewer, which is the first phase of the work towards Perseus 5.0, [ 11 ] the Perseus Catalog, [ 1 ] [ 12 ] [ 13 ] which provides links to the digital editions not hosted by the Perseus Library, the Perseids Project, [ 1 ] which aims to support access to Classics scholarship by providing tools to foster language acquisition, facilitate working with documents, and encourage research, and, more recently, the Beyond Translation project, which aims to combine the Scaife Viewer with new versions and services of Perseus 4.0. [ 9 ] Furthermore, the library has been cooperating internationally with Leipzig University , with several projects emerging of it, such as the Ancient Greek and Latin Dependency Treebank, for classical philology, Leipzig Open Fragmentary Texts Series (LOFTS) which focuses on fragmentary texts, [ 14 ] the Open Greek and Latin Project and Open Persian. [ 1 ] The Perseus Digital Library contains online collections on the Humanities pertaining to different subjects. The main collection focuses on the classical materials of ancient Greece and ancient Rome, and features an extensive number of texts written in Ancient Greek and Latin chosen for their status as a canonical literary text, in a degree of completeness and representativeness no other digital library can claim. [ 1 ] It has however been noted that the materials that weren't included on account on not being traditionally studied are further devalued by the lack of representation. [ 1 ] The library does not only host primary readings. Partnerships with museums allowed it to build a consequent collection of artifacts which showcases pictures of coins, sculptures, vases, but also gems, buildings and sites, as well as information concerning the context of artifact and its current location. [ 9 ] [ 13 ] Moreover, Perseus includes commentaries and translations that are free of copyright. However, to be free of copyright, texts have to be sufficiently old, and, as a result, Classics scholars have insisted that the commentaries and translations provided by Perseus cannot be used in an academical setting due to their age and the existence of more recent editions for the most often researched texts. [ 1 ] Although the classical section is the most complete and established of the website, the Perseus Digital Library is not limited to this collection, and has branched throughout its existence into other categories of knowledge. Materials on early modern English literature are as such available, and used to be called the Perseus Garner. 
[4] They consisted of a heterogeneous compilation of primary materials from the early modern period in England, as well as selected secondary materials from the nineteenth and early twentieth centuries, comprising the works of Christopher Marlowe, the Globe Shakespeare, volumes from the New Variorum Shakespeare Series, Raphael Holinshed's Chronicles, Richard Hakluyt's Voyages and the rhetorical works of Henry Peacham and Thomas Wilson, among other primary sources. Several reference works, including glossaries and lexicons, are also provided. [4] This collection of texts has however been criticized for its choices of inclusion, and described as neither balanced nor complete, and texts not included are devalued by their absence. [1] Records from American Memory, a corpus of electronic versions of the Library of Congress archival collections related to the cultural heritage of the United States, were harvested in order to offer a collection on the history of the 19th-century United States. [3] This third-party collection was further completed by materials on the American Civil War. This sub-collection, as well as the materials on Humanist and Renaissance Italian poetry in Latin and the Richmond Times Dispatch, are regarded as fairly complete due to their narrow subject. Perseus also hosts a variety of documents on the study of Germanic peoples, such as Beowulf and a variety of sagas in Old Norse along with translations. [9] This sub-section has been described as fairly good, considering that this field is less well researched than the others. [1] Finally, the Perseus Digital Library hosts Arabic materials, but its selection is limited to the Quran and dictionaries. [1] [9] The Library used to host the Bolles Collection of the History of London, a digitized recreation of an existing special collection homogeneous in theme but heterogeneous in content, which interlinks maps of London, relevant texts, and historical and contemporary illustrations of the city. [7] That collection was transferred to the Tufts Digital Library. The same can be said of the Duke Databank of Documentary Papyri and the history of Tufts, which used to be on the website as well. A section on the history of mechanics also used to be present on Perseus. [3] The Perseus Library follows the goal of Digital Humanities, which is to capitalize on the use of modern technology to further research in Classics and facilitate understanding of the material. As such, it uses a variety of tools to enrich the texts it hosts. One of the ways it does so is by automatically linking the texts to additional materials. Interlinks exist between a primary reading, its different versions, and its translations and commentaries. Users can also find maps of places mentioned in the texts as well as a historical timeline, and search tools allow readers to look for a text by its author or by the presence of a specific lemma or word. [1] [3] [15] [16] Perseus also enhances its texts through TEI-compliant [17] markup, which allows each word to be linked to a dictionary entry, a morphological analysis tool known as Morpheus, a word frequency tool, and other texts where the word is used. [1] [4] [15] Since the mark-up is automatically generated, older sections of the library have been noted to be less rich and complete than newer ones.
[ 1 ] This structure allows for a machine-readable and searchable environment, and one of Perseus' goals is the automated generation of knowledge through text and data mining . [ 3 ] Each section of a text and item is also given a stable identifier of 10 digits, [ 3 ] which makes citations possible [ 13 ] in the form of four different URIs (text, citation, work, catalog record) containing URNs ; [ 1 ] furthermore, metadata schemes are employed as to make each section or object meaningful outside of the context of the library. [ 3 ] Those sections are also given a Creative Commons license indicating conditions of use. [ 1 ] However, one should note the lack of a TEI-header containing bibliographical information and metadata about the respective source, and that such information needs to be searched for on the Perseus Catalog. [ 1 ] As a result of the use of this technology, Perseus has been useful to scholars of classical philology and history in facilitating the study of the material, [ 1 ] [ 18 ] but also to students who have benefited from the various tools the library offers. [ 3 ] [ 16 ] The website has been criticised for being ergonomically poor and unintuitive, and new users may have problems accessing resources due to a confusing layout which seems to prioritize showcasing the Perseus Digital Library over its collections. [ 1 ] [ 3 ] The lack of presentation for collections accentuates this problem. [ 3 ] Accessibility is another issue, with pages not always adhering to the standards of the Section 508 Amendment to the Rehabilitation Act of 1973 . [ 3 ] Perseus has proven convincing in terms of sustainability throughout its long history [ 1 ] and ability to evolve, having notably been able to migrate from the SGML format to XML in Perseus 4.0. [ 3 ] The preservation of the collections is further insured by a Fedora Commons Backend created in 2002 [ 2 ] [ 3 ] as well as a mirror site provided by the University of Chicago. [ 1 ] The Perseus Digital Library has been under the consistent leadership of its founder and editor in chief Gregory Crane. [ 2 ] [ 3 ] The library is nowadays located at Tufts University with a full-time staff of eight members, consisting of Gregory Crane, Marie-Claire Beaulieu, who has joined the project in 2010 and become its Associate Editor in 2013, [ 19 ] Bridget Almas, lead software developer of the Perseus Digital Library and one of the primary programmers of the Alpheios Project , Alison Babeu, Digital Librarian and Research Coordinator of the library since 2004, Lisa Cerrato, Managing Editor who was a part of Perseus since 1994, and Anna Krohn, digital library analyst and lead developer of the Perseus Catalog. Frederik Baumgardt and Tim Buckingham are also noted as working on the Perseids Project full time, respectively as Data Architect and Senior Research Coordinator. [ 9 ] A list of former staff and students can be found on the Perseus website. [ 9 ] A long list of agencies provided funding and grants to the Perseus Digital Library over the years. According to the home page of the Perseus website, the list of recent financial supporters includes: the Alpheios Project , the Andrew W. Mellon Foundation , the United States Department of Education , the Institute of Museum and Library Services , the National Endowment for the Humanities as well as private donors, and Tufts University. 
[ 9 ] [ 3 ] The Mellon Foundation, Tufts University, Harvard University 's Center for Hellenic Studies and, mainly, the National Endowment for the Humanities are specifically noted as key donors that made the Beyond Translation project possible. [ 9 ] Additional support for the Perseus project has been provided over the years by the Annenberg Foundation , Apple Inc. , the Berger Family Technology Transfer Endowment, the Digital Libraries Initiative Phase 2, the Fund for the Improvement of Postsecondary Education part of the U.S. Department of Education, the Getty Grant program, the Modern Language Association , the National Endowment for the Arts, the National Science Foundation , the Packard Humanities Institute , Xerox , Boston University , and Harvard University. [ 9 ]
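As an illustration of the Canonical Text Services citations and stable identifiers described above, a CTS-style URN can be split into namespace, text group, work, version and passage components. The following Python sketch is purely illustrative; the example URN, field names and parsing logic are assumptions for demonstration, not anything taken from the Perseus documentation.

```python
# Illustrative sketch: splitting a CTS-style URN into its parts.
# The URN below is a hypothetical Perseus-style identifier used only as an example.

def parse_cts_urn(urn):
    """Split a CTS URN of the form urn:cts:namespace:group.work[.version]:passage."""
    _, _, namespace, work, passage = urn.split(":")
    group, text, *version = work.split(".")
    return {
        "namespace": namespace,                        # literature namespace
        "textgroup": group,                            # author / text-group identifier
        "work": text,                                  # work identifier
        "version": version[0] if version else None,    # specific edition, if given
        "passage": passage,                            # citation within the work
    }

# Hypothetical example of a Perseus-style URN for a passage of a Greek text:
print(parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001.perseus-grc2:1.1"))
```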
https://en.wikipedia.org/wiki/Perseus_Digital_Library
In topological data analysis, a persistence barcode, sometimes shortened to barcode, is an algebraic invariant associated with a filtered chain complex or a persistence module that characterizes the stability of topological features throughout a growing family of spaces. [1] Formally, a persistence barcode consists of a multiset of intervals in the extended real line, where the length of each interval corresponds to the lifetime of a topological feature in a filtration, usually built on a point cloud, a graph, a function, or, more generally, a simplicial complex or a chain complex. Generally, longer intervals in a barcode correspond to more robust features, whereas shorter intervals are more likely to be noise in the data. A persistence barcode is a complete invariant that captures all the topological information in a filtration. [2] In algebraic topology, persistence barcodes were first introduced by Sergey Barannikov in 1994 as the "canonical forms" invariants, [2] consisting of a multiset of line segments with ends on two parallel lines, and later, in geometry processing, by Gunnar Carlsson et al. in 2004. [3] Let F be a fixed field. Consider a real-valued function on a chain complex, f : K → R, compatible with the differential, so that f(σ_i) ≤ f(τ) whenever ∂τ = Σ_i σ_i in K. Then for every a ∈ R the sublevel set K_a = f^(−1)((−∞, a]) is a subcomplex of K, and the values of f on the generators of K define a filtration (which is in practice always finite) K_{a_1} ⊆ K_{a_2} ⊆ … ⊆ K. The filtered complexes classification theorem then states that for any filtered chain complex over F, there exists a linear transformation that preserves the filtration and brings the filtered complex into so-called canonical form, a canonically defined direct sum of filtered complexes of two types: two-dimensional complexes with trivial homology, d(e_{a_j}) = e_{a_i}, and one-dimensional complexes with trivial differential, d(e_{a'_i}) = 0. [2] The multiset B_f of the intervals [a_i, a_j) or [a'_i, ∞) describing the canonical form is called the barcode, and it is the complete invariant of the filtered chain complex. The concept of a persistence module is intimately linked to the notion of a filtered chain complex. A persistence module M indexed over R consists of a family of F-vector spaces {M_t}, t ∈ R, together with linear maps φ_{s,t} : M_s → M_t for each s ≤ t such that φ_{s,t} ∘ φ_{r,s} = φ_{r,t} for all r ≤ s ≤ t. [4] This construction is not specific to R; indeed, it works identically with any totally ordered set. A persistence module M is said to be of finite type if it contains a finite number of unique finite-dimensional vector spaces. The latter condition is sometimes referred to as pointwise finite-dimensional. [5] Let I be an interval in R. Define a persistence module Q(I) by setting Q(I)_s = F if s ∈ I and Q(I)_s = 0 otherwise, where the linear maps are the identity map inside the interval. The module Q(I) is sometimes referred to as an interval module. [6] Then for any R-indexed persistence module M of finite type, there exists a multiset B_M of intervals such that M ≅ ⊕_{I ∈ B_M} Q(I), where the direct sum of persistence modules is carried out index-wise. The multiset B_M is called the barcode of M, and it is unique up to a reordering of the intervals. [3] This result was extended to the case of pointwise finite-dimensional persistence modules indexed over an arbitrary totally ordered set by William Crawley-Boevey and Magnus Botnan in 2020, [7] building upon known results from the structure theorem for finitely generated modules over a PID, as well as the work of Cary Webb for the case of the integers. [8]
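The 0-dimensional part of a barcode (the connected components of a filtered graph) can be computed with a simple union-find pass over the edges in order of filtration value. The sketch below is an illustrative implementation, not taken from the sources cited above; the function name and input format are assumptions. Following the elder rule, each merge kills the younger component, and every component still alive at the end contributes an infinite interval, matching the interval-module decomposition described above.

```python
# Minimal sketch: 0-dimensional persistence barcode of a filtered graph via union-find.
import math

def barcode_0dim(vertex_births, edges):
    """vertex_births: dict vertex -> filtration value at which the vertex appears.
    edges: list of (filtration value, u, v). Returns a list of (birth, death) intervals."""
    parent = {v: v for v in vertex_births}
    birth = dict(vertex_births)          # birth time of each component's oldest vertex

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    bars = []
    for t, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                      # edge closes a cycle; no 0-dimensional event
        if birth[ru] > birth[rv]:         # elder rule: the younger component dies at t
            ru, rv = rv, ru
        bars.append((birth[rv], t))
        parent[rv] = ru
    roots = {find(v) for v in vertex_births}
    bars.extend((birth[r], math.inf) for r in roots)   # surviving components live forever
    return bars

# Example: three vertices born at 0, with edges appearing at filtration values 1 and 2.
print(barcode_0dim({0: 0.0, 1: 0.0, 2: 0.0}, [(1.0, 0, 1), (2.0, 1, 2)]))
# [(0.0, 1.0), (0.0, 2.0), (0.0, inf)]
```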
https://en.wikipedia.org/wiki/Persistence_barcode
The persistence length is a basic mechanical property quantifying the bending stiffness of a polymer. The molecule behaves like a flexible elastic rod or beam (beam theory). Informally, for pieces of the polymer that are shorter than the persistence length, the molecule behaves like a rigid rod, while for pieces of the polymer that are much longer than the persistence length, the properties can only be described statistically, like a three-dimensional random walk. Formally, the persistence length, P, is defined as the length over which correlations in the direction of the tangent are lost. In a more chemistry-based formulation, it can also be defined as the average sum of the projections of all bonds j ≥ i on bond i in an infinitely long chain. [1] Let us define the angle θ between a vector that is tangent to the polymer at position 0 (zero) and a tangent vector at a distance L away from position 0, along the contour of the chain. It can be shown that the expectation value of the cosine of the angle falls off exponentially with distance, ⟨cos θ⟩ = e^(−L/P), [2] [3] where P is the persistence length and the angled brackets denote the average over all starting positions. The persistence length is considered to be one half of the Kuhn length, the length of the hypothetical segments from which the chain can be considered to be freely jointed. The persistence length equals the average projection of the end-to-end vector on the tangent to the chain contour at a chain end, in the limit of infinite chain length. [4] The persistence length can also be expressed using the bending stiffness B_s, the Young's modulus E and the cross-section of the polymer chain, as P = B_s / (k_B T) with B_s = E·I, [2] [5] [6] [7] where k_B is the Boltzmann constant and T is the temperature. In the case of a rigid and uniform rod, I can be expressed as I = πa⁴/4, where a is the radius. For charged polymers the persistence length depends on the surrounding salt concentration due to electrostatic screening. The persistence length of a charged polymer is described by the OSF (Odijk, Skolnick and Fixman) model. [8] For example, a piece of uncooked spaghetti has a persistence length on the order of 10^18 m (taking into consideration a Young's modulus of 5 GPa and a radius of 1 mm). [9] Double-helical DNA has a persistence length of about 390 ångströms. [10] Such a large persistence length for spaghetti does not mean that it is not flexible; it just means that its stiffness is such that 10^18 m of length would be needed for thermal fluctuations at 300 K to bend it. Another example: [11] Imagine a long cord that is slightly flexible. At short distance scales, the cord will basically be rigid. If you look at the direction the cord is pointing at two points that are very close together, the cord will likely be pointing in the same direction at those two points (i.e. the angles of the tangent vectors are highly correlated). If you choose two points on this flexible cord (imagine a piece of cooked spaghetti that you've just tossed on your plate) that are very far apart, however, the tangents to the cord at those locations will likely be pointing in different directions (i.e. the angles will be uncorrelated). If you plot how correlated the tangent angles at two different points are as a function of the distance between those points, you'll get a plot that starts out at 1 (perfect correlation) at a distance of zero and drops exponentially as distance increases.
The persistence length is the characteristic length scale of that exponential decay. For a single molecule of DNA, the persistence length can be measured using optical tweezers and atomic force microscopy. [12] [13] The persistence length of single-stranded DNA can be measured with various tools, most of which incorporate the worm-like chain model. For example, the two ends of single-stranded DNA have been tagged with donor and acceptor dyes to measure the average end-to-end distance, reported as a FRET efficiency; this was converted to a persistence length by comparing the measured FRET efficiency with the FRET efficiency calculated from models such as the worm-like chain model. [14] [15] A more recent approach to obtaining the persistence length combines fluorescence correlation spectroscopy (FCS) with the HYDRO program. The HYDRO program can be regarded as a generalization of the Stokes–Einstein equation: the Stokes–Einstein equation calculates the diffusion coefficient (which is inversely proportional to the diffusion time) by treating the molecule as a perfect sphere, whereas the HYDRO program places no restriction on the shape of the molecule. To estimate the persistence length of single-stranded DNA, diffusion times for a number of worm-like chain conformations are generated and their diffusion times are calculated by the HYDRO program, which are then compared with the experimental diffusion time from FCS. The polymer properties are adjusted to find the optimal persistence length. [16]
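As a quick numerical check of the spaghetti example above, the formulas P = B_s/(k_B T), B_s = E·I and I = πa⁴/4 can be combined in a few lines of Python. The script below is an illustrative sketch using the values quoted in the text (E = 5 GPa, a = 1 mm, T = 300 K); it is not part of the original article.

```python
# Back-of-the-envelope check of the spaghetti persistence length quoted above.
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # temperature, K
E = 5e9                 # Young's modulus of dry spaghetti, Pa (value assumed in the text)
a = 1e-3                # rod radius, m

I = math.pi * a**4 / 4          # second moment of area of a circular cross-section
B_s = E * I                     # bending stiffness, J*m
P = B_s / (k_B * T)             # persistence length, m

print(f"persistence length of spaghetti ~ {P:.1e} m")   # ~9.5e17 m, i.e. on the order of 1e18 m
```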
https://en.wikipedia.org/wiki/Persistence_length
In mathematics, the persistence of a number is the number of times one must apply a given operation to an integer before reaching a fixed point, at which the operation no longer alters the number. Usually, this involves the additive or multiplicative persistence of a non-negative integer, which is how often one has to replace the number by the sum or product of its digits until one reaches a single digit. Because the numbers are broken down into their digits, the additive or multiplicative persistence depends on the radix. In the remainder of this article, base ten is assumed. The single-digit final state reached in the process of calculating an integer's additive persistence is its digital root. Put another way, a number's additive persistence counts how many times we must sum its digits to arrive at its digital root. For example, the additive persistence of 2718 is 2: first we find that 2 + 7 + 1 + 8 = 18, and then that 1 + 8 = 9. The multiplicative persistence of 39 is 3, because it takes three steps to reduce 39 to a single digit: 39 → 27 → 14 → 4. Also, 39 is the smallest number of multiplicative persistence 3. In base 10, there is thought to be no number with a multiplicative persistence greater than 11; this is known to be true for numbers up to 2.67×10^30000. [1] [2] The smallest numbers with multiplicative persistence 0, 1, 2, ... are: 0, 10, 25, 39, 77, 679, 6788, 68889, 2677889, 26888999, 3778888999, 277777788888899. The search for these numbers can be sped up by using additional properties of the decimal digits of these record-breaking numbers. These digits must be in increasing order (with the exception of the second number, 10), and – except for the first two digits – all digits must be 7, 8, or 9. There are also additional restrictions on the first two digits. Based on these restrictions, the number of candidates for n-digit numbers with record-breaking persistence is only proportional to the square of n, a tiny fraction of all possible n-digit numbers. However, any number that is missing from the sequence above would have multiplicative persistence > 11; such numbers are believed not to exist, and would need to have over 30,000 digits if they do exist. [1] The additive persistence of a number, however, can become arbitrarily large (proof: for a given number n, the persistence of the number consisting of n repetitions of the digit 1 is 1 higher than that of n). The smallest numbers of additive persistence 0, 1, 2, ... are: 0, 10, 19, 199, 19999999999999999999999. The next number in the sequence (the smallest number of additive persistence 5) is 2 × 10^(2×(10^22 − 1)/9) − 1 (that is, 1 followed by 2222222222222222222222 9's). For any fixed base, the sum of the digits of a number is at most proportional to its logarithm; therefore, the additive persistence is at most proportional to the iterated logarithm, and the smallest number of a given additive persistence grows tetrationally. Some functions only allow persistence up to a certain degree. For example, the function which takes the minimal digit only allows for persistence 0 or 1, since you either start with or step to a single-digit number.
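The additive and multiplicative persistence described above can be computed directly by repeatedly summing or multiplying digits. The following Python sketch is illustrative (not part of the article) and reproduces the 2718 and 39 examples.

```python
# Minimal sketch: additive and multiplicative persistence in base 10.
from math import prod

def persistence(n, op):
    """Number of times `op` must be applied to the digits of n to reach a single digit."""
    steps = 0
    while n >= 10:
        n = op(int(d) for d in str(n))   # replace n by the sum or product of its digits
        steps += 1
    return steps

additive_persistence = lambda n: persistence(n, sum)
multiplicative_persistence = lambda n: persistence(n, prod)

print(additive_persistence(2718))        # 2  (2718 -> 18 -> 9)
print(multiplicative_persistence(39))    # 3  (39 -> 27 -> 14 -> 4)
```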
https://en.wikipedia.org/wiki/Persistence_of_a_number
Persistent, bioaccumulative and toxic substances ( PBTs ) are a class of compounds that have high resistance to degradation from abiotic and biotic factors, high mobility in the environment and high toxicity. Because of these factors PBTs have been observed to have a high order of bioaccumulation and biomagnification , very long retention times in various media, and widespread distribution across the globe. Most PBTs in the environment are either created through industry or are unintentional byproducts. [ 1 ] Persistent organic pollutants (POPs) were the focal point of the Stockholm Convention 2001 due to their persistence, ability to biomagnify and the threat posed to both human health and the environment. The goal of the Stockholm Convention was to determine the classification of POPs, create measures to eliminate production/use of POPs, and establish proper disposal of the compounds in an environmentally friendly manner. [ 2 ] Currently the majority of the global community is actively involved with this program but a few still resist, most notably the US. Similar to POPs classification, the PBT classification of chemicals was developed in 1997 by the Great Lakes Binational Toxic Strategy (GLBNS). Signed by both the US and Canada, the GLBNS classified PBTs in one of two categories, level I and level II. [ 3 ] Level I PBTs are top priority which currently, as of 2005, contained 12 compounds or classes of compounds. [ 3 ] The GLBNS is administered by the U.S Environmental Protection Agency (USEPA) and Environment Canada . [ 3 ] Following the GLBNS, the Multimedia Strategy for Priority Persistent, Bioaccumulative and Toxic Pollutants (PBT Strategy) was drafted by the USEPA. [ 3 ] The PBT Strategy led to the implementation of PBT criteria in several regulational policies. Two main policies that were changed by the PBT strategy were the Toxics Release Inventory (TRI), which required more rigid chemical reporting, and the New Chemical Program (NCP) under the Toxics Substances Control Act (TSCA), which required screening for PBTs and PBT properties. [ 3 ] PBTs are a unique classification of chemicals that have and will continue to impact human health and the environment worldwide. The three main attributes of a PBT (persistence, bioaccumulative and toxic) each have a huge role in the risk posed by these compounds. [ 1 ] PBTs may have a high environmental mobility [ clarification needed ] relative to other contaminants mainly due to their resistance to degradation (persistence). This allows PBTs to travel far and wide in both the atmosphere and in aqueous environments. The low degradation rates of PBTs allow these chemicals to be exposed to both biotic and abiotic factors while maintaining a relatively stable concentration. Another factor that makes PBTs especially dangerous are the degradation products which are often relatively as toxic as the parent compound. These factors have resulted in global contamination, most notably in remote areas such as the arctic and high elevation areas, which are far from any source of PBTs. [ 3 ] The bioaccumulative ability of PBTs follows suit with the persistence attribute by the high resistance to degradation by biotic factors, especially with in organisms. Bioaccumulation is the result of a toxic substance being taken up at a higher rate than being removed from an organism. For PBTs this is caused mainly by a resistance to degradation, biotic and abiotic. 
PBTs are usually highly insoluble in water, which allows them to enter organisms at faster rates through fats and other nonpolar regions of an organism. Bioaccumulation of a toxicant can lead to biomagnification through a trophic web, which has resulted in great concern in areas with especially low trophic diversity. [clarification needed] Biomagnification results in higher trophic organisms accumulating more PBTs than those of lower trophic levels through consumption of PBT-contaminated lower trophic organisms. [3] The toxicity of this class of compounds is high, with very low concentrations of a PBT required to produce an effect on an organism compared to most other contaminants. This high toxicity along with their persistence allows PBTs to have detrimental effects in remote areas around the globe where there is no local source of PBTs. The bioaccumulation and biomagnification, along with the high toxicity and persistence, have the ability to destroy and/or irreparably damage trophic systems, especially the higher trophic levels, globally. For this reason, PBTs have become an area of focus in global politics. [3] Historically, PCBs were used extensively for industrial purposes such as coolants, insulating fluids, and plasticizers. These contaminants enter the environment through both use and disposal. Due to extensive concern from the public, legal, and scientific sectors over indications that PCBs are likely carcinogens and have the potential to adversely impact the environment, these compounds were banned in 1979 in the United States. [4] The ban included the use of PCBs in uncontained sources, such as adhesives, fire retardant fabric treatments, and plasticizers in paints and cements. [4] Containers that are completely enclosed, such as transformers and capacitors, are exempt from the ban. [4] The inclusion of PCBs as a PBT may be attributed to their low water solubility, high stability, and semi-volatility, facilitating their long-range transport and accumulation in organisms. [5] The persistence of these compounds is due to their high resistance to oxidation, reduction, addition, elimination and electrophilic substitution. [6] The toxicological interactions of PCBs are affected by the number and position of the chlorine atoms; congeners without ortho substitution are referred to as coplanar, and all others as non-coplanar. [5] Non-coplanar PCBs may cause neurotoxicity by interfering with intracellular signal transduction dependent on calcium. [7] Ortho-PCBs may alter hormone regulation through disruption of thyroid hormone transport by binding to transthyretin. [8] Coplanar PCBs are similar to dioxins and furans; both bind to the aryl hydrocarbon receptor (AhR) in organisms and may exert dioxin-like effects, in addition to the effects shared with non-coplanar PCBs. [9] [10] The AhR is a transcription factor; therefore, abnormal activation may disrupt cellular function by altering gene transcription. [9] [10] Effects of PBTs may include increased disease, lesions in benthic feeders, spawning loss, changes in the age structure of fish populations, and tissue contamination in fish and shellfish. [11] [12] Humans and other organisms that consume shellfish and/or fish contaminated with persistent bioaccumulative pollutants have the potential to bioaccumulate these chemicals. [2] This may put these organisms at risk of mutagenic, teratogenic, and/or carcinogenic effects.
[2] Correlations have been found between elevated exposure to PCB mixtures and alterations in liver enzymes and hepatomegaly, and dermatological effects such as rashes have been reported. [5] One PBT of concern is DDT (dichlorodiphenyltrichloroethane), an organochlorine that was widely used as an insecticide during World War II to protect soldiers from malaria carried by mosquitoes. [2] Due to its low cost and low toxicity to mammals, the widespread use of DDT for agricultural and commercial purposes started around 1940. However, the overuse of DDT led to insect tolerance to the chemical. It was also discovered that DDT had a high toxicity to fish. DDT was banned in the US by 1973 because of mounting evidence that DDT's stable structure, high fat solubility, and low rate of metabolism caused it to bioaccumulate in animals. [13] While DDT is banned in the US, other countries such as China and Turkey still produce and use it quite regularly through Dicofol, an insecticide that has DDT as an impurity. [14] This continued use in other parts of the world is still a global problem due to the mobility and persistence of DDT. The initial contact from DDT is on vegetation and soil. From here, the DDT can travel many routes; for instance, when plants and vegetation are exposed to the chemical to protect them from insects, the plants may absorb it. These plants may then be consumed by humans or other animals. These consumers ingest the chemical and begin metabolizing the toxicant, accumulating more through ingestion and posing health risks to the organism, its offspring, and any predators. Alternatively, the ingestion of the contaminated plant by insects may lead to tolerance in those organisms. Another route is the chemical travelling through the soil and ending up in groundwater and in the human water supply. [15] In the case that the soil is near a moving water system, the chemical could end up in large freshwater systems or the ocean, where fish are at high risk from the toxicological effects of DDT. [16] Lastly, the most common transport route is the evaporation of DDT into the atmosphere followed by condensation and eventually precipitation, by which it is released into environments anywhere on earth. [17] Due to the long-range transport of DDT, the presence of this harmful toxicant will continue as long as it is still used anywhere and until the current contamination eventually degrades. Even after its use is completely discontinued, it will still remain in the environment for many more years because of DDT's persistent attributes. [16] Previous studies have shown that DDT and other similar chemicals directly elicit a response from excitable membranes. [18] DDT causes membranes such as those of sense organs and nerve endings to fire repetitively by slowing the ability of the sodium channel to close and stop releasing sodium ions. The sodium ions are what polarize the opposing synapse after it has depolarized from firing. [19] This inhibition of sodium channel closing can lead to a variety of problems including a dysfunctional nervous system, decreased motor ability, function and control, reproductive impairment (egg-shell thinning in birds), and developmental deficiencies. Presently, DDT has been labeled as a possible human carcinogen based on animal liver tumor studies. [20] DDT toxicity in humans has been associated with dizziness, tremors, irritability, and convulsions. Chronic toxicity has led to long-term neurological and cognitive issues.
[21] Inorganic mercury (elemental mercury) is less bioavailable and less toxic than organic mercury, but is still toxic nonetheless. It is released into the environment through both natural sources and human activity, and it has the capability to travel long distances through the atmosphere. [22] Around 2,700 to 6,000 tons of elemental mercury are released via natural activity such as volcanoes and erosion. Another 2,000–3,000 tons are released by human industrial activities such as coal combustion, metal smelting and cement production. [23] Due to its high volatility and atmospheric residence time of around one year, mercury has the ability to travel across continents before being deposited. Inorganic mercury has a wide spectrum of toxicological effects that include damage to the respiratory, nervous, immune and excretory systems in humans. [22] Inorganic mercury also possesses the ability to bioaccumulate in individuals and biomagnify through trophic systems. [24] Organic mercury is significantly more detrimental to the environment than its inorganic form due to its widespread distribution as well as its higher mobility, general toxicity and rate of bioaccumulation compared to the inorganic form. Environmental organic mercury is mainly created by the transformation of elemental (inorganic) mercury into methylated mercury (organic) by anaerobic bacteria. [25] The global distribution of organic mercury is the result of the compound's general mobility, activation via bacteria, and transport through animal consumption. [1] Organic mercury shares many of the effects of the inorganic form but has a higher toxicity due to its higher mobility in the body, especially its ability to readily cross the blood–brain barrier. [22] The high toxicity of both forms of mercury (especially organic mercury) poses a threat to almost all organisms that come into contact with it. This is one of the reasons that there is such high attention to mercury in the environment, but even more important than its toxicity are its persistence and atmospheric retention time. The ability of mercury to readily volatilize allows it to enter the atmosphere and travel far and wide. Unlike most other PBTs, which have atmospheric half-lives between 30 minutes and 7 days, mercury has an atmospheric residence time of at least one year. [26] This atmospheric retention time, along with mercury's resistance to degradation factors such as electromagnetic radiation and oxidation, which are two of the main factors leading to degradation of many PBTs in the atmosphere, allows mercury from any source to be transported extensively. This global transport of mercury, along with its high toxicity, is the reasoning behind its incorporation into the BNS list of PBTs. [1] The realization of the adverse effects of environmental pollution was brought to public attention by several disasters that occurred globally. In 1965, it was recognized that extensive mercury pollution by the Chisso chemical factory in Minamata, Japan, due to improper handling of industrial wastes, had resulted in significant effects on the humans and organisms exposed. [27] Mercury was released into the environment as methylmercury (a bioavailable form) in industrial wastewater and was then bioaccumulated by shellfish and fish in Minamata Bay and the Shiranui Sea. [27] When the contaminated seafood was consumed by the local populace, it caused a neurological syndrome, coined Minamata disease.
[27] Symptoms include general muscle weakness, hearing damage, reduced field of vision, and ataxia. [27] The Minamata disaster contributed to the global realization of the potential dangers of environmental pollution and to the characterization of PBTs. Despite the ban on DDT 30 years earlier and years of various efforts to clean up Puget Sound from DDT and PCBs, there is still a significant presence of both compounds, which poses a constant threat to human health and the environment. [21] Harbor seals (Phoca vitulina), a common marine species in the Puget Sound area, have been the focus of a few studies to monitor and examine the effects of DDT accumulation and magnification in aquatic wildlife. One study tagged and re-examined seal pups every 4 to 5 years to test for DDT concentrations. [28] The trends showed the pups to be highly contaminated, which means their prey are also highly contaminated. [28] Due to DDT's high lipid solubility, it also has the ability to accumulate in the local populace who consume seafood from the area. This also applies to women who are pregnant or breastfeeding, since DDT can be transferred from mother to child. [21] Both animal and human health risks from DDT will continue to be an issue in Puget Sound, especially because of the cultural significance of fish in this region.
https://en.wikipedia.org/wiki/Persistent,_bioaccumulative_and_toxic_substances
A persistent carbene (also known as stable carbene ) is an organic molecule whose natural resonance structure has a carbon atom with incomplete octet (a carbene ), but does not exhibit the tremendous instability typically associated with such moieties. The best-known examples and by far largest subgroup are the N -heterocyclic carbenes (NHC) [ 1 ] (sometimes called Arduengo carbenes ), in which nitrogen atoms flank the formal carbene. Modern theoretical analysis suggests that the term "persistent carbene" is in fact a misnomer . Persistent carbenes do not in fact have a carbene electronic structure in their ground state , but instead an ylide stabilized by aromatic resonance or steric shielding . Excitation to a carbene structure then accounts for the carbene-like dimerization that some persistent carbenes undergo over the course of days. Persistent carbenes in general, and Arduengo carbenes in particular, are popular ligands in organometallic chemistry . In 1957, Ronald Breslow proposed that a relatively stable nucleophilic carbene, a thiazol-2-ylidene derivative of vitamin B 1 (thiamine), was the catalyst involved in the benzoin condensation that yields furoin from furfural . [ 2 ] [ 3 ] In this cycle, the vitamin's thiazolium ring exchanges a hydrogen atom (attached to carbon 2 of the ring) for a furfural residue. In deuterated water , the C2- proton was found to rapidly exchange for a deuteron in a statistical equilibrium . [ 4 ] This exchange was proposed to proceed via intermediacy of a thiazol-2-ylidene. In 2012 the isolation of the so-called Breslow intermediate was reported. [ 5 ] [ 6 ] In 1960, Hans-Werner Wanzlick and coworkers conjectured that carbenes derived from dihydroimidazol-2-ylidene were produced by vacuum pyrolysis of the corresponding 2-trichloromethyl dihydroimidazole compounds with the loss of chloroform . [ 7 ] [ 8 ] [ 9 ] They conjectured that the carbene existed in equilibrium with its dimer , a tetraaminoethylene derivative, the so-called Wanzlick equilibrium . This conjecture was challenged by Lemal and coworkers in 1964, who presented evidence that the dimer did not dissociate; [ 10 ] and by Winberg in 1965. [ 11 ] However, subsequent experiments by Denk, Herrmann and others have confirmed this equilibrium, albeit in specific circumstances. [ 12 ] [ 13 ] In 1970, Wanzlick's group generated imidazol-2-ylidene carbenes by the deprotonation of an imidazolium salt. [ 14 ] Wanzlick as well as Roald Hoffmann , [ 9 ] [ 15 ] proposed that these imidazole-based carbenes should be more stable than their 4,5-dihydro analogues, due to Hückel-type aromaticity . Wanzlick did not however isolate imidazol-2-ylidenes, but instead their coordination compounds with mercury and isothiocyanate : In 1988, Guy Bertrand and others isolated a phosphinocarbene . These species can be represented as either a λ 3 -phosphinocarbene or λ 5 - phosphaacetylene : [ 16 ] [ 17 ] These compounds were called "push-pull carbenes" in reference to the contrasting electron affinities of the phosphorus and silicon atoms. They exhibit both carbenic and alkynic reactivity. An X-ray structure of this molecule has not been obtained and at the time of publication some doubt remained as to their exact carbenic nature. In 1991, Arduengo and coworkers crystallized a diaminocarbene by deprotonation of an imidazolium cation: [ 18 ] This carbene, the forerunner of a large family of carbenes with the imidazol-2-ylidene core, is indefinitely stable at room temperature in the absence of oxygen and moisture. 
It melts at 240–241 °C without decomposition. The 13 C NMR spectrum shows a signal at 211 ppm for the carbenic atom. [ 19 ] The X-ray structure revealed longer N–C bond lengths in the ring of the carbene than in the parent imidazolium compound, indicating that there was very little double bond character to these bonds. [ 20 ] The first air-stable ylidic carbene, a chlorinated member of the imidazol-2-ylidene family, was obtained in 1997. [ 21 ] In 2000, Bertrand obtained additional carbenes of the phosphanyl type, including (phosphanyl)(trifluoromethyl)carbene, stable in solution at -30 °C [ 22 ] and a moderately stable (amino)(aryl)carbene with only one heteroatom adjacent to the carbenic atom. [ 23 ] [ 24 ] In the modern understanding, the superficially unoccupied p-orbital on a (meta)stable carbene is not, in fact, fully empty. Instead, the carbene Lewis structures are in resonance with dative bonds toward adjacent lone-pair or pi-bond orbitals. [ 25 ] Early workers attributed the stability of Arduengo carbenes to the bulky N - adamantyl substituents, which prevent the carbene from dimerising. But replacement of the N -adamantyl groups with methyl groups also affords 1,3,4,5-tetramethylimidazol-2‑ylidene (Me 4 ImC:), a thermodynamically stable unhindered NHC. [ 26 ] In 1995, Arduengo's group obtained a carbene derivative of dihydroimidazol-2-ylidene , proving that stability did not arise from the aromaticity of the conjugated imidazole backbone. [ 27 ] The following year, the first acyclic persistent carbene demonstrated that stability did not even require a cyclic backbone. [ 28 ] Unhindered derivatives of the hydrogenated [ 29 ] [ 30 ] and acyclic [ 30 ] [ 31 ] [ 32 ] carbenes dimerized, suggesting that Me 4 ImC: might be exceptional, rather than paradigmatic. But the behavior of the acyclic carbenes offered a tantalizing clue to the stabilization mechanism. [ citation needed ] Unlike the cyclic derivatives, acyclic carbenes are flexible and bonds to the carbenic atom admit rotation. But bond rotation in the compound appeared hindered , suggesting a double bond character that would place the positive charge on adjacent nitrogen atoms while preserving the octet rule . [ 28 ] Indeed, most persistent carbenes are stabilized by two flanking nitrogen centers. The outliers include an aminothiocarbene and an aminooxycarbene, which use other heteroatoms , [ 33 ] [ 34 ] and room-temperature-stable bis(diisopropylamino)cyclopropenylidene, in which the carbene atom is connected to two carbon atoms in a three-member, aromatic, cyclopropenylidene ring. [ 35 ] The following are examples of the classes of stable carbenes isolated to date: The first stable carbenes to be isolated were based on an imidazole ring, with the hydrogen in carbon 2 of the ring (between the two nitrogen atoms) removed, and other hydrogens replaced by various groups. These imidazol-2-ylidenes are still the most stable and the most well studied and understood family of persistent carbenes. [ citation needed ] A considerable range of imidazol-2-ylidenes have been synthesised, including those in which the 1,3-positions have been functionalised with alkyl , aryl , [ 26 ] alkyloxy, alkylamino, alkylphosphino [ 36 ] and even chiral substituents: [ 36 ] In particular, substitution of two chlorine atoms for the two hydrogens at ring positions 4 and 5 yielded the first air-stable carbene. 
[ 21 ] Its extra stability probably results from the electron-withdrawing effect of the chlorine substituents, which reduce the electron density on the carbon atom bearing the lone pair , via induction through the sigma-backbone. Molecules containing two and even three imidazol-2-ylidene groups have also been synthesised. [ 37 ] [ 38 ] Imidazole-based carbenes are thermodynamically stable and generally have diagnostic 13 C NMR chemical shift values between 210 and 230 ppm for the carbenic carbon. X-ray structures of these molecules can show N–C–N bond angles of 103–110° but is typically 104°. [ 39 ] [ 40 ] [ 41 ] [ 42 ] Depending on the arrangement of the three nitrogen atoms in triazol-5-ylidene, there are two possible isomers, namely 1,2,3-triazol-5-ylidenes and 1,2,4-triazol-5-ylidenes. The triazol-5-ylidenes based on the 1,2,4-triazole ring are pictured below and were first prepared by Enders and coworkers [ 43 ] by vacuum pyrolysis through loss of methanol from 2-methoxytriazoles. Only a limited range of these molecules have been reported, with the triphenyl substituted molecule being commercially available. Triazole -based carbenes are thermodynamically stable and have diagnostic 13 C NMR chemical shift values between 210 and 220 ppm for the carbenic carbon. The X-ray structure of the triphenyl substituted carbene above shows an N–C–N bond angle of around 101°. The 5-methoxytriazole precursor to this carbene was made by the treatment of a triazolium salt with sodium methoxide, which attacks as a nucleophile . [ 43 ] This may indicate that these carbenes are less aromatic than imidazol-2-ylidenes, as the imidazolium precursors do not react with nucleophiles due to the resultant loss of aromaticity. [ citation needed ] The two families above can be seen as special cases of a broader class of compounds which have a carbenic atom bridging two nitrogen atoms. A range of such diaminocarbenes have been prepared principally by Roger Alder 's research group. In some of these compounds, the N–C–N unit is a member of a five- or six-membered non-aromatic ring, [ 27 ] [ 29 ] [ 44 ] including a bicyclic example. In other examples, the adjacent nitrogens are connected only through the carbenic atom, and may or may not be part of separate rings. [ 28 ] [ 31 ] [ 32 ] Unlike the aromatic imidazol-2-ylidenes or triazol-5-ylidenes, these carbenes appear not to be thermodynamically stable, as shown by the dimerisation of some unhindered cyclic and acyclic examples. [ 29 ] [ 31 ] Studies [ 30 ] suggest that these carbenes dimerise via acid catalysed dimerisation (as in the Wanzlick equilibrium ). Diaminocarbenes have diagnostic 13 C NMR chemical shift values between 230 and 270 ppm for the carbenic atom. The X-ray structure of dihydroimidazole-2-ylidene shows a N–C–N bond angle of about 106°, whilst the angle of the acyclic carbene is 121°, both greater than those seen for imidazol-2-ylidenes. Some cyclic monoamino carbenes are known. There exist several variants of the stable carbenes above where one of the nitrogen atoms adjacent to the carbene center (the α nitrogens) has been replaced by an alternative heteroatom, such as oxygen, sulfur, or phosphorus . [ 16 ] [ 17 ] [ 33 ] [ 34 ] In particular, the formal substitution of sulfur for one of the nitrogens in imidazole would yield the aromatic heterocyclic compound thiazole . A thiazole based carbene (analogous to the carbene postulated by Breslow) [ 45 ] has been prepared and characterised by X-ray crystallography. 
[ 33 ] Other non-aromatic aminocarbenes with O, S and P atoms adjacent (i.e. alpha) to the carbene centre have been prepared, for example, thio- and oxyiminium based carbenes have been characterised by X-ray crystallography. [ 34 ] Since oxygen and sulfur are divalent , steric protection of the carbenic centre is limited especially when the N–C–X unit is part of a ring. These acyclic carbenes have diagnostic 13 C NMR chemical shift values between 250 and 300 ppm for the carbenic carbon, further downfield than any other types of stable carbene. X-ray structures have shown N–C–X bond angles of around 104° and 109° respectively. [ citation needed ] Carbenes that formally derive from imidazole-2-ylidenes by substitution of sulfur, oxygen, or other chalcogens for both α-nitrogens are expected to be unstable, as they have the potential to dissociate into an alkyne (R 1 C≡CR 2 ) and a carbon dichalcogenide (X 1 =C=X 2 ). [ 46 ] [ 47 ] The reaction of carbon disulfide (CS 2 ) with electron deficient acetylene derivatives is proposed to give transient 1,3-dithiolium carbenes (i.e. where X 1 = X 2 = S), which then dimerise to give derivatives of tetrathiafulvene . Thus it is possible that the reverse of this process might be occurring in similar carbenes. [ 46 ] [ 47 ] In Bertrand's persistent carbenes, the unsaturated carbon is bonded to a phosphorus and a silicon . [ 48 ] However, these compounds seem to exhibit some alkynic properties, and when published the exact carbenic nature of these red oils was in debate. [ 17 ] One stable N -heterocyclic carbene [ 49 ] has a structure analogous to borazine with one boron atom replaced by a methylene group . This results in a planar six-electron compound. Another family of carbenes is based on a cyclopropenylidene core, a three-carbon ring with a double bond between the two atoms adjacent to the carbenic one. This family is exemplified by bis(diisopropylamino)cyclopropenylidene . [ 35 ] Persistent carbenes tend to exist in the singlet , dimerizing when forced into triplet states. Nevertheless, Hideo Tomioka and associates used electron delocalization to produce a comparatively stable triplet carbene ( bis(9-anthryl)carbene ) in 2001. It has an unusually long half-life of 19 minutes. [ 50 ] [ 51 ] Although the figure below shows the two parts of the molecule in one flat plane, molecular geometry puts the two aromatic parts in orthogonal positions with respect to each other. In 2006 a triplet carbene was reported by the same group with a half-life of 40 minutes. [ 52 ] This carbene is prepared by a photochemical decomposition of a diazomethane precursor by 300 nm light in benzene with expulsion of nitrogen gas. Again the figure below is not an adequate representation of the actual molecular structure: both phenyl rings are positioned orthogonal with respect to each other. The carbene carbon has an sp- hybridisation , the two remaining orthogonal p- orbitals each conjugating with one of the aromatic rings. Exposure to oxygen (a triplet diradical) converts this carbene to the corresponding benzophenone . The diphenylmethane compound is formed when it is trapped by cyclohexa-1,4-diene . As with the other carbenes, this species contains large bulky substituents, namely bromine and the trifluoromethyl groups on the phenyl rings, that shield the carbene and prevent or slow down the process of dimerization to a 1,1,2,2-tetra(phenyl)alkene. 
Based on computer simulations , the distance of the divalent carbon atom to its neighbors is claimed to be 138 picometers with a bond angle of 158.8°. The planes of the phenyl groups are almost at right angles to each other (the dihedral angle being 85.7°). Mesoionic carbenes (MICs) are similar to N -heterocyclic carbenes (NHCs) except that canonical resonance structures with the carbene depicted cannot be drawn without adding additional charges. Mesoionic carbenes are also referred to as abnormal N -heterocyclic carbenes (aNHC) or remote N -heterocyclic carbenes (rNHC). A variety of free carbenes can be isolated and are stable at room temperature. Other free carbenes are not stable and are susceptible to intermolecular decomposition pathways. [ citation needed ] The imidazol-2-ylidenes are strong bases, having p K a ≈ 24 for the conjugate acid in dimethyl sulfoxide (DMSO): [ 53 ] However, further work showed that diaminocarbenes will deprotonate the DMSO solvent, with the resulting anion reacting with the resulting amidinium salt. Reaction of imidazol-2-ylidenes with 1-bromohexane gave 90% of the 2-substituted adduct, with only 10% of the corresponding alkene , indicating that these molecules are also reasonably nucleophilic . p K a values for the conjugate acids of several NHC families have been examined in aqueous solution. pKa values of triazolium ions lie in the range 16.5–17.8, [ 54 ] around 3 p K a units more acidic than related imidazolium ions. [ 55 ] At one time, stable carbenes were thought to reversibly dimerise through the so-called Wanzlick equilibrium . However, imidazol-2-ylidenes and triazol-5-ylidenes are thermodynamically stable and do not dimerise, and have been stored in solution in the absence of water and air for years. This is presumably due to the aromatic nature of these carbenes, which is lost upon dimerisation. In fact imidazol-2-ylidenes are so thermodynamically stable that only in highly constrained conditions are these carbenes forced to dimerise. Chen and Taton [ 56 ] made a doubly tethered diimidazol-2-ylidene by deprotonating the respective diimidazolium salt. Only the deprotonation of the doubly tethered diimidazolium salt with the shorter methylene bridge (–CH 2 –) resulted in the dicarbene dimer: If this dimer existed as a dicarbene, the electron lone pairs on the carbenic carbon would be forced into close proximity. Presumably the resulting repulsive electrostatic interactions would have a significant destabilising effect. To avoid this electronic interaction, the carbene units dimerise. On the other hand, heteroamino carbenes (such as R 2 N–C–OR or R 2 N–C–SR) and non-aromatic carbenes such as diaminocarbenes (such as R 2 N–C–NR 2 ) have been shown to dimerise, [ 57 ] albeit quite slowly. This has been presumed to be due to the high barrier to singlet state dimerisation: Diaminocarbenes do not truly dimerise, but rather form the dimer by reaction via formamidinium salts, a protonated precursor species. [ 30 ] Accordingly, this reaction can be acid catalysed. This reaction occurs because unlike imidazolium based carbenes, there is no loss of aromaticity in protonation of the carbene. Unlike the dimerisation of triplet state carbenes, these singlet state carbenes do not approach head to head ("least motion"), but rather the carbene lone pair attacks the empty carbon p-orbital ("non-least motion"). Carbene dimerisation can be catalyzed by both acids and metals. The chemistry of stable carbenes has not been fully explored. However, Enders et al. 
[ 43 ] [ 58 ] [ 59 ] have performed a range of organic reactions involving a triazol-5-ylidene. These reactions are outlined below and may be considered as a model for other carbenes. These carbenes tend to behave in a nucleophilic fashion ( e and f ), performing insertion reactions ( b ), addition reactions ( c ), [2+1] cycloadditions ( d , g and h ), [4+1] cycloadditions ( a ) as well as simple deprotonations . The insertion reactions ( b ) probably proceed via deprotonation, resulting in the generation of a nucleophile ( − XR) which can attack the generated salt giving the impression of a H–X insertion. The reported stable isothiazole carbene ( 2b ) derived from an isothiazolium perchlorate ( 1 ) [ 60 ] was questioned. [ 61 ] The researchers were only able to isolate 2-imino-2 H -thiete ( 4 ). The intermediate 3 was proposed through a rearrangement reaction . The carbene 2b is no longer considered as stable. [ 62 ] Imidazol-2-ylidenes, triazol-5-ylidenes (and less so, diaminocarbenes) have been shown to coordinate to a plethora of elements, from alkali metals , main group elements , transition metals and even lanthanides and actinides . A periodic table of elements gives some idea of the complexes which have been prepared, and in many cases these have been identified by single crystal X-ray crystallography . [ 44 ] [ 63 ] [ 64 ] Stable carbenes are believed to behave in a similar fashion to organophosphines in their coordination properties to metals. These ligands are said to be good σ-donors through the carbenic lone pair , but poor π-acceptors due to internal ligand back-donation from the nitrogen atoms adjacent to the carbene centre, and so are able to coordinate to even relatively electron deficient metals. Enders [ 65 ] and Hermann [ 66 ] [ 67 ] have shown that these carbenes are suitable replacements for phosphine ligands in several catalytic cycles . Whilst they have found that these ligands do not activate the metal catalyst as much as phosphine ligands they often result in more robust catalysts. Several catalytic systems have been looked into by Hermann and Enders, using catalysts containing imidazole and triazole carbene ligands, with moderate success. [ 63 ] [ 65 ] [ 66 ] [ 67 ] Grubbs [ 68 ] has reported replacing a phosphine ligand (PCy 3 ) with an imidazol-2-ylidene in the olefin metathesis catalyst RuCl 2 (PCy 3 ) 2 CHPh, and noted increased ring closing metathesis as well as exhibiting "a remarkable air and water stability". Molecules containing two and three carbene moieties have been prepared as potential bidentate and tridentate carbene ligands. [ 37 ] [ 38 ] Carbenes can be stabilised as organometallic species. These transition metal carbene complexes fall into two categories: [ 69 ] Persistent triplet state carbenes are likely to have very similar reactivity as other non-persistent triplet state carbenes . Those carbenes that have been isolated to date tend to be colorless solids with low melting points. These carbenes tend to sublime at low temperatures under high vacuum. One of the more useful physical properties is the diagnostic chemical shift of the carbenic carbon atom in the 13 C- NMR spectrum. Typically this peak is in the range between 200 and 300 ppm, where few other peaks appear in the 13 C- NMR spectrum. An example is shown on the left for a cyclic diaminocarbene which has a carbenic peak at 238 ppm. Upon coordination to metal centers, the 13 C carbene resonance usually shifts highfield, depending on the Lewis acidity of the complex fragment. 
Based on this observation, Huynh et al. developed a new methodology to determine ligand donor strengths by 13 C NMR analysis of trans -palladium(II)-carbene complexes. The use of a 13 C-labeled N-heterocyclic carbene ligand also allows for the study of mixed carbene-phosphine complexes, which undergo trans - cis -isomerization due to the trans effect . [ 70 ] NHCs are widely used as ancillary ligands in organometallic chemistry. Practical applications include the ruthenium -based Grubbs' catalyst and NHC-palladium complexes for cross-coupling reactions. [ 71 ] [ 72 ] [ 73 ] NHC-metal complexes, specifically Ag(I)-NHC complexes, have been widely tested for their biological applications. [ 74 ] NHCs are often strongly basic (the pKa value of the conjugate acid of an imidazol-2-ylidene was measured at ca. 24) [ 53 ] and react with oxygen . Clearly these reactions are performed using air-free techniques , avoiding compounds of even moderate acidity . Although imidazolium salts are stable to nucleophilic addition, other non-aromatic salts are not (e.g. formamidinium salts). [ 75 ] In these cases, strong unhindered nucleophiles are avoided whether they are generated in situ or are present as an impurity in other reagents (such as LiOH in BuLi). Several approaches have been developed in order to prepare stable carbenes; these are outlined below. Deprotonation of carbene precursor salts with strong bases has proved a reliable route to almost all stable carbenes: Imidazol-2-ylidenes and dihydroimidazol-2-ylidenes, such as IMes , have been prepared by the deprotonation of the respective imidazolium and dihydroimidazolium salts. The acyclic carbenes [ 28 ] [ 31 ] and the tetrahydropyrimidinyl [ 44 ] based carbenes were prepared by deprotonation using strong homogeneous bases. Several bases and reaction conditions have been employed with varying success. The degree of success has been principally dependent on the nature of the precursor being deprotonated. The major drawback with this method of preparation is the problem of isolation of the free carbene from the metal ions used in their preparation. One might believe that sodium or potassium hydride [ 27 ] [ 33 ] would be the ideal base for deprotonating these precursor salts. The hydride should react irreversibly with the loss of hydrogen to give the desired carbene, with the inorganic by-products and excess hydride being removed by filtration. In practice this reaction is often too slow, requiring the addition of DMSO or t -BuOH . [ 18 ] [ 26 ] These reagents generate soluble catalysts , which increase the rate of reaction of this heterogeneous system, via the generation of tert-butoxide or dimsyl anion . However, these catalysts have proved ineffective for the preparation of non-imidazolium adducts as they tend to act as nucleophiles towards the precursor salts and in so doing are destroyed. The presence of hydroxide ions as an impurity in the metal hydride could also destroy non-aromatic salts. Deprotonation with sodium or potassium hydride in a mixture of liquid ammonia / THF at −40 °C has been reported [ 36 ] for imidazole-based carbenes. Arduengo and coworkers [ 33 ] managed to prepare a dihydroimidazol-2-ylidene using NaH. However, this method has not been applied to the preparation of diaminocarbenes. In some cases, potassium tert-butoxide can be employed without the addition of a metal hydride. [ 26 ] The use of alkyllithiums as strong bases [ 18 ] has not been extensively studied, and they have proved unreliable for the deprotonation of precursor salts. 
With non-aromatic salts, n-BuLi and PhLi can act as nucleophiles whilst t-BuLi can on occasion act as a source of hydride, reducing the salt with the generation of isobutene : Lithium amides such as the diisopropylamide (LDA) and the tetramethylpiperidide (LiTMP) [ 28 ] [ 31 ] generally work well for the deprotonation of all types of salts, providing that not too much LiOH is present in the n -butyllithium used to make the lithium amide. Titration of lithium amide can be used to determine the amount of hydroxide in solution. The deprotonation of precursor salts with metal hexamethyldisilazides [ 44 ] works very cleanly for the deprotonation of all types of salts, except for unhindered formamidinium salts, where this base can act as a nucleophile to give a triaminomethane adduct. The preparation of stable carbenes free from metal cations has been keenly sought to allow further study of the carbene species in isolation from these metals. Separating a carbene from a carbene-metal complex can be problematic due to the stability of the complex. Accordingly, it is preferable to make the carbene free from these metals in the first place. Indeed, some metal ions, rather than stabilising the carbene, have been implicated in the catalytic dimerisation of unhindered examples. An X-ray structure has been obtained of a complex between a diaminocarbene and potassium HMDS . This complex was formed when excess KHMDS was used as a strong base to deprotonate the formamidinium salt. Removing lithium ions resulting from deprotonation with reagents such as lithium diisopropylamide (LDA) can be especially problematic. Potassium and sodium salt by-products tend to precipitate from solution and can be removed. Lithium ions may be chemically removed by binding to species such as cryptands or crown ethers . Metal free carbenes have been prepared in several ways as outlined below: Another approach to preparing carbenes has relied on the desulfurisation of thioureas with potassium in THF . [ 29 ] [ 76 ] A contributing factor to the success of this reaction is that the byproduct, potassium sulfide , is insoluble in the solvent. The elevated temperatures suggest that this method is not suitable for the preparation of unstable dimerising carbenes. A single example of the deoxygenation of a urea with a fluorene derived carbene to give the tetramethyldiaminocarbene and fluorenone has also been reported: [ 77 ] The desulfurisation of thioureas with molten potassium to give imidazol-2-ylidenes or diaminocarbenes has not been widely used. The method was used to prepare dihydroimidazole carbenes. [ 29 ] Vacuum pyrolysis, with the removal of neutral volatile byproducts such as methanol or chloroform, has been used to prepare dihydroimidazole and triazole based carbenes. Historically the removal of chloroform by vacuum pyrolysis of adducts A was used by Wanzlick [ 8 ] in his early attempts to prepare dihydroimidazol-2-ylidenes but this method is not widely used. The Enders laboratory [ 43 ] has used vacuum pyrolysis of adduct B to generate a triazol-5-ylidene. Bis(trimethylsilyl)mercury (CH 3 ) 3 Si-Hg-Si(CH 3 ) 3 reacts with chloro- iminium and chloro- amidinium salts to give a metal-free carbene and elemental mercury . [ 78 ] For example: Persistent triplet state carbenes have been prepared by photochemical decomposition of a diazomethane precursor via the expulsion of nitrogen gas, at a wavelength of 300 nm in benzene. 
Stable carbenes are very reactive, and so the minimum amount of handling is desirable using air-free techniques . However, provided rigorously dry, relatively non-acidic and air-free materials are used, stable carbenes are reasonably robust to handling per se . By way of example, a stable carbene prepared from potassium hydride can be filtered through a dry celite pad to remove excess KH (and resulting salts) from the reaction. On a relatively small scale, a suspension containing a stable carbene in solution can be allowed to settle and the supernatant solution pushed through a dried membrane syringe filter . Stable carbenes are readily soluble in non-polar solvents such as hexane, and so typically recrystallisation of stable carbenes can be difficult, due to the unavailability of suitable non-acidic polar solvents. Air-free sublimation can be an effective method of purification, although temperatures below 60 °C under high vacuum are preferable as these carbenes are relatively volatile and also could begin to decompose at these higher temperatures. Indeed, sublimation in some cases can give single crystals suitable for X-ray analysis. However, strong complexation to metal ions like lithium will in most cases prevent sublimation. 
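The diagnostic carbenic-carbon 13 C NMR shift ranges quoted above for the different carbene families can be gathered into a small lookup. The following Python sketch is purely illustrative: the ranges are the approximate values cited in this article, not sharp boundaries, and real shifts also depend on substituents, solvent and metal coordination. It simply reports which families are consistent with a measured shift.

# Illustrative only: approximate carbenic-carbon 13C NMR shift ranges (ppm)
# quoted in this article; they overlap and are not definitive assignments.
SHIFT_RANGES_PPM = {
    "imidazol-2-ylidenes": (210, 230),
    "triazol-5-ylidenes": (210, 220),
    "non-aromatic diaminocarbenes": (230, 270),
    "aminooxy-/aminothiocarbenes": (250, 300),
}

def candidate_families(shift_ppm):
    """Return the carbene families whose quoted range covers the observed shift."""
    return [name for name, (low, high) in SHIFT_RANGES_PPM.items() if low <= shift_ppm <= high]

if __name__ == "__main__":
    for observed in (211.0, 238.0, 255.0):
        print(observed, "ppm ->", candidate_families(observed) or ["outside quoted ranges"])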
https://en.wikipedia.org/wiki/Persistent_carbene
In physics , persistent current is a perpetual electric current that does not require an external power source. Such a current is impossible in normal electrical devices, since all commonly used conductors have a non-zero resistance, and this resistance would rapidly dissipate any such current as heat. However, in superconductors and some mesoscopic devices , persistent currents are possible and observed due to quantum effects . In resistive materials, persistent currents can appear in microscopic samples due to size effects. Persistent currents are widely used in the form of superconducting magnets . In electromagnetism, all magnetizations can be seen as microscopic persistent currents. By definition a magnetization M {\displaystyle \mathbf {M} } can be replaced by its corresponding microscopic form, which is an electric current density: J b = ∇ × M {\displaystyle \mathbf {J} _{b}=\nabla \times \mathbf {M} } . This current is a bound current, not having any charge accumulation associated with it since it is divergenceless . What this means is that any permanently magnetized object, for example a piece of lodestone , can be considered to have persistent electric currents running throughout it (the persistent currents are generally concentrated near the surface). The converse is also true: any persistent electric current is divergence-free, and can therefore be represented instead by a magnetization. Therefore, in the macroscopic Maxwell's equations , it is purely a choice of mathematical convenience, whether to represent persistent currents as magnetization or vice versa. In the microscopic formulation of Maxwell's equations, however, M {\displaystyle \mathbf {M} } does not appear and so any magnetizations must be instead represented by bound currents. In superconductors , charge can flow without any resistance. It is possible to make pieces of superconductor with a large built-in persistent current, either by creating the superconducting state (cooling the material) while charge is flowing through it, or by changing the magnetic field around the superconductor after creating the superconducting state. [ 1 ] This principle is used in superconducting electromagnets to generate sustained high magnetic fields that only require a small amount of power to maintain. The persistent current was first identified by H. Kamerlingh Onnes , and attempts to set a lower bound on its duration have reached values of over 100,000 years. [ 2 ] Surprisingly, it is also possible to have tiny persistent currents inside resistive metals that are placed in a magnetic field, even in metals that are nominally "non-magnetic". [ 4 ] The current is the result of a quantum mechanical effect that influences how electrons travel through metals, and arises from the same kind of motion that allows the electrons inside an atom to orbit the nucleus forever. This type of persistent current is a mesoscopic low temperature effect: the magnitude of the current becomes appreciable when the size of the metallic system is reduced to the scale of the electron quantum phase coherence length and the thermal length. Persistent currents decrease with increasing temperature and will vanish exponentially above a temperature known as the Thouless temperature. This temperature scales as the inverse of the circuit diameter squared. [ 3 ] Consequently, it has been suggested that persistent currents could flow up to room temperature and above in nanometric metal structures such as metal (Au, Ag,...) nanoparticles. 
This hypothesis has been offered for explaining the singular magnetic properties of nanoparticles made of gold and other metals. [ 5 ] Unlike with superconductors, these persistent currents do not appear at zero magnetic field, as the current fluctuates symmetrically between positive and negative values; the magnetic field breaks that symmetry and allows a nonzero average current. Although the persistent current in an individual ring is largely unpredictable due to uncontrolled factors like the disorder configuration, it has a slight bias so that an average persistent current appears even for an ensemble of conductors with different disorder configurations. [ 6 ] This kind of persistent current was first predicted to be experimentally observable in micrometer-scale rings in 1983 by Markus Büttiker, Yoseph Imry , and Rolf Landauer . [ 7 ] Because the effect requires the phase coherence of electrons around the entire ring, the current cannot be observed when the ring is interrupted by an ammeter and thus the current must be measured indirectly through its magnetization . In fact, all metals exhibit some magnetization in magnetic fields due to a combination of the de Haas–van Alphen effect , core diamagnetism , Landau diamagnetism and Pauli paramagnetism , which all appear regardless of the shape of the metal. The additional magnetization from persistent current becomes strong with a connected ring shape, and for example would disappear if the ring were cut. [ 6 ] Experimental evidence of the observation of persistent currents was first reported in 1990 by a research group at Bell Laboratories using a superconducting resonator to study an array of copper rings. [ 8 ] Subsequent measurements using superconducting resonators and extremely sensitive magnetometers known as superconducting quantum interference devices (SQUIDs) produced inconsistent results. [ 9 ] In 2009, physicists at Stanford University using a scanning SQUID [ 10 ] and at Yale University using microelectromechanical cantilevers [ 3 ] reported measurements of persistent currents in nanoscale gold and aluminum rings respectively that both showed a strong agreement with the simple theory for non-interacting electrons. "These are ordinary, non-superconducting metal rings, which we typically think of as resistors, yet these currents will flow forever, even in the absence of an applied voltage." The 2009 measurements both reported greater sensitivity to persistent currents than previous measurements and made several other improvements to persistent current detection. The scanning SQUID's ability to change the position of the SQUID detector relative to the ring sample allowed for a number of rings to be measured on one sample chip and better extraction of the current signal from background noise . The cantilever detector's mechanical detection technique made it possible to measure the rings in a clean electromagnetic environment over a large range of magnetic field and also to measure a number of rings on one sample chip. [ 11 ]
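As a concrete illustration of the bound-current relation J = ∇ × M given above, the following Python sketch evaluates the curl numerically for a uniformly magnetized disc on a 2-D grid. The grid size, disc radius and magnetization value are arbitrary illustrative choices; the point is only that the equivalent current density comes out concentrated at the rim of the disc, matching the statement that the persistent currents of a magnetized object flow mainly near its surface.

import numpy as np

# Illustrative parameters (arbitrary units)
n, radius, Mz0 = 200, 0.5, 1.0
x = np.linspace(-1.0, 1.0, n)
y = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="ij")

# Out-of-plane magnetization: uniform inside a disc, zero outside
Mz = np.where(X**2 + Y**2 <= radius**2, Mz0, 0.0)

# Bound current density J = curl M; for M = (0, 0, Mz(x, y)):
#   J_x = dMz/dy,  J_y = -dMz/dx
dMz_dx, dMz_dy = np.gradient(Mz, x, y)
Jx, Jy = dMz_dy, -dMz_dx
J_mag = np.hypot(Jx, Jy)

# The equivalent current is (numerically) concentrated near the rim of the disc
rim = np.abs(np.hypot(X, Y) - radius) < 0.05
print("mean |J| near the rim:", J_mag[rim].mean())
print("mean |J| deep inside: ", J_mag[np.hypot(X, Y) < 0.3].mean())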
https://en.wikipedia.org/wiki/Persistent_current
Commonly referred to as phosphorescence , persistent luminescence is the emission of light by a phosphorescent material after an excitation by ultraviolet or visible light . The mechanism underlying this phenomenon is not fully understood. [ 1 ] It is neither fluorescence nor phosphorescence . [ 2 ] [ 3 ] In fluorescence, the lifetime of the excited state lasts a few nanoseconds. In phosphorescence, even if the emission lives several seconds, this is due to deexcitation between two electronic states of different spin multiplicity . Persistent luminescence involves energy traps (such as electron or hole traps) in a material, [ 4 ] which are filled during the excitation. Afterward, the stored energy is gradually released to light emitter centers, usually by a fluorescence-like mechanism. Persistent luminescence materials are mainly used in safety signs, watch dials, decorative objects and toys. [ 5 ] They have also been used as nanoprobes in small animal optical imaging. [ 6 ]
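The trap-and-release picture described above is sometimes approximated by simple first-order detrapping kinetics. The Python sketch below is a toy model under that assumption only (the article notes the real mechanism is not fully understood): a population of filled traps empties at a thermally activated rate, and the afterglow intensity is taken as proportional to the release rate. The trap depth and attempt factor are invented illustrative values.

import math

# Toy first-order detrapping model (illustrative assumptions only)
k_B = 8.617e-5        # Boltzmann constant, eV/K
E_trap = 0.7          # assumed trap depth, eV
s = 1.0e10            # assumed attempt ("frequency") factor, 1/s
T = 300.0             # temperature, K

rate = s * math.exp(-E_trap / (k_B * T))    # detrapping rate constant, 1/s

n0 = 1.0              # filled traps right after excitation stops (normalised)
for t in (0.0, 10.0, 60.0, 600.0, 3600.0):  # seconds after excitation stops
    n = n0 * math.exp(-rate * t)            # remaining filled traps
    intensity = rate * n                    # afterglow taken as proportional to release rate
    print(f"t = {t:7.0f} s   filled traps = {n:.3e}   relative intensity = {intensity:.3e}")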
https://en.wikipedia.org/wiki/Persistent_luminescence
The persistent radical effect (PRE) in chemistry describes and explains the selective product formation found in certain free-radical cross-reactions. In these types of reactions, different radicals compete in secondary reactions. The so-called persistent (long-lived) radicals do not self-terminate and only react in cross-couplings. In this way, the cross-coupling products in the product distribution are more prominent. [ 1 ] [ 2 ] [ 3 ] The effect was first described in 1936 by Bachmann & Wiselogle. [ 4 ] They heated pentaphenylethane and observed that the main reaction product was the starting material (87%) with only 2% of tetraphenylethane formed. They concluded that the dissociation of pentaphenylethane into triphenylmethyl and diphenylmethyl radicals was reversible and that persistent triphenylmethyl did not self-terminate while transient diphenylmethyl did to a certain extent. [ 1 ] In 1964, Perkins [ 1 ] [ 5 ] [ 6 ] performed a similar reaction with phenylazotriphenylmethane in benzene . Again, the dimerization product of the persistent radical (phenylcyclohexadienyl) was absent as a reaction product. In 1981, Geiger and Huber found that the photolysis of dimethylnitrosamine into dimethylaminyl radical and nitric oxide was also completely reversible. [ 2 ] [ 7 ] A similar effect was observed by Kräutler in 1984 for methylcobalamin . [ 8 ] [ 9 ] The term 'persistent radical effect' was coined in 1992 by Daikh and Finke in their work related to the thermolysis of a cyanocobalamin model compound. [ 10 ] The PRE is a kinetic feature which provides a self-regulating effect in certain controlled/living radical polymerization systems such as atom transfer radical polymerization and nitroxide mediated polymerization . Propagating radicals P n * are rapidly trapped in the deactivation process (with a rate constant of deactivation, k deact ) by species X , which is typically a stable radical such as a nitroxide. The dormant species are activated (with a rate constant k act ) either spontaneously/thermally, in the presence of light, or with an appropriate catalyst (as in ATRP) to reform the growing centers. Radicals can propagate ( k p ) but also terminate ( k t ). However, persistent radicals ( X ), as stated above, cannot terminate with each other but only (reversibly) cross-couple with the growing species ( k deact ). Thus, every act of radical–radical termination is accompanied by the irreversible accumulation of X . Consequently, the concentration of radicals as well as the probability of termination decreases with time. The growing radicals (established through the activation–deactivation process) then predominantly react with X rather than with themselves. [ 11 ]
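The self-regulating kinetics described above can be illustrated with a minimal rate-equation sketch. The Python code below integrates a deliberately simplified scheme: activation of a dormant species P–X (rate constant k_act), reversible deactivation by the persistent radical X (k_deact), and irreversible self-termination of the transient radical (k_t); propagation is omitted because it does not change the radical count. The rate constants and concentrations are assumed, order-of-magnitude values, not data for any real system. The output shows the key feature of the PRE: every termination event irreversibly adds to X, so [X] accumulates while the transient radical concentration, and with it the termination rate, falls.

from scipy.integrate import solve_ivp

# Assumed, illustrative rate constants (s^-1, M^-1 s^-1, M^-1 s^-1)
k_act, k_deact, k_t = 1.0e-3, 1.0e7, 1.0e8

def rates(t, c):
    PX, P, X = c                      # dormant species, transient radical, persistent radical
    act = k_act * PX                  # activation (homolysis) of the dormant species
    deact = k_deact * P * X           # reversible cross-coupling ("deactivation")
    term = k_t * P * P                # irreversible self-termination of transient radicals
    return [deact - act,              # d[PX]/dt
            act - deact - 2.0 * term, # d[P]/dt
            act - deact]              # d[X]/dt: X accumulates only because P is lost to termination

sol = solve_ivp(rates, (0.0, 100.0), [0.01, 0.0, 0.0], method="LSODA",
                t_eval=[0.1, 1.0, 10.0, 100.0], rtol=1e-8, atol=1e-12)
for t, PX, P, X in zip(sol.t, *sol.y):
    print(f"t = {t:6.1f} s   [P] = {P:.2e} M   [X] = {X:.2e} M")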
https://en.wikipedia.org/wiki/Persistent_radical_effect
Persister cells are subpopulations of cells that resist treatment, and become antimicrobial tolerant by changing to a state of dormancy or quiescence. [ 1 ] [ 2 ] Persister cells in their dormancy do not divide. [ 3 ] The tolerance shown in persister cells differs from antimicrobial resistance in that the tolerance is not inherited and is reversible. [ 4 ] When treatment has stopped the state of dormancy can be reversed and the cells can reactivate and multiply. Most persister cells are bacterial, and there are also fungal persister cells, [ 5 ] yeast persister cells, and cancer persister cells that show tolerance for cancer drugs . [ 6 ] Recognition of bacterial persister cells dates back to 1944 when Joseph Warwick Bigger , an Irish physician working in England, was experimenting with the recently discovered penicillin . Bigger used penicillin to lyse a suspension of bacteria and then inoculate a culture medium with the penicillin-treated liquid. Colonies of bacteria were able to grow after antibiotic exposure. The important observation that Bigger made was that this new population could again be almost eliminated by the use of penicillin except for a small residual population. Hence the residual organisms were not antibiotic resistant mutants but rather a subpopulation of what he called ‘persisters’. [ 7 ] The formation of bacterial persisters is now known to be a common phenomenon that can occur by the formation of persister cells prior to the antibiotic treatment [ 8 ] or in response to a variety of antibiotics. [ 9 ] Antimicrobial tolerance is achieved by a small subpopulation of microbial cells termed persisters. [ 7 ] Persisters are not mutants, but rather are dormant cells that can survive the antimicrobials that effectively eliminate their much greater number. Persister cells have entered a non-growing, or extremely slow-growing physiological state which makes them tolerant (insensitive or refractory) to the action of antimicrobials. When such persisting pathogenic microbes cannot be eliminated by the immune system, they become a reservoir from which recurrence of infection will develop. [ 10 ] Such non-growing bacteria have been observed to persist during infections from Salmonella . [ 11 ] Persister cells are the main cause of relapsing and chronic infections. [ 2 ] [ 5 ] The bacteria species Listeria monocytogenes , the main causal agent of listeriosis , has been shown to demonstrate persistence during infection in hepatocyte and trophoblast cells. The usual active lifestyle can change and the bacteria can remain in intracellular vacuoles entering into a slow non-growing state of persistence thus promoting their survival from antibiotics. [ 12 ] Fungal persister cells are a common cause of recurring infections due to Candida albicans a common biofilm infection of implants. [ 5 ] Antibiotic tolerance poses medically important challenges. It is largely responsible for the inability to eradicate bacterial infections with antibiotic treatment. Persister cells are highly enriched in biofilms , and this makes biofilm-related diseases difficult to treat. Examples are chronic infections of implanted medical devices such as catheters and artificial joints, urinary tract infections , middle ear infections and fatal lung disease. [ 13 ] Unlike multiple drug resistance , and antimicrobial resistance, antimicrobial tolerance is transient, and not inherited. [ 2 ] [ 7 ] [ 10 ] Antibiotic tolerant persister cells are not antibiotic resistant mutants. 
Resistance is caused by newly acquired genetic traits (by mutation or horizontal gene transfer ) that are heritable and confer the ability to grow at elevated concentrations of antibiotics. In contrast, tolerant bacteria have the same minimum inhibitory concentration (MIC) as susceptible bacteria, [ 3 ] and differ in the duration of the treatment that they can survive. Antibiotic tolerance can be caused by a reversible physiological state in a small subpopulation of genetically identical cells, [ 2 ] [ 7 ] [ 10 ] similar to a differentiated cell type. [ 14 ] It enables this small subpopulation of bacteria to survive antibiotic treatment that would otherwise eliminate them. Persisting cells resume growth when the antibiotic is removed, and their progeny is sensitive to antibiotics. [ 2 ] [ 7 ] [ 10 ] The molecular mechanisms that underlie persister cell formation and antimicrobial tolerance are largely unknown. [ 2 ] [ 10 ] Persister cells are thought to arise spontaneously in a growing microbial population by a stochastic genetic switch, [ 10 ] [ 4 ] although inducible mechanisms of persister cell formation have been described. [ 10 ] [ 15 ] For instance, toxin-antitoxin systems , [ 16 ] and a number of different stress responses such as the SOS response , [ 15 ] the envelope stress response , [ 17 ] and the starvation response have also been associated with persister cell formation in biofilms. [ 18 ] Owing to their transient nature and relatively low abundance, it is hard to isolate persister cells in sufficient numbers for experimental characterization, and only a few relevant genes have been identified to date. [ 2 ] [ 10 ] The best-understood persistence factor is the E. coli high persistence gene, commonly abbreviated as hipA . [ 19 ] Although tolerance is widely considered a passive state, there is evidence indicating it can be an energy-dependent process. [ 20 ] Persister cells in E. coli can transport intracellular accumulations of antibiotic using an energy-requiring efflux pump called TolC. [ 21 ] A persister subpopulation has also been demonstrated in the budding yeast Saccharomyces cerevisiae . Yeast persisters are triggered in a small subset of unperturbed exponentially growing cells by spontaneously occurring DNA damage, which leads to the activation of a general stress response and protection against a range of harsh drug and stress environments. As a result of the DNA damage, yeast persisters are also enriched for random genetic mutations that occurred prior to the stress, and are unrelated to the stress survival. [ 22 ] In response to antifungals, fungal persister cells activate stress-response pathways, and two stress-protective molecules – glycogen and trehalose – accumulate in large amounts. [ 5 ] A study has shown that adding certain metabolites to aminoglycosides could enable bacterial persisters to be eliminated. This study was carried out on a number of bacterial species including E. coli and S. aureus . [ 23 ] Phage therapy , where applicable, is thought to entirely circumvent antibiotic tolerance, [ 24 ] [ 25 ] though phages themselves may be capable of inducing the persister state. [ 26 ]
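A simple way to visualise the difference between resistance and tolerance described above is the biphasic killing curve commonly associated with persisters. The following Python sketch uses a toy two-subpopulation model with invented kill rates and an assumed persister fraction; it is illustrative only, but reproduces the characteristic shape: the bulk population is killed quickly while the small persister fraction declines far more slowly, so the number of survivors depends mainly on how long treatment lasts rather than on the antibiotic concentration.

import math

# Toy model (assumed numbers): ordinary cells are killed quickly by the antibiotic,
# a small dormant persister subpopulation is killed much more slowly.
N0 = 1.0e8                  # starting population (CFU/ml)
persister_fraction = 1e-5   # assumed fraction of dormant persisters
k_normal = 2.0              # kill rate of ordinary cells (per hour)
k_persister = 0.02          # kill rate of persisters (per hour)

for hours in (0, 1, 2, 4, 8, 24, 48):
    normal = N0 * (1 - persister_fraction) * math.exp(-k_normal * hours)
    persisters = N0 * persister_fraction * math.exp(-k_persister * hours)
    total = normal + persisters
    print(f"{hours:3d} h of treatment: {total:10.3e} survivors "
          f"({persisters:9.3e} of them persisters)")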
https://en.wikipedia.org/wiki/Persister_cells
A personal communications service ( PCS ) is a set of communications capabilities that provide a combination of terminal mobility, personal mobility , and service profile management. [ 1 ] This class of services comprises several types of wireless voice or wireless data communications systems, typically incorporating digital technology, providing services similar to advanced cellular mobile or paging services. In addition, PCS can also be used to provide other wireless communications services, including services that allow people to place and receive communications while away from their home or office, as well as wireless communications to homes, office buildings and other fixed locations. [ 2 ] Described in more commercial terms, PCS is a generation of wireless cellular-phone technology that combines a range of features and services surpassing those available in analogue- and first-generation ( 2G ) digital-cellular phone systems, providing a user with an all-in-one wireless phone, paging, messaging, and data service. [ 3 ] The International Telecommunication Union (ITU) describes personal communications services as a component of the IMT-2000 ( 3G ) standard. PCS and the IMT-2000 standard of which PCS is a part do not specify a particular air interface and channel access method . Wireless service providers may deploy equipment using any of several air interface and channel access methods, as long as the network meets the service description for technical characteristics described in the standard. [ 4 ] In ITU Region 2, PCS are provided in the '1900 MHz ' band (specifically 1850–1995 MHz). [ 5 ] This frequency band was designated by the United States Federal Communications Commission (FCC) and Industry Canada to be used for new wireless services to alleviate capacity caps inherent in the original Advanced Mobile Phone System (AMPS) and Digital AMPS (D-AMPS) cellular networks in the '850 MHz' band (specifically 814–894 MHz). Only Region 2 has a PCS band. In the United States, Sprint PCS was the first company to build and operate a PCS network, launching service in November 1995 under the Sprint Spectrum brand in the Baltimore-Washington metropolitan area . Sprint originally built the network using GSM radio interface equipment. Sprint PCS later selected CDMA as the radio interface for its nationwide network, and built a parallel CDMA network in the Baltimore-Washington area, launching service in 1997. Sprint operated the two networks in parallel until finishing a migration of its area customers to the CDMA network. After completing the customer migration, Sprint PCS sold [ when? ] the GSM radio interface network equipment to Omnipoint Communications in January 2000. [ 6 ] Omnipoint was later purchased by VoiceStream Wireless [ when? ] which subsequently became T-Mobile US . In August 2022, T-Mobile US announced dead-zone cell phone coverage across the US using "midband" (1900 MHz) PCS spectrum [ 7 ] and Starlink Gen2 satellite cell coverage, to begin testing in 2023. Using this satellite and midband spectrum, T-Mobile plans to be able to connect by satellite to common mobile devices , unlike previous generations of satellite phones which used specialized Earth-bound radios to connect to geosynchronous satellites with characteristic long lag time in communications. [ 8 ] [ 9 ] ITU Regions 1 and 3 (Eurasia, Africa) do not have a PCS band. The comparable technology in the context of GSM is GSM-1800 , also known as " Digital Cellular System " (DCS). 
[ 10 ] GSM-1800 launched in Hong Kong in 1997. It can form dual band service with GSM at 900MHz. This frequency was inherited into UMTS , LTE , and 5G NR . Korea, which has never used GSM, runs CDMA on 1800 MHz. See CDMA frequency bands .
https://en.wikipedia.org/wiki/Personal_Communications_Service
The Personal Genome Project (PGP) is a long term, large cohort study which aims to sequence and publicize the complete genomes and medical records of 100,000 volunteers, in order to enable research into personal genomics and personalized medicine . It was initiated by Harvard University 's George M. Church in 2005. [ 1 ] [ 2 ] [ 3 ] As of November 2017, more than 10,000 volunteers had joined the project. Volunteers were accepted initially if they were permanent residents of the US and were able to submit tissue and/or genetic samples. Later the project was expanded to other countries. [ citation needed ] The Project was initially launched in the US in 2005 [ 1 ] and later extended to Canada (2012), [ 4 ] United Kingdom (2013), [ 5 ] Austria (2014), [ 6 ] Korea (2015) [ 7 ] and China (2017). [ 8 ] The project allowed participants to publish the genotype (the full DNA sequence of all 46 chromosomes ) of the volunteers, along with extensive information about their phenotype : medical records, various measurements, MRI images, etc. All data were placed within the public domain and made available over the Internet so that researchers could test various hypotheses about the relationships among genotype , environment and phenotype . Participants could decide what data they are comfortable to publish publicly and could choose to upload additional data or remove existing data at their own convenience. [ citation needed ] An important part of the project was the exploration of the resulting risks to the participants, such as possible discrimination by insurers and employers if the genome shows a predisposition for certain diseases. [ citation needed ] The PGP is establishing an international network of sites, including the United States (Harvard PGP), Canada (University of Toronto / Hospital for Sick Kids), and other countries that adhere to certain "conforming implementation" criteria such as no promise of anonymity and data return. [ 9 ] The Harvard Medical School Institutional Review Board requested that the first set of volunteers include the principal investigator George Church and other diverse stakeholders in the scientific, medical, and social implications of personal genomes, because they were well positioned to give highly informed consent . As sequencing technology becomes cheaper, and the societal issues mentioned above are worked out, it was hoped that a large number of volunteers from all walks of life would participate. The long-term goal was that every person have access to his or her genotype to be used for personalized medical decisions. [ citation needed ] The first ten volunteers were referred to as the "PGP-10". These volunteers were: In order to enroll, each participant must pass a series of short online tests to ensure that they are providing informed consent . [ 11 ] By 2012, 2000 participants had enrolled [ 12 ] and by November 2017 10,000 had joined the project. [ 8 ] In July 2014, at the 'Genetics, Genomics and Global Health—Inequalities, Identities and Insecurities' conference, Stephan Beck , the head of the UK arm of this project indicated that they had over 1000 volunteers, and had temporarily paused collection data due to lack of funding. As of November 2016, the pause was still in effect. [ 13 ] Since 2016, participants of the PGP could choose to obtain their whole-genome sequenced performed for $999. [ 14 ] In the same year Complete Genomics contributed over 184 phased human genomes to the project. 
[ 15 ] In February 2018, the results were published of the first 56 Canadian participants who had their whole genome analyzed. [ 16 ] Several DNA mutations that would have been expected by expert consensus to affect health of the participants had not done so, indicating that getting health data from the human genome was difficult. [ 17 ] On March 9, 2017, producers of the popular online brain-training program Lumosity announced they would collaborate with Harvard researchers to investigate the relationship between genetics and memory, attention, and reaction speed. [ 18 ] [ 19 ] Scientists at the Wyss Institute for Biologically Inspired Engineering and the Harvard Medical School Personal Genome Project (PGP) planned to recruit 10,000 members from the PGP, to perform a set of cognitive tests from Lumos Labs’ NeuroCognitive Performance Test, a brief, repeatable, online assessment to evaluate participants’ memory functions, including object recall, object pattern memorization, and response times. The researchers would then correlate extremely high performance scores with naturally occurring variations in the participants’ genomes. To validate their findings, the team would sequence, edit, and visualize DNA, model neuronal development in 3-D brain organoids ex vivo, and finally test emerging hypotheses in experimental models of neurodegeneration. [ citation needed ]
https://en.wikipedia.org/wiki/Personal_Genome_Project
In telecommunication , a personal communications service is defined by the Alliance for Telecommunications Industry Solutions (ATIS) as "a set of capabilities that allows some combination of personal mobility, terminal mobility, and service profile management". [ 1 ] Personal communications services use a special non-geographic area code of the format 5XX for assigning telephone numbers for service instances. [ 2 ] The designation of the 5XX area code format was authorized by the United States Federal Communications Commission , and introduced into the North American Numbering Plan in 1995. In 1993, the North American Numbering Plan designated area code 500 for personal communication services. An initial service concept was that customers could move a given seven-digit telephone number when relocating between numbering plan areas. The 500-code would thus be a non-geographic area code. In 1995, AT&T introduced a "follow-me" service under the brand name of AT&T True Connections using area code 500. It was designed to replace the AT&T EasyReach 700 service. [ 3 ] Other local exchange carriers and interexchange carriers introduced similar competitive services. With the rise of mobile devices such as mobile phones and pagers, making the service all but superfluous, AT&T True Connections and their competitors failed quickly. Companies, hotels, and others with PBX equipment continued to block the dialing of 500 because it was a caller-paid number. It was also misused by premium rate services such as phone sex lines, with it being used to forward calls to various foreign countries. [ 4 ] The prefix was also used by some Internet service providers to allow non-subscribers to dial into their systems for dial-up Internet access . [ 5 ] In 1996, AT&T attempted to migrate users to its revised service called "Personal Reach" 800, built on a toll-free (receiver-paid) platform rather than the original (caller-paid) 500 program. [ 6 ] [ 7 ] AT&T has a US patent (5,907,811) on "personal reach service". [ 8 ] AT&T then licensed and transferred all personal reach services to MCE, Inc. MCE was supposedly the company providing the back-end system for all personal reach services to AT&T. No public information was released on the transfer away from AT&T. Subscribers were notified by mail that bills would begin to arrive from MCE instead of AT&T. It is also believed that MCE is a subsidiary of EMNS, Inc., a web hosting company in Chicago. MCE continues to supply personal reach service using the AT&T transport network. [ citation needed ] AT&T discontinued AT&T True Connections in 2000, following the Federal Communications Commission approval of its tariff to cease providing the service. The numbering resources are now designated 5XX-NXX in NANP and are used for machine to machine communication. [ 9 ] Although AT&T no longer uses the 500 code, it was supplemented by 533 in 2009, followed by 544 in December 2010. The 566 code was activated in April 2012. [ 10 ] In March 2014, the 577 code was also activated. [ 11 ] In September 2015 the 588 code was activated. In August 2016 the 522 code was activated. In September 2017 the 521 code was activated. In November 2018 the 523 code was activated. In September 2019 the 524 code was activated. [ 12 ] In July 2020 the 525 code was activated. [ 13 ] In January 2021 the 526 code was activated. [ 14 ] Other codes in reserve for this use: 527, 528, 529, 532, 535, 538, 542, 543, 545, 546, 547, 549, 550, 552, 553, 554, 556, 558, 569, 578, and 589. 
[ 15 ] [ 16 ] [ 17 ] On February 11, 2022, 103 of the 5XX-NXX codes were reported as available for assignment and the code 528 was designated as the next NPA code to be assigned. [ 18 ] On March 2, 2022, the NANPA announced the initiation of NPA 528. [ 19 ] On August 29, 2022, 193 of the 5XX-NXX codes were reported as available for assignment and the code 529 was designated as the next NPA code to be assigned. [ 20 ] On September 16, 2022, the NANPA announced the initiation of NPA 529. [ 21 ] 2024: NPA 532 activated In 2015, the Canadian Radio-television and Telecommunications Commission (CRTC) approved the Canadian Non-Geographic Code Assignment Guideline and the assignment of the 622, 633, 644, 655, 677, and 688 non-geographic numbering plan area (NPA) codes to meet the demand for telephone numbers related to technologies such as machine-to-machine applications. The first 6YY NPA to be used is 622 NPA, with additional numbers requested when 622 approaches exhaustion. [ 22 ]
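The non-geographic 5XX codes activated so far, as listed above, can be collected into a simple lookup. The short Python sketch below is illustrative only; the set is a snapshot and will change as further codes are activated from the reserve list. It merely checks whether a ten-digit NANP number falls in one of these personal-communications/machine-to-machine area codes.

# Snapshot of the non-geographic 5XX NPAs activated per the dates given above (illustrative).
ACTIVATED_5XX = {
    "500", "533", "544", "566", "577", "588", "522", "521",
    "523", "524", "525", "526", "528", "529", "532",
}

def is_personal_communications_number(number: str) -> bool:
    """True if a 10-digit NANP number uses one of the activated 5XX codes."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return len(digits) == 10 and digits[:3] in ACTIVATED_5XX

print(is_personal_communications_number("500-555-0199"))  # True
print(is_personal_communications_number("212-555-0199"))  # False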
https://en.wikipedia.org/wiki/Personal_communications_service_(NANP)
Personal environmental impact accounting (PEIA) is a computer software-based methodology developed in 1992 by Don Lotter [ 1 ] [ 2 ] [ 3 ] for quantifying an individual's impact on the environment via analysis of answers to an extensive quantity-based questionnaire that the individual fills out regarding their lifestyle. The questions are arranged in six areas: home energy and water, transportation, consumerism, waste, advocacy, and demographics. Lotter, at the time a graduate student in ecology at the University of California, Davis , [ 4 ] developed the PEIA methodology while teaching a course on the History of Western Consciousness in the UC Davis Experimental College. [ 5 ] He realized that, while individuals in contemporary Western society generally have an enormous environmental impact , most were unaware of it, and no method existed for its quantification or assessment . The first software version of the PEIA methodology was the DOS -based EnviroAccount software, written in QuickBasic and completed in 1992. The program asked users 115 questions, then provided a score to indicate the user's personal environmental impact. [ 4 ] Lotter later created EarthAware, released in 1996, which built from EnviroAccount, and ran on Windows 3.1 . EarthAware provided internet links for users to learn more about their environmental impact. [ 5 ] After the test, users could print out their test results and areas for improvement. [ 5 ] They would also receive a label ranging from "Eco-Titan" for the most environmentally friendly to "Eco-Tyrannosaurus Rex" for those "bound for extinction" doing the most harm to the planet. [ 5 ] Lotter also authored a book on the topic, EarthScore: Your Personal Environmental Audit and Guide . PEIA is similar in concept to the ecological footprint .
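The PEIA software described above reduces a long lifestyle questionnaire to a single score and a label. The Python sketch below is a hypothetical miniature of that idea: the questions, weights and score thresholds are invented for illustration, and only the two end labels ("Eco-Titan" and "Eco-Tyrannosaurus Rex") are taken from the article; it is not a reconstruction of EnviroAccount or EarthAware.

# Hypothetical miniature of a PEIA-style questionnaire (weights and thresholds invented).
QUESTIONS = {
    "annual_car_km_thousands": 0.2,  # impact points per 1000 km driven
    "flights_per_year": 1.5,         # impact points per flight
    "meat_meals_per_week": 0.3,      # impact points per weekly meat meal
    "home_kwh_per_month": 0.01,      # impact points per kWh of electricity
}

def impact_score(answers):
    """Sum weighted answers into a single environmental-impact score (higher = worse)."""
    return sum(weight * answers.get(q, 0) for q, weight in QUESTIONS.items())

def label(score):
    if score < 5:
        return "Eco-Titan"               # most environmentally friendly (label from the article)
    if score < 15:
        return "average impact"          # invented intermediate label
    return "Eco-Tyrannosaurus Rex"       # highest impact (label from the article)

answers = {"annual_car_km_thousands": 12, "flights_per_year": 2,
           "meat_meals_per_week": 7, "home_kwh_per_month": 250}
score = impact_score(answers)
print(f"score = {score:.1f} -> {label(score)}")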
https://en.wikipedia.org/wiki/Personal_environmental_impact_accounting
The term personal equation , in 19th- and early 20th-century science , referred to the idea that different observers have different reaction times, which can introduce bias when it comes to measurements and observations . [ 1 ] The term originated in astronomy , when it was discovered that numerous observers making simultaneous observations would record slightly different values (for example, in recording the exact time at which a star crossed the wires of a reticule in a telescope ), some of which differed enough to cause problems in larger calculations. [ 2 ] The existence of the effect was first discovered when, in 1796, the Astronomer Royal Nevil Maskelyne dismissed his assistant Kinnebrook because he could not better the error of his observations relative to Maskelyne's own values. [ 3 ] The problem was forgotten and only analysed two decades later by Friedrich Wilhelm Bessel at Königsberg Observatory in Prussia . Setting up an experiment to compare the values, Bessel and an assistant measured the times at which several stars crossed the wires of a reticule on different nights. Compared to his assistant, Bessel found himself to be ahead by more than a second. In response to this realization, astronomers became increasingly suspicious of the results of other astronomers and their own assistants and began systematic programs to attempt to find ways to remove or lessen the effects. These included attempts at the automation of observations (appealing to the presumed objectivity of machines), training observers to try to avoid certain known errors (such as those caused by lack of sleep ), developing machines that could allow multiple observers to make observations at the same time, the taking of redundant data and using techniques such as the method of least squares to derive possible values from them, and trying to quantify the biases of individual workers so that they could be subtracted from the data. [ 4 ] It became a major topic in experimental psychology as well, and was a major motivation for developing methods to deal with error in astronomy. William James helped move the concept of the personal equation from astronomy to social science, arguing that theoretical preconceptions and personal knowledge could lead investigators to wild interpretations based largely on their own personal equations. [ 5 ] Carl Jung took up the idea in his book Psychological Types , arguing that in psychology "one sees what one can best see oneself". [ 6 ] He continued to wrestle in later writings with the problems of psychological solipsism and infinite regress that this potentially posed, [ 7 ] and considered that every therapist should have at least a good working knowledge of his or her own personal equation. [ 8 ]
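The article mentions that astronomers eventually tried to quantify each observer's bias so it could be subtracted from the data. A minimal illustration of that idea, using entirely synthetic numbers, is sketched below in Python: two observers time the same transit events, the mean difference between their recorded times is taken as their relative personal equation, and that constant is then removed from one observer's data before the observations are combined.

import statistics

# Synthetic transit times (seconds) recorded by two observers for the same events.
# Observer B is assumed to react roughly 0.8 s later than observer A on average.
times_A = [10.12, 25.47, 41.03, 58.90, 72.35]
times_B = [10.95, 26.21, 41.88, 59.66, 73.19]

# Relative personal equation: mean difference (B - A) over the shared events.
personal_equation = statistics.mean(b - a for a, b in zip(times_A, times_B))
print(f"estimated personal equation (B relative to A): {personal_equation:+.2f} s")

# Subtract the bias from observer B's data before combining the observations.
corrected_B = [b - personal_equation for b in times_B]
combined = [(a + b) / 2 for a, b in zip(times_A, corrected_B)]
print("corrected B:        ", [f"{t:.2f}" for t in corrected_B])
print("combined estimates: ", [f"{t:.2f}" for t in combined])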
https://en.wikipedia.org/wiki/Personal_equation
Personal knowledge management ( PKM ) is a process of collecting information that a person uses to gather, classify, store, search, retrieve and share knowledge in their daily activities ( Grundspenkis 2007 ) and the way in which these processes support work activities ( Wright 2005 ). It is a response to the idea that knowledge workers need to be responsible for their own growth and learning ( Smedley 2009 ). It is a bottom-up approach to knowledge management (KM) ( Pollard 2008 ). Although as early as 1998 Davenport wrote on the importance to worker productivity of understanding individual knowledge processes (cited in Zhang 2009 ), the term personal knowledge management appears to be relatively new. Its origin can be traced in a working paper by Frand & Hixon (1999) . PKM integrates personal information management (PIM), focused on individual skills, with knowledge management (KM) in addition to input from a variety of disciplines such as cognitive psychology , management and philosophy ( Pauleen 2009 ). From an organizational perspective, understanding of the field has developed in light of expanding knowledge about human cognitive capabilities and the permeability of organizational boundaries. From a metacognitive perspective, it compares various modalities within human cognition as to their competence and efficacy ( Sheridan 2008 ). It is an underresearched area ( Pauleen 2009 ). More recently, research has been conducted to help understand "the potential role of Web 2.0 technologies for harnessing and managing personal knowledge" ( Razmerita, Kirchner & Sudzina 2009 ). The Great Resignation has expanded the category of knowledge workers and is predicted to increase demand for personal knowledge management in the future ( Serenko 2023 ). Dorsey (2001) identified information retrieval, assessment and evaluation, organization, analysis, presentation, security, and collaboration as essential to PKM (cited in Zhang 2009 ). Wright's model involves four interrelated domains: analytical, information, social, and learning. The analytical domain involves competencies such as interpretation, envisioning, application, creation, and contextualization. The information dimension comprises the sourcing, assessment, organization, aggregation, and communication of information. The social dimension involves finding and collaborating with people, the development of both close networks and extended networks, and dialogue. The learning dimension entails expanding pattern recognition and sensemaking capabilities, reflection, development of new knowledge, improvement of skills, and extension to others. This model stresses the importance of both bonding and bridging networks ( Wright 2007 ). In Nonaka and Takeuchi's SECI model of knowledge dimensions (see under knowledge management ), knowledge can be tacit or explicit, with the interaction of the two resulting in new knowledge ( Nonaka & Takeuchi 1995 ). Smedley has developed a PKM model based on Nonaka and colleagues' model in which an expert provides direction while a community of practice provides support for personal knowledge creation ( Smedley 2009 ). Trust is central to knowledge sharing in this model. Nonaka has returned to his earlier work in an attempt to further develop his ideas about knowledge creation ( Nonaka & von Krogh 2009 ) Personal knowledge management can also be viewed along two main dimensions, personal knowledge and personal management ( Zhang 2009 ). 
Zhang has developed a model of PKM in relation to organizational knowledge management (OKM) that considers two axes of knowledge properties and management perspectives, either organizational or personal. These aspects of organizational and personal knowledge are interconnected through the OAPI process (organizationalize, aggregate, personalize, and individualize), whereby organizational knowledge is personalized and individualized, and personal knowledge is aggregated and operationalized as organizational knowledge ( Zhang 2009 ). It is not clear whether PKM is anything more than a new wrapper around personal information management (PIM). William Jones argued that only personal information as a tangible resource can be managed, whereas personal knowledge cannot ( Jones 2010 ). Dave Snowden has asserted that most individuals cannot manage their knowledge in the traditional sense of "managing" and has advocated thinking in terms of sensemaking rather than PKM ( Snowden & Pauleen 2008 ). Knowledge is not solely an individual product—it emerges through connections, dialog, and social interaction (see Sociology of knowledge ). However, in Wright's model, PKM involves the application to problem-solving of analytical, information, social, and learning dimensions, which are interrelated ( Wright 2007 ), and so is inherently social. An aim of PKM is "helping individuals to be more effective in personal, organizational and social environments" ( Pauleen 2009 , p. 221), often through the use of technology such as networking software. It has been argued, however, that equation of PKM with technology has limited the value and utility of the concept (e.g., Pollard 2008 , Snowden & Pauleen 2008 ). In 2012, Mohamed Chatti introduced the personal knowledge network (PKN) model to KM as an alternative perspective on PKM, based on the concepts of a personal knowledge network and knowledge ecology ( Chatti 2012 ). Skills associated with personal knowledge management include: Some organizations are introducing PKM "systems" with some or all four components: [ citation needed ] PKM has also been linked to these tools: [ citation needed ] Other useful tools include stories and narrative inquiry , decision support systems , various kinds of node–link diagram (such as argument maps , mind maps , concept maps ), and similar information visualization techniques. Individuals use these tools to capture ideas, expertise, experience, opinions or thoughts, and this "voicing" will encourage cognitive diversity and promote free exchanges away from a centralized policed knowledge repository. [ citation needed ] The goal is to facilitate knowledge sharing and personal content management. Some examples of PKM tools include:
https://en.wikipedia.org/wiki/Personal_knowledge_management
Personal media are media of communication which are used by an individual rather than by a corporation or institution . [ 1 ] They are generally contrasted with mass media which are produced by teams of people and broadcast to a general population. [ 2 ] : 1–7 In other words, personal media allow individuals, as opposed to corporate entities, to contribute knowledge and opinion to the public. [ 3 ] : 684 The term dates from the 1980s. [ 4 ] New technologies such as social media and self-publishing are creating a variety of modes for modern media. Marika Lüders suggests a two-dimensional model for classifying such media with one dimension being the degree of interaction between the senders and receivers; and the other dimension being the level of institutionalisation and professionalism . [ 3 ] Katherine Nashleanas links the concept of personal media to the notion of 'control' by an individual as opposed to a centralised authority. She argues that although personal media including the fax have been available to the general public since the 1960s, more recent technologies such as the smartphone confer greater control over content production and distribution to their users. [ 5 ] This publishing -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Personal_media
A Personal Navigation Assistant ( PNA ), also known as a Personal Navigation Device or Portable Navigation Device ( PND ), is a portable electronic product which combines a positioning capability (such as GPS ) and navigation functions. Some PNA devices are PDAs with limited features and can be unlocked. [ 1 ] The earliest PNAs were hand-held GPS units (circa mid-1980s) which were capable of displaying the user's location on an electronic map . These units included simple navigation functions such as course-to-steer and course-made-good. This first generation of PNAs was used by the US military. According to the analyst firm Berg Insight , there were more than 150 million turn-by-turn navigation systems worldwide in mid-2009, including about 35 million factory-installed and aftermarket in-dash navigation systems, over 90 million Personal Navigation Devices (PNDs) and an estimated 28 million navigation-enabled mobile handsets with GPS. [ 2 ] The term PNA has come into widespread use with the growing popularity of automobile navigation systems . Original PNAs provided users with a map layer, real-time traffic, and a routing engine with audio/visual cues for turn-by-turn guidance. The latest generation of PNAs has sophisticated navigation functions such as parking assistance and personalization engines that enhance the user experience. To reduce total cost of ownership and time to market, most modern PNA devices such as those made by Garmin Ltd. , Mio Technology Ltd. or TomTom International BV. run an off-the-shelf embedded operating system such as Windows CE or Embedded Linux on commodity hardware with OEM versions of popular PDA navigation software packages such as TomTom Navigator, I-GO 2006, Netropa IntelliNav iGuidance, or Destinator. Other manufacturers such as Garmin and Magellan prefer to bundle their own software developed in-house. Because many of these devices use an embedded OS, many technically inclined users find it easy to modify PNAs to run third-party software and use them for things other than navigation, such as a low-cost audio-video player or PDA replacement. GPS-equipped mobile phones have now eclipsed the sale of dedicated GPS units. Nokia , Samsung Electronics , Motorola and other handset makers were predicted to sell 162 million GPS-equipped phones in 2007, dwarfing the 20 million units Garmin and TomTom had forecast they would sell combined, according to iSuppli, a leading market researcher in California. The inclusion of Google Maps Navigation in Android devices such as the Motorola Droid and Nokia 's announcement of free Ovi Maps have led many people to use their smartphones instead of a separate PNA for trip navigation. Systems designed for automobiles are able to calculate routes taking the road network into account, sometimes in real time; their popularity has led to the widespread adoption of navigation assistants. On some devices, the user can define the destination by its postal address (no longer only by its geographical coordinates ), and sometimes by the name of the place. Instructions are usually given step by step, with directional pictograms announced by a speech synthesis system. The navigator then offers route suggestions that the driver can follow when they are relevant. Sometimes these navigation systems rely on incorrect data (a map not suited to the vehicle or not kept up to date, the urban canyon effect, etc.), generating erroneous guidance, and a driver who follows it blindly can cause an accident, which can be fatal, particularly for heavy vehicles such as coaches and other heavy goods vehicles . For this reason, the systems display alerts warning the user of these possible errors. Some navigators are specialized for heavy goods vehicles and take into account not only the size of the vehicle but also its mass and dimensions, in order to offer only itineraries using suitable roads. By contrast, other applications intended for the general public, such as Waze or Coyote, are unable to give truck drivers a route that respects all the constraints (including mass and height) that this type of vehicle must observe. [ 3 ] Some versions are very comprehensive and offer a number of additional features.
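The distinction between general-purpose and heavy-vehicle routing described above can be made concrete with a small sketch. The Python example below uses an invented toy road network (real PNA map data and routing engines are proprietary) and runs Dijkstra's algorithm while skipping edges whose height or weight limits the vehicle exceeds, so a car takes the short route over a restricted bridge while a truck is sent onto the longer but suitable bypass.

```python
import heapq

# Hypothetical road network: each edge carries a length (km) and the maximum
# vehicle height (m) and mass (t) it can accommodate.
roads = {
    "depot":  [("bridge", 4.0, {"max_height": 3.5, "max_mass": 12.0}),
               ("bypass", 9.0, {"max_height": 5.0, "max_mass": 44.0})],
    "bridge": [("city",   3.0, {"max_height": 3.5, "max_mass": 12.0})],
    "bypass": [("city",   2.5, {"max_height": 5.0, "max_mass": 44.0})],
    "city":   [],
}

def route(graph, start, goal, vehicle):
    """Dijkstra's shortest path, skipping edges the vehicle cannot use."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, length, limits in graph.get(node, []):
            if vehicle["height"] > limits["max_height"]:
                continue  # low bridge, tunnel, etc.
            if vehicle["mass"] > limits["max_mass"]:
                continue  # weight-restricted road
            heapq.heappush(queue, (cost + length, nxt, path + [nxt]))
    return None  # no admissible route

car   = {"height": 1.6, "mass": 1.5}
truck = {"height": 4.0, "mass": 32.0}
print(route(roads, "depot", "city", car))    # short route over the bridge
print(route(roads, "depot", "city", truck))  # forced onto the longer bypass
```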
https://en.wikipedia.org/wiki/Personal_navigation_assistant
Personality neuroscience uses neuroscientific methods to study the neurobiological mechanisms underlying individual differences in stable psychological attributes. Specifically, personality neuroscience aims to investigate the relationships between inter-individual variation in brain structures as well as functions and behavioral measures of persistent psychological traits, broadly defined as "predispositions and average tendencies to be in particular states", including but not limited to personality traits, sociobehavioral tendencies, and psychopathological risk factors. [ 1 ] Personality neuroscience is considered an interdisciplinary field integrating research questions and methodologies from social psychology , personality psychology , and neuroscience . It is closely related to other interdisciplinary fields, such as social , cognitive , and affective neuroscience . Personality neuroscience is built upon the study of personality, which has been a central theme in psychology, evolving through various theoretical perspectives as well as methodological approaches over many years. Specifically, personality neuroscience aims to understand what neurobiological mechanisms underlie and contribute to personality, and is therefore primarily based on theories that attribute individual differences to physiological and biological systems of the human body or brain. These theories can be traced back to many theories proposed by early physicians, philosophers, and psychologists. [ 2 ] The ancient Greek physician Hippocrates developed the theory of humorism , identifying four vital bodily "humors" or fluids (i.e., blood, phlegm, black bile, and yellow bile) associated with temperaments (i.e., sanguine, phlegmatic, melancholic, and choleric, respectively) as well as physical health outcomes. [ 3 ] In the early 20th century, the psychoanalytic theories put forth by Austrian neurologist Sigmund Freud were anchored in unconscious mental processes. [ 4 ] Influenced by the psychoanalytic theories, American psychologist Henry A. Murray proposed five principles of personology, his term for the study and system of personality, in which the first principle states that "personality is rooted in the brain. The individual's cerebral physiology guides and govern every aspect of personality". Relatedly, Murray also suggested that "needs", the motivations that drive behaviors, arise as a result of "a physiochemical force in the brain". [ 5 ] American psychologist William Sheldon was known for his work linking three " somatotypes " (i.e., body types: endomorphs, mesomorphs, and ectomorphs) to personality attributes. [ 6 ] As early as the late 19th century, the case study of Phineas Gage , a railroad worker who survived a severe brain injury from an accident and underwent a significant personality change, was the first to suggest a causal link between the brain and personality. [ 7 ] In the 1940s, studies investigating the association between brain wave patterns and individual differences using twin study paradigms demonstrated that identical twins showed remarkably similar brain wave patterns measured by electroencephalography (EEG) when compared to fraternal twins. [ 8 ] [ 9 ] However, results from these studies were deemed hard to interpret "in the absence of any satisfactory theory linking brain-wave patterns to personality".
[ 10 ] Building on these studies and other studies that investigated the genetic inheritance of psychological attributes, in 1951, Hans J. Eysenck and D.B. Prell experimentally tested the heredity of neuroticism using a twin study paradigm and concluded that "the factor of neuroticism is not a statistical [artifact], but constitutes a biological unit which is inherited as a whole" and "the neurotic predisposition is to a large extent hereditarily determined". [ 10 ] Following this work, Eysenck continued to investigate psychological traits in relation to neurobiological systems, including the nervous system, arousal, and brain structures (e.g., the reticular formation and the limbic system). [ 11 ] [ 12 ] [ 13 ] In 1961, American psychologist Gordon W. Allport defined personality as "the dynamic organization within the individual of those psychophysical systems that determine his characteristic behavior and thought", localizing personality within "psychophysical systems". [ 14 ] Extending Eysenck's theory on the biological basis of personality, Jeffrey A. Gray 's reinforcement sensitivity theory of personality and his work on the neural mechanisms underlying personality traits set the foundation for the contemporary field of personality neuroscience. [ 15 ] [ 16 ] For example, Gray's work suggested that introversion involved both the ascending reticular activating system and an inhibitory system of brain areas including the orbital frontal cortex, medial septal area, and the hippocampus. [ 15 ] In 1999, a chapter titled "The neuroscience of personality", written by Alan D. Pickering and Jeffrey A. Gray, was published in the Handbook of personality: Theory and research , which introduced ways to "build a modern, integrated neuroscience of personality". [ 17 ] Although there had long been theoretically driven interest and experimental endeavors to understand the neurobiological basis of personality, it was not until recent years that, with advances in neuroscientific methodologies (e.g., non-invasive neuroimaging methods), the focus of personality psychology began to shift from observing, describing, and categorizing the phenomenon of individual differences towards discovering what may contribute to these observed individual differences. [ 18 ] In 2010, the name "personality neuroscience" was coined by Colin G. DeYoung , [ 19 ] [ 20 ] a psychology professor and the director of the DeYoung Personality Lab at the University of Minnesota. [ 21 ] [ 22 ] In 2018, the Personality Neuroscience journal was established to "[publish] papers in the neuroscience of personality (including cognitive abilities, emotionality, and other individual differences) concerned with understanding causal bases" with "its focus on the equal importance of personality and neuroscience". [ 23 ] As personality neuroscience seeks to understand the link between personality and its underlying neurobiological mechanisms, generating testable hypotheses involves measuring both personality attributes and neurobiological structures and/or functions. [ 19 ] In the field of personality psychology, there have been two main approaches to defining personality traits: the nomothetic and the idiographic. [ 2 ] In personality neuroscience, personality is often defined using the nomothetic approach. Personality traits are typically measured using scales developed for the personality attributes of interest and administered through self-report surveys and questionnaires.
One of the most commonly used frameworks for measuring personality attributes in personality neuroscience research is the Big Five personality traits . In addition to the criticism by proponents of the idiographic approach mentioned above, self-report measures of personality traits in general are susceptible to response biases (e.g., social desirability bias , acquiescent response bias , etc.) and inaccurate introspection of mental states. Therefore, it is important to establish construct validity of the self-report measures of personality by using other scales of the same construct or other modalities of measurement, such as behavioral data or aggregated ratings from other knowledgeable informants. [ 33 ] Another common nomothetic approach is the Affective Neuroscience Personality Scales (ANPS). The ANPS was originally published in 2003 and has been used by neuroscientists to evaluate the primary emotional systems that underlie mental well-being and affective brain disorders. [ 34 ] The scale was created by Jaak Panksepp so that researchers could use a self-report test to measure differences in the primary emotions: SEEKING, LUST, CARE, and PLAY (the positive emotions) and FEAR, SADNESS, and ANGER (the negative emotions). Scores on these primary emotions were then compared to the Big Five personality traits to examine the scale from an evolutionary perspective, as the primary emotions were seen as survival mechanisms, inherited behavioral patterns shaped by humans interacting with their environment. [ 35 ] Each of these primary emotions has "been evolutionarily shaped in terms of inherited tools for survival and, more generally, for fitness" and is seen to regulate human nature. [ 36 ] The SEEKING energy is used to seek valuable resources for survival, such as food, a mate, or shelter. The LUST energy is used to sustain the human species through reproduction. The CARE system is significant for protecting offspring so that they can grow into adults, again sustaining the species. PLAY is important for fostering social bonding between humans, learning social and motor skills, and regulating emotions. Among the negative emotions, the FEAR energy is used for safety and for keeping away from danger, for example through the fight-or-flight response. The SADNESS system, from an evolutionary perspective, serves to maintain an individual's social bonds, as isolation often evokes this emotion. The ANGER energy is important for protecting resources from others or from the environment. [ 37 ] However, since the late 2010s, researchers have begun to question the relevance of the ANPS and have identified areas for improvement. The primary emotional systems often fluctuate in psychopathologies. The assessment is also available only in one long version, and patients with depression who suffer from fatigue would benefit from a shorter version. Furthermore, the FEAR and SADNESS emotions exhibit high correlation because they are closely related, and it would be useful to find a method to disentangle them so that they can be studied separately. Another concern is that the original ANPS does not assess individual differences in LUST. [ 38 ] To study the neurobiological mechanisms, or the structures and functions of the brain, underlying personality, personality neuroscience research employs established methods from neuroscience. Some of the available neuroscientific methods are listed below with brief descriptions of how they can be incorporated into personality neuroscience research.
[ 18 ] Magnetic resonance imaging (MRI) is a non-invasive imaging technique that uses the physical properties of magnetic fields and the application of radio-frequency pulses to examine brain structure and function with high spatial resolution. [ 39 ] Both sMRI and fMRI have been used widely in clinical and research settings to establish associations between the brain and a wide range of human socio-cognitive and psychological processes, [ 40 ] [ 41 ] [ 42 ] as well as individual differences. [ 43 ] Structural MRI (sMRI) of the brain provides information on the neuroanatomical properties of the brain, such as the volumes of the gray and white matter. [ 44 ] Functional MRI (fMRI) of the brain maps the functional organization of the brain by monitoring localized brain activation through changes in blood oxygenation level that result from cerebral blood flow (CBF), either when participants are engaging in tasks (i.e., task-based fMRI) or at rest (i.e., resting-state fMRI). [ 45 ] In addition to examining brain structure and function within localized brain regions, topological network analyses, such as graph theory in network neuroscience, can be conducted across brain regions to map out structural and functional connectivity patterns that vary with inter-individual variation in cognition and behavior. [ 46 ] [ 47 ] In recent years, large MRI datasets, such as the Human Connectome Project (HCP), have been collected with the aim of investigating individual differences in the structural and functional connectivity of the brain networks underlying a wide range of cognitive processes elicited by fMRI tasks. [ 48 ] Positron emission tomography (PET) is an imaging technique that uses radiotracers to spatially localize and track the distribution of changes in metabolic processes. [ 49 ] Specifically, PET neuroimaging scans have been widely used in pre-clinical and clinical settings in relation to epilepsy, dementia, Parkinson's disease, and traumatic brain injuries. [ 50 ] [ 51 ] Electroencephalography (EEG) is a tool that directly measures and records the electrical activity generated in the brain with high temporal resolution but relatively low spatial resolution. [ 52 ] The EEG signal can be obtained non-invasively by placing electrodes on the scalp to capture the electrical impulses produced by neurons in the brain. It is commonly used in clinical settings to assess and detect neurological abnormalities in brain function, such as epilepsy, sleep disorders, and brain injuries; in research, it has been used in combination with tasks to probe brain activity underlying various cognitive and emotional processes. [ 53 ] Molecular genetics is a sub-field of biology that investigates the structure, expression, and functions of genes, informing brain development and function at the level of the genome. [ 54 ] In the context of personality neuroscience, methods in molecular genetics have been used to establish genetic underpinnings of personality traits. [ 55 ] Assays measure biological processes by detecting signals produced by reagents. [ 56 ] They can be used to quantify "endogenous psychoactive substances or their byproducts" (e.g., levels of dopamine, oxytocin, serotonin, etc.) that have been associated with psychological processes which may contribute to personality trait development or psychopathology.
[ 18 ] Neuropharmacological manipulation involves the use of medication to induce changes in neurochemical processes and has been studied primarily in the context of neurological or psychiatric drug treatments. [ 57 ] Personality neuroscience can incorporate neuropsychopharmacological manipulation to establish causal links between personality traits and specific neurochemical processes (e.g., induced changes in levels of dopamine). [ 19 ] In the past two decades, research in the field of personality neuroscience, utilizing the neuroscientific methods outlined in the previous section, has identified neural mechanisms underlying a wide range of trait variables. This section reviews some of the major research findings in the field. [ 1 ] [ 19 ] Empathy, discussed here as a stable trait in the sense of empathetic ability or capacity, can be defined as an affective response that "is similar to one’s perception (directly experienced or imagined) and understanding (cognitive empathy) of the stimulus emotion, with recognition that the source of the emotion is not one’s own", although there is still ongoing debate in the field on how best to define empathy. [ 70 ] One sMRI study has demonstrated that inter-subject variability in different facets of empathy is linked to neuroanatomical variation across different brain regions, such that (1) affective empathic abilities towards others were negatively correlated with the gray matter volumes of the precuneus, inferior frontal gyrus, and anterior cingulate, (2) cognitive perspective-taking abilities were positively correlated with the gray matter volume of the anterior cingulate, and (3) the ability to empathize with fictional characters was positively linked to gray matter changes in the right dorsolateral prefrontal cortex. [ 71 ] A meta-analysis of a series of fMRI studies has revealed that, when humans engage in empathetic processes, a network of brain regions is engaged, encompassing the insula, inferior frontal gyrus, medial frontal regions around the cingulate cortex, amygdala, thalamus, putamen, caudate, and primary somatosensory area SI. [ 72 ] In addition to MRI studies, neuromodulation studies in mice and monkeys have shown that interference with oxytocin signaling causally influences empathy-related phenomena. [ 73 ] Prior research focused primarily on the causes of specific traits like extraversion, but research in 2018 indicated that such individual traits do not alone determine personality. Researchers looked into the genes related to human personality and identified genes that interact with each other and with a person's environment to create personality. Around 1,000 such genes affecting temperament and character were found. This was further studied in samples of 1,000 people in Germany and 1,000 people in Korea, and in both countries and cultures the genes for personality were all expressed in the brain. Around 33% of the genes were involved in the expression of both temperament and character, while 67% of the genes were involved in either one or the other. The genes for character were expressed primarily in the brain circuits that regulate complex cognitive processes, such as goal seeking, conflict resolution, and self-awareness. The genes affecting temperament were expressed primarily in habit-learning pathways. Through these studies, the researchers were able to determine that the components of personality are numerous complex profiles.
They also found that many molecular pathways can produce the exact same personality trait. Furthermore, environmental influences had small interactions with the genes for temperament and character but still had a significant influence. [ 74 ] As an interdisciplinary field that lies between personality psychology and neuroscience, personality neuroscience may benefit both fields: it can inform the formation of neuroscience hypotheses and help interpret findings through theoretical frameworks developed in personality psychology, and, in turn, develop and refine personality models and theories with an enhanced understanding of the underlying neurobiological mechanisms. [ 19 ] Nonetheless, this interdisciplinary nature also accumulates paradigmatic and methodological challenges from both fields. [ 19 ] [ 46 ] One prominent challenge for neuroimaging studies that aim to investigate individual differences is low statistical power as a result of small sample sizes due to the high cost of data collection. [ 75 ] [ 76 ] Personality neuroscience research can thus benefit from data-sharing among studies and collective efforts to aggregate large neuroimaging datasets that include personality measures, such as the Human Connectome Project (HCP) and the Adolescent Brain Cognitive Development (ABCD) Study. [ 1 ] [ 77 ] [ 65 ] Ongoing efforts to collect data from more diverse samples are also recommended, to allow study results to be generalized to a larger population and to allow investigation of similarities and differences among diverse communities. [ 19 ] Another challenge is to establish reliable, systematic, and high-quality measurement of personality traits. [ 78 ] Unlike intelligence tests, which are performance-based, personality questionnaires are susceptible to the biases mentioned in earlier sections. As the theories of personality psychology continue to evolve and develop, extensive psychometric research may need to be conducted on the various types of scales or assessments used to measure psychological traits to ensure that they produce reliable measures of the personality variables of interest. [ 1 ] A further challenge is that personality neuroscience is a relatively young field. Because of this, many previously published findings may prove to be false positives stemming from under-powered studies with small samples. Larger sample sizes are needed to detect the smaller effects that are common in personality neuroscience: a sample size of around 200 is needed to have 80% power to detect a correlation of .2, which is often the average effect size in personality neuroscience. Thus, larger sample sizes are a needed change for this field. [ 79 ] The complexity of both the brain and personality traits poses an additional challenge to the interdisciplinary field of personality neuroscience, which studies the relationship between these two complex systems. [ 19 ] Current research suggests that there exists no one-to-one mapping between neurobiological and personality variables: multiple brain regions or neurochemical processes may underlie one trait variable, while, in turn, one brain region or neurochemical process may be instrumental for several cognitive and affective processes that influence multiple traits.
As a result, personality network neuroscience approaches, integrating quantitative methodologies from network analysis, have been proposed to encode the complex nature of both neural mechanisms and personality variables as networks, to facilitate the investigation of the brain-personality relationship. [ 46 ] [ 78 ] [ 80 ]
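The sample-size figure quoted above can be reproduced with a standard power calculation based on the Fisher z transformation of a correlation coefficient. The short Python sketch below is a generic approximation, not a procedure taken from any of the cited studies.

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size needed to detect a correlation r
    with a two-tailed test, using the Fisher z transformation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = .05
    z_beta = z.inv_cdf(power)            # about 0.84 for 80% power
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2) + 3

print(n_for_correlation(0.2))   # ~194 participants, i.e. "around 200"
print(n_for_correlation(0.1))   # smaller effects need far larger samples
```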
https://en.wikipedia.org/wiki/Personality_neuroscience
A personalization management system ( PMS ) is an integrated software solution that enables users in an organization to manage and deliver personalized messages, campaigns, and interactive experiences to consumers across different communications channels and devices. The term PMS was first used in a 2003 study [ 1 ] on personalization, but it was later [ 2 ] popularized by the startup Croct, [ 3 ] which was the first company to use the term PMS to distinguish the emerging category of platforms and technologies focused on delivering personalized customer experiences. Previously, these services were typically included under the umbrella of CMS or CRM solutions, which did not adequately encapsulate the nuances of this new category. The concept of personalization in marketing has been around for decades. But it all began with simple strategies, such as calling consumers by name in direct mail marketing . [ 4 ] In the late 1990s and early 2000s, the concept of personalization began to evolve with the use of cookies , which allowed websites to track a user's browsing history and behavior. This information could then be used to serve personalized content, such as recommended products or tailored advertisements. As the internet and digital marketing continued to grow, the need for efficient and effective content management systems (CMS) became apparent. One of the main challenges with traditional CMS systems was that they were designed for the web, which often meant that content had to be created multiple times to be delivered across multiple devices. That was both a time-consuming and inefficient process. To address this issue, [ 5 ] the concept of headless CMS was introduced. A headless CMS is a content management system that separates the backend (where content is stored and served) from the frontend (where content is displayed to the user). This allows content to be created once and delivered to any device or platform without creating multiple versions of the same content. As personalization in marketing evolved, it became important not just to create personalized content but also to analyze and track its effectiveness. Headless CMSs were well-suited to handle this task, as they allow for more flexibility when integrating marketing technology, such as analytics and testing tools. Together, these capabilities led to the emergence of a specialized category known as a Personalization Management System (PMS). [ 6 ] As technology advanced, so has the ability to personalize marketing efforts. Modern PMS platforms now offer a range of sophisticated features and capabilities. Companies such as Microsoft, [ 7 ] Netflix, [ 8 ] and Spotify [ 9 ] are some of the most successful examples of companies using modern personalized marketing strategies nowadays to offer tailored experiences to their customers. Personalization allows businesses to create more targeted, relevant, and timely interactions with their customers and prospects. This can have many benefits, such as: Personalization management systems typically offer a range of features to manage and deliver personalized messages, campaigns, and experiences. These features typically include: Personalization management systems are available as both on-premises and cloud-based solutions. On-premises solutions offer more control and customization. Yet, cloud-based ( SaaS ) solutions are more common due to their lower costs and ease of management. 
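In broad terms, the delivery side of such a system comes down to evaluating targeting rules against a visitor profile and choosing a content variant. The Python sketch below is a deliberately simplified, hypothetical illustration of that idea (the rule names and profile fields are invented); commercial PMS platforms layer audience management, experimentation, and analytics on top of this kind of matching.

```python
# Hypothetical personalization rules: each pairs a segment condition with the
# content variant to serve; the first matching rule wins.
rules = [
    (lambda user: user.get("country") == "DE" and user.get("visits", 0) > 5,
     "loyal-de-banner"),
    (lambda user: user.get("device") == "mobile",
     "mobile-compact-layout"),
    (lambda user: True,
     "default-homepage"),  # fallback variant
]

def choose_variant(user):
    """Return the content variant for the first rule the visitor matches."""
    for condition, variant in rules:
        if condition(user):
            return variant

print(choose_variant({"country": "DE", "visits": 12, "device": "desktop"}))
print(choose_variant({"country": "US", "visits": 1, "device": "mobile"}))
```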
Some of the key vendors in the personalization management space are Adobe Target, [ 10 ] Croct, [ 11 ] Twilio , [ 12 ] Responsys, [ 13 ] and Monetate. [ 14 ]
https://en.wikipedia.org/wiki/Personalization_management_system
Personalized medicine , also referred to as precision medicine , is a medical model that separates people into different groups —with medical decisions , practices , interventions and/or products being tailored to the individual patient based on their predicted response or risk of disease .The terms personalized medicine, precision medicine, stratified medicine and P4 medicine are used interchangeably to describe this concept, though some authors and organizations differentiate between these expressions based on particular nuances. P4 is short for "predictive, preventive, personalized and participatory". While the tailoring of treatment to patients dates back at least to the time of Hippocrates , the usage of the term has risen in recent years thanks to the development of new diagnostic and informatics approaches that provide an understanding of the molecular basis of disease , particularly genomics . This provides a clear biomarker on which to stratify related patients. [ 1 ] [ 2 ] [ 3 ] Among the 14 Grand Challenges for Engineering , an initiative sponsored by National Academy of Engineering (NAE), personalized medicine has been identified as a key and prospective approach to "achieve optimal individual health decisions", therefore overcoming the challenge to " engineer better medicines ". [ 4 ] [ 5 ] In personalised medicine, diagnostic testing is often employed for selecting appropriate and optimal therapies based on the patient's genetics or their other molecular or cellular characteristics. [ citation needed ] The use of genetic information has played a major role in certain aspects of personalized medicine (e.g. pharmacogenomics ), and the term was first coined in the context of genetics, though it has since broadened to encompass all sorts of personalization measures, [ 6 ] including the use of proteomics , [ 7 ] imaging analysis, nanoparticle -based theranostics, [ 8 ] among others. Precision medicine is a medical model that proposes the customization of healthcare , with medical decisions, treatments, practices, or products being tailored to a subgroup of patients, instead of a one‐drug‐fits‐all model. [ 9 ] [ 10 ] In precision medicine, diagnostic testing is often employed for selecting appropriate and optimal therapies based on the context of a patient's genetic content or other molecular or cellular analysis. [ 11 ] Tools employed in precision medicine can include molecular diagnostics , imaging, and analytics. [ 10 ] [ 12 ] Precision medicine and personalized medicine (also individualized medicine) are analogous, applying a person's genetic profile to guide clinical decisions about the prevention, diagnosis, and treatment of a disease. [ 13 ] Personalized medicine is established on discoveries from the Human Genome Project . [ 13 ] In explaining the distinction from the similar term of personalized medicine , the United States President's Council of Advisors on Science and Technology writes: [ 14 ] Precision medicine refers to the tailoring of medical treatment to the individual characteristics of each patient. It does not literally mean the creation of drugs or medical devices that are unique to a patient, but rather the ability to classify individuals into subpopulations that differ in their susceptibility to a particular disease, in the biology or prognosis of those diseases they may develop, or in their response to a specific treatment. 
Preventive or therapeutic interventions can then be concentrated on those who will benefit, sparing expense and side effects for those who will not. [ 14 ] The use of the term "precision medicine" can extend beyond treatment selection to also cover creating unique medical products for particular individuals—for example, "...patient-specific tissue or organs to tailor treatments for different people." [ 15 ] Hence, the term in practice has so much overlap with "personalized medicine" that they are often used interchangeably, even though the latter is sometimes misinterpreted as involving a unique treatment for each individual. [ 16 ] Every person has a unique variation of the human genome . [ 17 ] Although most of the variation between individuals has no effect on health, an individual's health stems from the interaction of genetic variation with behaviors and environmental influences. [ 18 ] [ 11 ] Modern advances in personalized medicine rely on technology that characterizes a patient's fundamental biology (DNA, RNA, or protein), which ultimately helps to confirm disease. For example, personalised techniques such as genome sequencing can reveal mutations in DNA that influence diseases ranging from cystic fibrosis to cancer. Another method, called RNA-seq , can show which RNA molecules are involved in specific diseases. Unlike DNA, levels of RNA can change in response to the environment. Therefore, sequencing RNA can provide a broader understanding of a person's state of health. Recent studies have linked genetic differences between individuals to RNA expression , [ 19 ] translation, [ 20 ] and protein levels. [ 21 ] The concepts of personalised medicine can be applied to new and transformative approaches to health care. Personalised health care is based on the dynamics of systems biology and uses predictive tools to evaluate health risks and to design personalised health plans to help patients mitigate risks, prevent disease, and treat it with precision when it occurs. The concepts of personalised health care are receiving increasing acceptance, with the Veterans Administration committing to personalised, proactive, patient-driven care for all veterans. [ 22 ] In some instances personalised health care can be tailored to the makeup of the disease-causing agent instead of the patient's genetic makeup; examples are drug-resistant bacteria or viruses. [ 23 ] Precision medicine often involves the application of panomic analysis and systems biology to analyze the cause of an individual patient's disease at the molecular level and then to utilize targeted treatments (possibly in combination) to address that individual patient's disease process. The patient's response is then tracked as closely as possible, often using surrogate measures such as tumor load (versus true outcomes, such as five-year survival rate), and the treatment is finely adapted to the patient's response. [ 24 ] [ 25 ] The branch of precision medicine that addresses cancer is referred to as "precision oncology". [ 26 ] [ 27 ] The field of precision medicine that is related to psychiatric disorders and mental health is called "precision psychiatry." [ 28 ] [ 29 ] Inter-personal differences in molecular pathology are diverse, as are inter-personal differences in the exposome , which influences disease processes through the interactome within the tissue microenvironment differently from person to person.
As the theoretical basis of precision medicine, the "unique disease principle" [ 30 ] emerged to embrace the ubiquitous phenomenon of heterogeneity of disease etiology and pathogenesis . The unique disease principle was first described in neoplastic diseases as the unique tumor principle. [ 31 ] As the exposome is a common concept of epidemiology , precision medicine is intertwined with molecular pathological epidemiology , which is capable of identifying potential biomarkers for precision medicine. [ 32 ] In order for physicians to know if a mutation is connected to a certain disease, researchers often do a study called a " genome-wide association study " (GWAS). A GWAS study will look at one disease, and then sequence the genome of many patients with that particular disease to look for shared mutations in the genome. Mutations that are determined to be related to a disease by a GWAS study can then be used to diagnose that disease in future patients, by looking at their genome sequence to find that same mutation. The first GWAS, conducted in 2005, studied patients with age-related macular degeneration (ARMD). [ 33 ] It found two different mutations, each containing only a variation in only one nucleotide (called single nucleotide polymorphisms , or SNPs), which were associated with ARMD. GWAS studies like this have been very successful in identifying common genetic variations associated with diseases. As of early 2014, over 1,300 GWAS studies have been completed. [ 34 ] Multiple genes collectively influence the likelihood of developing many common and complex diseases. [ 18 ] Personalised medicine can also be used to predict a person's risk for a particular disease, based on one or even several genes. This approach uses the same sequencing technology to focus on the evaluation of disease risk, allowing the physician to initiate preventive treatment before the disease presents itself in their patient. For example, if it is found that a DNA mutation increases a person's risk of developing Type 2 Diabetes , this individual can begin lifestyle changes that will lessen their chances of developing Type 2 Diabetes later in life. [ citation needed ] The ability to provide precision medicine to patients in routine clinical settings depends on the availability of molecular profiling tests, e.g. individual germline DNA sequencing. [ 35 ] While precision medicine currently individualizes treatment mainly on the basis of genomic tests (e.g. Oncotype DX [ 36 ] ), several promising technology modalities are being developed, from techniques combining spectrometry and computational power to real-time imaging of drug effects in the body. [ 37 ] Many different aspects of precision medicine are tested in research settings (e.g., proteome, microbiome), but in routine practice not all available inputs are used. The ability to practice precision medicine is also dependent on the knowledge bases available to assist clinicians in taking action based on test results. [ 38 ] [ 39 ] [ 40 ] Early studies applying omics -based precision medicine to cohorts of individuals with undiagnosed disease has yielded a diagnosis rate ~35% with ~1 in 5 of newly diagnosed receiving recommendations regarding changes in therapy. [ 41 ] It has been suggested that until pharmacogenetics becomes further developed and able to predict individual treatment responses, the N-of-1 trials are the best method of identifying patients responding to treatments. 
[ 42 ] [ 43 ] On the treatment side, PM can involve the use of customized medical products such as drug cocktails produced by pharmacy compounding [ 44 ] or customized devices. [ 45 ] It can also prevent harmful drug interactions, increase overall efficiency when prescribing medications, and reduce costs associated with healthcare. [ 46 ] The question of who benefits from publicly funded genomics is an important public health consideration, and attention is needed to ensure that implementation of genomic medicine does not further entrench social-equity concerns. [ 47 ] Artificial intelligence is providing a paradigm shift toward precision medicine. [ 48 ] Machine learning algorithms are used for genomic sequencing and to analyze and draw inferences from the vast amounts of data that patients and healthcare institutions record at every moment. [ 49 ] AI techniques are used in precision cardiovascular medicine to understand genotypes and phenotypes in existing diseases, improve the quality of patient care, enable cost-effectiveness, and reduce readmission and mortality rates. [ 50 ] A 2021 paper reported that machine learning was able to predict the outcomes of Phase III clinical trials (for treatment of prostate cancer) with 76% accuracy. [ 51 ] This suggests that clinical trial data could provide a practical source for machine learning-based tools for precision medicine. [ citation needed ] Precision medicine may be susceptible to subtle forms of algorithmic bias . For example, the presence of multiple entry fields with values entered by multiple observers can create distortions in the ways data is understood and interpreted. [ 52 ] A 2020 paper showed that training machine learning models in a population-specific fashion (i.e. training models specifically for Black cancer patients) can yield significantly better performance than population-agnostic models. [ 53 ] In his 2015 State of the Union address , then-U.S. President Barack Obama stated his intention to give $215 million [ 54 ] of funding to the " Precision Medicine Initiative " of the United States National Institutes of Health . [ 55 ] A short-term goal of this initiative was to expand cancer genomics to develop better prevention and treatment methods. [ 56 ] In the long term, the Precision Medicine Initiative aimed to build a comprehensive scientific knowledge base by creating a national network of scientists and embarking on a national cohort study of one million Americans to expand our understanding of health and disease. [ 57 ] The mission statement of the Precision Medicine Initiative read: "To enable a new era of medicine through research, technology, and policies that empower patients, researchers, and providers to work together toward development of individualized treatments". [ 58 ] In 2016 the initiative was renamed "All of Us", and by January 2018, 10,000 people had enrolled in its pilot phase . [ 59 ] Precision medicine helps health care providers better understand the many things—including environment, lifestyle, and heredity—that play a role in a patient's health, disease, or condition. This information lets them more accurately predict which treatments will be most effective and safe, or possibly how to prevent the illness from starting in the first place. In addition, benefits are to: [ citation needed ] Advances in personalised medicine will create a more unified treatment approach specific to the individual and their genome.
Personalised medicine may provide better diagnoses with earlier intervention, and more efficient drug development and more targeted therapies. [ 60 ] Having the ability to look at a patient on an individual basis will allow for a more accurate diagnosis and specific treatment plan. Genotyping is the process of obtaining an individual's DNA sequence by using biological assays . [ 61 ] By having a detailed account of an individual's DNA sequence, their genome can then be compared to a reference genome, like that of the Human Genome Project , to assess the existing genetic variations that can account for possible diseases. A number of private companies, such as 23andMe , Navigenics , and Illumina , have created Direct-to-Consumer genome sequencing accessible to the public. [ 17 ] Having this information from individuals can then be applied to effectively treat them. An individual's genetic make-up also plays a large role in how well they respond to a certain treatment, and therefore, knowing their genetic content can change the type of treatment they receive. [ citation needed ] An aspect of this is pharmacogenomics , which uses an individual's genome to provide a more informed and tailored drug prescription. [ 62 ] Often, drugs are prescribed with the idea that it will work relatively the same for everyone, but in the application of drugs, there are a number of factors that must be considered. The detailed account of genetic information from the individual will help prevent adverse events, allow for appropriate dosages, and create maximum efficacy with drug prescriptions. [ 17 ] For instance, warfarin is the FDA approved oral anticoagulant commonly prescribed to patients with blood clots. Due to warfarin 's significant interindividual variability in pharmacokinetics and pharmacodynamics , its rate of adverse events is among the highest of all commonly prescribed drugs. [ 4 ] However, with the discovery of polymorphic variants in CYP2C9 and VKORC1 genotypes, two genes that encode the individual anticoagulant response, [ 63 ] [ 64 ] physicians can use patients' gene profile to prescribe optimum doses of warfarin to prevent side effects such as major bleeding and to allow sooner and better therapeutic efficacy. [ 4 ] The pharmacogenomic process for discovery of genetic variants that predict adverse events to a specific drug has been termed toxgnostics . [ 65 ] An aspect of a theranostic platform applied to personalized medicine can be the use of diagnostic tests to guide therapy. The tests may involve medical imaging such as MRI contrast agents (T1 and T2 agents), fluorescent markers ( organic dyes and inorganic quantum dots ), and nuclear imaging agents ( PET radiotracers or SPECT agents). [ 8 ] [ 66 ] or in vitro lab test [ 67 ] including DNA sequencing [ 68 ] and often involve deep learning algorithms that weigh the result of testing for several biomarkers . [ 69 ] In addition to specific treatment, personalised medicine can greatly aid the advancements of preventive care. For instance, many women are already being genotyped for certain mutations in the BRCA1 and BRCA2 gene if they are predisposed because of a family history of breast cancer or ovarian cancer. [ 70 ] As more causes of diseases are mapped out according to mutations that exist within a genome, the easier they can be identified in an individual. Measures can then be taken to prevent a disease from developing. 
Even if mutations are found within a genome, having the details of an individual's DNA can help reduce the impact or delay the onset of certain diseases. [ 60 ] Having the genetic content of an individual will allow better-guided decisions in determining the source of the disease and thus treating it or preventing its progression. This will be extremely useful for diseases like Alzheimer's or cancers that are thought to be linked to certain mutations in our DNA. [ 60 ] A tool now being used to test the efficacy and safety of a drug for a targeted patient group or sub-group is companion diagnostics . This technology is an assay that is developed during or after a drug is made available on the market and is helpful in enhancing the therapeutic treatment available based on the individual. [ 71 ] These companion diagnostics have incorporated the pharmacogenomic information related to the drug into their prescription label in an effort to assist in making the optimal treatment decision possible for the patient. [ 71 ] Having an individual's genomic information can be significant in the process of developing drugs as they await approval from the FDA for public use. Having a detailed account of an individual's genetic make-up can be a major asset in deciding if a patient can be chosen for inclusion or exclusion in the final stages of a clinical trial. [ 60 ] Being able to identify patients who will benefit most from a clinical trial will increase the safety of patients from adverse outcomes caused by the product in testing, and will allow smaller and faster trials that lead to lower overall costs. [ 72 ] In addition, drugs that are deemed ineffective for the larger population can gain approval from the FDA by using personal genomes to qualify the effectiveness and need for that specific drug or therapy even though it may only be needed by a small percentage of the population. [ 60 ] [ 73 ] Physicians commonly use a trial-and-error strategy until they find the therapy that is most effective for their patient. [ 60 ] With personalized medicine, these treatments can be more specifically tailored by predicting how an individual's body will respond and whether the treatment will work, based on their genome. [ 17 ] This has been summarized as "therapy with the right drug at the right dose in the right patient." [ 74 ] Such an approach would also be more cost-effective and accurate. [ 60 ] For instance, Tamoxifen used to be a drug commonly prescribed to women with ER+ breast cancer, but 65% of women initially taking it developed resistance. After research by people such as David Flockhart , it was discovered that women with certain mutations in their CYP2D6 gene, which encodes the metabolizing enzyme, were not able to efficiently break down Tamoxifen, making it an ineffective treatment for them. [ 75 ] Women are now genotyped for these specific mutations to select the most effective treatment. [ citation needed ] Screening for these mutations is carried out via high-throughput screening or phenotypic screening . Several drug discovery and pharmaceutical companies are currently utilizing these technologies not only to advance the study of personalised medicine, but also to amplify genetic research . Alternative multi-target approaches to the traditional approach of "forward" transfection library screening can entail reverse transfection or chemogenomics . [ citation needed ] Pharmacy compounding is another application of personalised medicine.
Though not necessarily using genetic information, the customized production of a drug whose various properties (e.g. dose level, ingredient selection, route of administration, etc.) are selected and crafted for an individual patient is accepted as an area of personalised medicine (in contrast to mass-produced unit doses or fixed-dose combinations) . Computational and mathematical approaches for predicting drug interactions are also being developed. For example, phenotypic response surfaces model the relationships between drugs, their interactions, and an individual's biomarkers. [ citation needed ] One active area of research is efficiently delivering personalized drugs generated from pharmacy compounding to the disease sites of the body. [ 5 ] For instance, researchers are trying to engineer nanocarriers that can precisely target the specific site by using real-time imaging and analyzing the pharmacodynamics of the drug delivery . [ 76 ] Several candidate nanocarriers are being investigated, such as iron oxide nanoparticles , quantum dots , carbon nanotubes , gold nanoparticles , and silica nanoparticles. [ 8 ] Alteration of surface chemistry allows these nanoparticles to be loaded with drugs, as well as to avoid the body's immune response, making nanoparticle-based theranostics possible. [ 5 ] [ 8 ] Nanocarriers' targeting strategies are varied according to the disease. For example, if the disease is cancer, a common approach is to identify the biomarker expressed on the surface of cancer cells and to load its associated targeting vector onto nanocarrier to achieve recognition and binding; the size scale of the nanocarriers will also be engineered to reach the enhanced permeability and retention effect (EPR) in tumor targeting. [ 8 ] If the disease is localized in the specific organ, such as the kidney, the surface of the nanocarriers can be coated with a certain ligand that binds to the receptors inside that organ to achieve organ-targeting drug delivery and avoid non-specific uptake. [ 77 ] Despite the great potential of this nanoparticle-based drug delivery system, the significant progress in the field is yet to be made, and the nanocarriers are still being investigated and modified to meet clinical standards. [ 8 ] [ 76 ] Theranostics is a personalized approach in nuclear medicine , using similar molecules for both imaging (diagnosis) and therapy. [ 78 ] [ 79 ] [ 80 ] The term is a portmanteau of " therapeutics " and " diagnostics ". Its most common applications are attaching radionuclides (either gamma or positron emitters) to molecules for SPECT or PET imaging, or electron emitters for radiotherapy . [ citation needed ] One of the earliest examples is the use of radioactive iodine for treatment of people with thyroid cancer . [ 78 ] Other examples include radio-labelled anti- CD20 antibodies (e.g. Bexxar ) for treating lymphoma , Radium-223 for treating bone metastases , Lutetium-177 DOTATATE for treating neuroendocrine tumors and Lutetium-177 PSMA for treating prostate cancer . [ 78 ] A commonly used reagent is fluorodeoxyglucose , using the isotope fluorine-18 . [ 81 ] Respiratory diseases affect humanity globally, with chronic lung diseases (e.g., asthma, chronic obstructive pulmonary disease, idiopathic pulmonary fibrosis, among others) and lung cancer causing extensive morbidity and mortality. These conditions are highly heterogeneous and require an early diagnosis. However, initial symptoms are nonspecific, and the clinical diagnosis is made late frequently. 
Over the last few years, personalized medicine has emerged as a medical care approach that uses novel technology [ 7 ] aiming to personalize treatments according to the particular patient's medical needs. Specifically, proteomics is used to analyze a series of protein expressions, instead of a single biomarker . [ 82 ] Proteins control the body's biological activities in health and disease, so proteomics is helpful in early diagnosis. In the case of respiratory disease, proteomics analyzes several biological samples including serum, blood cells, bronchoalveolar lavage fluids (BAL), nasal lavage fluids (NLF), and sputum, among others. [ 82 ] The identification and quantification of complete protein expression from these biological samples are conducted by mass spectrometry and advanced analytical techniques. [ 83 ] Respiratory proteomics has made significant progress in the development of personalized medicine for supporting health care in recent years. For example, in a study conducted by Lazzari et al. in 2012, a proteomics-based approach substantially improved the identification of multiple biomarkers of lung cancer that can be used in tailoring personalized treatments for individual patients. [ 84 ] A growing number of studies have demonstrated the usefulness of proteomics for providing targeted therapies for respiratory disease. [ 82 ] Over recent decades cancer research has discovered a great deal about the genetic variety of types of cancer that appear the same in traditional pathology . There has also been increasing awareness of tumor heterogeneity , or genetic diversity within a single tumor. Among other prospects, these discoveries raise the possibility of finding that drugs that have not given good results applied to a general population of cases may yet be successful for a proportion of cases with particular genetic profiles. Personalized oncogenomics is the application of personalized medicine to cancer genomics. High-throughput sequencing methods are used to characterize genes associated with cancer to better understand disease pathology and improve drug development . Oncogenomics is one of the most promising branches of genomics , particularly because of its implications in drug therapy. Through the use of genomics ( microarray ), proteomics (tissue array), and imaging ( fMRI , micro-CT ) technologies, molecular-scale information about patients can be easily obtained. These so-called molecular biomarkers have proven powerful in disease prognosis, such as with cancer. [ 89 ] [ 90 ] [ 91 ] The three main areas of cancer prediction fall under cancer recurrence, cancer susceptibility and cancer survivability. [ 92 ] Combining molecular-scale information with macro-scale clinical data, such as patients' tumor type and other risk factors, significantly improves prognosis. [ 92 ] Consequently, with the use of molecular biomarkers, especially genomic ones, cancer prognosis or prediction has become very effective, particularly when screening a large population. [ 93 ] Essentially, population genomics screening can be used to identify people at risk for disease, which can assist in preventative efforts. [ 93 ] Genetic data can be used to construct polygenic scores , which estimate traits such as disease risk by summing the estimated effects of individual variants discovered through a GWAS. These have been used for a wide variety of conditions, such as cancer, diabetes, and coronary artery disease. [ 94 ] [ 95 ]
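To make the summation concrete, here is a minimal sketch of how a polygenic score is typically computed: a weighted sum of allele dosages, with per-variant weights (effect sizes) taken from GWAS summary statistics. The variant IDs and numbers below are invented for illustration.

    # Toy polygenic score: sum of (effect size x allele dosage) over variants.
    # Effect sizes and dosages are made-up illustrative values.

    effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}   # per-allele weights from a GWAS
    dosages      = {"rs0001": 2,    "rs0002": 1,     "rs0003": 0}      # 0, 1, or 2 risk alleles per person

    score = sum(effect_sizes[v] * dosages.get(v, 0) for v in effect_sizes)
    print(f"polygenic score = {score:.2f}")   # 0.12*2 + (-0.05)*1 + 0.30*0 = 0.19

In practice such a raw score is only meaningful relative to a reference population, which is why interpretation is usually expressed as a percentile, as noted below.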
Many genetic variants are associated with ancestry, and it remains a challenge both to generate accurate estimates and to decouple biologically relevant variants from those that are coincidentally associated. Estimates generated from one population do not usually transfer well to others, requiring sophisticated methods and more diverse and global data. [ 96 ] [ 97 ] Most studies have used data from those with European ancestry, leading to calls for more equitable genomics practices to reduce health disparities. [ 98 ] Additionally, while polygenic scores have some predictive accuracy, their interpretations are limited to estimating an individual's percentile, and translational research is needed for clinical use. [ 99 ] As personalised medicine is practiced more widely, a number of challenges arise. The current approaches to intellectual property rights, reimbursement policies, patient privacy, data biases and confidentiality, as well as regulatory oversight, will have to be redefined and restructured to accommodate the changes personalised medicine will bring to healthcare. [ 100 ] For instance, a survey performed in the UK concluded that 63% of UK adults are not comfortable with their personal data being used for the purpose of applying AI in the medical field. [ 101 ] Furthermore, the analysis of acquired diagnostic data is a recent challenge of personalized medicine and its implementation. [ 38 ] For example, genetic data obtained from next-generation sequencing requires computer-intensive data processing prior to its analysis. [ 102 ] In the future, adequate tools will be required to accelerate the adoption of personalised medicine into further fields of medicine, which requires the interdisciplinary cooperation of experts from specific fields of research, such as medicine , clinical oncology , biology , and artificial intelligence . [ citation needed ] The U.S. Food and Drug Administration (FDA) has started taking initiatives to integrate personalised medicine into its regulatory policies . In October 2013, the agency published a report entitled " Paving the Way for Personalized Medicine: FDA's role in a New Era of Medical Product Development ," in which it outlined steps it would have to take to integrate genetic and biomarker information for clinical use and drug development. [ 72 ] These included developing specific regulatory standards , research methods and reference materials . [ 72 ] An example of the latter category is a "genomic reference library" the agency was working on, aimed at improving the quality and reliability of different sequencing platforms. [ 72 ] A major challenge for those regulating personalized medicine is demonstrating its effectiveness relative to the current standard of care . [ 103 ] The new technology must be assessed for both clinical and cost effectiveness, and as of 2013, regulatory agencies had no standardized method. [ 103 ] As with any innovation in medicine, investment and interest in personalised medicine are influenced by intellectual property rights. [ 100 ] There has been a lot of controversy regarding patent protection for diagnostic tools, genes, and biomarkers. [ 104 ] In June 2013, the U.S. Supreme Court ruled that naturally occurring genes cannot be patented, while "synthetic DNA" that is edited or artificially created can still be patented.
The Patent Office is currently reviewing a number of issues related to patent laws for personalised medicine, such as whether "confirmatory" secondary genetic tests, performed after an initial diagnosis, can have full immunity from patent laws. Those who oppose patents argue that patents on DNA sequences are an impediment to ongoing research, while proponents point to the research exemption and stress that patents are necessary to entice and protect the financial investments required for commercial research and for the development and advancement of the services offered. [ 104 ] Reimbursement policies will have to be redefined to fit the changes that personalised medicine will bring to the healthcare system. Some of the factors that should be considered are the level of efficacy of various genetic tests in the general population, cost-effectiveness relative to benefits, how to deal with payment systems for extremely rare conditions, and how to redefine the insurance concept of "shared risk" to incorporate the effect of the newer concept of "individual risk factors". The study Barriers to the Use of Personalized Medicine in Breast Cancer examined two different diagnostic tests, BRACAnalysis and Oncotype DX. These tests have turnaround times of over ten days, which results in failed tests and delays in treatment. Patients are not reimbursed for these delays, which results in tests not being ordered. Ultimately, this leads to patients having to pay out-of-pocket for treatments because insurance companies do not want to accept the risks involved. [ 105 ] Perhaps the most critical issue with the commercialization of personalised medicine is the protection of patients. One of the largest issues is the fear of, and potential consequences for, patients who are found after genetic testing to be predisposed to a disease or to be non-responsive towards certain treatments. This includes the psychological effects on patients due to genetic testing results. The rights of family members who do not directly consent are another issue, considering that genetic predispositions and risks are inheritable. The implications for certain ethnic groups and the presence of a common allele would also have to be considered. [ 100 ] Moreover, privacy issues arise at all layers of personalized medicine, from discovery to treatment. One of the leading issues is the consent of patients to have their information used in genetic testing algorithms, primarily AI algorithms. The consent of the institution that is providing the data to be used is a prominent concern as well. [ 101 ] In 2008, the Genetic Information Nondiscrimination Act (GINA) was passed in an effort to minimize the fear of patients participating in genetic research by ensuring that their genetic information will not be misused by employers or insurers. [ 100 ] On February 19, 2015, the FDA issued a press release titled "FDA permits marketing of first direct-to-consumer genetic carrier test for Bloom syndrome". [ 6 ] Data biases also play an integral role in personalized medicine. It is important to ensure that the sample of genes being tested comes from different populations. This is to ensure that the samples do not exhibit the same human biases we use in decision making. [ 106 ] Consequently, if the algorithms designed for personalized medicine are biased, then the outcomes of those algorithms will also be biased because of the lack of genetic testing in certain populations.
[ 107 ] For instance, the results from the Framingham Heart Study have led to biased outcomes in predicting the risk of cardiovascular disease. This is because the sample was tested only on white people, and when applied to non-white populations the results showed overestimation and underestimation of cardiovascular disease risk. [ 108 ] Several issues must be addressed before personalized medicine can be implemented. Very little of the human genome has been analyzed, and even if healthcare providers had access to a patient's full genetic information, very little of it could be effectively leveraged into treatment. [ 109 ] Challenges also arise when processing such large amounts of genetic data. Even with error rates as low as 1 per 100 kilobases, processing a human genome could contain roughly 30,000 errors (a genome of roughly 3 billion bases divided by one error per 100 kilobases). [ 110 ] This many errors, especially when trying to identify specific markers, can make discoveries and verifiability difficult. There are methods to overcome this, but they are computationally taxing and expensive. There are also issues from an effectiveness standpoint, as after the genome has been processed, functional variation among genomes must be analyzed using genome-wide studies. While the impact of the SNPs discovered in these kinds of studies can be predicted, more work must be done to control for the vast amounts of variation that can occur because of the size of the genome being studied. [ 110 ] In order to effectively move forward in this area, steps must be taken to ensure the data being analyzed is of good quality, and a wider view must be taken in terms of analyzing multiple SNPs for a phenotype. The most pressing issue for the implementation of personalized medicine is applying the results of genetic mapping to improve the healthcare system. This is not only due to the infrastructure and technology required for a centralized database of genome data, but also because the physicians who would have access to these tools would likely be unable to take full advantage of them. [ 110 ] In order to truly implement a personalized medicine healthcare system, there must be an end-to-end change. The Copenhagen Institute for Futures Studies and Roche set up FutureProofing Healthcare, [ 111 ] which produces a Personalised Health Index, rating different countries' performance against 27 different indicators of personalised health across four categories called 'Vital Signs'. They have run conferences in many countries to examine their findings. [ 112 ] [ 113 ]
https://en.wikipedia.org/wiki/Personalized_medicine
Personoid is a concept coined by Stanisław Lem , a Polish science-fiction writer, in Non Serviam, from his book A Perfect Vacuum (1971). His personoids are an abstraction of the functions of the human mind, and they live in computers; they do not need any human-like physical body. In cognitive and software modeling, a personoid is a research approach to the development of intelligent autonomous agents. Within the frame of the IPK (Information, Preferences, Knowledge) architecture, it is a framework for an abstract intelligent agent with cognitive and structural intelligence. It can be seen as an essence of highly intelligent entities. From the philosophical and systemics perspectives, personoid societies can also be seen as carriers of a culture . According to N. Gessler, the study of personoids can be a basis for research on artificial culture and cultural evolution . This article about futures studies is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Personoid
A Persoz pendulum is a device used for measuring the hardness of materials . The instrument consists of a pendulum which is free to swing on two balls resting on a coated test panel. The pendulum hardness test is based on the principle that the amplitude of the pendulum's oscillation will decrease more quickly when supported on a softer surface. The hardness of any given coating is given by the number of oscillations made by the pendulum within specified limits of amplitude, determined by accurately positioned photo sensors. An electronic counter records the number of swings made by the pendulum. The pendulum consists of balls which rest on the coating under test and form the fulcrum . The Persoz pendulum is very similar to the Konig pendulum . Both employ the same principle, that is, the softer the coating, the more the pendulum oscillations are damped and the shorter the time needed for the amplitude of oscillation to be reduced by a specified amount. The two pendulums differ in shape, mass and oscillation time, and there is no general relationship between the results obtained using the two pieces of equipment. In either case, the test simply involves noting the time in seconds for the amplitude of swing to decrease from either 6 to 3 degrees (Konig pendulum) or 12 to 4 degrees (Persoz pendulum). [ 1 ]
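As a rough numerical illustration of the principle (not the standardized test procedure), the amplitude of a lightly damped pendulum decays approximately exponentially, so the time to fall from 12 to 4 degrees can be estimated from an assumed decay constant; a harder, more elastic coating dissipates less energy per swing and gives a longer time. The decay constants below are invented illustrative values.

    import math

    # Sketch: time for a damped pendulum's amplitude to fall from 12 deg to 4 deg,
    # assuming simple exponential decay A(t) = A0 * exp(-t / tau).
    # tau (the decay constant, in seconds) is an assumed, illustrative value.

    def persoz_time(tau_seconds: float, a0_deg: float = 12.0, a1_deg: float = 4.0) -> float:
        return tau_seconds * math.log(a0_deg / a1_deg)

    for tau in (100.0, 300.0):          # a softer vs. a harder coating (illustrative)
        print(f"tau = {tau:>5.0f} s  ->  12->4 deg time ~ {persoz_time(tau):.0f} s")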
https://en.wikipedia.org/wiki/Persoz_pendulum
Geological perspective correlation is a theory in geology describing geometrical regularities in the layering of sediments. Seventy percent of the Earth's surface is occupied by sedimentary basins [ 1 ] – volumes consisting of sediments accumulated over millions of years, alternating with long interruptions in sedimentation (hiatuses). The most noticeable feature of the rocks that fill the basins is layering ( stratification ). [ 2 ] Stratigraphy is the branch of geology that investigates the phenomenon of layering. It describes the sequence of layers in the basin as consisting of stratigraphic units . Units are defined on the basis of their lithology and have no strict definition. [ 3 ] Geological Perspective Correlation (GPC) is a theory that divides the geological cross-section into units according to a strict mathematical rule: all borders of layers in such a unit obey the law of perspective geometry . [ 4 ] Sedimentary layers are mainly created in the shallow waters of oceans, seas, and lakes. As new layers are deposited, the old ones sink deeper under the weight of the accumulating sediments. [ 5 ] The content of sedimentary layers (lithological and biological), their order in the sequence, and their geometrical characteristics keep a record of the history of the Earth , of past climate , sea level and environment. [ 1 ] Most knowledge about sedimentary basins comes from exploration drilling when searching for oil and gas. The essential feature of this information is that each layer is penetrated by wells at a number of scattered locations. This raises the problem of identifying each layer in all wells – the geological correlation problem. [ 6 ] The identification is based on comparison of 1) the physical and mineralogical characteristics of the particular layer (lithostratigraphy), or 2) the petrified remnants in this layer (biostratigraphy). [ 7 ] The similarity of layers decreases as the distance between the cross-sections increases, which leads to ambiguity in the correlation scheme, the scheme that indicates which layers penetrated at different locations belong to the same body (see A). To improve the results, geologists take into consideration the spatial relations between layers, which restrict the number of acceptable correlations. The first restriction was formulated in the 17th century: the sequence of layers is the same in any cross-section. The second one was discovered by Haites in 1963: [ 8 ] in an undisturbed sequence of layers (strata), the thicknesses (H1 and H2) of any layer observed in two different locations obey the law of perspective geometry , i.e. the perspective ratio K = H1/H2 is the same for all layers in this succession. This theory attracted attention around the world, [ 9 ] [ 10 ] [ 11 ] and particularly in Russia. [ 12 ] [ 13 ] [ 14 ] The theory is also the basis of the method of graphical correlation in biostratigraphy, widely used in the oil and coal industries. [ 15 ] [ 16 ] [ 17 ] Geometry is the main lead in natural resource exploration. For example, oil geologists look for permeable layers of a particular geometry that keeps the oil in place [ 18 ] (for instance, the dome-shaped anticlinal trap ). Ore geologists look for faults in the sediments, the pathways that deliver molten mantle material to the upper crust. [ 19 ] Knowledge about the underground geometry of sedimentary basins comes from geological observations, geophysical measurements and from drilling .
[ 20 ] Drilling gives the most detailed information about the position, thickness, and physical, chemical and biological characteristics of each layer, but the point is that each well presents all this information at one location on the layer. Because the geometry of a layer can be very complicated, recovering it is a difficult problem and requires a significant number of drilled wells. The challenge is identifying in each well the interval that belongs to the same layer now or in the past [ 21 ] (see A). To do this, geologists use all available characteristics of the layer. Only after this is it possible to begin recovering the geometry of the layer (to be more precise, the geometry of the top and bottom surfaces of the layer). This procedure is called geological correlation, [ 6 ] and the results are presented as a correlation scheme (A). It is natural that at the beginning of exploration, when the number of wells is small, the correlation scheme contains expensive mistakes. [ 22 ] The Danish scientist Nicolas Steno (1638–1686) is credited with three principles of sedimentation. [ 23 ] Principle 1 allows defining the temporal relations between neighboring geological bodies; principle 2 organizes the geometrical pattern of the succession of layers; principle 3 helps unite the parts of a layer found in separate geological cross-sections. Practical correlation faces many difficulties: fuzzy borders of the layers, variations in the composition and structure of the rocks in the layer, unconformities in the sequence of layers, etc. This is why errors in correlation schemes are not uncommon. [ 22 ] When the distances between available cross-sections decrease (for example, by drilling new wells), the quality of correlation improves, but in the meantime wrong geological decisions may already have been made, which increases the expenses of geological projects. From Steno's principle of original horizontality it follows that the top borders of the layers (tops) were initially flat, and they remain flat as long as the complete succession stays undisturbed by subsequent tectonic movements; however, no regularities about the geometric relations between these flat surfaces in the succession were known. The first to shed light on the problem was the Canadian geologist Haites: in 1963 he published the Geological Perspective Correlation hypothesis. [ 8 ] Perspective geological correlation is a theory that establishes strong geometrical restrictions on the geometry of the layers in sedimentary deposits. In 1963 Haites discovered a strong regularity of the layering in sedimentary basins : the thicknesses of layers within each stratigraphic unit are governed by the law of perspective correspondence . [ 8 ] It means that, in an undisturbed succession, the straight lines drawn on the correlation scheme through the border points of the same layer in two cross-sections intersect at one point, the center of perspectivity (see B). For geological purposes, a more convenient geometrical presentation of the perspective relations is the correlation plot proposed by Jekhowsky [ 24 ] (see C): the depths of the layers' borders in one geologic cross-section are plotted along axis h′ (h1′, h2′, h3′, ...), and the positions of the same layers in another cross-section are plotted along axis h′′ (h1′′, h2′′, h3′′, ...). Points 1, 2, 3 ... with coordinates (h1′, h1′′), (h2′, h2′′), and (h3′, h3′′), accordingly, are called correlation points , and a curve drawn through these points, a correlation line .
Black dots ( connectors ) represent the relative position of correlated borders on the plot. When the layer geometry satisfies the conditions of perspective correspondence, the correlation line is a straight line. In the particular case of parallel layers, the inclination of the correlation line is 45°. Haites also concluded that all strata in each unit were governed by the same rate of deposition, and that their borders are synchronous time-planes. Each layer has different thicknesses in different locations, but all layers lasted equally long. This was a significant contribution to chronostratigraphy . Several consequences follow from these basic statements. The Perspective Geological Correlation is well grounded in traditional geology . The method of convergence maps serves to determine the structure of a layer based on the known structure of the layers lying above it. It is based on the assumption that the layers are close to parallel. [ 25 ] A convergence map shows lines of equal distance ( isopach lines ) between a key layer and a target layer. [ 26 ] If the layers are parallel, the distance between them is constant, the structures of both layers are identical, and to determine the depths of the target horizon it is enough to have only one deep well that reaches the target layer. In reality, such conditions are extremely rare, and restoring the geometry of the target horizon demands a number of deep wells in the area. In this case the standard procedure for calculating the distance between the target layer and the key layer at any point in the area is linear interpolation between the known wells. [ 27 ] The reliability of the result (the geometrical structure of the target horizon) is estimated by analysing the trend of the distances between the key horizon and the target horizon (isopachs): if the trend is regular, for example if the distances change monotonically in one direction, it is a sign that the reconstruction is reliable. In the simplest case the surface of the target horizon is a plane in general position , and linear interpolation gives the correct result. The assumptions of the convergence method are consequences of the perspective correlation theory, [ 28 ] so the method obtains a theoretical grounding. The theory also gave an additional criterion for the validity of the reconstructed surface: it defines the stratigraphic interval where layers were deposited without interruption and where the layers' thicknesses satisfy the law of perspective geometry . The convergence maps deliver the correct result only when the layers belong to such a stratigraphic unit. The description of the theory was accompanied by a number of cases in support of the theory. The first review of Haites' publication appeared in 1964 in Russia. [ 13 ] It describes the hypothesis in detail and rates its potential very highly. The idea attracted programmers working on the automation of correlation on computers: the known rules of correlation were fuzzy, and it was impossible to formalize them and transform them into algorithms. The restrictions on the geometry of layering observed by Haites allowed compensating for the lack of informal human knowledge. A group of Russian scientists (Guberman, Ovchinnikova, Maximov) positively tested Haites' hypothesis in different oil-bearing provinces using a computer program (in Central Asia, the Volga-Ural province, West and East Siberia, and the Russian Platform). [ 12 ] [ 29 ] For example, see plot (F).
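A minimal numerical sketch of the correlation-plot test described above: plot the depths of the same boundaries in two wells against each other and fit a straight line. An approximately linear fit is consistent with perspective correspondence, and a slope of 1 (45°) corresponds to parallel layers. The depth values below are invented for illustration.

    # Toy check of perspective correspondence between two wells.
    # depths_a / depths_b: depths (metres) of the same layer boundaries in wells A and B (invented values).

    depths_a = [100.0, 180.0, 260.0, 340.0]
    depths_b = [150.0, 250.0, 350.0, 450.0]

    n = len(depths_a)
    mean_a = sum(depths_a) / n
    mean_b = sum(depths_b) / n
    slope_num = sum((a - mean_a) * (b - mean_b) for a, b in zip(depths_a, depths_b))
    slope_den = sum((a - mean_a) ** 2 for a in depths_a)
    slope = slope_num / slope_den
    intercept = mean_b - slope * mean_a
    residuals = [b - (slope * a + intercept) for a, b in zip(depths_a, depths_b)]

    print(f"slope = {slope:.2f}, intercept = {intercept:.1f} m")   # slope 1.25: thicknesses scaled by a constant ratio
    print("max misfit =", max(abs(r) for r in residuals), "m")     # ~0 means the correlation line is straight

Here every layer is 80 m thick in well A and 100 m thick in well B, so the perspective ratio K = 0.8 is the same for all layers and the correlation line is exactly straight.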
The activity of this group continued into the 2000s, and covers new geological provinces around the globe: Canada, Kansas, Louisiana, and South Wales. [ 30 ] O. Karpenko demonstrated an effective use of perspective correlation in resolving very practical problems of oil exploration. The law of perspective correspondence allowed the discovery of boundaries where the paleotectonic regime changed in thin-layered sedimentary rocks, where the regular correlation technique did not work. Using the example of the Rubanivsk gas field, the author demonstrated that the Dashava deposits of the Precarpathian External Zone depression can be divided into a number of zones of stable sediment accumulation under different conditions. Some zones correlate with intervals of enhanced gas flow rate. [ 31 ] These works show the practical value of the approach. Since the publication of Haites' theory in 1963, it has been republished in a number of reviews on quantitative methods of correlation (including automatic correlation). [ 9 ] [ 32 ] [ 10 ] Some of the reports (Hansen, Salin, Barinova) demonstrate that perspective correlation allowed better reconstruction of the geological structure at the early stages of geological exploration. Hansen [ 11 ] describes the controversial history of investigating the complicated Patapsco Formation in Maryland and Virginia (USA), and claims that “an adaptation of Haites' (1963) technique of perspective correlation is used to subdivide the Patapsco Formation into consistently defined mapping units”. Salin was able to simplify the stratigraphic description of the Khatyr depression (Siberia) by applying perspective correlation. [ 14 ] Barinova analysed the structure of the Osipovichy underground gas storage (Eastern Europe) by an automatic correlation program based on Haites' principles. [ 33 ] Because of the high resolving power of the method, the existence of a number of geological faults that break the leak-tightness of the storage was recognized. Because of the small displacements of the faults, they had not been found by the traditional methods of correlation, and they were rejected by the geological service of the project. Very soon after the storage started functioning, significant leakage of gas was recognized. [ 34 ] In 1964 Shaw proposed a method of correlating fossiliferous stratigraphic profiles using a two-axis graph (H). [ 15 ] The markers on each axis are the observed depths of the lowest (FAD) and highest (LAD) occurrences of a specially defined group of fossils ( taxa ). The appearances and disappearances of taxa are regarded as synchronous and used as markers of correlation. When projected on a graph, the corresponding points of two compared profiles form the Line of Correlation (LOC). Shaw showed that the ideal LOC consists of linear segments (H). Such conditions occur when the number of collected fossils is large, so that one can be sure the material covers the complete range of fossil appearances, and FADs and LADs can be accurately determined. In reality, some sampled ranges will be shorter than the true ranges, and this can disturb the linearity of the LOC. In every stratigraphic interval the correlated ends of the range (FAD or LAD) belong to the same time surface, and in each geological cross-section (well or outcrop) this interval has identical duration but different thickness. It means that accumulation rates (thickness-to-duration ratio = tan β) are different in different locations.
From the fact that the relation between the durations of the units and their thicknesses is linear, it follows that within the limits of a linear section of the LOC all strata have the same accumulation rate. The reliability and accuracy of Shaw's method have been tested by Edwards, [ 16 ] using a computer simulation on hypothetical data sets, and by Rubel and Pak [ 35 ] in terms of formal logic and stochastic theory. Graphical correlation became a very important tool of stratigraphy in the coal and oil industries. In 1988 Nemec showed the equivalence of Haites' perspective correlation and Shaw's graphical correlation. [ 17 ] Based on the theory of perspective correlation, in 1986 S. Guberman proposed a model of the process of sedimentation. [ 28 ] [ 36 ] According to Haites' theory, in a given sedimentary basin the conditions of perspective correspondence are satisfied in each stratigraphic unit for any pair of wells. [ 8 ] From this it follows that the tops and bases of the layers in this stratigraphic unit satisfy the conditions of perspective correspondence in 3-D space (K). Any three points of a plane define the complete plane. It means that if the thicknesses of the layers belonging to the same stratigraphic unit are known in three wells, then the thicknesses of these layers can be calculated for any location in the basin. Accordingly, if the structure of the top border of the stratigraphic unit is known, the structure of any other border in this unit can be calculated. The model of how such a sophisticated geometrical pattern is created is based on Steno's first principle: the strata are originally horizontal, i.e. they are planes. This occurs in shallow waters due to the turbulence of the layer of water just below the surface. Steno's second principle, which indicates the creation of a series of sedimentary layers lying on top of each other, presupposes the subsidence of the basin. The sinking of the basin follows strong geometrical restrictions: the tectonic block that carries the basin rotates around a straight line that is parallel to the water surface and located onshore (L). Later tectonic movements distort the shape of the layers: the borders are no longer planes, but in the majority of cases the changes are smooth and the perspective relations are maintained. This model allows some geological terms to be made more precise. Steno's horizontality principle should state: the top surface of the sediments is horizontal. Conformity is a fundamental notion in stratigraphy. Until now this term has been used with two different meanings: a surface between two stratigraphic sequences, and the relationship between two stratigraphic units. Sometimes both were used in the same paragraph (see, [ 37 ] page 84). The perspective correlation principle allows the notion of conformity to be defined: a sequence of layers that obeys the conditions of geometrical perspective is a unit of conformity. Two neighboring units of conformity are in a relation of unconformity. Here is an example showing that the borders of an undisturbed stratigraphic unit in the Middle Carboniferous (Volga-Ural oil province, Russia) initially were planes. In the central part of the area (about 100 km in diameter), three wells were chosen at distances of 10–15 km.
The three tops of the stratigraphic unit in the three wells are points in 3D space with coordinates x, y, z, where x and y represent the position of the well on the surface (M), and z is the thickness of the stratigraphic unit at this location. They determine the top plane of the unit as it was at the time of its creation. The three bases determine the bottom plane of the unit as it was at the same time. This allowed calculating the thickness of the stratigraphic unit at any point in the area. Because the area was drilled densely enough, the calculated numbers can be compared with the real ones. The average difference equals 2%.
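A minimal sketch of the plane computation described above: three wells, each with surface coordinates (x, y) and the unit thickness z, define a plane z = ax + by + c, which can then be evaluated at any other location. The coordinates and thicknesses below are invented for illustration.

    # Fit the plane z = a*x + b*y + c through three wells and predict thickness elsewhere.
    # Well coordinates (km) and unit thicknesses (m) are invented illustrative values.

    wells = [(0.0, 0.0, 120.0), (10.0, 0.0, 140.0), (0.0, 12.0, 150.0)]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[x, y, 1.0] for x, y, _ in wells]
    z = [t for _, _, t in wells]
    d = det3(A)
    a = det3([[z[i], A[i][1], A[i][2]] for i in range(3)]) / d   # Cramer's rule
    b = det3([[A[i][0], z[i], A[i][2]] for i in range(3)]) / d
    c = det3([[A[i][0], A[i][1], z[i]] for i in range(3)]) / d

    x_new, y_new = 5.0, 6.0
    print(f"predicted thickness at ({x_new}, {y_new}) km = {a * x_new + b * y_new + c:.1f} m")  # 145.0 m

With a fourth, independently drilled well, the predicted and observed thicknesses can be compared, which is the kind of check summarized by the 2% average difference quoted above.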
https://en.wikipedia.org/wiki/Perspective_geological_correlation
Perspiration , also known as sweat , is the fluid secreted by sweat glands in the skin of mammals . [ 1 ] Two types of sweat glands can be found in humans: eccrine glands and apocrine glands . [ 2 ] The eccrine sweat glands are distributed over much of the body and are responsible for secreting the watery, brackish sweat most often triggered by excessive body temperature. Apocrine sweat glands are restricted to the armpits and a few other areas of the body and produce an odorless, oily, opaque secretion which then gains its characteristic odor from bacterial decomposition. In humans , sweating is primarily a means of thermoregulation , which is achieved by the water-rich secretion of the eccrine glands. Maximum sweat rates of an adult can be up to 2–4 litres (0.5–1 US gal) per hour or 10–14 litres (2.5–3.5 US gal) per day, but is less in children prior to puberty. [ 3 ] [ 4 ] [ 5 ] Evaporation of sweat from the skin surface has a cooling effect due to evaporative cooling . Hence, in hot weather, or when the individual's muscles heat up due to exertion, more sweat is produced. Animals with few sweat glands, such as dogs , accomplish similar temperature regulation results by panting, which evaporates water from the moist lining of the oral cavity and pharynx . Although sweating is found in a wide variety of mammals, [ 6 ] [ 7 ] relatively few (apart from humans, horses , some primates and some bovidae ) produce sweat in order to cool down. [ 8 ] In horses, such cooling sweat is created by apocrine glands [ 9 ] and contains a wetting agent, the protein latherin which transfers from the skin to the surface of their coats. [ 10 ] Sweat contributes to body odor when it is metabolized by bacteria on the skin . Medications that are used for other treatments and diet also affect odor. Some medical conditions, such as kidney failure and diabetic ketoacidosis , can also affect sweat odor. [ citation needed ] Diaphoresis is a non-specific symptom or sign, which means that it has many possible causes. Some causes of diaphoresis include physical exertion, menopause , fever, ingestion of toxins or irritants, and high environmental temperature. Strong emotions (anger, fear, anxiety) and recall of past trauma can also trigger sweating. This is sometimes referred to as flop sweat. [ 15 ] The vast majority of sweat glands in the body are innervated by sympathetic cholinergic neurons. [ 16 ] Sympathetic postganglionic neurons typically secrete norepinephrine and are named sympathetic adrenergic neurons; however, the sympathetic postganglionic neurons that innervate sweat glands secrete acetylcholine and hence are termed sympathetic cholinergic neurons. Sweat glands, piloerector muscles, and some blood vessels are innervated by sympathetic cholinergic neurons. Diaphoresis may be associated with some abnormal conditions, such as hyperthyroidism and shock. If it is accompanied by unexplained weight loss , fever / chills , or by palpitations , shortness of breath , unconsciousness , fatigue , dizziness , muscle pain , nausea , vomiting , diarrhea , and chest discomfort, it suggests serious illness. Diaphoresis is also seen in an acute myocardial infarction (heart attack), from the increased firing of the sympathetic nervous system , and is frequent in serotonin syndrome , which can result in serious sickness or even death. Diaphoresis can also be caused by many types of infections, often accompanied by high fever and/or chills which can trigger the result of hyperthermia . 
Most infections can cause some degree of diaphoresis and it is a very common symptom in some serious infections such as malaria and tuberculosis . In addition, pneumothorax can cause diaphoresis with splinting of the chest wall. Neuroleptic malignant syndrome and other malignant diseases (e.g. leukemias) can also cause diaphoresis. [ 17 ] Diabetics relying on insulin shots or oral medications may have low blood sugar ( hypoglycemia ), which can also cause diaphoresis. Drugs (including caffeine , morphine , alcohol , antidepressants and certain antipsychotics) may be causes, as well as withdrawal from alcohol , benzodiazepines , nonbenzodiazepines or narcotic painkiller dependencies. Sympathetic nervous system stimulants such as cocaine and amphetamines have also been associated with diaphoresis. Diaphoresis due to ectopic catecholamine is a classic symptom of a pheochromocytoma , a rare tumor of the adrenal gland . Acetylcholinesterase inhibitors (e.g. some insecticides ) also cause contraction of sweat gland smooth muscle leading to diaphoresis. Mercury is well known for its use as a diaphoretic, and was widely used in the 19th and early 20th century by physicians to "purge" the body of an illness. However, due to the high toxicity of mercury, secondary symptoms would manifest, which were erroneously attributed to the former disease that was being treated with mercurials. [ citation needed ] Infantile acrodynia (childhood mercury poisoning) is characterized by excessive perspiration. A clinician should immediately consider acrodynia in an afebrile child who is sweating profusely. Some people can develop a sweat allergy . [ 18 ] [ 19 ] The allergy is not due to the sweat itself but instead to an allergy-producing protein secreted by bacteria found on the skin. [ 19 ] : 52 Tannic-acid has been found to suppress the allergic response along with showering. [ 18 ] Millions of people are affected by hyperhidrosis , but more than half never receive treatment due to embarrassment, lack of awareness, or lack of concern. [ 20 ] While it most commonly affects the armpits , feet, and hands, it is possible for someone to experience this condition over their whole body. The face is another common area for hyperhidrosis to be an issue. Sweating uncontrollably is not always expected and may be embarrassing to people with the condition. It can cause both physiological and emotional problems in patients. It is generally inherited. [ 20 ] It is not life-threatening, but it is threatening to a person's quality of life. [ 21 ] Treatments for hyperhidrosis include antiperspirants , iontophoresis, and surgical removal of sweat glands. In severe cases, botulinum toxin injections or surgical cutting of nerves that stimulate the excessive sweating ( endoscopic thoracic sympathectomy ) may be an option. [ 20 ] Night sweats, also known as nocturnal hyperhidrosis, is the occurrence of excessive sweating during sleep. The person may or may not also perspire excessively while awake. One of the most common causes of night sweats in women over 40 is the hormonal changes related to menopause and perimenopause. This is a very common occurrence during the menopausal transition years. While night sweats might be relatively harmless, it can also be a sign of a serious underlying disease. It is important to distinguish night sweats due to medical causes from those that occur simply because the sleep environment is too warm, either because the bedroom is unusually hot or because there are too many covers on the bed. 
Night sweats caused by a medical condition or infection can be described as "severe hot flashes occurring at night that can drench sleepwear and sheets, which are not related to the environment". Some of the underlying medical conditions and infections that cause these severe night sweats can be life-threatening and should promptly be investigated by a medical practitioner. [ citation needed ] Sweating allows the body to regulate its temperature. Sweating is controlled from a center in the preoptic and anterior regions of the brain's hypothalamus , where thermosensitive neurons are located. The heat-regulatory function of the hypothalamus is also affected by inputs from temperature receptors in the skin . High skin temperature reduces the hypothalamic set point for sweating and increases the gain of the hypothalamic feedback system in response to variations in core temperature . Overall, however, the sweating response to a rise in hypothalamic ('core') temperature is much larger than the response to the same increase in average skin temperature. [ citation needed ] Sweating causes a decrease in core temperature through evaporative cooling at the skin surface. As high energy molecules evaporate from the skin, releasing energy absorbed from the body, the skin and superficial vessels decrease in temperature. Cooled venous blood then returns to the body's core and counteracts rising core temperatures. [ citation needed ] There are two situations in which the nerves will stimulate the sweat glands, causing perspiration: during physical heat and during emotional stress. In general, emotionally induced sweating is restricted to palms , soles , armpits , and sometimes the forehead , while physical heat-induced sweating occurs throughout the body. [ 22 ] People have an average of two to four million sweat glands, but how much sweat is released by each gland is determined by many factors, including sex, genetics, environmental conditions, age and fitness level. Two of the major contributors to sweat rate are an individual's fitness level and weight. If an individual weighs more, sweat rate is likely to increase because the body must exert more energy to function and there is more body mass to cool down. On the other hand, a fit person will start sweating earlier and more readily. As someone becomes fit, the body becomes more efficient at regulating the body's temperature and sweat glands adapt along with the body's other systems. [ 23 ] Human sweat is not pure water ; though it contains no protein, it always contains a small amount (0.2–1%) of solute . When a person moves from a cold climate to a hot climate, adaptive changes occur in the sweating mechanisms of the person. This process is referred to as acclimatization : the maximum rate of sweating increases and its solute composition decreases. The volume of water lost in sweat daily is highly variable, ranging from 100 to 8,000 millilitres per day (0.041 to 3.259 imp fl oz/ks). The solute loss can be as much as 350 mmol/d (or 90 mmol/d acclimatised) of sodium under the most extreme conditions. During average intensity exercise, sweat losses can average up to 2 litres (0.44 imp gal; 0.53 US gal) of water/hour. In a cool climate and in the absence of exercise , sodium loss can be very low (less than 5 mmol/d). Sodium concentration in sweat is 30–65 mmol/L, depending on the degree of acclimatisation. 
[ citation needed ] Horses have a thick, waterproofed, hairy coat that would normally block the rapid translocation of sweat water from the skin to the surface of the hair required for evaporative cooling. To solve this, horses have evolved a detergent-like protein, latherin , that they release at high concentrations in their sweat. [ 10 ] Their perspiration unlike humans is created by apocrine glands. [ 9 ] This protein, by wetting the horses' coat hairs facilitate water flow for cooling evaporation. The presence of this protein can be seen in the lathering that often occurs on the coats of sweating horses, especially when rubbed. [ 10 ] In hot conditions, horses during three hours of moderate-intensity exercise can lose 30 to 35 litres (6.6 to 7.7 imp gal; 7.9 to 9.2 US gal) of water and 100 grams (3.5 oz) of sodium, 198 grams (7.0 oz) of chloride and 45 grams (1.6 oz) of potassium. [ 9 ] Sweat is mostly water . A microfluidic model of the eccrine sweat gland provides details on what solutes partition into sweat, their mechanisms of partitioning, and their fluidic transport to the skin surface. [ 24 ] Dissolved in the water are trace amounts of minerals , lactic acid , and urea . Although the mineral content varies, some measured concentrations are: sodium ( 0.9 gram/litre ), potassium ( 0.2 g/L ), calcium ( 0.015 g/L ), and magnesium ( 0.0013 g/L ). [ 25 ] Relative to the plasma and extracellular fluid, the concentration of Na + ions is much lower in sweat (≈40 mM in sweat versus ≈150 mM in plasma and extracellular fluid). Initially, within eccrine glands sweat has a high concentration of Na + ions. In the sweat ducts, the Na + ions are re-absorbed into tissue by epithelial sodium channels (ENaC) that are located on the apical membrane of epithelial cells that form the duct (see Fig. 9 of the reference). [ 2 ] Many other trace elements are also excreted in sweat, again an indication of their concentration is (although measurements can vary fifteenfold) zinc ( 0.4 milligrams/litre ), copper ( 0.3–0.8 mg/L ), iron ( 1 mg/L ), chromium ( 0.1 mg/L ), nickel ( 0.05 mg/L ), and lead ( 0.05 mg/L ). [ 26 ] [ 27 ] Probably many other less-abundant trace minerals leave the body through sweating with correspondingly lower concentrations. Some exogenous organic compounds make their way into sweat as exemplified by an unidentified odiferous "maple syrup" scented compound in several of the species in the mushroom genus Lactarius . [ 28 ] In humans, sweat is hypoosmotic relative to plasma [ 29 ] (i.e. less concentrated ). Sweat is found at moderately acidic to neutral pH levels, typically between 4.5 and 7.0. [ 30 ] Sweat contains many glycoproteins . [ 31 ] Sweat may serve an antimicrobial function, like that of earwax or other secretory fluids (e.g., tears, saliva, and milk). [ clarification needed ] It does this through a combination of glycoproteins that either bind directly to, or prevent the binding of microbes to, the skin and seem to form part of the innate immune system . [ 31 ] In 2001, researchers at Eberhard-Karls University in Tübingen, Germany , isolated a large protein called dermcidin from skin. This protein, which could be cleaved into other antimicrobial peptides , was shown to be effective at killing some species of bacteria and fungi that affect humans, including Escherichia coli , Enterococcus faecalis , Staphylococcus aureus , and Candida albicans . 
It was active at high salt concentrations and in the acidity range of human sweat, where it was present at concentrations of 1–10 mg/ml. [ 32 ] [ 33 ] Artificial skin capable of sweating similar to natural sweat rates and with the surface texture and wetting properties of regular skin has been developed for research purposes. [ 34 ] [ 35 ] Artificial perspiration is also available for in-vitro testing, and contains 19 amino acids and the most abundant minerals and metabolites in sweat. [ citation needed ] There is interest in its use in wearable technology . Sweat can be sampled and sensed non-invasively and continuously using electronic tattoos, bands, or patches. [ 36 ] However, sweat as a diagnostic fluid presents numerous challenges as well, such as very small sample volumes and filtration (dilution) of larger-sized hydrophilic analytes. Currently the only major commercial application for sweat diagnostics is for infant cystic fibrosis testing based on sweat chloride concentrations. [ citation needed ]
https://en.wikipedia.org/wiki/Perspiration
Perstraction is a membrane extraction process in which two liquid phases are contacted across a membrane. The desired species in the feed (the solute) selectively crosses the membrane into the extracting solution. Perstraction was originally developed to overcome the downsides of liquid–liquid extraction , for example extractant toxicity and emulsion formation. Perstraction has been applied to many fields including fermentation, [ 1 ] waste water treatment [ 2 ] and alcohol-free beverage production. Perstraction is a separation technique developed from liquid-liquid extraction. Due to the presence of the membrane, a wider selection of extractants can be used; this can include the use of miscible solutions, for example the recovery of ammonia from waste water using sulphuric acid. [ 3 ] The process is analogous to pervaporation in some ways, but the permeate is in the liquid phase. The perstraction technique eliminates the problem of phase dispersion and separation altogether. [ 4 ] A basic perstraction is called single perstraction or membrane perstraction. An advantage is minimizing toxic damage to microorganisms or enzymes . Nevertheless, perstraction has problems such as expensive membranes and the clogging and fouling of membranes. [ 5 ] Perstraction has been combined with the ABE (acetone butanol ethanol) fermentation for butanol production. [ 1 ] Butanol is toxic to the fermentation; therefore perstraction can be applied to remove the butanol from the vicinity of the bacteria as soon as it is produced. Liquid-liquid extraction (LLE) was combined with the ABE fermentation for in situ product recovery , but the extractants with the highest affinity for butanol tend to be toxic to the bacteria. The application of LLE would also require the extractant to be sterilised prior to contact with the fermentation broth. Perstraction can overcome these problems because a membrane separates the fermentation broth from the extractant. As an in situ product recovery technique for the ABE fermentation, perstraction is still in its development stages. A membrane brings many new elements to the separation. Amino acids have been separated by perstraction. [ 6 ] [ 7 ] The membranes not only separated the extractant from the primary solution but were also selective for amino acids. Charged membranes were used, so they selected amino acids by pKa. Besides, the selectivity of a membrane is affected by its thickness, pore diameter and charge potential. The bigger the pore, the better amino acids permeate the membrane. The higher the charge potential, the bigger the electrostatic rejection effects. The thinner the membrane, the less selective it is. Pollutants can be removed from groundwater by perstraction. [ 8 ] Different techniques have been patented. [ 9 ] The oldest was published in 1990 and the newest in 1998. In the 2000s a few patent applications were filed, but no patents were granted. [ 10 ] Organic compounds have been concentrated from groundwater through a membrane. [ 8 ] The concentration factor is from 1,000 to 10,000, bringing 0.1 ppb concentrations to between 0.1 and 1.0 ppm. In addition, the concentration of a contaminant has been analyzed in real time. The membrane is a polymer such as polysulphone. The hole diameter is 300 μm and the thickness is 30 μm. Pharmaceuticals pass through sewage treatment plants. Pharmaceuticals such as estrogen conjugates may cause problems. The drugs in the research were common, present in the aquatic environment, and not adequately removed by sewage treatment plants.
[ 11 ] There were seven different drugs in the research. Dibutyl sebacate and oleic acid formed the liquid cores of capsules because they do not diffuse away from the capsules and have an affinity for the drugs. Capsule external diameters were 740 μm and 680 μm and internal diameters were 570 μm and 500 μm. Agitation was 300 rpm. Equilibrium times were 30, 50 and 90 minutes. Since dibutyl sebacate and oleic acid had different affinities for the drugs, they were used concurrently. [ 11 ] Four drugs were extracted effectively within 40–50 minutes (at least 50% removed). Extraction rates did not change significantly above 150 rpm. Membrane thickness did not affect the result significantly; by contrast, the capsule size mattered considerably for mass transfer. An antibiotic called geldanamycin was separated from media by capsular perstraction. [ 12 ] Geldanamycin is hydrophobic. The outer particle diameter varied from less than 500 to 750 μm. Alginate formed the shell of the capsule, and its thickness varied from 30 to 90 μm. Dibutyl sebacate or oleic acid as the liquid core extracted geldanamycin well. The greater the agitation and the thinner the capsule membrane, the faster the transfer rate. Geldanamycin was back-extracted from the capsules. [ 12 ] Dibutyl sebacate capsules were disposable because the liquid core came out of the capsules during the back-extraction. In contrast, oleic acid remained in the capsules during the back-extraction when the extractant was saturated with oleic acid. Nevertheless, the presence of oleic acid in the back-extraction solution demanded more purification steps (precipitation, centrifugation and filtration). Oleic acid was removed because it prevents the crystallization of geldanamycin. Therefore, geldanamycin was crystallized and the end product was highly purified. Enzymes can be immobilized on the capsule membrane. [ 6 ] In this case, the capsule external diameter was 500 μm and the internal diameter 300 μm. The product of an enzyme-catalyzed reaction can be concentrated into the capsules, and end-product inhibition is low. [ 13 ] Enzyme recycling could be performed by back-extracting the product. The technique has been applied to the hydrolysis of penicillin G . Osmosis - a process by which solvent molecules cross between liquids separated by a membrane
https://en.wikipedia.org/wiki/Perstraction
In chemistry , persulfide refers to the functional group R-S-S-H. [ 1 ] Persulfides are intermediates in the biosynthesis of iron-sulfur proteins [ 2 ] and are invoked as precursors to hydrogen sulfide , a signaling molecule. The nomenclature used for organosulfur compounds is often non-systematic. Sometimes persulfides are called hydrodisulfides to further avoid confusion with disulfides, which have the grouping R-S-S-R, by emphasizing the presence of an H at one end of a disulfide bond . Compared to thiols (R-S-H), persulfides are uncommon. They are thermodynamically unstable with respect to loss of elemental sulfur: RSSH → RSH + 1/8 S 8 . Nonetheless, persulfides are often kinetically stable. The S-H bond is both more acidic and more fragile than in thiols. This can be seen in the bond dissociation energy of a typical persulfide, which is 22 kcal/mol weaker than that of a typical thiol, and in the lower pK a of about 6.2 for persulfides compared to 7.5 for thiols. Thus, persulfides exist predominantly in the ionized form at neutral pH. This effect is attributed to the stability of the RSS· radical. [ 1 ] The structure of trityl persulfide has been determined by X-ray crystallography . The S-S bond length is 204 picometers and the C-S-S-H dihedral angle is 82°. These parameters are unexceptional. [ 3 ] (C 6 H 5 ) 3 CSSH behaves as a source of sulfur, illustrated by its reaction with triphenylphosphine to give triphenylphosphine sulfide and triphenylmethanethiol : (C 6 H 5 ) 3 CSSH + P(C 6 H 5 ) 3 → (C 6 H 5 ) 3 CSH + S=P(C 6 H 5 ) 3 . The cofactors 4-thiouridine and thiamine are produced by the action of persulfides. Cystathionase generates the persulfide of cysteine (sometimes called thiocysteine) from cystine . Persulfides have been invoked as intermediates in the biodegradation of carbon disulfide [ 4 ] and mercaptopyruvate .
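The claim above that persulfides are mostly ionized at neutral pH follows directly from the Henderson–Hasselbalch relationship. A short worked calculation using the approximate pKa figures quoted above (a sketch, not a measured result):

    # Fraction deprotonated (ionized) at a given pH, from the Henderson-Hasselbalch equation:
    #   fraction = 1 / (1 + 10**(pKa - pH))

    def ionized_fraction(pka: float, ph: float = 7.0) -> float:
        return 1.0 / (1.0 + 10.0 ** (pka - ph))

    print(f"persulfide (pKa ~6.2) at pH 7: {ionized_fraction(6.2):.0%} ionized")  # ~86%
    print(f"thiol      (pKa ~7.5) at pH 7: {ionized_fraction(7.5):.0%} ionized")  # ~24%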
https://en.wikipedia.org/wiki/Persulfide
Perturb-seq (also known as CRISP-seq and CROP-seq ) refers to a high-throughput method of performing single cell RNA sequencing (scRNA-seq) on pooled genetic perturbation screens. [ 1 ] [ 2 ] [ 3 ] Perturb-seq combines multiplexed CRISPR mediated gene inactivations with single cell RNA sequencing to assess comprehensive gene expression phenotypes for each perturbation. Inferring a gene’s function by applying genetic perturbations to knock down or knock out a gene and studying the resulting phenotype is known as reverse genetics . Perturb-seq is a reverse genetics approach that allows for the investigation of phenotypes at the level of the transcriptome , to elucidate gene functions in many cells, in a massively parallel fashion. The Perturb-seq protocol uses CRISPR technology to inactivate specific genes and DNA barcoding of each guide RNA to allow for all perturbations to be pooled together and later deconvoluted, with assignment of each phenotype to a specific guide RNA . [ 1 ] [ 2 ] Droplet-based microfluidics platforms (or other cell sorting and separating techniques) are used to isolate individual cells, and then scRNA-seq is performed to generate gene expression profiles for each cell. Upon completion of the protocol, bioinformatics analyses are conducted to associate each specific cell and perturbation with a transcriptomic profile that characterizes the consequences of inactivating each gene. In the December 2016 issue of the Cell journal, two companion papers were published that each introduced and described this technique. [ 1 ] [ 2 ] A third paper describing a conceptually similar approach (termed CRISP-seq) was also published in the same issue. [ 4 ] In October 2016, the CROP-seq method for single-cell CRISPR screening was presented in a preprint on bioRxiv [ 5 ] and later published in the Nature Methods journal. [ 3 ] While each paper shared the core principles of combining CRISPR mediated perturbation with scRNA-seq, their experimental, technological and analytical approaches differed in several aspects, to explore distinct biological questions, demonstrating the broad utility of this methodology. For example, the CRISPR-seq paper demonstrated the feasibility of in vivo studies using this technology, and the CROP-seq protocol facilitates large screens by providing a vector that makes the guide RNA itself readable (rather than relying on expressed barcodes), which allows for single-step guide RNA cloning. [ 6 ] A June 2022 paper in Cell published results from one of the first genome-scale Perturb-seq screens, which uncovered new perturbations that promote chromosomal instability as well as variations in the expression of mitochondrially encoded transcripts in response to different forms of mitochondrial stress. [ 7 ] Pooled CRISPR libraries that enable gene inactivation can come in the form of either knockout or interference. Knockout libraries perturb genes through double stranded breaks that prompt the error prone non-homologous end joining repair pathway to introduce disruptive insertions or deletions. CRISPR interference (CRISPRi) on the other hand utilizes a catalytically inactive nuclease to physically block RNA polymerase , effectively preventing or halting transcription . [ 8 ] Perturb-seq has been utilized with both the knockout and CRISPRi approaches in the Dixit et al. paper [ 2 ] and the Adamson et al. paper, [ 1 ] respectively. Pooling all guide RNAs into a single screen relies on DNA barcodes that act as identifiers for each unique guide RNA. 
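As a schematic illustration of the barcoding idea (not the actual protocol or any specific published pipeline), each expressed barcode read recovered from a cell can be looked up in a table that maps barcode sequences to guide RNAs, so that the cell's expression profile can later be labelled with the perturbation it received. The sequences, guide names, and the 80% purity threshold below are invented placeholders.

    from collections import Counter

    # Toy deconvolution: map barcode reads per cell to the guide RNA they encode.
    # Barcode sequences and guide names are invented placeholders.
    barcode_to_guide = {"ACGTAC": "sgRNA_GENE_A", "TTGCAA": "sgRNA_GENE_B"}

    cell_barcode_reads = {
        "cell_001": ["ACGTAC", "ACGTAC", "ACGTAC"],   # clean single-guide cell
        "cell_002": ["TTGCAA", "TTGCAA", "ACGTAC"],   # mixed reads, possible doublet
    }

    for cell, reads in cell_barcode_reads.items():
        counts = Counter(barcode_to_guide.get(r, "unknown") for r in reads)
        guide, n = counts.most_common(1)[0]
        call = guide if n / len(reads) >= 0.8 else "ambiguous"  # assumed purity threshold
        print(cell, "->", call)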
There are several commercially available pooled CRISPR libraries, including the guide barcode library used in the study by Adamson et al. [ 1 ] CRISPR libraries can also be custom-made using tools for sgRNA design, many of which are listed on the CRISPR/cas9 tools Wikipedia page. The sgRNA expression vector design will depend largely on the experiment performed, but several central components are required in any design. Cells are typically transduced at a multiplicity of infection (MOI) of 0.4 to 0.6 lentiviral particles per cell to maximize the proportion of transduced cells that contain a single guide RNA. [ 9 ] [ 10 ] If the effects of simultaneous perturbations are of interest, a higher MOI may be applied to increase the number of transduced cells carrying more than one guide RNA. Selection for successfully transduced cells is then performed using a fluorescence assay or an antibiotic assay, depending on the reporter gene used in the expression vector. After successfully transduced cells have been selected, single cells must be isolated to conduct scRNA-seq. Perturb-seq and CROP-seq have been performed using droplet-based technology for single cell isolation, [ 1 ] [ 2 ] [ 3 ] while the closely related CRISP-seq was performed with a microwell-based approach. [ 4 ] Once cells have been isolated at the single cell level, reverse transcription , amplification and sequencing take place to produce gene expression profiles for each cell. Many scRNA-seq approaches incorporate unique molecular identifiers (UMIs) and cell barcodes during the reverse transcription step to index individual RNA molecules and cells, respectively. These additional barcodes serve to help quantify RNA transcripts and to associate each of the sequences with their cell of origin. Read alignment and processing are performed to map quality reads to a reference genome. Deconvolution of cell barcodes, guide barcodes and UMIs enables the association of guide RNAs with the cells that contain them, thus allowing the gene expression profile of each cell to be affiliated with a particular perturbation. Further downstream analyses on the transcriptional profiles will depend entirely on the biological question of interest. T-distributed Stochastic Neighbor Embedding (t-SNE) is a machine learning algorithm commonly used to visualize the high-dimensional data that result from scRNA-seq in a 2-dimensional scatterplot. [ 1 ] [ 4 ] [ 11 ] The authors who first performed Perturb-seq developed an in-house computational framework called MIMOSCA, which predicts the effects of each perturbation using a linear model and is available on an open software repository. [ 12 ] Perturb-seq makes use of current technologies in molecular biology to integrate a multi-step workflow that couples high-throughput screening with complex phenotypic outputs. When compared to alternative methods used for gene knockdowns or knockouts, such as RNAi , zinc finger nucleases or transcription activator-like effector nucleases (TALENs), the application of CRISPR-based perturbations offers greater specificity, efficiency and ease of use. [ 9 ] [ 13 ] Another advantage of this protocol is that while most screening approaches can only assay for simple phenotypes, such as cellular viability, scRNA-seq allows for a much richer phenotypic readout, with quantitative measurements of gene expression in many cells simultaneously. 
Perturb-seq can therefore combine the high throughput of forward genetics , in terms of the number of genetic perturbations, with the rich phenotype dimension of reverse genetics . [ 7 ] However, while a large and comprehensive amount of data can be a benefit, it can also present a major challenge. Single cell RNA expression readouts are known to produce 'noisy' data, with a significant number of false positives. [ 14 ] Both the large size and the noise associated with scRNA-seq data will likely require new and powerful computational methods and bioinformatics pipelines to better make sense of the resulting data. Another challenge associated with this protocol is the creation of large-scale CRISPR libraries. Preparing these extensive libraries requires a corresponding increase in the resources needed to culture the massive numbers of cells required for a successful screen of many perturbations. [ 9 ] In parallel to these single-cell methods, other approaches have been developed to reconstruct genetic pathways using whole-organism RNA-sequencing. These methods use a single aggregate statistic, called the transcriptome-wide epistasis coefficient, to guide pathway reconstruction. [ 15 ] In contrast with the statistical framework of the methods described above, this coefficient may be more robust to noise and is intuitively interpretable in terms of Batesonian epistasis. This approach was used to identify a new state in the life cycle of the nematode C. elegans . [ 16 ] When the phenotyping of the pooled library is performed with microscopy, rather than RNA sequencing, the method is referred to as optical pooled screening (OPS). [ 17 ] This allows the investigation of spatial, morphological, or dynamic cellular features that are not easily captured by RNA sequencing. Like Perturb-seq, OPS uses CRISPR technology to inactivate specific genes and DNA barcoding of each guide RNA, so that all perturbations can be pooled together for phenotyping and later deconvoluted, with assignment of each phenotype to a specific guide RNA . In OPS the barcodes are read out by in situ sequencing. Perturb-seq or other conceptually similar protocols can be used to address a broad scope of biological questions and the applications of this technology will likely grow over time. Three papers on this topic, published in the December 2016 issue of the journal Cell , demonstrated the utility of this method by applying it to the investigation of several distinct biological functions. In the paper "Perturb-Seq: Dissecting Molecular Circuits with Scalable Single-Cell RNA Profiling of Pooled Genetic Screens", the authors used Perturb-seq to conduct knockouts of transcription factors related to the immune response in hundreds of thousands of cells to investigate the cellular consequences of their inactivation. [ 2 ] They also explored the effects of transcription factors on cell states in the context of the cell cycle . In the study led by UCSF , "A Multiplexed Single-Cell CRISPR Screening Platform Enables Systematic Dissection of the Unfolded Protein Response", the researchers suppressed multiple genes in each cell to study the unfolded protein response (UPR) pathway. [ 1 ] With a similar methodology, but using the term CRISP-seq instead of Perturb-seq, the paper "Dissecting Immune Circuits by Linking CRISPR-Pooled Screens with Single-Cell RNA-Seq" performed a proof-of-concept experiment by using the technique to probe regulatory pathways related to innate immunity in mice. 
[ 4 ] The lethality of each perturbation and epistasis analyses in cells with multiple perturbations were also investigated in these papers. Perturb-seq has so far been used with very few perturbations per experiment, but it can theoretically be scaled up to address the whole genome. Finally, the October 2016 preprint [ 5 ] and subsequent paper [ 3 ] demonstrate the bioinformatic reconstruction of the T cell receptor signaling pathway in Jurkat cells based on CROP-seq data. Recently, the Perturb-seq (CROP-seq) workflow has been adapted to enable genome-scale CRISPRi (CRISPR interference) screens in Jurkat cells at single-cell resolution. [ 18 ] The first-of-its-kind genome-scale CRISPRi screen was conducted to verify factors involved in TCR signaling pathways. In more detail, a guide RNA library targeting 18,595 human genes was utilized for CRISPR-based gene knockdowns in Jurkat cells expressing the dCas9- KRAB fusion protein. In total, one million Jurkat cells were processed for single-cell RNA sequencing, allowing transcriptomic readouts of a final list of 374 marker genes involved in TCR signaling. The bioinformatic analysis confirmed more than 70 known activators and repressors of TCR signaling cascades, hence showcasing the potential of Perturb-seq (CROP-seq) screens to support translational research. While these publications used these protocols for answering complex biological questions, this technology can also be used as a validation assay to ensure the specificity of any CRISPR-based knockdown or knockout; the expression levels of the target genes as well as others can be measured with single-cell resolution in parallel, to detect whether the perturbation was successful and to assess the experiment for off-target effects. Furthermore, these protocols make it possible to perform perturbation screens in heterogeneous tissues, while obtaining cell-type-specific gene expression responses.
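The deconvolution and visualization steps described above, in which each cell is matched to the guide RNA it carries and the expression profiles are then embedded with t-SNE, can be sketched in a few lines of Python. The data below are synthetic stand-ins, and the guide names, normalization and t-SNE parameters are illustrative assumptions rather than part of any published Perturb-seq pipeline; the sketch only shows the general shape of the analysis.

import numpy as np
import pandas as pd
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Synthetic stand-in for real data: 300 cells x 50 genes of UMI counts and a guide
# RNA assignment per cell (one of three hypothetical guides). In a real experiment
# both tables would come from the sequencing pipeline's barcode deconvolution.
n_cells, n_genes = 300, 50
counts = pd.DataFrame(
    rng.poisson(5, size=(n_cells, n_genes)),
    index=[f"cell{i}" for i in range(n_cells)],
    columns=[f"gene{j}" for j in range(n_genes)],
)
guides = pd.Series(rng.choice(["sgCTRL", "sgGENE1", "sgGENE2"], size=n_cells), index=counts.index)

# Simple normalization: counts per 10,000 followed by a log transform.
norm = np.log1p(counts.div(counts.sum(axis=1), axis=0) * 1e4)

# Embed the high-dimensional expression profiles in two dimensions with t-SNE,
# as is commonly done to visualize perturbation effects, and keep the guide label.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(norm.values)
result = pd.DataFrame(embedding, index=counts.index, columns=["tsne1", "tsne2"]).assign(guide=guides)
print(result.head())

Each row of the resulting table is a cell, and coloring the points by their assigned guide gives a quick impression of which perturbations shift the transcriptome away from the control cells.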
https://en.wikipedia.org/wiki/Perturb-seq
In astronomy , perturbation is the complex motion of a massive body subjected to forces other than the gravitational attraction of a single other massive body . [ 1 ] The other forces can include a third (fourth, fifth, etc.) body, resistance , as from an atmosphere , and the off-center attraction of an oblate or otherwise misshapen body. [ 2 ] The study of perturbations began with the first attempts to predict planetary motions in the sky. In ancient times the causes were unknown. Isaac Newton , at the time he formulated his laws of motion and of gravitation , applied them to the first analysis of perturbations, [ 2 ] recognizing the complex difficulties of their calculation. [ a ] Many of the great mathematicians since then have given attention to the various problems involved; throughout the 18th and 19th centuries there was demand for accurate tables of the position of the Moon and planets for marine navigation . The complex motions of gravitational perturbations can be broken down. The hypothetical motion that the body follows under the gravitational effect of one other body only is a conic section , and can be described in geometrical terms. This is called a two-body problem , or an unperturbed Keplerian orbit . The differences between that and the actual motion of the body are perturbations due to the additional gravitational effects of the remaining body or bodies. If there is only one other significant body then the perturbed motion is a three-body problem ; if there are multiple other bodies it is an n ‑body problem . A general analytical solution (a mathematical expression to predict the positions and motions at any future time) exists for the two-body problem; when more than two bodies are considered analytic solutions exist only for special cases. Even the two-body problem becomes insoluble if one of the bodies is irregular in shape. [ 6 ] Most systems that involve multiple gravitational attractions present one primary body which is dominant in its effects (for example, a star , in the case of the star and its planet, or a planet, in the case of the planet and its satellite). The gravitational effects of the other bodies can be treated as perturbations of the hypothetical unperturbed motion of the planet or satellite around its primary body. In methods of general perturbations , general differential equations, either of motion or of change in the orbital elements , are solved analytically, usually by series expansions . The result is usually expressed in terms of algebraic and trigonometric functions of the orbital elements of the body in question and the perturbing bodies. This can be applied generally to many different sets of conditions, and is not specific to any particular set of gravitating objects. [ 7 ] Historically, general perturbations were investigated first. The classical methods are known as variation of the elements , variation of parameters or variation of the constants of integration . In these methods, it is considered that the body is always moving in a conic section , however the conic section is constantly changing due to the perturbations. If all perturbations were to cease at any particular instant, the body would continue in this (now unchanging) conic section indefinitely; this conic is known as the osculating orbit and its orbital elements at any particular time are what are sought by the methods of general perturbations. 
[ 2 ] General perturbations takes advantage of the fact that in many problems of celestial mechanics , the two-body orbit changes rather slowly due to the perturbations; the two-body orbit is a good first approximation. General perturbations is applicable only if the perturbing forces are about one order of magnitude smaller, or less, than the gravitational force of the primary body. [ 6 ] In the Solar System , this is usually the case; Jupiter , the second largest body, has a mass of about 1/1000 that of the Sun . General perturbation methods are preferred for some types of problems, as the sources of certain observed motions are readily found. This is not necessarily so for special perturbations; the motions would be predicted with similar accuracy, but no information on the configurations of the perturbing bodies (for instance, an orbital resonance ) which caused them would be available. [ 6 ] In methods of special perturbations , numerical datasets, representing values for the positions, velocities and accelerative forces on the bodies of interest, are made the basis of numerical integration of the differential equations of motion . [ 8 ] In effect, the positions and velocities are perturbed directly, and no attempt is made to calculate the curves of the orbits or the orbital elements . [ 2 ] Special perturbations can be applied to any problem in celestial mechanics , as it is not limited to cases where the perturbing forces are small. [ 6 ] Once applied only to comets and minor planets, special perturbation methods are now the basis of the most accurate machine-generated planetary ephemerides of the great astronomical almanacs. [ 2 ] [ b ] Special perturbations are also used for modeling an orbit with computers. Cowell's formulation (so named for Philip H. Cowell , who, with A.C.D. Crommelin, used a similar method to predict the return of Halley's comet) is perhaps the simplest of the special perturbation methods. [ 9 ] In a system of $n$ mutually interacting bodies, this method mathematically solves for the Newtonian forces on body $i$ by summing the individual interactions from the other $j$ bodies:

$$\ddot{\mathbf{r}}_i = \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{G\, m_j\, (\mathbf{r}_j - \mathbf{r}_i)}{r_{ij}^{3}}$$

where $\ddot{\mathbf{r}}_i$ is the acceleration vector of body $i$, $G$ is the gravitational constant , $m_j$ is the mass of body $j$, $\mathbf{r}_i$ and $\mathbf{r}_j$ are the position vectors of objects $i$ and $j$ respectively, and $r_{ij} \equiv \|\mathbf{r}_j - \mathbf{r}_i\|$ is the distance from object $i$ to object $j$, all vectors being referred to the barycenter of the system. This equation is resolved into components in $x$, $y$, and $z$, and these are integrated numerically to form the new velocity and position vectors. This process is repeated as many times as necessary. The advantage of Cowell's method is ease of application and programming. A disadvantage is that when perturbations become large in magnitude (as when an object makes a close approach to another) the errors of the method also become large. [ 10 ] However, for many problems in celestial mechanics , this is never the case. 
Another disadvantage is that in systems with a dominant central body, such as the Sun , it is necessary to carry many significant digits in the arithmetic because of the large difference in the forces of the central body and the perturbing bodies, although with high precision numbers built into modern computers this is not as much of a limitation as it once was. [ 11 ] Encke's method begins with the osculating orbit as a reference and integrates numerically to solve for the variation from the reference as a function of time. [ 12 ] Its advantages are that perturbations are generally small in magnitude, so the integration can proceed in larger steps (with resulting lesser errors), and the method is much less affected by extreme perturbations. Its disadvantage is complexity; it cannot be used indefinitely without occasionally updating the osculating orbit and continuing from there, a process known as rectification . [ 10 ] Encke's method is similar to the general perturbation method of variation of the elements, except the rectification is performed at discrete intervals rather than continuously. [ 13 ] Letting $\boldsymbol{\rho}$ be the radius vector of the osculating orbit , $\mathbf{r}$ the radius vector of the perturbed orbit, and $\delta \mathbf{r}$ the variation from the osculating orbit,

$$\delta \mathbf{r} = \mathbf{r} - \boldsymbol{\rho}, \qquad \delta \ddot{\mathbf{r}} = \ddot{\mathbf{r}} - \ddot{\boldsymbol{\rho}}. \qquad (1),(2)$$

$\ddot{\mathbf{r}}$ and $\ddot{\boldsymbol{\rho}}$ are just the equations of motion of $\mathbf{r}$ and $\boldsymbol{\rho}$,

$$\ddot{\mathbf{r}} = \mathbf{a}_{\text{per}} - \mu \frac{\mathbf{r}}{r^{3}} \qquad (3)$$

$$\ddot{\boldsymbol{\rho}} = -\mu \frac{\boldsymbol{\rho}}{\rho^{3}} \qquad (4)$$

where $\mu = G(M+m)$ is the gravitational parameter with $M$ and $m$ the masses of the central body and the perturbed body, $\mathbf{a}_{\text{per}}$ is the perturbing acceleration , and $r$ and $\rho$ are the magnitudes of $\mathbf{r}$ and $\boldsymbol{\rho}$. Substituting from equations (3) and (4) into equation (2) gives

$$\delta \ddot{\mathbf{r}} = \mathbf{a}_{\text{per}} + \mu \left( \frac{\boldsymbol{\rho}}{\rho^{3}} - \frac{\mathbf{r}}{r^{3}} \right), \qquad (5)$$

which, in theory, could be integrated twice to find $\delta \mathbf{r}$. Since the osculating orbit is easily calculated by two-body methods, $\boldsymbol{\rho}$ and $\delta \mathbf{r}$ are accounted for and $\mathbf{r}$ can be solved. In practice, the quantity in the brackets, $\frac{\boldsymbol{\rho}}{\rho^{3}} - \frac{\mathbf{r}}{r^{3}}$, is the difference of two nearly equal vectors, and further manipulation is necessary to avoid the need for extra significant digits . [ 14 ] [ 15 ] Encke's method was more widely used before the advent of modern computers , when much orbit computation was performed on mechanical calculating machines . In the Solar System, many of the disturbances of one planet by another are periodic, consisting of small impulses each time a planet passes another in its orbit. This causes the bodies to follow motions that are periodic or quasi-periodic – such as the Moon in its strongly perturbed orbit , which is the subject of lunar theory . This periodic nature led to the discovery of Neptune in 1846 as a result of its perturbations of the orbit of Uranus . On-going mutual perturbations of the planets cause long-term quasi-periodic variations in their orbital elements , most apparent when two planets' orbital periods are nearly in sync. For instance, five orbits of Jupiter (59.31 years) is nearly equal to two of Saturn (58.91 years). 
This causes large perturbations of both, with a period of 918 years, the time required for the small difference in their positions at conjunction to make one complete circle, first discovered by Laplace . [ 2 ] Venus currently has the orbit with the least eccentricity , i.e. it is the closest to circular , of all the planetary orbits. In 25,000 years' time, Earth will have a more circular (less eccentric) orbit than Venus. It has been shown that long-term periodic disturbances within the Solar System can become chaotic over very long time scales; under some circumstances one or more planets can cross the orbit of another, leading to collisions. [ c ] The orbits of many of the minor bodies of the Solar System, such as comets , are often heavily perturbed, particularly by the gravitational fields of the gas giants . While many of these perturbations are periodic, others are not, and these in particular may represent aspects of chaotic motion . For example, in April 1996, Jupiter 's gravitational influence caused the period of Comet Hale–Bopp 's orbit to decrease from 4,206 to 2,380 years, a change that will not revert on any periodic basis. [ 16 ]
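As a rough illustration of Cowell's formulation described earlier, the following Python sketch sums the pairwise Newtonian accelerations and steps the positions and velocities forward with a simple leapfrog integrator. The masses, initial conditions and step size are arbitrary toy values, and a production ephemeris code would use a far more careful integrator with error control.

import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def accelerations(r, m):
    """Cowell's formulation: sum the Newtonian pulls of all other bodies on each body."""
    n = len(m)
    a = np.zeros_like(r)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return a

def integrate(r, v, m, dt, steps):
    """Leapfrog (kick-drift-kick) integration of the motion."""
    a = accelerations(r, m)
    for _ in range(steps):
        v += 0.5 * dt * a
        r += dt * v
        a = accelerations(r, m)
        v += 0.5 * dt * a
    return r, v

# Illustrative toy system: a Sun-like body, a Jupiter-like planet and a small test body.
m = np.array([1.989e30, 1.898e27, 1.0e15])
r = np.array([[0.0, 0.0, 0.0], [7.78e11, 0.0, 0.0], [4.0e11, 0.0, 0.0]])
v = np.array([[0.0, 0.0, 0.0], [0.0, 1.31e4, 0.0], [0.0, 1.8e4, 0.0]])

r, v = integrate(r, v, m, dt=86400.0, steps=365)  # one year in daily steps
print(r)

Because every interaction is summed directly, the same loop handles close approaches and any number of bodies, at the cost of the accumulated round-off error mentioned above for systems dominated by a central body.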
https://en.wikipedia.org/wiki/Perturbation_(astronomy)
The perturbed γ-γ angular correlation , PAC for short or PAC-Spectroscopy , is a method of nuclear solid-state physics with which magnetic and electric fields in crystal structures can be measured. In doing so, electrical field gradients and the Larmor frequency in magnetic fields, as well as dynamic effects, are determined. With this very sensitive method, which requires only about 10–1000 billion atoms of a radioactive isotope per measurement, material properties in the local structure , phase transitions, magnetism and diffusion can be investigated. The PAC method is related to nuclear magnetic resonance and the Mössbauer effect , but shows no signal attenuation at very high temperatures. Today only the time-differential perturbed angular correlation ( TDPAC ) is used. PAC goes back to a theoretical work by Donald R. Hamilton [ 1 ] from 1940. The first successful experiment was carried out by Brady and Deutsch [ 2 ] in 1947. Essentially, the spins and parities of nuclear states were investigated in these first PAC experiments. However, it was recognized early on that electric and magnetic fields interact with the nuclear moment, [ 3 ] providing the basis for a new form of material investigation: nuclear solid-state spectroscopy. Step by step the theory was developed. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] After Abragam and Pound [ 18 ] published their work on the theory of PAC in 1953, including extranuclear fields, many studies with PAC were carried out. In the 1960s and 1970s, interest in PAC experiments sharply increased, focusing mainly on magnetic and electric fields in crystals into which the probe nuclei were introduced. In the mid-1960s, ion implantation was discovered, providing new opportunities for sample preparation. The rapid electronic development of the 1970s brought significant improvements in signal processing. From the 1980s to the present, PAC has emerged as an important method for the study and characterization of materials, [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] e.g. for the study of semiconductor materials, intermetallic compounds, surfaces and interfaces, and a number of applications have also appeared in biochemistry. [ 24 ] While until about 2008 PAC instruments used conventional high-frequency electronics of the 1970s, in 2008 Christian Herden and Jens Röder et al. developed the first fully digitized PAC instrument that enables extensive data analysis and parallel use of multiple probes. [ 25 ] Replicas and further developments followed. [ 26 ] [ 27 ] PAC uses radioactive probes that have an intermediate state with decay times of 2 ns to approx. 10 μs; see, for example, 111 In in the picture on the right. After electron capture (EC), indium transmutes to cadmium. Immediately thereafter, the 111 cadmium nucleus is predominantly in the excited 7/2+ nuclear spin state and only to a very small extent in the 11/2− state; the latter is not considered further. The 7/2+ excited state transitions to the 5/2+ intermediate state by emitting a 171 keV γ-quantum. The intermediate state has a lifetime of 84.5 ns and is the sensitive state for PAC. This state in turn decays into the 1/2+ ground state by emitting a γ-quantum with 245 keV. PAC now detects both γ-quanta and evaluates the first as a start signal, the second as a stop signal. One now measures the time between start and stop for each event; when a start and stop pair has been found, this is called a coincidence. 
Since the intermediate state decays according to the laws of radioactive decay, one obtains an exponential curve with the lifetime of this intermediate state after plotting the frequency of events against time. Because the second γ-quantum is not emitted spherically symmetrically (the so-called anisotropy, an intrinsic property of the nucleus in this transition), the surrounding electrical and/or magnetic fields give rise to a periodic perturbation of this pattern ( hyperfine interaction ). The illustration of the individual spectra on the right shows the effect of this disturbance as a wave pattern on the exponential decay of two detectors, one pair at 90° and one at 180° to each other. The waveforms of the two detector pairs are shifted relative to each other. Very simply, one can imagine a fixed observer looking at a lighthouse whose light intensity periodically becomes lighter and darker. Correspondingly, a detector arrangement, usually four detectors in a planar 90° arrangement or six detectors in an octahedral arrangement, "sees" the rotation of the nucleus at frequencies on the order of MHz to GHz. For n detectors, the number of individual spectra is z = n² − n; for n = 4 this gives 12 and for n = 6 it gives 30. In order to obtain a PAC spectrum, the 90° and 180° single spectra are combined in such a way that the exponential functions cancel each other out and, in addition, the different detector properties cancel out. The pure perturbation function remains, as shown in the example of a complex PAC spectrum. Its Fourier transform gives the transition frequencies as peaks. $R(t)$, the count-rate ratio, is obtained from the single spectra by using

$$R(t) = 2\,\frac{N(180^{\circ},t) - N(90^{\circ},t)}{N(180^{\circ},t) + 2\,N(90^{\circ},t)} .$$

Depending on the spin of the intermediate state, a different number of transition frequencies show up. For 5/2 spin, 3 transition frequencies can be observed, with the relation $\omega_1 + \omega_2 = \omega_3$. As a rule, a different combination of 3 frequencies can be observed for each associated site in the unit cell. PAC is a statistical method: each radioactive probe atom sits in its own environment. In crystals, due to the high regularity of the arrangement of the atoms or ions, the environments are identical or very similar, so that probes on identical lattice sites experience the same hyperfine field or magnetic field, which then becomes measurable in a PAC spectrum. On the other hand, for probes in very different environments, such as in amorphous materials, a broad frequency distribution, or none at all, is usually observed and the PAC spectrum appears flat, without frequency response. With single crystals, depending on the orientation of the crystal to the detectors, certain transition frequencies can be reduced or extinguished, as can be seen in the example of the PAC spectrum of zinc oxide (ZnO). In the typical PAC spectrometer, a setup of four 90° and 180° planar arrayed detectors or six octahedrally arrayed detectors is placed around the radioactive source sample. The detectors used are scintillation crystals of BaF₂ or NaI; modern instruments today mainly use LaBr₃:Ce or CeBr₃. Photomultipliers convert the weak flashes of light, generated in the scintillator by gamma radiation, into electrical signals. In classical instruments these signals are amplified, processed in logical AND/OR circuits in combination with time windows, assigned to the different detector combinations (for 4 detectors: 12, 13, 14, 21, 23, 24, 31, 32, 34, 41, 42, 43) and counted. 
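The count-rate ratio R(t) described above can be formed numerically from the recorded coincidence spectra and Fourier transformed to expose the transition frequencies. The Python sketch below builds a synthetic pair of coincidence spectra with an assumed 20 MHz modulation, forms the commonly used ratio 2(N180 − N90)/(N180 + 2 N90), and recovers the frequency from the Fourier transform; in a real measurement the two arrays would come from the assigned detector combinations.

import numpy as np

# Synthetic example: time axis (ns) and coincidence counts for the 180° and 90°
# detector pairs, modeled as exponential decay modulated by one perturbation frequency.
t = np.arange(0.0, 500.0, 1.0)                 # ns
tau = 84.5                                     # lifetime of the intermediate state, ns
omega = 2.0 * np.pi * 0.02                     # assumed perturbation frequency, rad/ns (about 20 MHz)
decay = np.exp(-t / tau)
n180 = 1e5 * decay * (1.0 + 0.10 * np.cos(omega * t))
n90 = 1e5 * decay * (1.0 - 0.05 * np.cos(omega * t))

# Count-rate ratio: the exponential decay cancels out and the modulation remains.
r = 2.0 * (n180 - n90) / (n180 + 2.0 * n90)

# Fourier transform of R(t): peaks correspond to the hyperfine transition frequencies.
freqs = np.fft.rfftfreq(t.size, d=1.0e-9)      # Hz, for a 1 ns time step
spectrum = np.abs(np.fft.rfft(r - r.mean()))
peak = freqs[np.argmax(spectrum[1:]) + 1]
print(f"dominant transition frequency: {peak / 1e6:.1f} MHz")

The peak positions extracted this way would then be compared with the combinations of three frequencies expected for each lattice site, as described above.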
Modern digital spectrometers use digitizer cards that directly sample the signal, convert it into energy and time values and store these on hard drives. These are then searched by software for coincidences. Whereas in classical instruments, "windows" limiting the respective γ-energies must be set before processing, this is not necessary for the digital PAC during the recording of the measurement. The analysis only takes place in the second step. In the case of probes with complex cascades, this makes it possible to perform a data optimization, to evaluate several cascades in parallel, and to measure different probes simultaneously. The resulting data volumes can be between 60 and 300 GB per measurement. In principle, any material that is solid or liquid can be investigated (as a sample). Depending on the question and the purpose of the investigation, certain constraints arise. For the observation of clear perturbation frequencies it is necessary, due to the statistical method, that a certain proportion of the probe atoms are in a similar environment and, for example, experience the same electric field gradient. Furthermore, during the time window between the start and stop, or approximately 5 half-lives of the intermediate state, the direction of the electric field gradient must not change. In liquids, therefore, no perturbation frequency can be measured as a result of the frequent collisions, unless the probe is complexed in large molecules, such as in proteins. The samples with proteins or peptides are usually frozen to improve the measurement. The most studied materials with PAC are solids such as semiconductors, metals, insulators, and various types of functional materials. For the investigations, these are usually crystalline. Amorphous materials do not have highly ordered structures. However, they have short-range order, which can be seen in PAC spectroscopy as a broad distribution of frequencies. Nano-materials have a crystalline core and a shell that has a rather amorphous structure. This is called the core-shell model. The smaller the nanoparticle becomes, the larger the volume fraction of this amorphous portion becomes. In PAC measurements, this is seen as a decrease in the amplitude (attenuation) of the crystalline frequency component. The amount of suitable PAC isotopes required for a measurement is between about 10 and 1000 billion atoms (10¹⁰–10¹²). The right amount depends on the particular properties of the isotope. 10 billion atoms are a very small amount of substance. For comparison, one mole contains about 6.022×10²³ particles. 10¹² atoms in one cubic centimeter of beryllium give a concentration of about 8 nmol/L (1 nanomol = 10⁻⁹ mol). The radioactive samples each have an activity of 0.1–5 MBq, which is in the order of the exemption limit for the respective isotope. How the PAC isotopes are brought into the sample to be examined is up to the experimenter and the technical possibilities. The following methods are usual: During implantation, a radioactive ion beam is generated, which is directed onto the sample material. Due to the kinetic energy of the ions (1–500 keV), these fly into the crystal lattice and are slowed down by impacts. They either come to a stop at interstitial sites or push a lattice atom out of its place and replace it. This leads to a disruption of the crystal structure. These disorders can be investigated with PAC. By tempering, these disturbances can be healed. 
If, on the other hand, radiation defects in the crystal and their healing are to be examined, unannealed samples are measured, which are then annealed step by step. The implantation is usually the method of choice, because it can be used to produce very well-defined samples. In a vacuum, the PAC probe can be evaporated onto the sample. The radioactive probe is applied to a hot plate or filament, where it is brought to the evaporation temperature and condensed on the opposite sample material. With this method, for example, surfaces are examined. Furthermore, by vapor deposition of other materials, interfaces can be produced. They can be studied with PAC during tempering and their changes can be observed. Similarly, the PAC probe can be deposited by sputtering using a plasma. In the diffusion method, the radioactive probe is usually diluted in a solvent, applied to the sample, dried, and diffused into the material by tempering. The solution with the radioactive probe should be as pure as possible, since all other substances can diffuse into the sample and thereby affect the measurement results. The probe should be sufficiently diluted in the sample. Therefore, the diffusion process should be planned so that a uniform distribution or sufficient penetration depth is achieved. PAC probes may also be added during the synthesis of sample materials to achieve the most uniform distribution in the sample. This method is particularly well suited if, for example, the PAC probe diffuses only poorly in the material and a higher concentration in grain boundaries is to be expected. Since only very small samples are necessary with PAC (about 5 mm), micro-reactors can be used. Ideally, the probe is added to the liquid phase of the sol-gel process or one of the later precursor phases. In neutron activation , the probe is prepared directly from the sample material by converting a very small part of one of the elements of the sample material into the desired PAC probe or its parent isotope by neutron capture. As with implantation, radiation damage must be healed. This method is limited to sample materials containing elements from which neutron-capture PAC probes can be made. Furthermore, samples can be intentionally contaminated with those elements that are to be activated. For example, hafnium is excellently suited for activation because of its large capture cross section for neutrons. Rarely used are direct nuclear reactions, in which nuclei are converted into PAC probes by bombardment with high-energy elementary particles or protons. This causes major radiation damage, which must be healed. This method is used with PAD, which belongs to the PAC methods. The currently largest PAC laboratory in the world is located at ISOLDE at CERN , with about 10 PAC instruments, and receives its major funding from the BMBF . Radioactive ion beams are produced at ISOLDE by directing protons from the booster onto target materials (uranium carbide, liquid tin, etc.), evaporating the spallation products at high temperatures (up to 2000 °C), then ionizing and accelerating them. With the subsequent mass separation, usually very pure isotope beams can be produced, which can be implanted in PAC samples. Of particular interest to PAC are short-lived isomeric probes such as 111m Cd, 199m Hg, 204m Pb, and various rare earth probes. The first γ-quantum ($\gamma_1$, $k_1$) will be emitted isotropically. 
Detecting this quantum in a detector selects, out of the many possible directions, a subset with a given orientation. The second γ-quantum ($\gamma_2$, $k_2$) is emitted anisotropically and shows the effect of the angular correlation. The goal is to measure the relative probability $W(\Theta)\,\mathrm{d}\Omega$ of detecting $\gamma_2$ at the fixed angle $\Theta$ relative to $\gamma_1$. The probability is given by the angular correlation ( perturbation theory ):

$$W(\Theta) = \sum_{k} A_{kk}\, P_{k}(\cos \Theta)$$

where the $P_k$ are Legendre polynomials. For a γ-γ-cascade, $k$ is even due to the conservation of parity and restricted to $0 \leq k \leq \min(2 I_S,\ I_1 + I_1',\ I_2 + I_2')$, where $I_S$ is the spin of the intermediate state and $I_i$ with $i = 1;2$ the multipolarity of the two transitions. For pure multipole transitions, $I_i = I_i'$. $A_{kk}$ is the anisotropy coefficient that depends on the angular momentum of the intermediate state and the multipolarities of the transition. The radioactive nucleus is built into the sample material and emits two γ-quanta upon decay. During the lifetime of the intermediate state, i.e. the time between $\gamma_1$ and $\gamma_2$, the nucleus experiences a disturbance due to the hyperfine interaction with its electrical and magnetic environment. This disturbance changes the angular correlation to

$$W(\Theta, t) = \sum_{k} A_{kk}\, G_{kk}(t)\, P_{k}(\cos \Theta)$$

where $G_{kk}(t)$ is the perturbation factor. Due to the electrical and magnetic interaction, the angular momentum of the intermediate state experiences a torque about its axis of symmetry. Quantum-mechanically, this means that the interaction leads to transitions between the M states. The second γ-quantum ($\gamma_2$) is then emitted from the intermediate level. This population change is the reason for the attenuation of the correlation. The interaction occurs between the nuclear magnetic dipole moment and the intermediate state $I_S$ and/or an external magnetic field $\vec{B}$. The interaction also takes place between the nuclear quadrupole moment and the extranuclear electric field gradient $V_{zz}$. For the magnetic dipole interaction, the frequency of the precession of the nuclear spin around the axis of the magnetic field $\vec{B}$ is the Larmor frequency

$$\omega_L = \frac{g\, \mu_N\, B}{\hbar}$$

where $g$ is the Landé g-factor and $\mu_N$ is the nuclear magneton. With $N = M - M'$, the general theory then gives the perturbation factor for the magnetic interaction in terms of this precession frequency. The energy of the hyperfine electrical interaction between the charge distribution of the nucleus and the extranuclear static electric field can be expanded in multipoles. The monopole term only causes an energy shift and the dipole term disappears, so that the first relevant expansion term is the quadrupole term. This can be written as a product of the quadrupole moment $Q_{ij}$ and the electric field gradient $V_{ij}$. Both tensors are of second rank. Higher orders have too small an effect to be measured with PAC. 
The electric field gradient is the second derivative of the electric potential $\Phi(\vec{r})$ at the nucleus:

$$V_{ij} = \frac{\partial^{2} \Phi(\vec{r})}{\partial x_i\, \partial x_j}$$

$V_{ij}$ is diagonalized, so that $|V_{zz}| \geq |V_{yy}| \geq |V_{xx}|$. The matrix is traceless in the principal axis system ( Laplace equation ):

$$V_{xx} + V_{yy} + V_{zz} = 0$$

Typically, the electric field gradient is defined by its largest component $V_{zz}$ and the asymmetry parameter $\eta$:

$$\eta = \frac{V_{xx} - V_{yy}}{V_{zz}}, \qquad 0 \leq \eta \leq 1$$

In cubic crystals, the axis parameters of the unit cell x, y, z are of the same length; therefore $\eta = 0$. In axially symmetric systems $\eta = 0$ as well. For axially symmetric electric field gradients, the energy of the substates has the values

$$E_{M} = \frac{e\, Q\, V_{zz}}{4 I (2 I - 1)}\, \left[\, 3 M^{2} - I(I+1)\, \right]$$

and the energy difference between two substates, $M$ and $M'$, is given by

$$\Delta E = \frac{3\, e\, Q\, V_{zz}}{4 I (2 I - 1)}\, \left| M^{2} - M'^{2} \right| .$$

The quadrupole frequency $\omega_Q = \frac{e\, Q\, V_{zz}}{4 I (2 I - 1)\, \hbar}$ is introduced, together with the quadrupole coupling constant $\nu_Q = \frac{e\, Q\, V_{zz}}{h}$; these formulas are important for the evaluation, and the publications mostly list $\nu_Q$. $e$ as elementary charge and $h$ as Planck constant are well known or well defined. The nuclear quadrupole moment $Q$ is often determined only very inaccurately (often only to 2–3 digits). Because $\nu_Q$ can be determined much more accurately than $Q$, it is not useful to specify only $V_{zz}$ because of the error propagation. In addition, $\nu_Q$ is independent of spin! This means that measurements of two different isotopes of the same element can be compared, such as 199m Hg(5/2−), 197m Hg(5/2−) and 201m Hg(9/2−). Further, $\nu_Q$ can be used as a fingerprint method. For $\eta = 0$ the energy difference becomes $\Delta E = 3\, \hbar\, \omega_Q\, |M^{2} - M'^{2}|$, and the observable transition frequencies are multiples of a smallest frequency $\omega_0$, with $\omega_0 = 3\, \omega_Q$ for integer spins and $\omega_0 = 6\, \omega_Q$ for half-integer spins. The perturbation factor is given by

$$G_{kk}(t) = \sum_{n} s_{kn}\, \cos(\omega_n t)$$

where the coefficients $s_{kn}$ give the probabilities (amplitudes) of the observed frequencies. As with the magnetic dipole interaction, the electrical quadrupole interaction also induces a precession of the angular correlation in time, and this modulates the quadrupole interaction frequency. This frequency is an overlap of the different transition frequencies $\omega_n$. The relative amplitudes of the various components depend on the orientation of the electric field gradient relative to the detectors (symmetry axis) and on the asymmetry parameter $\eta$. For probes with different probe nuclei, one needs a parameter that allows a direct comparison: therefore, the quadrupole coupling constant $\nu_Q$, which is independent of the nuclear spin, is introduced. If a magnetic and an electrical interaction act on the radioactive nucleus at the same time, as described above, combined interactions result. This leads to a splitting of the respectively observed frequencies. The analysis may not be trivial due to the higher number of frequencies that must be allocated. These then depend in each case on the direction of the electric and magnetic field relative to each other in the crystal. PAC is one of the few ways in which these directions can be determined. If the hyperfine field fluctuates during the lifetime $\tau_n$ of the intermediate level, due to jumps of the probe into another lattice position or jumps of a nearby atom into another lattice position, the correlation is lost. 
For the simple case of an undistorted lattice of cubic symmetry and a jump rate of $\omega_s < 0.2\cdot\nu_Q$ between equivalent sites $N_s$, an exponential damping of the static $G_{22}(t)$ terms is observed:

$$G_{22}^{\mathrm{dyn}}(t) = e^{-\lambda_d t}\; G_{22}^{\mathrm{stat}}(t)$$

Here $\lambda_d$ is a constant to be determined, which should not be confused with the decay constant $\lambda = \frac{1}{\tau}$. For large values of $\omega_s$, only pure exponential decay can be observed:

$$G_{22}(t) = e^{-\lambda_d t}$$

The boundary case after Abragam–Pound for $\lambda_d$ applies if $\omega_s > 3\cdot\nu_Q$. Nuclei that transmute before the γ-γ-cascade usually cause a change of charge state in ionic crystals (In³⁺ to Cd²⁺). As a result, the lattice must respond to these changes. Defects or neighboring ions can also migrate. Likewise, the high-energy transition process may cause the Auger effect , which can bring the nucleus into higher ionization states. The normalization of the charge state then depends on the conductivity of the material. In metals, the process takes place very quickly. This takes considerably longer in semiconductors and insulators. In all these processes, the hyperfine field changes. If this change falls within the γ-γ-cascade, it may be observed as an after-effect. The number of nuclei in state (a) in the image on the right is depopulated both by the decay to state (b) and by that to state (c), with $\tau_{7/2} = \frac{120\ \mathrm{ps}}{\ln 2}$; from this one obtains the exponential case. For the total number of nuclei in the static state (c), the corresponding rate equation follows, and from it the initial occupation probabilities $\rho$ for static and dynamic environments are obtained. In the general theory, the perturbation factor for a transition $M_i \rightarrow M_f$ is given accordingly.
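To make the relationship between the measured frequencies and the electric field gradient concrete, the following Python sketch evaluates the definitions quoted above, ν_Q = eQV_zz/h and ω_Q = eQV_zz/(4I(2I−1)ħ), for an illustrative probe. The quadrupole moment and field gradient used here are placeholder values, not measured data.

import math

E_CHARGE = 1.602176634e-19   # elementary charge in C
H_PLANCK = 6.62607015e-34    # Planck constant in J s
HBAR = H_PLANCK / (2.0 * math.pi)
BARN = 1.0e-28               # 1 barn in m^2

def quadrupole_frequencies(q_barn, vzz, spin):
    """Return (nu_Q in Hz, omega_Q in rad/s) for a probe with the given nuclear spin.

    q_barn: nuclear quadrupole moment in barn (placeholder value)
    vzz:    electric field gradient in V/m^2 (placeholder value)
    spin:   nuclear spin I of the intermediate state
    """
    e_q_vzz = E_CHARGE * q_barn * BARN * vzz                     # interaction energy scale in J
    nu_q = e_q_vzz / H_PLANCK                                    # spin-independent coupling constant
    omega_q = e_q_vzz / (4.0 * spin * (2.0 * spin - 1.0) * HBAR)
    return nu_q, omega_q

# Illustrative example: a 5/2 intermediate state with assumed Q = 0.8 b and Vzz = 1e21 V/m^2.
nu_q, omega_q = quadrupole_frequencies(q_barn=0.8, vzz=1.0e21, spin=2.5)
print(f"nu_Q    = {nu_q / 1e6:.1f} MHz")
print(f"omega_Q = {omega_q / 1e6:.2f} Mrad/s")

Because ν_Q does not depend on the spin, values computed this way can be compared directly between different isotopes of the same element, as noted above.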
https://en.wikipedia.org/wiki/Perturbed_angular_correlation
Pertussis toxin ( PT ) is a protein-based AB 5 -type exotoxin produced by the bacterium Bordetella pertussis , [ 2 ] which causes whooping cough . PT is involved in the colonization of the respiratory tract and the establishment of infection. [ 3 ] Research suggests PT may have a therapeutic role in treating a number of common human ailments, including hypertension, [ 4 ] viral infection, [ 5 ] and autoimmunity. [ 6 ] [ 7 ] [ 8 ] PT clearly plays a central role in the pathogenesis of pertussis although this was discovered only in the early 1980s. The appearance of pertussis is quite recent, compared with other epidemic infectious diseases. The earliest mention of pertussis, or whooping cough, is of an outbreak in Paris in 1414. This was published in Moulton's The Mirror of Health, in 1640. Another epidemic of pertussis took place in Paris in 1578 and was described by a contemporary observer, Guillaume de Baillou . Pertussis was well known throughout Europe by the middle of the 18th century. Jules Bordet and Octave Gengou described in 1900 the finding of a new “ovoid bacillus” in the sputum of a 6-month-old infant with whooping cough. They were also the first to cultivate Bordetella pertussis at the Pasteur Institute in Brussels in 1906. [ 9 ] One difference between the different species of Bordetella is that B. pertussis produces PT and the other species do not. Bordetella parapertussis shows the most similarity to B. pertussis and was therefore used for research determining the role of PT in causing the typical symptoms of whooping cough. Rat studies showed the development of paroxysmal coughing, a characteristic for whooping cough, occurred in rats infected with B. pertussis . Rats infected with B. parapertussis or a PT-deficient mutant of B. pertussis did not show this symptom; neither of these two strains produced PT. [ 10 ] A large group of bacterial exotoxins are referred to as "A/B toxins", in essence because they are formed from two subunits. [ 11 ] The "A" subunit possesses enzyme activity and is transferred to the host cell following a conformational change in the membrane-bound transport "B" subunit. [ 11 ] Pertussis toxin is an exotoxin with six subunits (named S1 through S5 —each complex contains two copies of S4 ). [ 12 ] [ 13 ] The subunits are arranged in A-B structure: the A component is enzymatically active and is formed from the S1 subunit, while the B component is the receptor -binding portion and is made up of subunits S2–S5. [ 13 ] The subunits are encoded by ptx genes encoded on a large PT operon that also includes additional genes that encode Ptl proteins. Together, these proteins form the PT secretion complex. [ 14 ] PT is released from B. pertussis in an inactive form. Following PT binding to a cell membrane receptor , it is taken up in an endosome , after which it undergoes retrograde transport to the trans-Golgi network and endoplasmic reticulum . [ 15 ] At some point during this transport, the A subunit (or protomer) becomes activated, perhaps through the action of glutathione and ATP . [ 12 ] [ 16 ] PT catalyzes the ADP-ribosylation of the α i subunits of the heterotrimeric G protein . This prevents the G proteins from interacting with G protein-coupled receptors on the cell membrane , thus interfering with intracellular communication. [ 17 ] The Gi subunits remain locked in their GDP-bound, inactive state, thus unable to inhibit adenylate cyclase activity, leading to increased cellular concentrations of cAMP. 
Increased intracellular cAMP affects normal biological signaling. The toxin causes several systemic effects, among which is an increased release of insulin , causing hypoglycemia . Whether the effects of pertussis toxin are responsible for the paroxysmal cough remains unknown. [ 18 ] As a result of this unique mechanism, PT has also become widely used as a biochemical tool to ADP-ribosylate GTP-binding proteins in the study of signal transduction. [ 1 ] It has also become an essential component of new acellular vaccines. [ 1 ] PT has been shown to affect the innate immune response. It inhibits the early recruitment of neutrophils and macrophages , and interferes with early chemokine production and neutrophil chemotaxis . [ 19 ] Chemokines are signaling molecules produced by infected cells that attract neutrophils and macrophages. Neutrophil chemotaxis is thought to be disrupted through inhibition of G-protein-coupled chemokine receptors by the ADP-ribosylation of G i proteins. [ 20 ] Due to the disrupted signaling pathways, synthesis of chemokines will be affected. This will prevent the infected cell from producing them, thereby inhibiting the recruitment of neutrophils. Under normal circumstances, alveolar macrophages and other lung cells produce a variety of chemokines. PT has been found to inhibit the early transcription of keratinocyte-derived chemokine, macrophage inflammatory protein 2 and LPS-induced CXC chemokine . [ 20 ] Eventually, PT causes lymphocytosis , one of the systemic manifestations of whooping cough. [ 21 ] PT, a decisive virulence determinant of B. pertussis , is able to cross the blood–brain barrier by increasing its permeability. [ 22 ] As a result, PT can cause severe neurological complications; however, it has recently been found that the medicinal usage of pertussis toxin can promote the development of regulatory T cells and prevent central nervous system autoimmune disease, such as multiple sclerosis. [ 23 ] PT is known to dissociate into two parts in the endoplasmic reticulum (ER): the enzymatically active A subunit (S1) and the cell-binding B subunit. The two subunits are separated by proteolytic cleavage. The B subunit will undergo ubiquitin-dependent degradation by the 26S proteasome . However, the A subunit lacks lysine residues, which are essential for ubiquitin -dependent degradation. Therefore, PT subunit A will not be metabolized like most other proteins. [ 24 ] PT is heat-stable and protease-resistant, but once the A and B subunits are separated, these properties change. The B subunit will stay heat-stable at temperatures up to 60 °C, but it is susceptible to protein degradation. PT subunit A, on the other hand, is less susceptible to ubiquitin-dependent degradation, but is unstable at a temperature of 37 °C. This facilitates unfolding of the protein in the ER and tricks the cell into transporting the A subunit to the cytosol, where normally unfolded proteins will be marked for degradation. So, the unfolded conformation will stimulate the ERAD -mediated translocation of PT A into the cytosol. Once in the cytosol, it can bind to NAD and form a stable, folded protein again. Being thermally unstable is also the Achilles heel of PT subunit A. As always, there is an equilibrium between the folded and unfolded states. When the protein is unfolded, it is susceptible to degradation by the 20S proteasome, which can degrade only unfolded proteins. 
[ 24 ] Since the introduction of pertussis vaccines in the 1940s and 1950s, different genetic changes have been described surrounding the pertussis toxin. ptxP is the pertussis toxin's promoter gene. There is a well-documented emergence and global spread of ptxP3 strains evolving from and replacing the native ptxP1 strains, [ 25 ] associated with an increased production of the toxin, and thus an increased virulence. [ 26 ] Such spread has been documented in multiple countries, and is sometimes, but not always, linked to the resurgence of pertussis at the end of the 20th century. Countries with a documented spread of ptxP3 include Australia, [ 26 ] [ 27 ] Denmark, [ 28 ] Finland, [ 29 ] Iran, [ 30 ] Italy, [ 31 ] Japan, [ 32 ] the Netherlands, [ 33 ] and Sweden. [ 34 ]
https://en.wikipedia.org/wiki/Pertussis_toxin
Pervaded volume is a measure of the size of a polymer chain in space. In particular, it is "the volume of solution spanned by the polymer chain". [ 1 ] The pervaded volume V scales as the cube of the chain size, $V \approx R^{3}$, where R is some length scale describing the chain conformation such as the radius of gyration or root-mean-square end-to-end distance of the chain. [ 2 ] Typically the pervaded volume is very large relative to the space actually occupied by the chain as most of the pervaded volume is usually filled with solvent or other chains. [ 1 ] Chain pervaded volume is relevant in the morphology and rheology of melt and bulk polymers through its relation to quantities such as the interchain entanglement density, the number of entanglements between different chains per volume. [ 2 ] This article about polymer science is a stub . You can help Wikipedia by expanding it .
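As a rough numerical illustration of the scaling V ≈ R³ and of how sparsely a chain fills its pervaded volume, the Python sketch below compares the pervaded volume of a hypothetical chain with the volume its monomers actually occupy. The chain length, monomer size and the ideal-chain estimate R ≈ b·N^(1/2) are illustrative assumptions, not values taken from the article.

import math

N = 1000            # number of monomers (assumed)
b = 0.5e-9          # monomer (Kuhn) length in meters (assumed)

# Ideal-chain estimate of the chain size (root-mean-square end-to-end distance).
R = b * math.sqrt(N)

# The pervaded volume scales as the cube of the chain size.
pervaded_volume = R ** 3

# Volume actually occupied by the monomers, treating each as a cube of side b.
occupied_volume = N * b ** 3

print(f"chain size R          : {R * 1e9:.1f} nm")
print(f"pervaded volume ~ R^3 : {pervaded_volume * 1e27:.0f} nm^3")
print(f"occupied volume       : {occupied_volume * 1e27:.0f} nm^3")
print(f"occupied fraction     : {occupied_volume / pervaded_volume:.1%}")

Even for this modest chain only a few percent of the pervaded volume is taken up by the chain itself, which is why the remainder must be filled by solvent or by other chains, as stated above.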
https://en.wikipedia.org/wiki/Pervaded_volume
Pervaporation (or pervaporative separation) is a processing method for the separation of mixtures of liquids by partial vaporization through a non-porous or porous membrane . [ 1 ] The term pervaporation is a portmanteau of the two steps of the process: (a) permeation through the membrane by the permeate, then (b) its evaporation into the vapor phase. This process is used by a number of industries for several different processes, including purification and analysis , due to its simplicity and in-line nature. The membrane acts as a selective barrier between the two phases: the liquid-phase feed and the vapor-phase permeate. It allows the desired components of the liquid feed to transfer through it by vaporization . Separation of components is based on a difference in transport rate of individual components through the membrane. Typically, the upstream side of the membrane is at ambient pressure and the downstream side is under vacuum to allow the evaporation of the selective component after permeation through the membrane. Driving force for the separation is the difference in the partial pressures of the components on the two sides and not the volatility difference of the components in the feed. The driving force for transport of different components is provided by a chemical potential difference between the liquid feed/retentate and vapor permeate at each side of the membrane. The retentate is the remainder of the feed leaving the membrane feed chamber, which is not permeated through the membrane. The chemical potential can be expressed in terms of fugacity , given by Raoult's law for a liquid and by Dalton's law for (an ideal) gas. During operation, due to removal of the vapor-phase permeate, the actual fugacity of the vapor is lower than anticipated on basis of the collected (condensed) permeate. Separation of components ( e.g. water and ethanol) is based on a difference in transport rate of individual components through the membrane. This transport mechanism can be described using the solution-diffusion model, based on the rate/degree of dissolution of a component into the membrane and its velocity of transport (expressed in terms of diffusivity) through the membrane, which will be different for each component and membrane type leading to separation. Pervaporation is effective for dilute solutions containing trace or minor amounts of the component to be removed. Based on this, hydrophilic membranes are used for dehydration of alcohols containing small amounts of water and hydrophobic membranes are used for removal/recovery of trace amounts of organics from aqueous solutions. Pervaporation is an efficient energy conserving alternative to processes such as distillation and evaporation . It allows the exchange of two phases without direct contact. [ 2 ] Examples include solvent dehydration: dehydrating the ethanol/water and isopropanol/water azeotropes, continuous ethanol removal from yeast fermentors , continuous water removal from condensation reactions such as esterifications to enhance conversion and rate of the reaction, membrane introduction mass spectrometry , removing organic solvents from industrial waste waters, combination of distillation and pervaporation/vapour permeation, and concentration of hydrophobic flavour compounds in aqueous solutions (using hydrophobic membranes). Recently, a number of organophilic pervaporation membranes have been introduced to the market. 
Organophilic pervaporation membranes can be used for the separation of organic-organic mixtures, e.g.: reduction of the aromatics content in refinery streams, breaking of azeotropes , purification of extraction media, purification of product streams after extraction, and purification of organic solvents. Hydrophobic membranes are often polydimethylsiloxane based, where the actual separation mechanism is based on the solution-diffusion model described above. Hydrophilic membranes are more widely available. The most commercially successful pervaporation membrane system to date is based on polyvinyl alcohol . More recently, membranes based on polyimide have also become available. To overcome the intrinsic disadvantages of polymeric membrane systems, ceramic membranes have been developed over the last decade. These ceramic membranes consist of nanoporous layers on top of a macroporous support. The pores must be large enough to let water molecules pass through and retain any other solvents that have a larger molecular size, such as ethanol. As a result, a molecular sieve with a pore size of about 4 Å is obtained. The most widely available member of this class of membranes is that based on zeolite A. Alternatively to these crystalline materials, the porous structure of amorphous silica layers can be tailored towards molecular selectivity. These membranes are fabricated by sol-gel chemical processes. Research into novel hydrophilic ceramic membranes has been focused on titania or zirconia . Very recently, a breakthrough in hydrothermal stability has been achieved through the development of an organic-inorganic hybrid material. [ citation needed ]
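The driving force described above, namely the difference between the partial pressure of each component on the feed side (Raoult's law) and on the permeate side (Dalton's law), can be estimated with a few lines of code. The Python sketch below assumes ideal behavior (activity coefficients of one) and uses placeholder compositions and saturation pressures rather than measured data.

# Illustrative estimate of the pervaporation driving force for each component.
# Feed side: partial pressure from Raoult's law, p_i = x_i * p_sat_i (ideal mixture assumed).
# Permeate side: partial pressure from Dalton's law, p_i = y_i * P_permeate.

feed = {"water": 0.10, "ethanol": 0.90}         # liquid mole fractions (assumed)
p_sat = {"water": 7.4e3, "ethanol": 10.5e3}     # saturation pressures in Pa (placeholder values)
permeate = {"water": 0.95, "ethanol": 0.05}     # vapor mole fractions on the permeate side (assumed)
p_permeate_total = 500.0                        # downstream (vacuum) pressure in Pa (assumed)

for component in feed:
    p_feed_side = feed[component] * p_sat[component]
    p_permeate_side = permeate[component] * p_permeate_total
    driving_force = p_feed_side - p_permeate_side
    print(f"{component:7s}: feed {p_feed_side:8.1f} Pa, permeate {p_permeate_side:6.1f} Pa, "
          f"driving force {driving_force:8.1f} Pa")

Which component actually permeates faster is determined by the membrane's selectivity through the solution-diffusion mechanism, not by the partial-pressure difference alone.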
https://en.wikipedia.org/wiki/Pervaporation
Pervasive informatics is the study of how information affects people's interactions with the built environments they occupy. The term and concept were initially introduced by Professor Kecheng Liu during a keynote speech at the SOLI 2008 international conference. [ 1 ] [ failed verification ] The built environment is rich with information that can be utilised by its occupants to enhance the quality of their work and life. By introducing ICT systems, this information can be created, managed, distributed and consumed more effectively, leading to more advanced interactions between users and the environment. The social interactions in these spaces are of additional value, and informatics can effectively capture the complexities of such information-rich activities. [ 2 ] Information literally pervades, or spreads throughout, these socio-technical systems, and pervasive informatics aims to study, and assist in the design of, pervasive information environments, or pervasive spaces, for the benefit of their stakeholders and users. Pervasive informatics may initially be viewed as simply another branch of pervasive, or ubiquitous, computing. However, pervasive informatics places a greater emphasis on ICT-enhanced socio-technical pervasive spaces, as opposed to the technology-driven direction of pervasive computing. This distinction between the fields is analogous to that between informatics and computing, where informatics focuses on the study of information, while the primary concern of computing is the processing of information. Pervasive informatics aims to analyse the pervasive nature of information, examining its various representations and transformations in pervasive spaces, which are enabled by pervasive computing technologies, e.g. smart devices and intelligent control systems. A pervasive space is characterised by the physical and informational interaction between the occupants and the built environment; e.g., the act of controlling the building is a physical interaction, while the space responding to this action/user instruction is an informational interaction. Intelligent pervasive spaces are those that display intelligent behaviour in the form of adaptation to user requirements or to the environment itself. Such intelligent behaviour can be implemented using artificial intelligence algorithms and agent-based technologies. These intelligent spaces aim to provide communication and computing services to their occupants in such a way that the experience is almost transparent, e.g. automated control of heating and ventilation based on occupant preference profiles. [ 2 ] The term first appeared in an IBM Research Report [ 3 ] but was not properly defined or discussed until later. An intelligent pervasive space has been defined as a “social and physical space with enhanced capability through ICT for human to interact with the built environments”. [ 1 ] An alternative definition is “an adaptable and dynamic area that optimises user services and management processes using information systems and networked ubiquitous technologies”. [ 4 ] A common point between these definitions is that pervasive computing technologies are the means by which intelligence and interactions are achieved in pervasive spaces, with the purpose of enhancing a user's experience. Historically, there have been few attempts to consolidate approaches to studying the complex interplay between occupants and the built environment, and to assist in the design of pervasive information environments. 
Many theoretical interdisciplinary approaches are relevant to the design of effective pervasive spaces. A core concept in pervasive informatics is the range of interactions that may occur in pervasive spaces: people to people, people to the physical space, and the physical space to technological artefacts such as sensors. In order to study these interactions, it is necessary to have an understanding of what information is being created and exchanged. In light of this, a series of theories which enable us to consider both social and technological interactions together form the foundations of pervasive informatics. [ 2 ] Socio-technical systems provide an approach which assists in understanding and supporting the use of pervasive technologies. The space can be considered as a network of artefacts, information, technology and occupants. By adopting STS approaches, a means for dynamically investigating and mapping such networks becomes possible. Distributed cognition can be used to explain how information is passed and processed, with a focus both on interactions between people and on their interactions with the environment. [ 5 ] These interactions are analysed in terms of the trajectories of information. Human interactions with a space, and their effect on coordination mechanisms, have been examined in the field of computer supported cooperative work (CSCW). The concepts of media spaces [ 6 ] and awareness, which are of relevance to pervasive informatics, have also emerged from CSCW. Semiotics, the study of signs, can be used to assess the effectiveness of a built environment at six different levels: physical, empirical, syntactical, semantic, pragmatic and social. Semiotics enables us to understand the nature and characteristics of sign-based interactions in pervasive spaces. The current technology-centred view of pervasive computing is no longer sufficient for studying the information in the built environment. Socio-technical approaches are required to direct attention to the interaction between the built environment and its occupants. The concept of pervasive informatics captures this shift, and enables current research efforts in different fields to converge their focus and consolidate their methods under one label, leading to a better direction and understanding of this complex domain. A number of research issues have been identified for further study in pervasive informatics. The list is, of course, not exhaustive, but they all address issues that lie on the boundaries between the physical, the informational and the social, capturing the essence of pervasive spaces. [ 2 ]
https://en.wikipedia.org/wiki/Pervasive_informatics
Pervious concrete (also called porous concrete , permeable concrete , no fines concrete and porous pavement ) is a special type of concrete with a high porosity used for concrete flatwork applications that allows water from precipitation and other sources to pass directly through, thereby reducing the runoff from a site and allowing groundwater recharge . Pervious concrete is made using large aggregates with little to no fine aggregates. The concrete paste then coats the aggregates and allows water to pass through the concrete slab. Pervious concrete is traditionally used in parking areas , areas with light traffic, residential streets , pedestrian walkways , and greenhouses . [ 1 ] [ 2 ] It is an important application for sustainable construction and is one of many low impact development techniques used by builders to protect water quality . Pervious concrete was first used in the 1800s in Europe as pavement surfacing and load bearing walls. [ 3 ] Cost efficiency was the main motive due to a decreased amount of cement. [ 3 ] It became popular again in the 1920s for two storey homes in Scotland and England. It became increasingly viable in Europe after WWII due to the scarcity of cement. It did not become as popular in the US until the 1970s. [ 3 ] In India it became popular in 2000. [ citation needed ] The proper utilization of pervious concrete is a recognized Best Management Practice by the U.S. Environmental Protection Agency (EPA) for providing first flush pollution control and stormwater management. [ 4 ] As regulations further limit stormwater runoff , it is becoming more expensive for property owners to develop real estate , due to the size and expense of the necessary drainage systems. Pervious concrete lowers the NRCS Runoff Curve Number or CN by retaining stormwater on site. This allows the planner/designer to achieve pre-development stormwater goals for pavement intense projects. Pervious concrete reduces the runoff from paved areas, which reduces the need for separate stormwater retention ponds and allows the use of smaller capacity storm sewers . [ 5 ] This allows property owners to develop a larger area of available property at a lower cost. Pervious concrete also naturally filters storm water [ 6 ] and can reduce pollutant loads entering into streams , ponds , and rivers . [ 7 ] Pervious concrete functions like a storm water infiltration basin and allows the storm water to infiltrate the soil over a large area, thus facilitating recharge of precious groundwater supplies locally. [ 5 ] All of these benefits lead to more effective land use. Pervious concrete can also reduce the impact of development on trees . A pervious concrete pavement allows the transfer of both water and air to root systems to help trees flourish even in highly developed areas. [ 5 ] Pervious concrete consists of cement, coarse aggregate (size should be 9.5 mm to 12.5 mm) and water with little to no fine aggregates. The addition of a small amount of sand will increase the strength. The mixture has a water-to-cement ratio of 0.28 to 0.40 with a void content of 15 to 25 percent. [ 8 ] The correct quantity of water in the concrete is critical. A low water to cement ratio will increase the strength of the concrete, but too little water may cause surface failure. A proper water content gives the mixture a wet-metallic appearance. As this concrete is sensitive to water content, the mixture should be field checked. 
[ 9 ] Entrained air may be measured by a Rapid Air system, where the concrete is stained black and sections are analyzed under a microscope . [ 10 ] A common flatwork form has riser strips on top such that the screed is 3/8-1/2 inches (9 to 12 mm) above final pavement elevation. Mechanical screeds are preferable to manual. The riser strips are removed to guide compaction. Immediately after screeding, the concrete is compacted to improve the bond and smooth the surface. Excessive compaction of pervious concrete results in higher compressive strength, but lower porosity (and thus lower permeability). [ 11 ] Jointing varies little from other concrete slabs. Joints are tooled with a rolling jointing tool prior to curing or saw cut after curing. Curing consists of covering concrete with 6 mil plastic sheeting within 20 minutes of concrete discharge. [ 12 ] However, this contributes to a substantial amount of waste sent to landfills. Alternatively, preconditioned absorptive lightweight aggregate as well as internal curing admixture (ICA) have been used to effectively cure pervious concrete without waste generation. [ 13 ] [ 14 ] Pervious concrete has a common strength of 600–1,500 pounds per square inch (4.1–10.3 MPa) though strengths up to 4,000 psi (28 MPa) can be reached. There is no standardized test for compressive strength. [ 15 ] Acceptance is based on the unit weight of a sample of poured concrete using ASTM standard no. C1688. [ 16 ] An acceptable tolerance for the density is plus or minus 5 pounds (2.3 kg) of the design density. [ clarification needed ] Slump and air content tests are not applicable to pervious concrete because of the unique composition. The designer of a storm water management plan should ensure that the pervious concrete is functioning properly through visual observation of its drainage characteristics prior to opening of the facility. [ citation needed ] Concerns over the resistance to the freeze-thaw cycle have limited the use of pervious concrete in cold weather environments. [ 17 ] The rate of freezing in most applications is dictated by the local climate. Entrained air may help protect the paste as it does in regular concrete. [ 10 ] The addition of a small amount of fine aggregate to the mixture increases the durability of the pervious concrete. [ 18 ] [ 19 ] [ 20 ] [ clarification needed ] Avoiding saturation during the freeze cycle is the key to the longevity of the concrete. [ 21 ] Related, having a well prepared 8 to 24 inch (200 to 600 mm) sub-base and a good drainage preventing water stagnation will reduce the possibility of freeze-thaw damage. [ 21 ] Using permeable concrete for pavements can make them safer for pedestrians in the winter because water won't settle on the surface and freeze leading to dangerously icy conditions. Roads can also be made safer for cars by the use of permeable concrete as the reduction in the formation of standing water will reduce the possibility of aquaplaning , and porous roads will also reduce tire noise. [ 22 ] To prevent reduction in permeability, pervious concrete needs to be cleaned regularly. Cleaning can be accomplished through wetting the surface of the concrete and vacuum sweeping. [ 12 ] [ 23 ]
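The acceptance approach described above (fresh unit weight compared with a design density, plus the typical water-to-cement range) can be expressed as a tiny field-check calculation. The sketch below is only schematic: the design density, the sample values and the assumption that the tolerance is in pounds per cubic foot are all assumptions, not figures taken from this article.

# Schematic field-acceptance check for pervious concrete, loosely following
# the criteria described above (unit weight within a tolerance of the design
# density, and a water-to-cement ratio in the typical 0.28-0.40 range).
# All numbers and the lb/ft^3 unit for the tolerance are assumptions.

def density_within_tolerance(measured_pcf, design_pcf, tolerance_pcf=5.0):
    """True if the measured fresh unit weight is within +/- tolerance of design."""
    return abs(measured_pcf - design_pcf) <= tolerance_pcf

def water_cement_ratio(water_kg, cement_kg):
    """Water-to-cement ratio of the mix, by mass."""
    return water_kg / cement_kg

design_density = 125.0      # lb/ft^3, assumed design unit weight
measured_density = 122.4    # lb/ft^3, assumed measurement per ASTM C1688

wc = water_cement_ratio(water_kg=110.0, cement_kg=320.0)
print("density acceptable:", density_within_tolerance(measured_density, design_density))
print(f"w/c ratio = {wc:.2f}, within 0.28-0.40: {0.28 <= wc <= 0.40}")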
https://en.wikipedia.org/wiki/Pervious_concrete
In chemistry, perxenates are salts of the yellow [ 1 ] xenon-containing anion XeO6^4−. [ 2 ] This anion has octahedral molecular geometry, as determined by Raman spectroscopy, having O–Xe–O bond angles varying between 87° and 93°. [ 3 ] The Xe–O bond length was determined by X-ray crystallography to be 1.875 Å. [ 4 ] Perxenates are synthesized by the disproportionation of xenon trioxide when dissolved in strong alkali: [ 5 ] 2 XeO3 + 4 OH− → XeO6^4− + Xe + O2 + 2 H2O. When Ba(OH)2 is used as the alkali, barium perxenate can be crystallized from the resulting solution. [ 5 ] Perxenic acid is the unstable conjugate acid of the perxenate anion, formed by the solution of xenon tetroxide in water. It has not been isolated as a free acid, because under acidic conditions it rapidly decomposes into xenon trioxide and oxygen gas: [ 6 ] [ 7 ] 2 H4XeO6 → 2 XeO3 + O2 + 4 H2O. Its extrapolated formula, H4XeO6, is inferred from the octahedral geometry of the perxenate ion (XeO6^4−) in its alkali metal salts. [ 6 ] [ 4 ] The pKa of aqueous perxenic acid has been indirectly calculated to be below 0, making it an extremely strong acid. Its first ionization yields the anion H3XeO6^−, which has a pKa value of 4.29, still relatively acidic. The twice-deprotonated species H2XeO6^2− has a pKa value of 10.81. [ 8 ] Due to its rapid decomposition under acidic conditions as described above, however, it is most commonly encountered as perxenate salts, bearing the anion XeO6^4−. [ 6 ] [ 2 ] Perxenic acid and the anion XeO6^4− are both strong oxidizing agents, [ 9 ] capable of oxidising silver(I), copper(II) and manganese(II) to silver(III), copper(III), [ 10 ] and permanganate, respectively. [ 11 ] The perxenate anion is unstable in acidic solutions, [ 10 ] being almost instantaneously reduced to HXeO4^−. [ 1 ] The sodium, potassium, and barium salts are soluble. [ 12 ] Barium perxenate solution is used as the starting material for the synthesis of xenon tetroxide (XeO4) by mixing it with concentrated sulfuric acid: [ 13 ] Ba2XeO6 + 2 H2SO4 → XeO4 + 2 BaSO4 + 2 H2O. Most metal perxenates are stable, except silver perxenate, which decomposes violently. [ 10 ] Sodium perxenate, Na4XeO6, can be used for the analytical separation of trace amounts of americium from curium. The separation involves the oxidation of Am3+ to Am4+ by sodium perxenate in acidic solution in the presence of La3+, followed by treatment with calcium fluoride, which forms insoluble fluorides with Cm3+ and La3+, but retains Am4+ and Pu4+ in solution as soluble fluorides. [ 9 ]
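The stepwise pKa values quoted above can be turned into a rough speciation estimate. The short Python sketch below is a simplification: it uses only the two pKa values given (4.29 and 10.81), neglects the fully protonated acid (pKa below 0), and ignores the decomposition chemistry that dominates in acidic solution, so it is illustrative arithmetic rather than a model of real solution behaviour.

# Approximate speciation of perxenate anions from the pKa values quoted above.
# PKA1: H3XeO6(-)  <->  H2XeO6(2-) + H+
# PKA2: H2XeO6(2-) <->  HXeO6(3-)  + H+
# The fully protonated acid (pKa < 0) and decomposition in acid are ignored.

PKA1 = 4.29
PKA2 = 10.81

def speciation(pH):
    r1 = 10 ** (pH - PKA1)   # [H2XeO6(2-)] / [H3XeO6(-)]
    r2 = 10 ** (pH - PKA2)   # [HXeO6(3-)] / [H2XeO6(2-)]
    denom = 1 + r1 + r1 * r2
    return {
        "H3XeO6(-)": 1 / denom,
        "H2XeO6(2-)": r1 / denom,
        "HXeO6(3-)": r1 * r2 / denom,
    }

for pH in (2, 7, 12):
    fractions = ", ".join(f"{name}: {frac:.2f}" for name, frac in speciation(pH).items())
    print(f"pH {pH}: {fractions}")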
https://en.wikipedia.org/wiki/Perxenate
In radio astronomy, perytons are short man-made radio signals, a few milliseconds in duration, that resemble fast radio bursts (FRBs). A peryton differs from typical radio frequency interference in that it is a pulse of several to tens of milliseconds' duration which sweeps down in frequency. Perytons are further distinguished by the fact that they occur at the same time in many beams, indicating that they originate on Earth, whereas FRBs occur in only one or two of the beams, indicating an astronomical origin. [ 1 ] The first signal occurred in 2001 but was not discovered until 2007. The signals were first detected at the Parkes Observatory, and data gathered by the telescope also suggested the source was local. [ 2 ] [ 3 ] The signals were found to be caused by the premature opening of a microwave oven door nearby. Because the origin of the detections was at first unclear, the radio signals were named after the peryton, a mythical winged stag that casts the shadow of a man. This has been interpreted as "strangeness made by man". The name was chosen for these signals because they are man-made but have characteristics that mimic the natural phenomenon of FRBs. [ 2 ] The name was coined by Sarah Burke-Spolaor et al. in 2011. [ 2 ] Perytons were observed at the Parkes Observatory and the Bleien Radio Observatory. [ 5 ] After the discovery of the first FRB in 2007, Dr. Burke searched through old telescope data looking for similar signals. She found what she was looking for, with a small difference: the 16 signals that she found seemed to fill the entire patch of the sky visible to the telescope. The lack of directionality in the new signals led Burke to conclude that the signals were man-made and of terrestrial origin. [ 6 ] Between 1998 and 2015, 46 perytons were identified in archival data from the Parkes Observatory. [ 5 ] On June 23, 1998, 16 perytons were detected at that same location within 7 minutes. [ 5 ] In January 2015, 3 perytons were detected at the Parkes Observatory. [ 5 ] As of 2015, 25 perytons had been the subject of scientific publications. [ 5 ] These signals mimicked some aspects of FRBs that appeared to be coming from outside the Milky Way galaxy, [ 2 ] [ 7 ] but the possibility of their having an astronomical origin was soon excluded. [ 8 ] [ 9 ] [ 10 ] To track activities near the telescope, the Commonwealth Scientific and Industrial Research Organization (CSIRO) installed a radio frequency interference (RFI) monitor at the Parkes site in December 2014. This form of monitoring became more common as radio-emitting devices became more prevalent on radio telescope sites, including mobile phones, Wi-Fi, and digital televisions. The RFI monitor data disclosed important information that had not been accessible for earlier peryton discoveries: each peryton event was accompanied by a period of radio emission at a frequency of 2.5 GHz from a source outside the telescope's field of view. These spikes were probably related to the perytons. [ 6 ] [ 11 ] A number of potential sources of perytons were hypothesized. [ 12 ] [ 13 ] In 2015, perytons were found to be the result of premature opening of microwave oven doors at the Parkes Observatory. On March 17, 2015, three perytons were produced experimentally by microwaving ceramic mugs filled with water and opening the door before the oven had stopped operating. [ 5 ] The microwave oven releases a frequency-swept radio pulse that mimics an FRB as the magnetron turns off. [ 14 ] [ 5 ] Two Matsushita microwave ovens were deemed responsible for most of the perytons. 
Both were functional and over 27 years old. [ 5 ] Perytons were found to be produced about 50% of the time when the microwave door was opened before the timer expired. [ 4 ]
https://en.wikipedia.org/wiki/Peryton_(astronomy)
In the philosophy of science, the pessimistic induction, also known as the pessimistic meta-induction, is an argument which seeks to rebut scientific realism, particularly the scientific realist's notion of epistemic optimism. The pessimistic meta-induction is the argument that if past successful and accepted scientific theories were found to be false, we have no reason to believe the scientific realist's claim that our currently successful theories are approximately true. Scientific realists argue that we have good reasons to believe that our presently successful scientific theories are true or approximately true. The pessimistic meta-induction undermines the realist's warrant for their epistemic optimism (the view that science tends to succeed in revealing what the world is like and that there are good reasons to take theories to be true or truthlike) via historical counterexample. Using meta-induction, Larry Laudan argues that if past scientific theories which were successful were found to be false, we have no reason to believe the realist's claim that our currently successful theories are approximately true. The pessimistic meta-induction argument was first fully postulated by Laudan in 1981. However, there are some objections to Laudan's argument. One might see shortcomings in the historical examples Laudan gives as proof of his hypothesis: theories later refuted, like that of crystalline spheres in astronomy or the phlogiston theory, do not represent the most successful theories of their time. A further objection holds that scientific progress does, in fact, approximate the truth. When a new theory is developed, the central ideas of the old one are usually refuted, but parts of the old theory are carried over into the new one. In this way, theories become progressively better founded and gain predictive and descriptive power, so that, for example, aeroplanes, computers and DNA sequencing all establish technical, operational proof of the effectiveness of the theories. On this view, we can hold the realist position that our theoretical terms refer to something in the world and that our theories are approximately true. However, as articulated by Thomas Kuhn in his The Structure of Scientific Revolutions, new scientific theories do not always build upon the older ones. In fact, they are created from an entirely new set of premises (a new "paradigm"), and reach vastly different conclusions. This gives greater weight to the proponents of anti-realism, and illustrates that no scientific theory (thus far) has proved infallible. This philosophy of science-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Pessimistic_induction
Pestalotiopsis microspora is a species of endophytic fungus capable of breaking down and digesting polyurethane. [ 1 ] Originally identified in 1880 in fallen foliage of common ivy (Hedera helix) in Buenos Aires, [ 2 ] it also causes leaf spot in Hypericum 'Hidcote' (Hypericum patulum) shrubs in Japan. [ 3 ] However, its polyurethane degradation activity was discovered only in the 2010s, in two distinct P. microspora strains isolated from plant stems in the Yasuni National Forest within the Ecuadorian Amazon rainforest by a group of student researchers led by molecular biochemistry professor Scott Strobel as part of Yale's annual Rainforest Expedition and Laboratory. It is the first fungus species found to be able to subsist on polyurethane in anaerobic conditions. This makes the fungus a potential candidate for bioremediation projects involving large quantities of plastic. [ 4 ] Pestalotiopsis microspora was originally described from Buenos Aires, Argentina in 1880 by mycologist Carlo Luigi Spegazzini, who named it Pestalotia microspora. [ 5 ] In 1996 Julie C. Lee first isolated torreyanic acid, a dimeric quinone, from P. microspora, and noted that the species is likely the cause of the decline of Florida torreya (Torreya taxifolia), an endangered tree species related to the paclitaxel-producing yew tree Taxus brevifolia. [ 6 ] Pestalotiopsis microspora is notable for its ability to penetrate the exterior of polyurethane and other polymer products and to dissolve them by means of the oxidizing enzymes it produces. However, this activity has so far been observed mostly in laboratory settings, and further experimentation is needed before the fungus can be used on a wide scale in landfills and clean-up areas.
https://en.wikipedia.org/wiki/Pestalotiopsis_microspora
Pesticides are substances that are used to control pests. [ 1 ] They include herbicides, insecticides, nematicides, fungicides, and many others (see table). [ 2 ] The most common of these are herbicides, which account for approximately 50% of all pesticide use globally. [ 3 ] Most pesticides are used as plant protection products (also known as crop protection products), which in general protect plants from weeds, fungi, or insects. In general, a pesticide is a chemical or biological agent (such as a virus, bacterium, or fungus) that deters, incapacitates, kills, or otherwise discourages pests. Target pests can include insects, plant pathogens, weeds, molluscs, birds, mammals, fish, nematodes (roundworms), and microbes that destroy property, cause nuisance, spread disease, or are disease vectors. Along with their benefits, pesticides also have drawbacks, such as potential toxicity to humans and other species. The word pesticide derives from the Latin pestis (plague) and caedere (kill). [ 5 ] The Food and Agriculture Organization (FAO) has published a formal definition of the term. Pesticides can be classified by target organism (e.g., herbicides, insecticides, fungicides, rodenticides, and pediculicides – see table). [ 7 ] Biopesticides, according to the EPA, include microbial pesticides, biochemical pesticides, and plant-incorporated protectants. [ 8 ] Pesticides can be classified into structural classes, with many structural classes developed for each of the target organisms listed in the table. A structural class is usually associated with a single mode of action, whereas a mode of action may encompass more than one structural class. The pesticidal chemical (active ingredient) is mixed (formulated) with other components to form the product that is sold, and which is applied in various ways. Pesticides in gas form are fumigants. Pesticides can also be classified by their mode of action, which indicates the exact biological mechanism that the pesticide disrupts. The modes of action are important for resistance management, and are categorized and administered by the insecticide, herbicide, and fungicide resistance action committees. Pesticides may be systemic or non-systemic. [ 9 ] [ 10 ] A systemic pesticide moves (translocates) inside the plant. Translocation may be upward in the xylem, downward in the phloem, or both. Non-systemic pesticides (contact pesticides) remain on the surface and act through direct contact with the target organism. Pesticides are more effective if they are systemic. Systemicity is a prerequisite for the pesticide to be used as a seed treatment. Pesticides can be classified as persistent (non-biodegradable) or non-persistent (biodegradable). To be approved by the authorities, a pesticide must be persistent enough to kill or control its target but must degrade fast enough not to accumulate in the environment or the food chain. [ 11 ] [ 12 ] Persistent pesticides, including DDT, were banned many years ago, an exception being spraying in houses to combat malaria vectors. [ 13 ] From ancient times until the 1950s, the pesticides used were inorganic compounds and plant extracts. [ 14 ] [ 15 ] The inorganic compounds were derivatives of copper, arsenic, mercury, and sulfur, among others, and the plant extracts contained pyrethrum, nicotine, and rotenone, among others. The less toxic of these are still in use in organic farming. In the 1940s, the insecticide DDT and the herbicide 2,4-D were introduced. 
These synthetic organic compounds were widely used and were very profitable. They were followed in the 1950s and 1960s by numerous other synthetic pesticides, which led to the growth of the pesticide industry. [ 14 ] [ 15 ] During this period, it became increasingly evident that DDT, which had been sprayed widely in the environment to combat the vector, had accumulated in the food chain . It had become a global pollutant, as summarized in the well-known book Silent Spring . Finally, DDT was banned in the 1970s in several countries, and subsequently all persistent pesticides were banned worldwide, an exception being spraying on interior walls for vector control. [ 13 ] Resistance to a pesticide was first seen in the 1920s with inorganic pesticides, [ 14 ] and later it was found that development of resistance is to be expected, and measures to delay it are important. Integrated pest management (IPM) was introduced in the 1950s. By careful analysis and spraying only when an economical or biological threshold of crop damage is reached, pesticide application is reduced. This became in the 2020s the official policy of international organisations, industry, and many governments. [ 15 ] With the introduction of high yielding varieties in the 1960s in the green revolution , more pesticides were used. [ 15 ] Since the 1980s genetically modified crops were introduced, which resulted in lower amounts of insecticides used on them. [ 15 ] Organic agriculture, which uses only non-synthetic pesticides, has grown and in 2020 represents about 1.5 per cent of the world's total agricultural land. [ 15 ] Pesticides have become more effective. Application rates fell from 1,000 to 2,500 grams of active ingredient per hectare (g/ha) in the 1950s to 40–100 g/ha in the 2000s. [ 15 ] Despite this, amounts used have increased. In high income countries over 20 years between the 1990s and 2010s amounts used increased 20%, while in the low income countries amounts increased 1623%. [ 15 ] The aim is to find new compounds or agents with improved properties such as a new mode of action or lower application rate. Another aim is to replace older pesticides which have been banned for reasons of toxicity or environmental harm or have become less effective due to development of resistance . [ 16 ] [ 17 ] [ 18 ] [ 19 ] The process starts with testing (screening) against target organisms such as insects , fungi or plants . Inputs are typically random compounds, natural products , [ 20 ] compounds designed to disrupt a biochemical target, compounds described in patents or literature, or biocontrol organisms. Compounds that are active in the screening process, known as hits or leads, cannot be used as pesticides, except for biocontrol organisms and some potent natural products. These lead compounds need to be optimised by a series of cycles of synthesis and testing of analogs. For approval by regulatory authorities for use as pesticides, the optimized compounds must meet several requirements. [ 11 ] [ 12 ] In addition to being potent (low application rate), they must show low toxicity to non-target organisms, low environmental impact, and viable manufacturing cost. The cost of developing a pesticide in 2022 was estimated to be 350 million US dollars. [ 21 ] It has become more difficult to find new pesticides. More than 100 new active ingredients were introduced in the 2000s and less than 40 in the 2010s. [ 15 ] Biopesticides are cheaper to develop, since the authorities require less toxicological and environmental study. 
Since 2000 the rate of new biological product introduction has frequently exceeded that of conventional products. [ 15 ] More than 25% of existing chemical pesticides contain one or more chiral centres (stereogenic centres). [ 22 ] Newer pesticides with lower application rates tend to have more complex structures, and thus more often contain chiral centres. [ 22 ] In cases when most or all of the pesticidal activity in a new compound is found in one enantiomer (the eutomer ), the registration and use of the compound as this single enantiomer is preferred. This reduces the total application rate and avoids the tedious environmental testing required when registering a racemate. [ 23 ] [ 24 ] However, if a viable enantioselective manufacturing route cannot be found, then the racemate is registered and used. In addition to their main use in agriculture , pesticides have a number of other applications. Pesticides are used to control organisms that are considered to be harmful, or pernicious to their surroundings. [ 25 ] For example, they are used to kill mosquitoes that can transmit potentially deadly diseases like West Nile virus , yellow fever , and malaria . They can also kill bees , wasps or ants that can cause allergic reactions. Insecticides can protect animals from illnesses that can be caused by parasites such as fleas . [ 25 ] Pesticides can prevent sickness in humans that could be caused by moldy food or diseased produce. Herbicides can be used to clear roadside weeds, trees, and brush. They can also kill invasive weeds that may cause environmental damage. Herbicides are commonly applied in ponds and lakes to control algae and plants such as water grasses that can interfere with activities like swimming and fishing and cause the water to look or smell unpleasant. [ 26 ] Uncontrolled pests such as termites and mold can damage structures such as houses. [ 25 ] Pesticides are used in grocery stores and food storage facilities to manage rodents and insects that infest food such as grain. Pesticides are used on lawns and golf courses , partly for cosmetic reasons. [ 27 ] Integrated pest management , the use of multiple approaches to control pests, is becoming widespread and has been used with success in countries such as Indonesia , China , Bangladesh , the U.S., Australia , and Mexico . [ 28 ] IPM attempts to recognize the more widespread impacts of an action on an ecosystem , so that natural balances are not upset. [ 29 ] Each use of a pesticide carries some associated risk. Proper pesticide use decreases these associated risks to a level deemed acceptable by pesticide regulatory agencies such as the United States Environmental Protection Agency (EPA) and the Pest Management Regulatory Agency (PMRA) of Canada. DDT , sprayed on the walls of houses, is an organochlorine that has been used to fight malaria vectors (mosquitos) since the 1940s. The World Health Organization recommend this approach. [ 30 ] It and other organochlorine pesticides have been banned in most countries worldwide because of their persistence in the environment and human toxicity. DDT has become less effective, as resistance was identified in Africa as early as 1955, and by 1972 nineteen species of mosquito worldwide were resistant to DDT. [ 31 ] [ 32 ] Total pesticides use in agriculture in 2021 was 3.54 million tonnes of active ingredients (Mt), a 4 percent increase with respect to 2020, an 11 percent increase in a decade, and a doubling since 1990. 
Pesticide use per area of cropland in 2021 was 2.26 kg per hectare (kg/ha), an increase of 4 percent with respect to 2020; use per value of agricultural production was 0.86 kg per thousand international dollars (kg/1000 I$) (+2%); and use per person was 0.45 kg per capita (kg/cap) (+3%). Between 1990 and 2021, these indicators increased by 85 percent, 3 percent, and 33 percent, respectively. Brazil was the world's largest user of pesticides in 2021, with 720 kt of pesticides applied for agricultural use, while the USA (457 kt) was the second-largest user. [ 33 ] [ 34 ] Applications per cropland area in 2021 varied widely, from 10.9 kg/ha in Brazil to 0.8 kg/ha in the Russian Federation. The level in Brazil was about twice as high as in Argentina (5.6 kg/ha) and Indonesia (5.3 kg/ha). [ 33 ] Insecticide use in the US has declined by more than half since 1980 (0.6%/yr), mostly due to the near phase-out of organophosphates. In corn fields, the decline was even steeper, due to the switchover to transgenic Bt corn. [ 35 ] Pesticides increase agricultural yields and lower costs. [ 36 ] Median yield increases range between 12% and 27% when pesticides are used, depending on the crop. [ 37 ] Another study found that not using pesticides reduced crop yields by about 10%. [ 38 ] A study conducted in 1999 found that a ban on pesticides in the United States could result in a rise in food prices, loss of jobs, and an increase in world hunger. [ 39 ] There are two levels of benefits for pesticide use, primary and secondary. Primary benefits are direct gains from the use of pesticides, and secondary benefits are effects that are more long-term. [ 40 ] Primary benefits include controlling pests and plant disease vectors, controlling human/livestock disease vectors and nuisance organisms, and controlling organisms that harm other human activities and structures. In 2018 world pesticide sales were estimated to be $65 billion, of which 88% was used for agriculture. [ 15 ] Generic pesticides accounted for 85% of sales in 2018. [ 42 ] In one study, it was estimated that every dollar ($1) spent on pesticides for crops yields up to four dollars ($4) in crops that would otherwise be lost to insects, fungi and weeds. [ 43 ] In general, farmers benefit from having an increase in crop yield and from being able to grow a variety of crops throughout the year. Consumers of agricultural products also benefit from being able to afford the vast quantities of produce available year-round. [ 40 ] On the cost side of pesticide use, there can be costs to the environment and costs to human health. [ 44 ] Pesticide safety education and pesticide applicator regulation are designed to protect the public from pesticide misuse, but do not eliminate all misuse. Reducing the use of pesticides and choosing less toxic pesticides may reduce the risks placed on society and the environment by pesticide use. [ 26 ] Most health concerns related to pesticides stem from direct use, whether in occupational or non-occupational settings. In contrast, health risks from pesticide residues in fruits and vegetables are considered minimal. Occupational use of pesticides may affect health negatively, [ 45 ] [ 46 ] for example by mimicking hormones, causing reproductive problems, and also causing cancer. [ 47 ] A 2007 systematic review found that "most studies on non-Hodgkin lymphoma and leukemia showed positive associations with pesticide exposure" and thus concluded that cosmetic use of pesticides should be decreased. [ 48 ]
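As a rough consistency check on the 2021 figures quoted above, the total quantity applied and the per-hectare and per-person intensities imply a global cropland area of roughly 1.5 billion hectares and a population of roughly 8 billion. The short sketch below reproduces that arithmetic; the implied area and population are derived quantities, not figures stated in this article.

# Back-of-the-envelope check on the 2021 usage statistics quoted above:
# 3.54 million tonnes of active ingredient in total, 2.26 kg/ha of cropland,
# and 0.45 kg per person. The implied cropland area and population are
# derived here for illustration and are not figures from the article.

total_use_kg = 3.54e6 * 1000      # 3.54 million tonnes -> kilograms
use_per_hectare = 2.26            # kg of active ingredient per hectare
use_per_person = 0.45             # kg of active ingredient per capita

implied_cropland_ha = total_use_kg / use_per_hectare
implied_population = total_use_kg / use_per_person

print(f"implied cropland area: {implied_cropland_ha / 1e9:.2f} billion hectares")
print(f"implied population:    {implied_population / 1e9:.2f} billion people")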
There is substantial evidence of associations between organophosphate insecticide exposures and neurobehavioral alterations. [ 49 ] [ 50 ] [ 51 ] [ 52 ] Limited evidence also exists for other negative outcomes from pesticide exposure, including neurological effects, birth defects, and fetal death. [ 53 ] A 2014 epidemiological review found associations between autism and exposure to certain pesticides, but noted that the available evidence was insufficient to conclude that the relationship was causal. [ 54 ] Owing to inadequate regulation and safety precautions, 99% of pesticide-related deaths occur in developing countries, which account for only 25% of pesticide usage. [ 55 ] According to the American Cancer Society, there is no evidence that pesticide residues in food increase the risk of people getting cancer. [ 56 ] A 2009 study estimated that lifetime exposure to pesticide residues from eating fruits and vegetables results in only 4.2 and 3.2 minutes of lost life per person in Switzerland and the United States, respectively. [ 57 ] Pesticides are also found in the majority of U.S. households, with 88 million out of 121.1 million households indicating that they used some form of pesticide in 2012. [ 58 ] [ 59 ] As of 2007, there were more than 1,055 active ingredients registered as pesticides, [ 60 ] which yield over 20,000 pesticide products that are marketed in the United States. [ 61 ] The American Academy of Pediatrics recommends limiting children's exposure to pesticides and using safer alternatives. [ 62 ] One study found pesticide self-poisoning to be the method of choice in one third of suicides worldwide, and recommended, among other things, more restrictions on the types of pesticides that are most harmful to humans. [ 63 ] The World Health Organization and the UN Environment Programme estimate that 3 million agricultural workers in the developing world experience severe poisoning from pesticides each year, resulting in 18,000 deaths. [ 28 ] According to one study, as many as 25 million workers in developing countries may suffer mild pesticide poisoning yearly. [ 64 ] Occupational exposure in other groups besides agricultural workers, including pet groomers, groundskeepers, and fumigators, may also put individuals at risk of health effects from pesticides. [ 61 ] Pesticide use is widespread in Latin America, where around US$3 billion is spent on pesticides each year. Records indicate an increase in the frequency of pesticide poisonings over the past two decades. The most common incidents of pesticide poisoning are thought to result from exposure to organophosphate and carbamate insecticides. [ 65 ] At-home pesticide use, the use of unregulated products, and the role of undocumented workers within the agricultural industry make characterizing true pesticide exposure a challenge. It is estimated that 50–80% of pesticide poisoning cases are unreported. Underreporting of pesticide poisoning is especially common in areas where agricultural workers are less likely to seek care from a healthcare facility that may be monitoring or tracking the incidence of acute poisoning. The extent of unintentional pesticide poisoning may be much greater than available data suggest, particularly in developing countries. Globally, agriculture and food production remain one of the largest industries. In East Africa, the agricultural industry represents one of the largest sectors of the economy, with nearly 80% of the population relying on agriculture for income. 
[ 66 ] Farmers in these communities rely on pesticide products to maintain high crop yields. Some East Africa governments are shifting to corporate farming , and opportunities for foreign conglomerates to operate commercial farms have led to more accessible research on pesticide use and exposure among workers. In other areas where large proportions of the population rely on subsistence, small-scale farming, estimating pesticide use and exposure is more difficult. Pesticides may exhibit toxic effects on humans and other non-target species, the severity of which depends on the frequency and magnitude of exposure. Toxicity also depends on the rate of absorption, distribution within the body, metabolism, and elimination of compounds from the body. Commonly used pesticides like organophosphates and carbamates act by inhibiting acetylcholinesterase activity, which prevents the breakdown of acetylcholine at the neural synapse . Excess acetylcholine can lead to symptoms like muscle cramps or tremors, confusion, dizziness and nausea. Studies show that farm workers in Ethiopia, Kenya, and Zimbabwe have decreased concentrations of plasma acetylcholinesterase, the enzyme responsible for breaking down acetylcholine acting on synapses throughout the nervous system . [ 67 ] [ 68 ] [ 69 ] Other studies in Ethiopia have observed reduced respiratory function among farm workers who spray crops with pesticides. [ 70 ] Numerous exposure pathways for farm workers increase the risk of pesticide poisoning, including dermal absorption walking through fields and applying products, as well as inhalation exposure. There are multiple approaches to measuring a person's exposure to pesticides, each of which provides an estimate of an individual's internal dose. Two broad approaches include measuring biomarkers and markers of biological effect. [ 71 ] The former involves taking direct measurement of the parent compound or its metabolites in various types of media: urine, blood, serum. Biomarkers may include a direct measurement of the compound in the body before it's been biotransformed during metabolism. Other suitable biomarkers may include the metabolites of the parent compound after they've been biotransformed during metabolism. [ 71 ] Toxicokinetic data can provide more detailed information on how quickly the compound is metabolized and eliminated from the body, and provide insights into the timing of exposure. Markers of biological effect provide an estimation of exposure based on cellular activities related to the mechanism of action. For example, many studies investigating exposure to pesticides often involve the quantification of the acetylcholinesterase enzyme at the neural synapse to determine the magnitude of the inhibitory effect of organophosphate and carbamate pesticides. [ 67 ] [ 68 ] [ 69 ] [ 71 ] Another method of quantifying exposure involves measuring, at the molecular level, the amount of pesticide interacting with the site of action. These methods are more commonly used for occupational exposures where the mechanism of action is better understood, as described by WHO guidelines published in "Biological Monitoring of Chemical Exposure in the Workplace". [ 72 ] Better understanding of how pesticides elicit their toxic effects is needed before this method of exposure assessment can be applied to occupational exposure of agricultural workers. 
Alternative methods to assess exposure include questionnaires to discern from participants whether they are experiencing symptoms associated with pesticide poisoning. Self-reported symptoms may include headaches, dizziness, nausea, joint pain, or respiratory symptoms. [ 68 ] Multiple challenges exist in assessing exposure to pesticides in the general population, and many others that are specific to occupational exposures of agricultural workers. Beyond farm workers, estimating exposure to family members and children presents additional challenges, and may occur through "take-home" exposure from pesticide residues collected on clothing or equipment belonging to parent farm workers and inadvertently brought into the home. Children may also be exposed to pesticides prenatally from mothers who are exposed to pesticides during pregnancy. [ 49 ] Characterizing children's exposure resulting from drift of airborne and spray application of pesticides is similarly challenging, yet well documented in developing countries. [ 73 ] Because of critical development periods of the fetus and newborn children, these non-working populations are more vulnerable to the effects of pesticides, and may be at increased risk of developing neurocognitive effects and impaired development. [ 49 ] [ 55 ] While measuring biomarkers or markers of biological effects may provide more accurate estimates of exposure, collecting these data in the field is often impractical and many methods are not sensitive enough to detect low-level concentrations. Rapid cholinesterase test kits exist to collect blood samples in the field. Conducting large scale assessments of agricultural workers in remote regions of developing countries makes the implementation of these kits a challenge. [ 71 ] The cholinesterase assay is a useful clinical tool to assess individual exposure and acute toxicity. Considerable variability in baseline enzyme activity among individuals makes it difficult to compare field measurements of cholinesterase activity to a reference dose to determine health risk associated with exposure. [ 71 ] Another challenge in deriving a reference dose is identifying health endpoints that are relevant to exposure. More epidemiological research is needed to identify critical health endpoints, particularly among populations who are occupationally exposed. Minimizing harmful exposure to pesticides can be achieved by proper use of personal protective equipment, adequate reentry times into recently sprayed areas, and effective product labeling for hazardous substances as per FIFRA regulations. Training high-risk populations, including agricultural workers, on the proper use and storage of pesticides, can reduce the incidence of acute pesticide poisoning and potential chronic health effects associated with exposure. Continued research into the human toxic health effects of pesticides serves as a basis for relevant policies and enforceable standards that are health protective to all populations. Pesticide use raises a number of environmental concerns. Over 98% of sprayed insecticides and 95% of herbicides reach a destination other than their target species, including non-target species, air, water and soil. [ 28 ] Pesticide drift occurs when pesticides suspended in the air as particles are carried by wind to other areas, potentially contaminating them. Pesticides are one of the causes of water pollution , and some pesticides were persistent organic pollutants (now banned), which contribute to soil and flower (pollen, nectar) contamination. 
[ 74 ] Furthermore, pesticide use can adversely impact neighboring agricultural activity, as pests themselves drift to and harm nearby crops that have no pesticide used on them. [ 75 ] In addition, pesticide use reduces invertebrate biodiversity in streams, [ 76 ] contributes to pollinator decline, [ 77 ] [ 78 ] [ 79 ] destroys habitat (especially for birds), [ 80 ] and threatens endangered species. [ 28 ] Pests can develop a resistance to the pesticide (pesticide resistance), necessitating a new pesticide. Alternatively, a greater dose of the pesticide can be used to counteract the resistance, although this will cause a worsening of the ambient pollution problem. The Stockholm Convention on Persistent Organic Pollutants banned all persistent pesticides, [ 81 ] [ 82 ] in particular DDT and other organochlorine pesticides, which were stable and lipophilic, and thus able to bioaccumulate [ 83 ] in the body and the food chain, and which spread throughout the planet. [ 84 ] [ 85 ] Persistent pesticides are no longer used for agriculture, and will not be approved by the authorities. [ 11 ] [ 12 ] Because the half-life in soil is long (2–15 years for DDT [ 86 ] ), residues can still be detected in humans at levels 5 to 10 times lower than those found in the 1970s. [ 87 ] Pesticides now have to be degradable in the environment. Such degradation of pesticides is due to both innate chemical properties of the compounds and environmental processes or conditions. [ 88 ] For example, the presence of halogens within a chemical structure often slows down degradation in an aerobic environment. [ 89 ] Adsorption to soil may retard pesticide movement, but also may reduce bioavailability to microbial degraders. [ 90 ] Pesticide contamination in the environment can be monitored through bioindicators such as bee pollinators. [ 74 ] In one study, the human health and environmental costs due to pesticides in the United States were estimated to be $9.6 billion, offset by about $40 billion in increased agricultural production. [ 91 ] Additional costs include the registration process and the cost of purchasing pesticides, which are typically borne by agrichemical companies and farmers, respectively. The registration process can take several years to complete (there are 70 types of field tests) and can cost $50–70 million for a single pesticide. [ 91 ] At the beginning of the 21st century, the United States spent approximately $10 billion on pesticides annually. [ 91 ] The use of pesticides inherently entails the risk of resistance developing. Various techniques and procedures of pesticide application can slow the development of resistance, as can some natural features of the target population and the surrounding environment. [ 4 ] Alternatives to pesticides are available and include methods of cultivation, use of biological pest controls (such as pheromones and microbial pesticides), genetic engineering (mostly of crops), and methods of interfering with insect breeding. [ 28 ] Application of composted yard waste has also been used as a way of controlling pests. [ 92 ] These methods are becoming increasingly popular and often are safer than traditional chemical pesticides. In addition, the EPA is registering reduced-risk pesticides in increasing numbers. 
[ citation needed ] Cultivation practices include polyculture (growing multiple types of plants), crop rotation , planting crops in areas where the pests that damage them do not live, timing planting according to when pests will be least problematic, and use of trap crops that attract pests away from the real crop. [ 28 ] Trap crops have successfully controlled pests in some commercial agricultural systems while reducing pesticide usage. [ 93 ] In other systems, trap crops can fail to reduce pest densities at a commercial scale, even when the trap crop works in controlled experiments. [ 94 ] Release of other organisms that fight the pest is another example of an alternative to pesticide use. These organisms can include natural predators or parasites of the pests. [ 28 ] Biological pesticides based on entomopathogenic fungi , bacteria and viruses causing disease in the pest species can also be used. [ 28 ] Interfering with insects' reproduction can be accomplished by sterilizing males of the target species and releasing them, so that they mate with females but do not produce offspring. [ 28 ] This technique was first used on the screwworm fly in 1958 and has since been used with the medfly , the tsetse fly , [ 95 ] and the gypsy moth . [ 96 ] This is a costly and slow approach that only works on some types of insects. [ 28 ] Other alternatives include "laserweeding" – the use of novel agricultural robots for weed control using lasers . [ 97 ] Push-pull technique : intercropping with a "push" crop that repels the pest, and planting a "pull" crop on the boundary that attracts and traps it. [ 98 ] Some evidence shows that alternatives to pesticides can be equally effective as the use of chemicals. A study of Maize fields in northern Florida found that the application of composted yard waste with high carbon to nitrogen ratio to agricultural fields was highly effective at reducing the population of plant-parasitic nematodes and increasing crop yield, with yield increases ranging from 10% to 212%; the observed effects were long-term, often not appearing until the third season of the study. [ 92 ] Additional silicon nutrition protects some horticultural crops against fungal diseases almost completely, while insufficient silicon sometimes leads to severe infection even when fungicides are used. [ 99 ] Pesticide resistance is increasing and that may make alternatives more attractive. Biopesticides are certain types of pesticides derived from such natural materials as animals, plants, bacteria, and certain minerals. For example, canola oil and baking soda have pesticidal applications and are considered biopesticides. Biopesticides fall into three major classes: Pesticides that are related to the type of pests are: [ 104 ] In many countries, pesticides must be approved for sale and use by a government agency. [ 107 ] [ 108 ] Worldwide, 85% of countries have pesticide legislation for the proper storage of pesticides and 51% include provisions to ensure proper disposal of all obsolete pesticides. [ 109 ] Though pesticide regulations differ from country to country, pesticides, and products on which they were used are traded across international borders. To deal with inconsistencies in regulations among countries, delegates to a conference of the United Nations Food and Agriculture Organization adopted an International Code of Conduct on the Distribution and Use of Pesticides in 1985 to create voluntary standards of pesticide regulation for many countries. [ 107 ] The Code was updated in 1998 and 2002. 
[ 110 ] The FAO claims that the code has raised awareness about pesticide hazards and decreased the number of countries without restrictions on pesticide use. [ 6 ] Three other efforts to improve regulation of international pesticide trade are the United Nations London Guidelines for the Exchange of Information on Chemicals in International Trade and the United Nations Codex Alimentarius Commission . The former seeks to implement procedures for ensuring that prior informed consent exists between countries buying and selling pesticides, while the latter seeks to create uniform standards for maximum levels of pesticide residues among participating countries. [ 111 ] In the United States , the Environmental Protection Agency (EPA) is responsible for regulating pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Food Quality Protection Act (FQPA). [ 112 ] Studies must be conducted to establish the conditions in which the material is safe to use and the effectiveness against the intended pest(s). [ 113 ] The EPA regulates pesticides to ensure that these products do not pose adverse effects to humans or the environment, with an emphasis on the health and safety of children. [ 114 ] Pesticides produced before November 1984 continue to be reassessed in order to meet the current scientific and regulatory standards. All registered pesticides are reviewed every 15 years to ensure they meet the proper standards. [ 112 ] During the registration process, a label is created. The label contains directions for proper use of the material in addition to safety restrictions. Based on acute toxicity, pesticides are assigned to a Toxicity Class . Pesticides are the most thoroughly tested chemicals after drugs in the United States; those used on food require more than 100 tests to determine a range of potential impacts. [ 114 ] Some pesticides are considered too hazardous for sale to the general public and are designated restricted use pesticides . Only certified applicators, who have passed an exam, may purchase or supervise the application of restricted use pesticides. [ 107 ] Records of sales and use are required to be maintained and may be audited by government agencies charged with the enforcement of pesticide regulations. [ 115 ] [ 116 ] These records must be made available to employees and state or territorial environmental regulatory agencies. [ 117 ] [ 118 ] In addition to the EPA, the United States Department of Agriculture (USDA) and the United States Food and Drug Administration (FDA) set standards for the level of pesticide residue that is allowed on or in crops. [ 119 ] The EPA looks at what the potential human health and environmental effects might be associated with the use of the pesticide. [ 120 ] In addition, the U.S. EPA uses the National Research Council's four-step process for human health risk assessment: (1) Hazard Identification, (2) Dose-Response Assessment, (3) Exposure Assessment, and (4) Risk Characterization. [ 121 ] In 2013 Kaua'i County (Hawai'i) passed Bill No. 2491 to add an article to Chapter 22 of the county's code relating to pesticides and GMOs. The bill strengthens protections of local communities in Kaua'i where many large pesticide companies test their products. [ 122 ] The first legislation providing federal authority for regulating pesticides was enacted in 1910. 
[ 60 ] EU legislation has been approved banning the use of highly toxic pesticides including those that are carcinogenic , mutagenic or toxic to reproduction, those that are endocrine-disrupting, and those that are persistent, bioaccumulative and toxic (PBT) or very persistent and very bioaccumulative (vPvB) and measures have been approved to improve the general safety of pesticides across all EU member states. [ 123 ] In 2023 The Environment Committee of European Parliament approved a decision aiming to reduce pesticide use by 50% (the most hazardous by 65%) by the year 2030 and ensure sustainable use of pesticides (for example use them only as a last resort). The decision also includes measures for providing farmers with alternatives. [ 124 ] Pesticide residue refers to the pesticides that may remain on or in food after they are applied to food crops. [ 125 ] The maximum residue limits (MRL) of pesticides in food are carefully set by the regulatory authorities to ensure, to their best judgement, no health impacts. Regulations such as pre-harvest intervals also often prevent harvest of crop or livestock products if recently treated in order to allow residue concentrations to decrease over time to safe levels before harvest. Exposure of the general population to these residues most commonly occurs through consumption of treated food sources, or being in close contact to areas treated with pesticides such as farms or lawns. [ 126 ] Persistent pesticides are no longer used for agriculture, and will not be approved by the authorities. [ 127 ] [ 128 ] Because the half life in soil is long (for DDT 2–15 years [ 86 ] ) residues can still be detected in humans at levels 5 to 10 times lower than found in the 1970s. [ 87 ] Residues are monitored by the authorities. In 2016, over 99% of samples of US produce had no pesticide residue or had residue levels well below the EPA tolerance levels for each pesticide. [ 129 ] This article incorporates text from a free content work. Licensed under CC BY-SA IGO 3.0 ( license statement/permission ). Text taken from World Food and Agriculture – Statistical Yearbook 2023​ , FAO, FAO.
https://en.wikipedia.org/wiki/Pesticide
Pesticide application is the practical way in which pesticides (including herbicides, fungicides, insecticides, or nematode control agents) are delivered to their biological targets (e.g. pest organism, crop or other plant). Public concern about the use of pesticides has highlighted the need to make this process as efficient as possible, in order to minimise their release into the environment and human exposure (including operators, bystanders and consumers of produce). [ 1 ] The practice of pest management by the rational application of pesticides is supremely multi-disciplinary, combining many aspects of biology and chemistry with agronomy, engineering, meteorology, socio-economics and public health, [ 2 ] together with newer disciplines such as biotechnology and information science. Efficacy can be related to the quality of pesticide application, with small droplets, such as aerosols, often improving performance. [ 3 ] Optical data from satellites and from aircraft are increasingly being used to inform application decisions. [ 4 ] Seed treatments can achieve exceptionally high efficiencies, in terms of effective dose-transfer to a crop. Pesticides are applied to the seed prior to planting, in the form of a seed treatment, or coating, to protect against soil-borne risks to the plant; additionally, these coatings can provide supplemental chemicals and nutrients designed to encourage growth. A typical seed coating can include a nutrient layer containing nitrogen, phosphorus, and potassium; a rhizobial layer containing symbiotic bacteria and other beneficial microorganisms; and a fungicide (or other chemical) layer to make the seed less vulnerable to pests. One of the most common forms of pesticide application, especially in conventional agriculture, is the use of mechanical sprayers. Hydraulic sprayers consist of a tank, a pump, a lance (for single nozzles) or boom, and a nozzle (or multiple nozzles). Sprayers convert a pesticide formulation, often containing a mixture of water (or another liquid chemical carrier, such as fertilizer) and chemical, into droplets, which can be large rain-type drops or tiny almost-invisible particles. This conversion is accomplished by forcing the spray mixture through a spray nozzle under pressure. The size of droplets can be altered through the use of different nozzle sizes, or by altering the pressure under which it is forced, or a combination of both. Large droplets have the advantage of being less susceptible to spray drift, but require more water per unit of land covered. Due to static electricity, small droplets are able to maximize contact with a target organism, but very still wind conditions are required. [ 2 ] Traditional agricultural crop pesticides can either be applied pre-emergent or post-emergent, a term referring to the germination status of the plant. Pre-emergent pesticide application, in conventional agriculture, attempts to reduce competitive pressure on newly germinated plants by removing undesirable organisms and maximizing the amount of water, soil nutrients, and sunlight available for the crop. An example of pre-emergent pesticide application is atrazine application for corn. Similarly, glyphosate mixtures are often applied pre-emergent on agricultural fields to remove early-germinating weeds and prepare for subsequent crops. Pre-emergent application equipment often has large, wide tires designed to float on soft soil, minimizing both soil compaction and damage to planted (but not yet emerged) crops. 
A three-wheel application machine is designed so that tires do not follow the same path, minimizing the creation of ruts in the field and limiting sub-soil damage. Post-emergent pesticide application requires the use of specific chemicals chosen to minimize harm to the desirable organism. An example is 2,4-Dichlorophenoxyacetic acid, which will injure broadleaf weeds (dicots) but leave behind grasses (monocots). Such a chemical has been used extensively on wheat crops, for example. A number of companies have also created genetically modified organisms that are resistant to various pesticides. Examples include glyphosate-resistant soybeans and Bt maize, which change the types of formulations involved in addressing post-emergent pesticide pressure. It is also important to note that even with appropriate chemical choices, high ambient temperatures or other environmental influences can allow non-targeted desirable organisms to be damaged during application. As plants have already germinated, post-emergent pesticide application necessitates limited field contact in order to minimize losses due to crop and soil damage. Typical industrial application equipment will utilize very tall and narrow tires and combine this with a sprayer body which can be raised and lowered depending on crop height. These sprayers usually carry the label 'high-clearance' as they can rise over growing crops, although usually not much more than 1 or 2 meters high. In addition, these sprayers often have very wide booms in order to minimize the number of passes required over a field, again designed to limit crop damage and maximize efficiency. In industrial agriculture, spray booms 120 feet (37 meters) wide are not uncommon, especially in prairie agriculture with large, flat fields. Related to this, aerial pesticide application is a method of top dressing a pesticide to an emerged crop which eliminates physical contact with soil and crops. Air blast sprayers, also known as air-assisted or mist sprayers, are often used for tall crops, such as tree fruit, where boom sprayers and aerial application would be ineffective. These types of sprayers can only be used where overspray (spray drift) is less of a concern, either through the choice of a chemical which does not have undesirable effects on other desirable organisms, or by an adequate buffer distance. They can be used against insects, weeds, and other pests of crops, humans, and animals. Air blast sprayers introduce a small amount of liquid into a fast-moving stream of air, which breaks large droplets down into smaller particles. [ 5 ] Foggers fulfill a similar role to mist sprayers in producing particles of very small size, but use a different method. Whereas mist sprayers create a high-speed stream of air which can travel significant distances, foggers use a piston or bellows to create a stagnant area of pesticide that is often used for enclosed areas, such as houses and animal shelters. [ 6 ] In order to better understand the cause of spray inefficiency, it is useful to reflect on the implications of the large range of droplet sizes produced by typical (hydraulic) spray nozzles. This has long been recognized to be one of the most important concepts in spray application (e.g. Himel, 1969 [ 7 ]), bringing about enormous variations in the properties of droplets. Historically, dose-transfer to the biological target (i.e. the pest) has been shown to be inefficient. 
[ 8 ] However, relating "ideal" deposits to biological effect is fraught with difficulty, [ 9 ] but in spite of Hislop's misgivings about detail, there have been several demonstrations that massive amounts of pesticides are wasted by run-off from the crop and into the soil, in a process called endo-drift. This is a less familiar form of pesticide drift, with exo-drift causing much greater public concern. Pesticides are conventionally applied using hydraulic atomisers, either on hand-held sprayers or tractor booms, where formulations are mixed into high volumes of water. Different droplet sizes have dramatically different dispersal characteristics, and are subject to complex macro- and micro-climatic interactions (Bache & Johnstone, 1992). Greatly simplifying these interactions in terms of droplet size and wind speed, Craymer & Boyle [ 10 ] concluded that there are essentially three sets of conditions under which droplets move from the nozzle to the target. Herbicide volatilisation refers to evaporation or sublimation of a volatile herbicide. The effect of the gaseous chemical is lost at its intended place of application; it may move downwind and affect other plants not intended to be affected, causing crop damage. Herbicides vary in their susceptibility to volatilisation. Prompt incorporation of the herbicide into the soil may reduce or prevent volatilisation. Wind, temperature, and humidity also affect the rate of volatilisation, with humidity reducing it. 2,4-D and dicamba are commonly used chemicals that are known to be subject to volatilisation, [ 11 ] but there are many others. [ 12 ] Application of herbicides later in the season to protect herbicide-resistant genetically modified plants increases the risk of volatilisation as the temperature is higher and incorporation into the soil impractical. [ 11 ] In the 1970s and 1980s improved application technologies such as controlled droplet application (CDA) received extensive research interest, but commercial uptake has been disappointing. By controlling droplet size, ultra-low volume (ULV) or very low volume (VLV) application rates of pesticidal mixtures can achieve similar (or sometimes better) biological results through improved timing and dose-transfer to the biological target (i.e. the pest). No atomizer capable of producing uniform (monodisperse) droplets has been developed, but rotary (spinning disc and cage) atomizers usually produce a more uniform droplet size spectrum than conventional hydraulic nozzles (see: CDA & ULV application equipment). Other efficient application techniques include banding, baiting, specific granule placement, seed treatments and weed wiping. CDA is a good example of a rational pesticide use (RPU) technology (Bateman, 2003), but unfortunately has been unfashionable with public funding bodies since the early 1990s, with many believing that all pesticide development should be the responsibility of pesticide manufacturers. On the other hand, pesticide companies are unlikely to widely promote better targeting, and thus reduced pesticide sales, unless they can benefit by adding value to products in some other way. RPU contrasts dramatically with the promotion of pesticides, and many agrochemical concerns have become aware that product stewardship provides better long-term profitability than high-pressure salesmanship of a dwindling number of new "silver bullet" molecules. RPU may therefore provide an appropriate framework for collaboration between many of the stakeholders in crop protection. 
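To see why droplet size dominates dose-transfer, it helps to remember that for a fixed spray volume the number of droplets produced scales with the inverse cube of droplet diameter. The following minimal Python sketch illustrates that arithmetic only; the application rates and diameters are assumed example values, not figures from the works cited above.

# Illustrative only: droplet count and coverage for assumed spray volumes and droplet sizes.
import math

def droplets_per_litre(diameter_um):
    """Number of spherical droplets of the given diameter (micrometres) in one litre of spray."""
    d_m = diameter_um * 1e-6                    # diameter in metres
    droplet_volume = math.pi / 6 * d_m ** 3     # volume of one spherical droplet (m^3)
    return 1e-3 / droplet_volume                # one litre is 1e-3 m^3

for rate_l_per_ha, d_um in [(200, 400), (200, 100), (1, 100), (1, 50)]:
    droplets = rate_l_per_ha * droplets_per_litre(d_um)
    per_cm2 = droplets / 1e8                    # one hectare is 1e8 cm^2
    print(f"{rate_l_per_ha:>3} L/ha of {d_um:>3} um droplets -> about {per_cm2:,.0f} droplets per cm^2")

Halving the diameter yields eight times as many droplets from the same volume, which is the arithmetic behind controlled droplet application and ULV spraying achieving useful coverage from far smaller application volumes.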
Understanding the biology and life cycle of the pest is also an important factor in determining droplet size. The Agricultural Research Service, for example, has conducted tests to determine the ideal droplet size of a pesticide used to combat corn earworms. They found that in order to be effective, the pesticide needs to penetrate through the corn's silk, where the earworm's larvae hatch. The research concluded that larger pesticide droplets best penetrated the targeted corn silk. [ 13 ] Knowing where the pest's destruction originates is crucial in targeting the amount of pesticide needed. Testing sprayers and setting standards for application equipment is important to ensure that users get value for money. [ 14 ] Since most equipment uses hydraulic nozzles, various initiatives have attempted to classify spray quality, starting with the BCPC system. [ 15 ] [ 16 ] Roadsides receive substantial quantities of herbicides, both intentionally applied for their maintenance and due to herbicide drift from adjacent applications. This often kills off-target plants. [ 17 ] See: aerial spraying, ultra-low volume spray application, crop dusting and agricultural drones. Pest management in the home begins with restricting the availability to insects of three vital commodities: shelter, water and food. If insects become a problem despite such measures, it may become necessary to control them using chemical methods, targeting the active ingredient to the particular pest. [ 18 ] Insect repellent, commonly referred to as "bug spray", comes in a plastic bottle or aerosol can. Applied to clothing, arms, legs, and other extremities, these products tend to ward off nearby insects. A repellent is not an insecticide. Insecticide used for killing pests (most often insects and arachnids) primarily comes in an aerosol can, and is sprayed directly on the insect or its nest as a means of killing it. Fly sprays will kill house flies, blowflies, ants, cockroaches and other insects and also spiders. Other preparations are granules or liquids that are formulated with bait that is eaten by insects. For many household pests, bait traps are available that contain the pesticide and either pheromone or food baits. Crack and crevice sprays are applied into and around openings in houses such as baseboards and plumbing. Pesticides to control termites are often injected into and around the foundations of homes. Active ingredients of many household insecticides include permethrin and tetramethrin, which act on the nervous system of insects and arachnids. Bug sprays should be used in well-ventilated areas only, as the chemicals contained in the aerosol and most insecticides can be harmful or deadly to humans and pets. All insecticide products, including solids, baits and bait traps, should be applied such that they are out of reach of wildlife, pets and children.
https://en.wikipedia.org/wiki/Pesticide_application
A pesticide detection kit is a scientific test kit that detects the presence of pesticide residues. Various organizations produce them, among them the Defence Food Research Laboratory of India. [ 1 ] This biochemistry article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Pesticide_detection_kit
Pesticide drift, also known as spray drift, is the unintentional diffusion of pesticides toward nontarget species. It is one of the most negative effects of pesticide application. Drift can damage human health, the environment, and crops. [ 1 ] [ 2 ] Together with runoff and leaching, drift is a mechanism for agricultural pollution. [ 3 ] Some drift results from contamination of sprayer tanks. [ 4 ] Farmers struggle to minimize pesticide drift and remain productive. [ 5 ] Research continues on developing pesticides that are more selective, [ 6 ] but the current pesticides have been highly optimized. Pesticides are commonly applied by the use of mechanical sprayers. Sprayers convert a pesticide formulation, often consisting of a mixture of water, the pesticide, and other components (adjuvants, for example), into droplets, which are applied to the crop. Ideally, the pesticide droplets attach evenly to the targeted crop. Because components of the mist are highly mobile, spray drift can occur, especially for smaller droplets. Some pesticide mists are visible, appearing cloud-like, while others can be invisible and odorless. [ 7 ] [ 8 ] The quality of sprayer equipment affects drift problems. [ 9 ] [ 10 ] Sprayer tanks contaminated with another herbicide are one source of drift. [ 4 ] With placement (localised) spraying of broad spectrum pesticides, considerable efforts have been made to quantify and control spray drift from hydraulic nozzles. [ 11 ] Conversely, wind drift is also an efficient mechanism for moving droplets of an appropriate size range to their targets over a wide area with ultra-low volume (ULV) spraying. [ 12 ] "Drift retardants" are compounds added to the spray mixture to suppress pesticide drift. A typical retardant is polyacrylamide. These polymers suppress the formation of tiny droplets. [ 13 ] Weather conditions and timing affect the drift problem. [ 4 ] The efficiency of the spray and the reach of the spray drift can be computed. [ 14 ] In addition to weather, windbreaks can mitigate the effects of drift. [ 15 ] Other ways to mitigate spray drift are to apply the pesticide directly to the desired treatment area and to pay attention to where surface waters, gutters, drainage ditches, and storm drains are located. This is to make sure that the pesticide is applied in a way that prevents it from getting into these spaces. [ 16 ] Most herbicides are organic compounds of low volatility, unlike fumigants, which are usually gases. Several are salts and others have boiling points above 100 °C (Dicamba is a solid that melts at 114 °C). Thus, drift often entails mobilization of droplets, which can be very small. The contribution from their volatility, low as it is, cannot be ignored either. [ 17 ] A distinction has been made between "exo-drift" (the transfer of spray out of the target area) and endo-drift, where the active ingredient (AI) in droplets falls into the target area, but does not reach the biological target. "Endo-drift" is volumetrically more significant and may therefore cause greater ecological contamination (e.g. where chemical pesticides pollute ground water). [ 18 ] Since drift can be problematic, alternative weed-control technologies have evolved. One such approach is integrated pest management, which involves fewer chemicals but often greater manual work. [ 19 ] Dicamba drift is a particular problem, as has been recognized since at least 1979. [ 20 ] The effects have been noted for many crops: grapes, tomatoes, soybeans. 
[ 21 ] [ 22 ] In 2017, Dicamba-resistant soybeans and cotton were approved for use in the US. This new technology worsened the drift problem because farmers could use Dicamba more freely. [ 23 ] Although already low in volatility, as discussed above, Dicamba can be made even less volatile by conversion to various salts. The approach entails treatment of Dicamba with amines, which form ammonium salts. These salts are described by their acronyms BAPMA-Dicamba and DGA-Dicamba. Although these salts are of lower volatility in laboratory tests, in the field the situation is more complicated, and drift remains a problem. [ 17 ] While much public concern has led to research into spray drift, point-source pollution (e.g. pesticides entering bodies of water following spillage of concentrate or rinsate) can also cause environmental harm. [ 24 ] Public concern about pesticide drift has not been matched by a regulatory response. [ 18 ] Farm workers and communities surrounding large farms are at a high risk of coming in contact with pesticides. People in agricultural areas are at risk for increased genotoxicity because of pesticide drift. [ 25 ] [ 26 ] Insecticides sprayed on crop fields can also have detrimental effects on non-human lifeforms, such as bees and other insects, that are important to the surrounding ecosystems. [ 27 ] The seriousness of crop injury caused by dicamba drift is increasingly being recognized. For example, the American Soybean Association and various land-grant universities are cooperating in the race to find ways to preserve the usability of dicamba while ending drift injury. [ 28 ] Application of herbicides later in the season to protect herbicide-resistant genetically modified plants increases the risk of volatilisation as the temperature is higher and incorporation into the soil impractical. [ 7 ] A study published in Environmental Health Perspectives found nearly 3,000 cases of pesticide drift from 1998 to 2006; nearly half of those affected were workers on the fields treated with pesticides and 14% of cases were children under the age of 15. [ 29 ] Bystander exposure describes the situation in which individuals unintentionally come into contact with airborne pesticides. Bystanders include workers working in an area separate from the pesticide application area, individuals living in the surrounding areas of an application area, or individuals passing by fields as they are being treated with a pesticide. [ 30 ] Different pesticides can affect different body systems, inflicting different symptoms. [ 31 ] Pesticides can have long-term negative health impacts, including cancer, lung diseases, fertility and reproductive problems, and neurodevelopmental issues in children, when exposure levels are high enough. [ 32 ] In 2001, the United States Environmental Protection Agency published guidance for "manufacturers, formulators, and registrants of pesticide products" (EPA 2001) [ 33 ] that stated the EPA's stance against pesticide drift as well as suggested product labelling practices. To try to reduce pesticide drift, the EPA is a part of several initiatives. The EPA carries out routine pesticide risk assessments to check the potential drift impact on farmworkers, people living near or on fields where crops are grown, water sources, and the environment. [ 34 ] The USDA and EPA are working together to examine new studies and how to improve scientific models to estimate the exposure, risk, and drift of pesticides. 
[ 34 ] The EPA is also working with pesticide manufacturers to ensure that labels are easy to read and contain the correct application process and drift reduction technologies (DRT) for that specific pesticide. [ 35 ] [ 36 ]
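As noted above, the reach of spray drift can be computed from droplet size, release height and wind speed. The sketch below is a rough, simplified illustration of that calculation using Stokes' law for the settling velocity of small water droplets in air; the release height and wind speed are assumed example values, and evaporation, turbulence and canopy effects are ignored.

# Rough drift-distance estimate for small water droplets in air (illustrative only).
# Stokes' law is a reasonable approximation only for small droplets (low Reynolds number).
RHO_WATER = 1000.0     # kg/m^3, density of a water-based spray droplet
RHO_AIR = 1.2          # kg/m^3, density of air
MU_AIR = 1.8e-5        # Pa*s, dynamic viscosity of air
G = 9.81               # m/s^2, gravitational acceleration

def settling_velocity(diameter_um):
    """Stokes' law terminal settling velocity (m/s) for a droplet of given diameter in micrometres."""
    d = diameter_um * 1e-6
    return (RHO_WATER - RHO_AIR) * G * d ** 2 / (18 * MU_AIR)

def drift_distance(diameter_um, release_height_m=1.0, wind_speed_m_s=2.0):
    """Horizontal distance travelled while the droplet falls from the release height to the ground."""
    return wind_speed_m_s * release_height_m / settling_velocity(diameter_um)

for d in (200, 100, 50, 20):
    print(f"{d:>3} um droplet: settles at {settling_velocity(d):.3f} m/s, "
          f"drifts roughly {drift_distance(d):.0f} m in a 2 m/s wind from 1 m height")

Under these assumptions a 200 micrometre droplet lands within a metre or two of the nozzle, while a 20 micrometre droplet can travel on the order of a hundred metres, which is why fine droplets are both efficient carriers in ULV work and the main contributors to exo-drift.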
https://en.wikipedia.org/wiki/Pesticide_drift
A pesticide poisoning occurs when pesticides , chemicals intended to control a pest , affect non-target organisms such as humans, wildlife, plants, or bees . There are three types of pesticide poisoning. The first of the three is a single and short-term very high level of exposure which can be experienced by individuals who die by suicide, as well as pesticide formulators. The second type of poisoning is long-term high-level exposure, which can occur in pesticide formulators and manufacturers. The third type of poisoning is a long-term low-level exposure, which individuals are exposed to from sources such as pesticide residues in food as well as contact with pesticide residues in the air, water, soil, sediment, food materials, plants and animals. [ 1 ] [ 2 ] [ 3 ] [ 4 ] In developing countries, such as Sri Lanka , pesticide poisonings from short-term very high level of exposure (acute poisoning) is the most worrisome type of poisoning. However, in developed countries, such as Canada, it is the complete opposite: acute pesticide poisoning is controlled, thus making the main issue long-term low-level exposure of pesticides. [ 5 ] The most common exposure scenarios for pesticide-poisoning cases are accidental or suicidal poisonings, occupational exposure , by-stander exposure to off-target drift, and the general public who are exposed through environmental contamination. [ 6 ] Self-poisoning with agricultural pesticides represents a major hidden public health problem accounting for approximately one-third of all suicides worldwide. [ 8 ] It is one of the most common forms of self-injury in the Global South. The World Health Organization estimates that 300,000 people die from self-harm each year in the Asia-Pacific region alone. [ 9 ] Most cases of intentional pesticide poisoning appear to be impulsive acts undertaken during stressful events, and the availability of pesticides strongly influences the incidence of self poisoning. Pesticides are the agents most frequently used by farmers and students in India to commit suicide. [ 10 ] The overall case fatality rate for suicide attempts using pesticide is about 10–20%. [ 11 ] Pesticide poisoning is an important occupational health issue because pesticides are used in a large number of industries, which puts many different categories of workers at risk. Extensive use puts agricultural workers in particular at increased risk for pesticide illnesses. [ 12 ] [ 13 ] [ 14 ] Exposure can occur through inhalation of pesticide fumes, and often occurs in settings including greenhouse spraying operations and other closed environments like tractor cabs or while operating rotary fan mist sprayers in facilities or locations with poor ventilation systems. [ 15 ] Workers in other industries are at risk for exposure as well. [ 13 ] [ 14 ] For example, commercial availability of pesticides in stores puts retail workers at risk for exposure and illness when they handle pesticide products. [ 16 ] The ubiquity of pesticides puts emergency responders such as fire-fighters and police officers at risk, because they are often the first responders to emergency events and may be unaware of the presence of a poisoning hazard. [ 17 ] The process of aircraft disinsection , in which pesticides are used on inbound international flights for insect and disease control, can also make flight attendants sick. [ 18 ] [ 19 ] Different job functions can lead to different levels of exposure. 
[ 6 ] Most occupational exposures are caused by absorption through exposed skin such as the face, hands, forearms, neck, and chest. This exposure is sometimes enhanced by inhalation in settings including spraying operations in greenhouses and other closed environments, tractor cabs, and the operation of rotary fan mist sprayers. [ 15 ] The majority of households in Canada use pesticides while taking part in activities such as gardening. In Canada, 96 percent of households report having a lawn or a garden. [ 20 ] 56 percent of the households who have a lawn or a garden utilize fertilizer or pesticide. [ 20 ] This form of pesticide use may contribute to the third type of poisoning, which is caused by long-term low-level exposure. [ 21 ] As mentioned before, long-term low-level exposure affects individuals from sources such as pesticide residues in food as well as contact with pesticide residues in the air, water, soil, sediment, food materials, plants and animals. [ 21 ] The organochlorine pesticides, like DDT , aldrin , and dieldrin , are extremely persistent and accumulate in fatty tissue. Through the process of bioaccumulation (lower amounts in the environment get magnified sequentially up the food chain), large amounts of organochlorines can accumulate in top species like humans. [ citation needed ] There is substantial evidence to suggest that DDT, and its metabolite DDE , act as endocrine disruptors , interfering with hormonal function of estrogen, testosterone, and other steroid hormones. [ citation needed ] Cholinesterase-inhibiting pesticides, also known as organophosphates , carbamates , and anticholinesterases, are most commonly reported in occupationally related pesticide poisonings globally. [ 22 ] Besides acute symptoms including cholinergic crisis , certain organophosphates have long been known to cause a delayed-onset toxicity to nerve cells, which is often irreversible. Several studies have shown persistent deficits in cognitive function in workers chronically exposed to pesticides. [ 23 ] Most pesticide-related illnesses have signs and symptoms that are similar to common medical conditions, so a complete and detailed environmental and occupational history is essential for correctly diagnosing a pesticide poisoning. A few additional screening questions about the patient's work and home environment, in addition to a typical health questionnaire, can indicate whether there was a potential pesticide poisoning. [ 24 ] If one is regularly using carbamate and organophosphate pesticides, it is important to obtain a baseline cholinesterase test. [ 25 ] [ 26 ] Cholinesterase is an important enzyme of the nervous system, and these chemical groups kill pests and potentially injure or kill humans by inhibiting cholinesterase . If one has had a baseline test and later suspects a poisoning, one can identify the extent of the problem by comparison of the current cholinesterase level with the baseline level. [ citation needed ] Accidental poisonings can be avoided by proper labeling and storage of containers. When handling or applying pesticides, exposure can be significantly reduced by protecting certain parts of the body where the skin shows increased absorption, such as the scrotal region, underarms, face, scalp, and hands. [ 27 ] Safety protocols to reduce exposure include the use of personal protective equipment , washing hands and exposed skin during as well as after work, changing clothes between work shifts, and having first aid trainings and protocols in place for workers. 
[ 28 ] [ 29 ] Personal protective equipment for preventing pesticide exposure includes the use of a respirator, goggles, and protective clothing, which have all been shown to reduce the risk of developing pesticide-induced diseases when handling pesticides. [ 28 ] A study found the risk of acute pesticide poisoning was reduced by 55% in farmers who adopted extra personal protective measures and were educated about both protective equipment and pesticide exposure risk. [ 28 ] Using chemical-resistant gloves has been shown to reduce contamination by 33–86%. [ 30 ] Use of genetically modified crops has led to a significant reduction in pesticide poisoning, as these require significantly less pesticide application. In India alone, a reduction of 2.4–9 million cases per year was observed after widespread adoption of Bt cotton, with similar reductions reported in China, Pakistan and other countries. [ 31 ] Specific treatments for acute pesticide poisoning are often dependent on the pesticide or class of pesticide responsible for the poisoning. However, there are basic management techniques that are applicable to most acute poisonings, including skin decontamination, airway protection, gastrointestinal decontamination, and seizure treatment. [ 24 ] Decontamination of the skin is performed while other life-saving measures are taking place. Clothing is removed, the patient is showered with soap and water, and the hair is shampooed to remove chemicals from the skin and hair. The eyes are flushed with water for 10–15 minutes. The patient is intubated and oxygen administered, if necessary. In more severe cases, pulmonary ventilation must sometimes be supported mechanically. [ a ] Seizures are typically managed with lorazepam, phenytoin and phenobarbital, or diazepam (particularly for organochlorine poisonings). [ 24 ] Gastric lavage is not recommended to be used routinely in pesticide poisoning management, as clinical benefit has not been confirmed in controlled studies; it is indicated only when the patient has ingested a potentially life-threatening amount of poison and presents within 60 minutes of ingestion. [ 32 ] An orogastric tube is inserted and the stomach is flushed with saline to try to remove the poison. If the patient is neurologically impaired, a cuffed endotracheal tube is inserted beforehand for airway protection. [ 24 ] Studies of poison recovery at 60 minutes have shown recovery of 8–32%. [ 33 ] [ 34 ] However, there is also evidence that lavage may flush the material into the small intestine, increasing absorption. [ 35 ] Lavage is contra-indicated in cases of hydrocarbon ingestion. [ 24 ] Activated charcoal is sometimes administered, as it has been shown to be successful with some pesticides, but it is not effective for malathion poisoning. [ 36 ] Studies have shown that it can reduce the amount absorbed if given within 60 minutes, [ 37 ] though there is not enough data to determine if it is effective if time from ingestion is prolonged. Syrup of ipecac is not recommended for most pesticide poisonings because of potential interference with other antidotes and regurgitation increasing exposure of the esophagus and oral area to the pesticide. 
[ 38 ] Urinary alkalinisation has been used in acute poisonings from chlorophenoxy herbicides (such as 2,4-D, MCPA, 2,4,5-T and mecoprop); however, evidence to support its use is poor. [ 39 ] Acute pesticide poisoning is a large-scale problem, especially in developing countries. "Most estimates concerning the extent of acute pesticide poisoning have been based on data from hospital admissions which would include only the more serious cases. The latest estimate by a WHO task group indicates that there may be 1 million serious unintentional poisonings each year and in addition 2 million people hospitalized for suicide attempts with pesticides. This necessarily reflects only a fraction of the real problem. On the basis of a survey of self-reported minor poisoning carried out in the Asian region, it is estimated that there could be as many as 25 million agricultural workers in the developing world suffering an episode of poisoning each year." [ 5 ] In Canada in 2007, more than 6,000 cases of acute pesticide poisoning occurred. [ 40 ] Estimating the numbers of chronic poisonings worldwide is more difficult. Pesticides contain many toxic chemicals that affect farmers for many years. Farm workers are greatly affected, and though they receive treatment once exposed, they may have to deal with other health issues even years after the incident. [ 41 ] The long-term effects of pesticide exposure include birth defects, miscarriages, infertility in both men and women, and neurological diseases such as Parkinson's disease, amyotrophic lateral sclerosis (ALS), and dementia-like diseases. [ 42 ] [ 43 ] [ 44 ] Other long-term effects are different types of cancers, such as lung cancer, prostate cancer, stomach cancer, breast cancer, and kidney cancer. Farmers and everyone in areas affected by pesticide poisoning are exposed and at risk of these long-term effects. [ 45 ] The neurotoxicity of certain pesticides has been implicated as a potential contributing factor to the development of neurodegenerative diseases, raising concerns about their long-term impact on human health. Children have been shown to be more susceptible than adults to the developmental toxicity of pesticides, and compounding stressors or other environmental factors can further increase their sensitivity. [ 46 ] Small pesticide exposures have been shown to have an impact on young children's neurological and behavioral development. [ 47 ] Researchers have studied the effects of pesticides on children as opposed to adults, finding children's immature organs and bodies are more susceptible to health effects. [ 47 ] As a result, it is more difficult for children to break down and remove pesticide metabolites. [ 47 ] Pesticide metabolites present in children can further harm their health by hindering the body's ability to absorb vital nutrients from food. [ 47 ] Rachel Carson's 1962 environmental science book Silent Spring brought about the first major wave of public concern over the chronic effects of pesticides. Those who reside close to agricultural land are negatively impacted by pesticide drift. [ 48 ] This occurs when pesticide chemicals travel to nearby areas, leading to exposure to highly toxic airborne chemicals. [ 48 ] Pesticide drift is not an isolated occurrence; it happens routinely to those working in the fields and to farm-working neighborhoods close to industrial farming. 
[ 48 ] An obvious side effect of using a chemical meant to kill is that one is likely to kill more than just the desired organism. Contact with a sprayed plant or "weed" can have an effect upon local wildlife, most notably insects. A cause for concern is how pests, the reason for pesticide use, are building up resistance. Phytophagous insects are able to build up this resistance because they are easily capable of evolutionary diversification and adaptation. [ 49 ] The problem this presents is that, in order to obtain the same desired effect, pesticides have to be made increasingly stronger as time goes on. The use of stronger pesticides on vegetation not only harms the surrounding environment but also contributes to consumers' long-term low-level exposure.
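As described in the diagnosis section above, a suspected organophosphate or carbamate poisoning can be gauged by comparing a current cholinesterase result with the worker's baseline value. The following minimal sketch only illustrates that arithmetic; the numbers and the 20% threshold are invented for the example and are not a clinical guideline.

# Illustrative comparison of a cholinesterase result against a baseline value.
# Both results must come from the same laboratory method and units; the values and
# threshold below are assumptions for the example, not medical advice.
def cholinesterase_depression(baseline, current):
    """Percent depression of cholinesterase activity relative to the baseline result."""
    return 100.0 * (baseline - current) / baseline

baseline_activity = 10.0   # hypothetical baseline result
current_activity = 6.5     # hypothetical follow-up result
depression = cholinesterase_depression(baseline_activity, current_activity)
print(f"Cholinesterase depression: {depression:.0f}%")
if depression >= 20:
    print("Depression exceeds the example threshold; exposure should be investigated.")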
https://en.wikipedia.org/wiki/Pesticide_poisoning
Pesticide residue refers to the pesticides that may remain on or in food, after they are applied to food crops . [ 1 ] The maximum allowable levels of these residues in foods are stipulated by regulatory bodies in many countries. Regulations such as pre-harvest intervals also prevent harvest of crop or livestock products if recently treated in order to allow residue concentrations to decrease over time to safe levels before harvest. [ 2 ] A pesticide is a substance or a mixture of substances used for killing pests : organisms dangerous to cultivated plants or to animals. The term applies to various pesticides such as insecticides , fungicides , herbicides and nematocides . [ 3 ] The definition of residue of pesticide according to the World Health Organization (WHO) is: Any specified substances in or on food, agricultural commodities or animal feed resulting from the use of a pesticide. The term includes any derivatives of a pesticide, such as conversion products, metabolites, reaction products and impurities considered to be of toxicological significance. The term “pesticide residue” includes residues from unknown or unavoidable sources (e.g. environmental) as well as known uses of the chemical. The definition of a residue for compliance with maximum residue limits (MRLs) is that combination of the pesticide and its metabolites, derivatives and related compounds to which the MRL applies. [ 4 ] Prior to 1940, pesticides consisted of inorganic compounds (copper, arsenic , mercury , and lead ) and plant derived products. Most of these were abandoned because they were highly toxic and ineffective. Since World War II pesticides composed of synthetic organic compounds were the most important form of pest control . The growth in these pesticides accelerated in late 1940s after Paul Müller discovered DDT in 1939. The effects of pesticides such as aldrin , dieldrin , endrin , chlordane , parathion , captan and 2,4-D were also found at this time. [ 5 ] [ 6 ] Those pesticides were widely used due to their effective pest control. Problems with environmental issues of DDT became increasingly apparent, since it is persistent and bioaccumulates in the body and the food chain . [ 5 ] In the 1960s, Rachel Carson wrote Silent Spring to illustrate a risk of DDT and how it threatened biodiversity . [ 7 ] DDT was banned for agricultural use in 1972 and the others in 2001 . Persistent pesticides are no longer used for agriculture, and will not be approved by the authorities. [ 8 ] [ 9 ] Because the half life in soil is long (for DDT 2–15 years [ 10 ] ) residues can still be detected in humans at levels 5 to 10 times lower than found in the 1970s. [ 11 ] Each country adopts their own agricultural policies and Maximum Residue Limits (MRL) and Acceptable Daily Intake (ADI). The level of food additive usage varies by country because forms of agriculture are different in regions according to their geographical or climatical factors. [ citation needed ] Pre-harvest intervals are also set to require a crop or livestock product not be harvested before a certain period after application in order to allow the pesticide residue to decrease below maximum residue limits or other tolerance levels. [ 12 ] Likewise, restricted entry intervals are the amount of time to allow residue concentrations to decrease before a worker can reenter without protective equipment an area where pesticides have been applied. 
[ 13 ] Some countries use the International Maximum Residue Limits - Codex Alimentarius to define the residue limits; this was established by Food and Agriculture Organization of the United Nations (FAO) and World Health Organization (WHO) in 1963 to develop international food standards, guidelines codes of practices, and recommendation for food safety. Currently the CODEX has 185 Member Countries and 1 member organization (EU). [ 14 ] The following is the list of maximum residue limits (MRLs) for spices adopted by the commission. [ 15 ] The European Union has a searchable database with the Maximum Residue Limits (MRLs) for 716 pesticides. Under the previous system, revised in 2008, certain pesticide residues were regulated by the commission; others were regulated by Member States, and others were not regulated at all. [ 16 ] Food Standards Australia New Zealand develops the standards for levels of pesticide residues in foods through a consultation process. The New Zealand Food Safety Authority publishes the maximum limits of pesticide residues for foods produced in New Zealand. [ 17 ] Monitoring of pesticide residues in the UK began in the 1950s. From 1977 to 2000 the work was carried out by the Working Party on Pesticide Residues (WPPR), until in 2000 the work was taken over by the Pesticide Residue Committee (PRC). The PRC advise the government through the Pesticides Safety Directorate and the Food Standards Agency (FSA). [ 18 ] In the US, tolerances for the amount of pesticide residue that may remain on food are set by the EPA , and measures are taken to keep pesticide residues below the tolerances. The US EPA has a web page for the allowable tolerances. [ 19 ] In order to assess the risks associated with pesticides on human health, the EPA analyzed individual pesticide active ingredients as well as the common toxic effect that groups of pesticides have, called the cumulative risk assessment. Limits that the EPA sets on pesticides before approving them includes a determination of how often the pesticide should be used and how it should be used, in order to protect the public and the environment. [ 20 ] In the US, the Food and Drug Administration (FDA) and USDA also routinely check food for the actual levels of pesticide residues. [ 21 ] A US organic food advocacy group, the Environmental Working Group , is known for creating a list of fruits and vegetables referred to as the Dirty Dozen; it lists produce with the highest number of distinct pesticide residues or most samples with residue detected in USDA data. This list is generally considered misleading and lacks scientific credibility because it lists detections without accounting for the risk of the usually small amount of each residue with respect to consumer health. [ 22 ] [ 23 ] [ 24 ] In 2016, over 99% of samples of US produce had no pesticide residue or had residue levels well below the EPA tolerance levels for each pesticide. [ 21 ] In Japan, pesticide residues are regulated by the Food Safety Act . Pesticide tolerances are set by the Ministry of Health, Labour and Welfare through the Drug and Food Safety Committee. Unlisted residue amounts are restricted to 0.01ppm. [ 25 ] In China, the Ministry of Health and the Ministry of Agriculture have jointly established mechanisms and working procedures relating to maximum residue limit standards, while updating them continuously, according to the food safety law and regulations issued by the State Council . 
[ 26 ] [ 27 ] From GB25193-2010 [ 28 ] to GB28260-2011, [ 29 ] from Maximum Residue Limits for 12 Pesticides to 85 pesticides, they have improved the standards in response to Chinese national needs. The maximum residue limits of pesticides in food are low, and are carefully set by the authorities to ensure, to their best judgement, no health impacts. [ 30 ] According to the American Cancer Society there is no evidence that pesticide residues in food increase the risk of people getting cancer. [ 31 ] The ACS advises washing fruit and vegetables before eating to remove both pesticide residue and other undesirable contaminants. [ 31 ] A 2009 study estimated that lifetime exposure to pesticide residues from eating fruits and vegetables results in only 4.2 and 3.2 minutes of lost life per person in Switzerland and the United States, respectively. [ 32 ] There are many studies on the health differences between consumers of organically grown foods and consumers of conventionally grown foods. When the American Academy of Pediatrics reviewed the literature on organic foods in 2012, they found that "current evidence does not support any meaningful nutritional benefits or deficits from eating organic compared with conventionally grown foods, and there are no well-powered human studies that directly demonstrate health benefits or disease protection as a result of consuming an organic diet." [ 33 ] In China, a number of incidents have occurred where state limits were exceeded by large amounts or where the wrong pesticide was used. In August 1994, a serious incident of pesticide poisoning of sweet potato crops occurred in Shandong province, China. Because local farmers were not fully educated in the use of insecticides, they used the highly toxic pesticide parathion instead of trichlorphon. It resulted in over 300 cases of poisoning and 3 deaths. Also, there was a case where a large number of students were poisoned and 23 of them were hospitalized because of vegetables that contained excessive pesticide residues. [ 34 ] Many pesticides achieve their intended use of killing pests by disrupting the nervous system. Due to similarities in brain biochemistry among many different organisms, there is much speculation that these chemicals can have a negative impact on humans as well. [ 35 ] Children are especially vulnerable to pesticide exposure, particularly at critical windows of development. Infants and children consume higher amounts of food relative to their body weight, and have a more permeable blood–brain barrier, all of which can contribute to increased risks from exposure to pesticide residues. [ 36 ] However, in 2008 the OECD reported that the existing guideline represents the best available science for assessing the potential for developmental neurotoxicity in human health risk assessment. [ 37 ]
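The pre-harvest intervals described above rely on residues declining roughly exponentially after application. The following minimal sketch shows that reasoning with first-order decay; the initial deposit, half-life and maximum residue limit are invented example values, not data for any real pesticide or crop.

# Illustrative first-order decay of a residue toward a maximum residue limit (MRL).
# All numbers are hypothetical; real dissipation rates depend on the pesticide, the crop
# and the conditions, and are determined experimentally for regulatory purposes.
import math

initial_residue_mg_kg = 2.0    # hypothetical deposit on the crop just after application
half_life_days = 5.0           # hypothetical dissipation half-life on the crop
mrl_mg_kg = 0.1                # hypothetical maximum residue limit

def residue_after(days):
    """Residue remaining (mg/kg) after the given number of days, assuming first-order decay."""
    return initial_residue_mg_kg * 0.5 ** (days / half_life_days)

# Days needed for the residue to fall below the MRL: t = half_life * log2(initial / MRL)
days_needed = half_life_days * math.log(initial_residue_mg_kg / mrl_mg_kg, 2)
print(f"Residue after 7 days: {residue_after(7):.2f} mg/kg")
print(f"Falls below the MRL of {mrl_mg_kg} mg/kg after about {days_needed:.1f} days")

The same arithmetic, with a half-life measured in years rather than days, is why residues of persistent organochlorines such as DDT remain detectable decades after their agricultural use ended.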
https://en.wikipedia.org/wiki/Pesticide_residue
Pesticide resistance describes the decreased susceptibility of a pest population to a pesticide that was previously effective at controlling the pest. Pest species evolve pesticide resistance via natural selection: the most resistant specimens survive and pass on their heritable traits to their offspring. [ 1 ] If a pest has resistance then that will reduce the pesticide's efficacy – efficacy and resistance are inversely related. [ 2 ] Cases of resistance have been reported in all classes of pests (i.e. crop diseases, weeds, rodents, etc.), with 'crises' in insect control occurring early on after the introduction of pesticide use in the 20th century. The Insecticide Resistance Action Committee (IRAC) definition of insecticide resistance is 'a heritable change in the sensitivity of a pest population that is reflected in the repeated failure of a product to achieve the expected level of control when used according to the label recommendation for that pest species'. [ 3 ] Pesticide resistance is increasing. Farmers in the US lost 7% of their crops to pests in the 1940s; over the 1980s and 1990s, the loss was 13%, even though more pesticides were being used. [ 1 ] Over 500 species of pests have evolved a resistance to a pesticide. [ 4 ] Other sources estimate the number to be around 1,000 species since 1945. [ 5 ] Although the evolution of pesticide resistance is usually discussed as a result of pesticide use, it is important to keep in mind that pest populations can also adapt to non-chemical methods of control. For example, the northern corn rootworm ( Diabrotica barberi ) became adapted to a corn-soybean crop rotation by spending the year when the field is planted with soybeans in a diapause. [ 6 ] As of 2014, few new weed killers are near commercialization, and none with a novel, resistance-free mode of action. [ 7 ] Similarly, as of January 2019, discovery of new insecticides is more expensive and difficult than ever. [ 8 ] Pesticide resistance probably stems from multiple factors. Resistance has evolved in multiple species: resistance to insecticides was first documented by A. L. Melander in 1914, when scale insects demonstrated resistance to an inorganic insecticide. Between 1914 and 1946, 11 additional cases were recorded. The development of organic insecticides, such as DDT, gave hope that insecticide resistance was a dead issue. However, by 1947 housefly resistance to DDT had evolved. With the introduction of every new insecticide class – cyclodienes, carbamates, formamidines, organophosphates, pyrethroids, even Bacillus thuringiensis – cases of resistance surfaced within two to 20 years. Insecticides are widely used across the world to increase agricultural productivity and quality in vegetables and grains (and, to a lesser degree, for vector control in livestock). The resulting resistance has reduced their effectiveness for those very purposes, as well as for vector control in humans. [ 26 ] Pests become resistant by evolving physiological changes that protect them from the chemical. [ 12 ] One protection mechanism is to increase the number of copies of a gene, allowing the organism to produce more of a protective enzyme that breaks the pesticide into less toxic chemicals. [ 12 ] Such enzymes include esterases, glutathione transferases, aryldialkylphosphatase and mixed microsomal oxidases (oxidases expressed within microsomes). 
[ 12 ] Alternatively, the number and/or sensitivity of biochemical receptors that bind to the pesticide may be reduced. [ 12 ] Behavioral resistance has been described for some chemicals. For example, some Anopheles mosquitoes evolved a preference for resting outside that kept them away from pesticide sprayed on interior walls. [ 27 ] Resistance may involve rapid excretion of toxins, secretion of them within the body away from vulnerable tissues and decreased penetration through the body wall. [ 28 ] Mutation in only a single gene can lead to the evolution of a resistant organism. In other cases, multiple genes are involved. Resistance genes are usually autosomal. This means that they are located on autosomes (as opposed to allosomes, also known as sex chromosomes). As a result, resistance is inherited similarly in males and females. Also, resistance is usually inherited as an incompletely dominant trait. When a resistant individual mates with a susceptible individual, their progeny generally has a level of resistance intermediate between the parents. [ citation needed ] Adaptation to pesticides comes with an evolutionary cost, usually decreasing relative fitness of organisms in the absence of pesticides. Resistant individuals often have reduced reproductive output, life expectancy, mobility, etc. Non-resistant individuals sometimes grow in frequency in the absence of pesticides – but not always [ 29 ] – so withholding the pesticide to allow susceptible individuals to recover is one approach being tried to combat resistance. [ 30 ] Blowfly maggots produce an enzyme that confers resistance to organochloride insecticides. Scientists have researched ways to use this enzyme to break down pesticides in the environment, which would detoxify them and prevent harmful environmental effects. A similar enzyme produced by soil bacteria that also breaks down organochlorides works faster and remains stable in a variety of conditions. [ 31 ] Resistance to gene drive forms of population control is expected to occur and methods of slowing its development are being studied. [ 32 ] The molecular mechanisms of insecticide resistance only began to be understood in 1997, when Guerrero et al. used the newest methods of the time to find mutations producing pyrethroid resistance in dipterans. Even so, these adaptations to pesticides were unusually rapid and may not necessarily represent the norm in wild populations, under wild conditions. Natural adaptation processes take much longer and almost always happen in response to gentler pressures. [ 33 ] In order to remediate the problem, it must first be ascertained what is really wrong. Assaying of suspected pesticide resistance – and not merely field observation and experience – is necessary because resistance may be mistaken for failure to apply the pesticide as directed, or for microbial degradation of the pesticide. [ 34 ] The United Nations' World Health Organization established the Worldwide Insecticide Resistance Network in March 2016, [ 35 ] [ 36 ] [ 37 ] [ 38 ] due to increasing need and increasing recognition, including the radical decline in function against pests of vegetables. [ 35 ] [ 36 ] [ 37 ] [ 38 ] Integrated pest management (IPM) provides a balanced approach to minimizing resistance. Resistance can be managed by reducing use of a pesticide, which may also be beneficial for mitigating pest resurgence. This allows non-resistant organisms to out-compete resistant strains. They can later be killed by returning to use of the pesticide. 
A complementary approach is to site untreated refuges near treated croplands where susceptible pests can survive. [ 39 ] [ 40 ] When pesticides are the sole or predominant method of pest control, resistance is commonly managed through pesticide rotation. This involves switching among pesticide classes with different modes of action to delay or mitigate pest resistance. [ 41 ] The Resistance Action Committees monitor resistance across the world, and in order to do that, each maintains a list of modes of action and pesticides that fall into those categories: the Fungicide Resistance Action Committee , [ 42 ] the Weed Science Society of America [ 43 ] [ 44 ] (the Herbicide Resistance Action Committee no longer has its own scheme, and is contributing to WSSA's from now on), [ 45 ] and the Insecticide Resistance Action Committee . [ 46 ] The U.S. Environmental Protection Agency (EPA) also uses those classification schemes. [ 47 ] Manufacturers may recommend no more than a specified number of consecutive applications of a pesticide class be made before moving to a different pesticide class. [ 48 ] Two or more pesticides with different modes of action can be tankmixed on the farm to improve results and delay or mitigate existing pest resistance. [ 39 ] Glyphosate -resistant weeds are now present in the vast majority of soybean , cotton , and corn farms in some U.S. states. Weeds resistant to multiple herbicide modes of action are also on the rise. [ 7 ] Before glyphosate, most herbicides would kill a limited number of weed species, forcing farmers to continually rotate their crops and herbicides to prevent resistance. Glyphosate disrupts the ability of most plants to construct new proteins. Glyphosate-tolerant transgenic crops are not affected. [ 7 ] A weed family that includes waterhemp ( Amaranthus rudis ) has developed glyphosate-resistant strains. A 2008 to 2009 survey of 144 populations of waterhemp in 41 Missouri counties revealed glyphosate resistance in 69%. Weed surveys from some 500 sites throughout Iowa in 2011 and 2012 revealed glyphosate resistance in approximately 64% of waterhemp samples. [ 7 ] In response to the rise in glyphosate resistance, farmers turned to other herbicides—applying several in a single season. In the United States, most midwestern and southern farmers continue to use glyphosate because it still controls most weed species, applying other herbicides, known as residuals, to deal with resistance. [ 7 ] The use of multiple herbicides appears to have slowed the spread of glyphosate resistance. From 2005 through 2010 researchers discovered 13 different weed species that had developed resistance to glyphosate. From 2010 to 2014 only two more were discovered. [ 7 ] A 2013 Missouri survey showed that multiply-resistant weeds had spread. 43% of the sampled weed populations were resistant to two different herbicides, 6% to three and 0.5% to four. In Iowa a survey revealed dual resistance in 89% of waterhemp populations, 25% resistant to three and 10% resistant to five. [ 7 ] Resistance increases pesticide costs. For southern cotton, herbicide costs climbed from between $50–$75 per hectare ($20–$30/acre) a few years ago to about $370 per hectare ($150/acre) in 2014. In the South, resistance contributed to the shift that reduced cotton planting by 70% in Arkansas and 60% in Tennessee. For soybeans in Illinois, costs rose from about $25–$160 per hectare ($10–$65/acre). 
[ 7 ] During 2009 and 2010, some Iowa fields of corn producing the Bt toxin Cry3Bb1 showed severe injury from western corn rootworm . During 2011, mCry3A corn also displayed insect damage, indicating cross-resistance between these toxins. Resistance persisted and spread in Iowa. Bt corn that targets western corn rootworm does not produce a high dose of Bt toxin, and displays less resistance than that seen in a high-dose Bt crop. [ 49 ] Products such as Capture LFR (containing the pyrethroid bifenthrin ) and SmartChoice (containing a pyrethroid and an organophosphate ) have been increasingly used to complement Bt crops that farmers find insufficient on their own to prevent insect-driven injury. Multiple studies have found the practice either to be ineffective or to accelerate the development of resistant strains. [ 50 ]
https://en.wikipedia.org/wiki/Pesticide_resistance
The byte is a unit of digital information that most commonly consists of eight bits . Historically, the byte was the number of bits used to encode a single character of text in a computer [ 1 ] [ 2 ] and for this reason it is the smallest addressable unit of memory in many computer architectures . To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as the Internet Protocol ( RFC 791 ) refer to an 8-bit byte as an octet . [ 3 ] Those bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the bit endianness . The size of the byte has historically been hardware -dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. [ 4 ] [ 5 ] [ 6 ] [ 7 ] The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes, and persisted, in legacy systems, into the twenty-first century. In this era, bit groupings in the instruction stream were often referred to as syllables [ a ] or slab , before the term byte became common. The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, as 2 to the power of 8 is 256. [ 8 ] The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte. [ 9 ] Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively. The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE). [ 10 ] Internationally, the unit octet explicitly defines a sequence of eight bits, eliminating the potential ambiguity of the term "byte". [ 11 ] [ 12 ] The symbol for octet, 'o', also conveniently eliminates the ambiguity in the symbol 'B' between byte and bel . The term byte was coined by Werner Buchholz in June 1956, [ 4 ] [ 13 ] [ 14 ] [ b ] during the early design phase for the IBM Stretch [ 15 ] [ 16 ] [ 1 ] [ 13 ] [ 14 ] [ 17 ] [ 18 ] computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction. [ 13 ] It is a deliberate respelling of bite to avoid accidental mutation to bit . [ 1 ] [ 13 ] [ 19 ] [ c ] Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits , is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which was jointly developed by Rand , MIT, and IBM. [ 20 ] [ 21 ] Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31 . [ 22 ] [ 21 ] Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army ( FIELDATA ) and Navy . 
These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to seven bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard , which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media. [ 18 ] During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations [ d ] used in earlier card punches. [ 23 ] The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, [ 18 ] [ 16 ] [ 13 ] while in detail the EBCDIC and ASCII encoding schemes are different. In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines . These used the eight-bit μ-law encoding . This large investment promised to reduce transmission costs for eight-bit data. In Volume 1 of The Art of Computer Programming (first published in 1968), Donald Knuth uses byte in his hypothetical MIX computer to denote a unit which "contains an unspecified amount of information ... capable of holding at least 64 distinct values ... at most 100 distinct values. On a binary computer a byte must therefore be composed of six bits". [ 24 ] He notes that "Since 1975 or so, the word byte has come to mean a sequence of precisely eight binary digits...When we speak of bytes in connection with MIX we shall confine ourselves to the former sense of the word, harking back to the days when bytes were not yet standardized." [ 24 ] The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8080 , the direct predecessor of the 8086 , could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble , also nybble , which is conveniently represented by a single hexadecimal digit. The term octet unambiguously specifies a size of eight bits. [ 18 ] [ 12 ] It is used extensively in protocol definitions. Historically, the term octad or octade was used to denote eight bits as well at least in Western Europe; [ 25 ] [ 26 ] however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers. The unit symbol for the byte is specified in IEC 80000-13 , IEEE 1541 and the Metric Interchange Format [ 10 ] as the upper-case character B. In the International System of Quantities (ISQ), B is also the symbol of the bel , a unit of logarithmic power ratio named after Alexander Graham Bell , creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. 
It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one-tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates. The lowercase letter o for octet is defined as the symbol for octet in IEC 80000-13 and is commonly used in languages such as French [ 27 ] and Romanian , and is also combined with metric prefixes for multiples, for example ko and Mo. More than one system exists to define unit multiples based on the byte. Some systems are based on powers of 10 , following the International System of Units (SI), which defines for example the prefix kilo as 1000 (10³); other systems are based on powers of two . Nomenclature for these systems has led to confusion. Systems based on powers of 10 use standard SI prefixes ( kilo , mega , giga , ...) and their corresponding symbols (k, M, G, ...). Systems based on powers of 2, however, might use binary prefixes ( kibi , mebi , gibi , ...) and their corresponding symbols (Ki, Mi, Gi, ...) or they might use the prefixes K, M, and G, creating ambiguity when the prefixes M or G are used. While the difference between the decimal and binary interpretations is relatively small for the kilobyte (about 2% smaller than the kibibyte), the systems deviate increasingly as units grow larger (the relative deviation grows by 2.4% for each three orders of magnitude). For example, a power-of-10-based terabyte is about 9% smaller than the power-of-2-based tebibyte. Definition of prefixes using powers of 10—in which 1 kilobyte (symbol kB) is defined to equal 1,000 bytes—is recommended by the International Electrotechnical Commission (IEC). [ 28 ] The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000⁸ bytes. [ 29 ] The additional prefixes ronna- for 1000⁹ and quetta- for 1000¹⁰ were adopted by the International Bureau of Weights and Measures (BIPM) in 2022. [ 30 ] [ 31 ] This definition is most commonly used for data-rate units in computer networks , internal bus, hard drive and flash media transfer speeds, and for the capacities of most storage media , particularly hard drives , [ 32 ] flash -based storage, [ 33 ] and DVDs . [ citation needed ] Operating systems that use this definition include macOS , [ 34 ] iOS , [ 34 ] Ubuntu , [ 35 ] and Debian . [ 36 ] It is also consistent with the other uses of the SI prefixes in computing, such as CPU clock speeds or measures of performance . As prior art, the IBM System/360 and its related tape systems had set the byte at 8 bits. [ 37 ] Early 5.25-inch disks were described using decimal prefixes [ dubious – discuss ] even though they used 128-byte and 256-byte sectors. [ 38 ] Hard disks mostly used 256-byte and then 512-byte sectors before 4096-byte blocks became standard. [ 39 ] RAM was always sold in powers of 2. [ citation needed ] A system of units based on powers of 2 in which 1 kibibyte (KiB) is equal to 1,024 (i.e., 2¹⁰) bytes is defined by international standard IEC 80000-13 and is supported by national and international standards bodies ( BIPM , IEC , NIST ). The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024⁸ bytes. The natural binary counterparts to ronna- and quetta- were given in a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) as robi- (Ri, 1024⁹) and quebi- (Qi, 1024¹⁰), but have not yet been adopted by the IEC or ISO. 
[ 40 ] An alternative system of nomenclature for the same units (referred to here as the customary convention ), in which 1 kilobyte (KB) is equal to 1,024 bytes, [ 41 ] [ 42 ] [ 43 ] 1 megabyte (MB) is equal to 1024² bytes and 1 gigabyte (GB) is equal to 1024³ bytes is mentioned by a 1990s JEDEC standard. Only the first three multiples (up to GB) are mentioned by the JEDEC standard, which makes no mention of TB and larger. While confusing and incorrect, [ 44 ] the customary convention is used by the Microsoft Windows operating system, [ 45 ] [ better source needed ] for random-access memory capacity, such as main memory and CPU cache size, and in marketing and billing by telecommunication companies, such as Vodafone , [ 46 ] AT&T , [ 47 ] Orange [ 48 ] and Telstra . [ 49 ] For storage capacity, the customary convention was used by macOS and iOS through Mac OS X 10.5 Leopard and iOS 10, after which they switched to units based on powers of 10. [ 34 ] Various computer vendors have coined terms for data of various sizes, sometimes with different sizes for the same term even within a single vendor. These terms include double word , half word , long word , quad word , slab , superword and syllable . There are also informal terms, e.g., half byte and nybble for 4 bits, and octal K for 1000₈ (512). When I see a disk advertised as having a capacity of one megabyte, what is this telling me? There are three plausible answers, and I wonder if anybody knows which one is correct ... Now this is not a really vital issue, as there is just under 5% difference between the smallest and largest alternatives. Nevertheless, it would [be] nice to know what the standard measure is, or if there is one. Contemporary [ e ] computer memory has a binary architecture, making a definition of memory units based on powers of 2 most practical. The use of the metric prefix kilo for binary multiples arose as a convenience, because 1024 is approximately 1000 . [ 27 ] This definition was popular in the early decades of personal computing , with products like the Tandon 5¼-inch DD floppy format (holding 368 640 bytes) being advertised as "360 KB", following the 1024 -byte convention. It was not universal, however. The Shugart SA-400 5¼-inch floppy disk held 109,375 bytes unformatted, [ 51 ] and was advertised as "110 Kbyte", using the 1000 convention. [ 52 ] Likewise, the 8-inch DEC RX01 floppy (1975) held 256 256 bytes formatted, and was advertised as "256k". [ 53 ] Some devices were advertised using a mixture of the two definitions: most notably, floppy disks advertised as "1.44 MB" have an actual capacity of 1440 KiB , the equivalent of 1.47 MB or 1.41 MiB. In 1995, the International Union of Pure and Applied Chemistry 's (IUPAC) Interdivisional Committee on Nomenclature and Symbols attempted to resolve this ambiguity by proposing a set of binary prefixes for the powers of 1024, including kibi (kilobinary), mebi (megabinary), and gibi (gigabinary). [ 54 ] [ 55 ] In December 1998, the IEC addressed such multiple usages and definitions by adopting the IUPAC's proposed prefixes (kibi, mebi, gibi, etc.) to unambiguously denote powers of 1024. [ 56 ] Thus one kibibyte (1 KiB) is 1024¹ bytes = 1024 bytes, one mebibyte (1 MiB) is 1024² bytes = 1 048 576 bytes, and so on. In 1999, Donald Knuth suggested calling the kibibyte a "large kilobyte" ( KKB ). [ 57 ] The IEC adopted the IUPAC proposal and published the standard in January 1999. [ 58 ] [ 59 ] The IEC prefixes are part of the International System of Quantities . 
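The arithmetic behind these competing conventions is easy to check. The following short Python sketch (illustrative only; the unit values are simply the SI and IEC definitions discussed above) computes how far each decimal prefix falls short of its binary counterpart and works through the mixed-convention "1.44 MB" floppy figure.

```python
# Decimal (SI) prefixes versus binary (IEC) prefixes for byte multiples.
DECIMAL = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
BINARY = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

for (dec_sym, dec_val), (bin_sym, bin_val) in zip(DECIMAL.items(), BINARY.items()):
    # Fraction by which the decimal unit is smaller than the binary unit.
    shortfall = (bin_val - dec_val) / bin_val * 100
    print(f"1 {dec_sym} is {shortfall:.1f}% smaller than 1 {bin_sym}")

# The "1.44 MB" floppy mixes both conventions: its capacity is 1440 * 1024 bytes.
floppy_bytes = 1440 * 1024
print(f"1440 KiB = {floppy_bytes / 10**6:.2f} MB = {floppy_bytes / 2**20:.2f} MiB")
```

Running the sketch reproduces the figures cited above: the kilobyte is about 2.3% smaller than the kibibyte, the terabyte about 9.1% smaller than the tebibyte, and the "1.44 MB" floppy holds 1.47 MB or, equivalently, 1.41 MiB.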
The IEC further specified that the kilobyte should only be used to refer to 1000 bytes. [ 60 ] Lawsuits arising from alleged consumer confusion over the binary and decimal definitions of multiples of the byte have generally ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1 000 000 000 (10⁹) bytes (the decimal definition), rather than the binary definition (2³⁰, i.e., 1 073 741 824 ). Specifically, the United States District Court for the Northern District of California held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' [...] The California Legislature has likewise adopted the decimal system for all 'transactions in this state. ' " [ 61 ] Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital . [ 62 ] [ 63 ] Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. [ 62 ] Seagate was sued on similar grounds and also settled. [ 62 ] [ 64 ] Many programming languages define the data type byte . The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. [ 71 ] [ 72 ] [ f ] In addition, the C and C++ standards require that there be no gaps between two bytes. This means every bit in memory is part of a byte. [ 73 ] Java's primitive data type byte is defined as eight bits. It is a signed data type, holding values from −128 to 127. .NET programming languages, such as C# , define byte as an unsigned type, and the sbyte as a signed data type, holding values from 0 to 255, and −128 to 127 , respectively. In data transmission systems, the byte is used as a contiguous sequence of bits in a serial data stream, representing the smallest distinguished unit of data. For asynchronous communication a full transmission unit usually additionally includes a start bit, 1 or 2 stop bits, and possibly a parity bit , and thus its size may vary from seven to twelve bits for five to eight bits of actual data. [ 74 ] For synchronous communication the error checking usually uses bytes at the end of a frame . Terms used here to describe the structure imposed by the machine design, in addition to bit , are listed below. Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite , but respelled to avoid accidental mutation to bit .) A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. 
(The term catena was coined for this purpose by the designers of the Bull GAMMA 60 [ fr ] computer.) Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program. [...] Most important, from the point of view of editing, will be the ability to handle any characters or digits, from 1 to 6 bits long. Figure 2 shows the Shift Matrix to be used to convert a 60-bit word , coming from Memory in parallel, into characters , or 'bytes' as we have called them, to be sent to the Adder serially. The 60 bits are dumped into magnetic cores on six different levels. Thus, if a 1 comes out of position 9, it appears in all six cores underneath. Pulsing any diagonal line will send the six bits stored along that line to the Adder. The Adder may accept all or only some of the bits. Assume that it is desired to operate on 4 bit decimal digits , starting at the right. The 0-diagonal is pulsed first, sending out the six bits 0 to 5, of which the Adder accepts only the first four (0-3). Bits 4 and 5 are ignored. Next, the 4 diagonal is pulsed. This sends out bits 4 to 9, of which the last two are again ignored, and so on. It is just as easy to use all six bits in alphanumeric work, or to handle bytes of only one bit for logical analysis, or to offset the bytes by any number of bits. All this can be done by pulling the appropriate shift diagonals. An analogous matrix arrangement is used to change from serial to parallel operation at the output of the adder. [...] byte: A string that consists of a number of bits, treated as a unit, and usually representing a character or a part of a character. NOTES: 1 The number of bits in a byte is fixed for a given data processing system. 2 The number of bits in a byte is usually 8. We received the following from W Buchholz, one of the individuals who was working on IBM's Project Stretch in the mid 1950s. His letter tells the story. Not being a regular reader of your magazine, I heard about the question in the November 1976 issue regarding the origin of the term "byte" from a colleague who knew that I had perpetrated this piece of jargon [see page 77 of November 1976 BYTE, "Olde Englishe"] . I searched my files and could not locate a birth certificate. But I am sure that "byte" is coming of age in 1977 with its 21st birthday. Many have assumed that byte, meaning 8 bits, originated with the IBM System/360, which spread such bytes far and wide in the mid-1960s. The editor is correct in pointing out that the term goes back to the earlier Stretch computer (but incorrect in that Stretch was the first, not the last, of IBM's second-generation transistorized computers to be developed). The first reference found in the files was contained in an internal memo written in June 1956 during the early days of developing Stretch . A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use was in the context of the input-output equipment of the 1950s, which handled six bits at a time. The possibility of going to 8-bit bytes was considered in August 1956 and incorporated in the design of Stretch shortly thereafter . 
The first published reference to the term occurred in 1959 in a paper ' Processing Data in Bits and Pieces ' by G A Blaauw , F P Brooks Jr and W Buchholz in the IRE Transactions on Electronic Computers , June 1959, page 121. The notions of that paper were elaborated in Chapter 4 of Planning a Computer System (Project Stretch) , edited by W Buchholz, McGraw-Hill Book Company (1962). The rationale for coining the term was explained there on page 40 as follows: Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (ie, different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite , but respelled to avoid accidental mutation to bit. ) System/360 took over many of the Stretch concepts, including the basic byte and word sizes, which are powers of 2. For economy, however, the byte size was fixed at the 8 bit maximum, and addressing at the bit level was replaced by byte addressing. Since then the term byte has generally meant 8 bits, and it has thus passed into the general vocabulary. Are there any other terms coined especially for the computer field which have found their way into general dictionaries of English language? 1956 Summer: Gerrit Blaauw , Fred Brooks , Werner Buchholz , John Cocke and Jim Pomerene join the Stretch team. Lloyd Hunter provides transistor leadership. 1956 July [ sic ]: In a report Werner Buchholz lists the advantages of a 64-bit word length for Stretch. It also supports NSA 's requirement for 8-bit bytes. Werner's term "Byte" first popularized in this memo. NB. This timeline erroneously specifies the birth date of the term "byte" as July 1956 , while Buchholz actually used the term as early as June 1956 . [...] 60 is a multiple of 1, 2, 3, 4, 5, and 6. Hence bytes of length from 1 to 6 bits can be packed efficiently into a 60-bit word without having to split a byte between one word and the next. If longer bytes were needed, 60 bits would, of course, no longer be ideal. With present applications, 1, 4, and 6 bits are the really important cases. With 64-bit words, it would often be necessary to make some compromises, such as leaving 4 bits unused in a word when dealing with 6-bit bytes at the input and output. However, the LINK Computer can be equipped to edit out these gaps and to permit handling of bytes which are split between words. [...] [...] The maximum input-output byte size for serial operation will now be 8 bits, not counting any error detection and correction bits. Thus, the Exchange will operate on an 8-bit byte basis, and any input-output units with less than 8 bits per byte will leave the remaining bits blank. The resultant gaps can be edited out later by programming [...] I came to work for IBM , and saw all the confusion caused by the 64-character limitation. Especially when we started to think about word processing, which would require both upper and lower case. Add 26 lower case letters to 47 existing, and one got 73 -- 9 more than 6 bits could represent. I even made a proposal (in view of STRETCH , the very first computer I know of with an 8-bit byte) that would extend the number of punch card character codes to 256 [1] . Some folks took it seriously. I thought of it as a spoof. 
So some folks started thinking about 7-bit characters, but this was ridiculous. With IBM's STRETCH computer as background, handling 64-character words divisible into groups of 8 (I designed the character set for it, under the guidance of Dr. Werner Buchholz , the man who DID coin the term "byte" for an 8-bit grouping). [2] It seemed reasonable to make a universal 8-bit character set, handling up to 256. In those days my mantra was "powers of 2 are magic". And so the group I headed developed and justified such a proposal [3]. That was a little too much progress when presented to the standards group that was to formalize ASCII, so they stopped short for the moment with a 7-bit set, or else an 8-bit set with the upper half left for future work. The IBM 360 used 8-bit characters, although not ASCII directly. Thus Buchholz's "byte" caught on everywhere. I myself did not like the name for many reasons. The design had 8 bits moving around in parallel. But then came a new IBM part, with 9 bits for self-checking, both inside the CPU and in the tape drives . I exposed this 9-bit byte to the press in 1973. But long before that, when I headed software operations for Cie. Bull in France in 1965-66, I insisted that 'byte' be deprecated in favor of " octet ". You can notice that my preference then is now the preferred term. It is justified by new communications methods that can carry 16, 32, 64, and even 128 bits in parallel. But some foolish people now refer to a "16-bit byte" because of this parallel transfer, which is visible in the UNICODE set. I'm not sure, but maybe this should be called a " hextet ". But you will notice that I am still correct. Powers of 2 are still magic! The word byte was coined around 1956 to 1957 at MIT Lincoln Laboratories within a project called SAGE (the North American Air Defense System), which was jointly developed by Rand , Lincoln Labs, and IBM . In that era, computer memory structure was already defined in terms of word size . A word consisted of x number of bits ; a bit represented a binary notational position in a word. Operations typically operated on all the bits in the full word. We coined the word byte to refer to a logical set of bits less than a full word size. At that time, it was not defined specifically as x bits but typically referred to as a set of 4 bits , as that was the size of most of our coded data items. Shortly afterward, I went on to other responsibilities that removed me from SAGE. After having spent many years in Asia, I returned to the U.S. and was bemused to find out that the word byte was being used in the new microcomputer technology to refer to the basic addressable memory unit. A question-and-answer session at an ACM conference on the history of programming languages included this exchange: [ John Goodenough : You mentioned that the term "byte" is used in JOVIAL . Where did the term come from? ] [ Jules Schwartz (inventor of JOVIAL): As I recall, the AN/FSQ-31 , a totally different computer than the 709 , was byte oriented. I don't recall for sure, but I'm reasonably certain the description of that computer included the word "byte," and we used it. ] [ Fred Brooks : May I speak to that? Werner Buchholz coined the word as part of the definition of STRETCH , and the AN/FSQ-31 picked it up from STRETCH, but Werner is very definitely the author of that word. ] [ Schwartz: That's right. Thank you. ]
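Returning to the language-level definitions given earlier, Java's byte is signed (holding −128 to 127) while C#'s byte is unsigned (holding 0 to 255); both are simply different readings of the same eight-bit pattern. The following minimal Python sketch (standard library only, not tied to any particular language implementation) reinterprets a few unsigned byte values as signed two's-complement values.

```python
import struct

# Reinterpret the same 8-bit pattern as unsigned (0..255) and as signed (-128..127).
for value in (0, 1, 127, 128, 255):
    raw = struct.pack("B", value)        # pack one unsigned byte
    (signed,) = struct.unpack("b", raw)  # unpack the same byte as signed
    print(f"bit pattern {raw[0]:08b}: unsigned {value:3d}, signed {signed:4d}")
```

The crossover at 128 (which reads back as −128) and at 255 (which reads back as −1) illustrates why the two language definitions describe the same underlying storage unit.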
https://en.wikipedia.org/wiki/Petabyte
Petascale computing refers to computing systems capable of performing at least 1 quadrillion (10¹⁵) floating-point operations per second (FLOPS) . These systems are often called petaflops systems and represent a significant leap from traditional supercomputers in terms of raw performance, enabling them to handle vast datasets and complex computations. Floating point operations per second (FLOPS) are one measure of computer performance . FLOPS can be recorded at different measures of precision; however, the standard measure (used by the TOP500 supercomputer list) uses 64-bit ( double-precision floating-point format ) operations per second using the High Performance LINPACK (HPLinpack) benchmark . [ 1 ] [ 2 ] The metric typically refers to single computing systems, although it can be used to measure distributed computing systems for comparison. Alternative precision measures exist using the LINPACK benchmarks, but these are not part of the standard metric/definition. [ 2 ] HPLinpack may not be a good general measure of supercomputer utility in real-world applications; however, it is the common standard for performance measurement. [ 3 ] [ 4 ] The petaFLOPS barrier was first broken on 16 September 2007 by the distributed-computing Folding@home project. [ 5 ] The first single petascale system, the Roadrunner , entered operation in 2008. [ 6 ] The Roadrunner , built by IBM , had a sustained performance of 1.026 petaFLOPS. The Jaguar became the second computer to break the petaFLOPS milestone, later in 2008, and reached a performance of 1.759 petaFLOPS after a 2009 update. [ 7 ] In 2020, Fugaku became the fastest supercomputer in the world, reaching 415 petaFLOPS in June 2020. Fugaku later achieved an Rmax of 442 petaFLOPS in November of the same year. By 2022, exascale computing had been reached with the development of Frontier , surpassing Fugaku with an Rmax of 1.102 exaFLOPS in June 2022. [ 8 ] Modern artificial intelligence (AI) systems require large amounts of computational power to train model parameters. OpenAI employed 25,000 Nvidia A100 GPUs to train GPT-4 , using a total of 133 septillion floating-point operations. [ 9 ]
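To put these performance figures on a common scale, the following short Python sketch (a rough illustration built only from the numbers quoted above; the assumption of perfectly sustained utilization is hypothetical) compares Roadrunner with Frontier and estimates how long the cited GPT-4 training workload would take at exactly one petaFLOPS.

```python
PETA = 10**15  # 1 petaFLOPS = 10^15 floating-point operations per second
EXA = 10**18   # 1 exaFLOPS  = 10^18 floating-point operations per second

roadrunner_flops = 1.026 * PETA  # Roadrunner sustained performance, 2008
frontier_flops = 1.102 * EXA     # Frontier Rmax, June 2022
print(f"Frontier is roughly {frontier_flops / roadrunner_flops:,.0f}x faster than Roadrunner")

# Time to perform the ~133 septillion (1.33e26) operations cited for GPT-4 training,
# assuming a machine sustains exactly 1 petaFLOPS with no overhead (hypothetical).
total_ops = 133e24
years = total_ops / PETA / (86400 * 365.25)
print(f"~{years:,.0f} years at a sustained 1 petaFLOPS")
```

The roughly thousandfold gap between the first petascale system and the first exascale system, and the multi-millennium estimate for the training workload on a single petaFLOPS machine, give a sense of why such workloads are spread across tens of thousands of accelerators.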
https://en.wikipedia.org/wiki/Petascale_computing
The Petasis reaction (alternatively called the Petasis borono–Mannich (PBM) reaction ) is the multi-component reaction of an amine , a carbonyl , and a vinyl - or aryl - boronic acid to form substituted amines. It was reported in 1993 by Nicos Petasis as a practical method for the synthesis of a geometrically pure antifungal agent, naftifine . [ 1 ] [ 2 ] [ 3 ] In the Petasis reaction, the vinyl group of the organoboronic acid serves as the nucleophile. In comparison to other methods of generating allyl amines, the Petasis reaction tolerates a multifunctional scaffold, with a variety of amines and organoboronic acids as potential starting materials. Additionally, the reaction does not require anhydrous or inert conditions. As a mild, selective synthesis, the Petasis reaction is useful in generating α-amino acids, and is utilized in combinatorial chemistry and drug discovery . [ 4 ] [ 5 ] [ 6 ] [ 7 ] The amine is condensed with the carbonyl, followed by addition of the boronic acid . [ 1 ] One of the most attractive features of the Petasis reaction is the stability of the vinyl boronic acids. With the advent of the Suzuki coupling , many are commercially available. Other methods of generating boronic acids have also been reported. [ 8 ] [ 9 ] A wide variety of functional groups, including alcohols, carboxylic acids, and amines, are tolerated in the Petasis reaction. Known substrates that are compatible with the reaction conditions include vinylboronate esters, arylboronate esters, and potassium organotrifluoroborates . [ 10 ] [ 11 ] [ 12 ] Additionally, a variety of substituted amines other than secondary amines can be used. Tertiary aromatic amines , hydrazines , hydroxylamines , sulfonamides , and indoles have all been reported. [ 13 ] [ 14 ] [ 15 ] [ 16 ] Vinyl boronic acids react with the adducts of secondary amines and paraformaldehyde to give tertiary allylamines. The geometry of the double bond of the starting vinyl boronic acid is retained in the final product: [ 1 ] This reaction was used to synthesize naftifine. [ 1 ] β,γ-Unsaturated, N-substituted amino acids are prepared through the condensation of organoboronic acids, boronates, or boronic esters with amines and glyoxylic acids. The highly polar protic solvent hexafluoroisopropanol (HFIP) can shorten reaction times and improve yields. [ 17 ] Apart from vinyl boronic acids, aryl boronic acids and other heterocyclic derivatives can also be used in the Petasis multicomponent coupling. The substrate scope includes thienyl, pyridyl, furyl, benzofuranyl, 1-naphthyl, and aryl groups with either electron-donating or electron-withdrawing substituents. [ 10 ] Racemic clopidogrel, an antiplatelet agent, was synthesized in two steps using the Petasis reaction. [ 18 ] The Petasis reaction exhibits high degrees of stereocontrol when a chiral amine or aldehyde is used as a substrate. When certain chiral amines, such as (S)-2-phenylglycinol, are mixed with an α-keto acid and a vinyl boronic acid at room temperature, the corresponding allylamine is formed as a single diastereomer. Furthermore, enantiomeric purity can be achieved by hydrogenation of the diastereoselectively formed product. Apart from amino acids, the PBM reaction can also be used to prepare carboxylic acids, albeit via unconventional mechanisms. 
In the case of N -substituted indoles as the amine equivalent, the reaction begins with the nucleophilic attack of the 3-position of the N -substituted indole on the electrophilic aldehyde, followed by formation of the "ate complex" 1 via the reaction of the boronic acid with the carboxylic acid. The intermediate then undergoes dehydration, followed by migration of the boronate alkyl group to furnish the final carboxylic acid product. A wide range of aryl boronic acids is tolerated, while the use of vinyl boronic acids has not been reported. N -unsubstituted indoles react very sluggishly under normal reaction conditions, supporting the mechanism below. [ 16 ] Tertiary aromatic amines can be used in the Petasis reaction as another equivalent of the amine nucleophile. The mechanism is similar to the N -substituted indole case. The reaction is carried out under harsh conditions (24-hour reflux in 1,4-dioxane), but the resultant carboxylic acid is obtained in reasonable yield. Using α-keto acids instead of glyoxylic acid does not diminish yields. 1,3,5-Trioxygenated benzene derivatives can also be used in lieu of tertiary aromatic amines. [ 15 ] When used as nitrogen nucleophiles, amino acids can furnish various iminodicarboxylic acid derivatives. High diastereoselectivity is usually observed, and the newly formed stereocenter usually shares the same configuration as the starting amino acid. This reaction works well in highly polar solvents (e.g., water and ethanol). Peptides with an unprotected nitrogen terminus can also be used as nitrogen nucleophile equivalents. Petasis and coworkers prepared enalaprilat, an ACE inhibitor, with this method. [ 19 ] When diamines are used in PBM reactions, heterocycles of various structures, such as piperazinones, benzopiperazinones, and benzodiazepinones, are efficiently prepared. Lactamization reactions are commonly employed to form the heterocycles, usually under strongly acidic conditions. [ 19 ] When an α-hydroxy aldehyde is used as a substrate in the synthesis of β-amino alcohols, a single diastereomer is generated. This reaction exclusively forms the anti product, as confirmed by ¹H NMR spectroscopy. The product does not undergo racemization, and when enantiomerically pure α-hydroxy aldehydes are used, enantiomeric excess can be achieved. It is believed that the boronic acid first reacts with the chiral hydroxyl group, furnishing a nucleophilic alkenyl boronate, followed by face-selective, intramolecular migration of the alkenyl group to the electrophilic iminium carbon, forming the desired C–C bond irreversibly. [ 3 ] Diastereoselectivity may arise from the reaction of the more stable (and, in this case, more reactive) conformation of the ate complex, in which 1,3-allylic strain is minimized. [ 20 ] [ 21 ] [ 22 ] With dihydroxyacetone, a somewhat unconventional aldehyde equivalent, the Petasis reaction gives the core structure of FTY720, a potent immunosuppressive agent. [ 23 ] Petasis and coworkers reported the use of unprotected carbohydrates as the carbonyl component in PBM reactions. They are used as equivalents of α-hydroxy aldehydes with pre-existing chirality, and the aminopolyol product is usually furnished in moderate to good yield, with excellent selectivity. A wide variety of carbohydrates, as well as nitrogen nucleophiles (e.g., amino acids), can be used to furnish highly stereochemically enriched products. The aminopolyol products can then undergo further reactions to prepare aminosugars. 
Petasis used this reaction to prepare Boc-protected mannosamine from D-arabinose. [ 19 ] Generally speaking, when a chiral amine is used in the Petasis coupling, the stereochemical outcome of the reaction is strongly correlated with the chirality of the amine, and high to excellent diastereoselectivity is observed even without the use of bulky chirality-inducing groups. Chiral benzyl amines, [ 24 ] 2-substituted pyrrolidines, [ 25 ] and 5-substituted 2-morpholinones [ 26 ] [ 27 ] have been shown to induce good to excellent diastereomeric excess under different Petasis reaction conditions. Chiral N-acyliminium ion "starting materials" are generally prepared by in-situ dehydration of cyclic hemiaminals. They also carry a chiral hydroxyl group that is in proximity to the iminium carbon; boronic acids react with such chiral hydroxyl groups to form a chiral, electron-rich boronate species, followed by face-selective, intramolecular transfer of the boronate vinyl/aryl group to the iminium carbon. Hence, the reaction is highly diastereoselective, with cis -boronate aryl/vinyl transfer being the predominant pathway. Hydroxypyrrolidines [ 28 ] and hydroxy-γ- and δ-lactams [ 29 ] have been shown to react very diastereoselectively, in good to excellent yield. However, such procedures are limited to vinyl- or electron-rich aryl-boronic acids. (±)-6-Deoxycastanospermine was prepared in 7 steps from the vinyl boronic ester. The key acyclic precursor to deoxycastanospermine (A) is formed first by condensing vinyl boronic ester 1 with the Cbz-protected hydroxy-pyrrolidine 2 in a PBM coupling, followed by dihydroxylation and TBS protection. A then undergoes intramolecular cyclization via a one-pot imine formation and reduction sequence, followed by TBS deprotection, to afford (±)-6-deoxycastanospermine. [ 30 ] Enantioselective Petasis-type reactions transform quinolines into the corresponding chiral 1,2-dihydroquinolines (product) using alkenyl boronic acids and a chiral thiourea catalyst: [ 31 ] Chloroformates are required as electrophilic activating agents. Also, a 1,2-amino alcohol functionality is required on the catalyst for the reaction to proceed stereoselectively. Chiral α-amino acids with various functionalities are conveniently furnished by mixing alkenyl diethyl boronates, secondary amines, glyoxylates, and a chiral biphenol catalyst in toluene in one pot: [ 32 ] This reaction tolerates a wide range of functionalities on both the alkenyl boronate and the secondary amine: the electron-richness of the substrates does not affect the yield or enantioselectivity, and sterically demanding substrates (dialkyl-substituted alkenyl boronates and amines with an α-stereocenter) compromise enantioselectivity only slightly. Reaction rates do vary on a case-by-case basis. [ 32 ] Under the reported conditions, boronic acid substrates failed to give any enantioselectivity. Also, 3 Å molecular sieves are used in the reaction system. While the authors did not provide a reason for this in the paper, it has been speculated that the molecular sieves act as a water scavenger and prevent the decomposition of the alkenyl diethyl boronates into their respective boronic acids. The catalyst could be recycled from the reaction and reused without compromising yield or enantioselectivity. 
[ 32 ] More recently, Yuan and coworkers from the Chengdu Institute of Organic Chemistry, Chinese Academy of Sciences, combined both approaches (chiral thiourea and chiral biphenol) in a single catalyst, reporting for the first time a catalytic system capable of performing an enantioselective Petasis reaction between salicylaldehydes, cyclic secondary amines, and aryl- or alkenylboronic acids: [ 33 ] In one application the Petasis reaction is used for quick access to a multifunctional scaffold for divergent synthesis . The reactants are the lactol derived from L-phenyl-lactic acid and acetone , l-phenylalanine methyl ester and a boronic acid . The reaction takes place in ethanol at room temperature to give the product, an anti-1,2-amino alcohol with a high diastereomeric excess . [ 34 ] Note that the authors cannot access the syn-1,2-amino alcohol with this method owing to the intrinsic mechanistic selectivity, and they argue that this intrinsic selectivity hampers their ability to access the full matrix of stereoisomeric products for use in small-molecule screening. In a recent report, Schaus and co-workers showed that the syn amino alcohol can be obtained under the following reaction conditions, using a chiral dibromo-biphenol catalyst their group developed: [ 35 ] Although the syn vs. anti diastereomeric ratio ranges from mediocre to good (1.5:1 to 7.5:1), the substrate scope for such reactions remains rather limited, and the diastereoselectivity is found to depend on the stereogenic center of the amine starting material. [ 35 ] Beau and coworkers assembled the core dihydropyran framework of zanamivir congeners via a combination of a PBM reaction and an iron(III)-promoted deprotection–cyclization sequence. A stereochemically defined α-hydroxy aldehyde 2, diallylamine, and a dimethylketal-protected boronic acid 1 are coupled to form the acyclic, stereochemically defined amino alcohol 3, which then undergoes an iron(III)-promoted cyclization to form the bicyclic dihydropyran 4. Selective opening of the oxazoline portion of the dihydropyran intermediate 4 with water or trimethylsilyl azide then furnishes downstream products with structures resembling members of the zanamivir family. [ 36 ] Wong and coworkers prepared N-acetylneuraminic acid with a PBM coupling, followed by a nitrone [3+2] cycloaddition. Vinylboronic acid is first coupled with L-arabinose 1 and bis(4-methoxyphenyl)methanamine 2 to form a stereochemically defined allyl amine 3. Afterwards, a sequence of dipolar cycloaddition, base-mediated N–O bond cleavage, and hydrolysis completes the synthesis of N-acetylneuraminic acid. [ 37 ]
https://en.wikipedia.org/wiki/Petasis_reaction
The Petasis reagent , named after Nicos A. Petasis, is an organotitanium compound with the formula Cp₂Ti(CH₃)₂ . [ 1 ] It is an orange-colored solid. The Petasis reagent is prepared by the salt metathesis reaction of methylmagnesium chloride or methyllithium [ 2 ] with titanocene dichloride : [ 3 ] This compound is used for the transformation of carbonyl groups to terminal alkenes . It exhibits similar reactivity to the Tebbe reagent and Wittig reaction . Unlike the Wittig reaction, the Petasis reagent can react with a wide range of aldehydes, ketones and esters. [ 4 ] The Petasis reagent is also very air stable, and is commonly used in solution with toluene or THF. The Tebbe reagent and the Petasis reagent share a similar reaction mechanism. The active olefinating reagent, Cp₂TiCH₂ , is generated in situ upon heating. With the organic carbonyl, this titanium carbene forms a four-membered oxatitanacyclobutane that releases the terminal alkene. [ 5 ] In contrast to the Tebbe reagent , homologs of the Petasis reagent are relatively easy to prepare by using the corresponding alkyllithium instead of methyllithium, allowing the conversion of carbonyl groups to alkylidenes. [ 6 ]
https://en.wikipedia.org/wiki/Petasis_reagent
Peter Jephson Cameron FRSE (born 23 January 1947) is an Australian mathematician who works in group theory , combinatorics , coding theory , and model theory . He is currently Emeritus Professor at the University of St Andrews [ 2 ] and Queen Mary University of London . Cameron received a B.Sc. from the University of Queensland and a D.Phil. in 1971 from the University of Oxford as a Rhodes Scholar , [ 3 ] with Peter M. Neumann as his supervisor. [ 4 ] Subsequently, he was a Junior Research Fellow and later a Tutorial Fellow at Merton College , Oxford, and also lecturer at Bedford College , London. Cameron specialises in algebra and combinatorics; he has written books about combinatorics, algebra, permutation groups , and logic, and has produced over 350 academic papers. [ 5 ] In 1988, he posed the Cameron–Erdős conjecture with Paul Erdős . He was awarded the London Mathematical Society 's Whitehead Prize in 1979 and Senior Whitehead Prize in 2017, and is joint winner of the 2003 Euler Medal . In 2008, he was selected as the Forder Lecturer of the LMS and New Zealand Mathematical Society . [ 6 ] In 2018 he was elected a Fellow of the Royal Society of Edinburgh . [ 7 ]
https://en.wikipedia.org/wiki/Peter_Cameron_(mathematician)
The Peter Debye Award in Physical Chemistry is awarded annually by the American Chemical Society "to encourage and reward outstanding research in physical chemistry ". [ 1 ] The award is named after Peter Debye and granted without regard to age or nationality.
https://en.wikipedia.org/wiki/Peter_Debye_Award
Péter Gács (Hungarian pronunciation: [ˈpeːter ˈɡaːtʃ]; born May 9, 1947), professionally also known as Peter Gacs , is a Hungarian - American mathematician and computer scientist, professor, and an external member of the Hungarian Academy of Sciences. He is well known for his work in reliable computation, randomness in computing, algorithmic complexity , algorithmic probability , and information theory . Peter Gacs attended high school in his hometown, then obtained a diploma (M.S.) at Loránd Eötvös University in Budapest in 1970. Gacs started his career as a researcher at the Applied Mathematics Institute of the Hungarian Academy of Sciences . [ 1 ] He obtained his doctoral degree from the Goethe University Frankfurt in 1978. Throughout his studies he had the opportunity to visit Moscow State University and work with Andrey Kolmogorov and his student Leonid A. Levin . Through 1979 he was a visiting research associate at Stanford University . He was an assistant professor at the University of Rochester from 1980 until 1984, when he moved to Boston University , where he received tenure in 1985. He has been a full professor since 1992. [ 2 ] Gacs has made contributions in many fields of computer science. It was Gács and László Lovász who first brought the ellipsoid method to the attention of the international community in August 1979 by publishing its proofs and some improvements. [ 3 ] [ 4 ] [ 5 ] Gacs also contributed to the Sipser–Lautemann theorem . [ 6 ] His main contributions and research focus centered on cellular automata and Kolmogorov complexity. His most important contribution in the domain of cellular automata, besides the GKL rule (Gacs–Kurdyumov–Levin rule), is the construction of a reliable one-dimensional cellular automaton, thus presenting a counterexample to the positive rates conjecture . [ 7 ] The construction that he offered is multi-scale and complex. [ 8 ] Later, the same technique was used for the construction of aperiodic tiling sets. [ 9 ] Gacs authored several important papers in the field of algorithmic information theory and on Kolmogorov complexity . Together with Leonid A. Levin , he established basic properties of prefix complexity, including the formula for the complexity of pairs [ 10 ] and for randomness deficiencies, including a result rediscovered later and now known as the ample excess lemma . [ 11 ] [ 12 ] He showed that the correspondence between complexity and a priori probability that holds for prefix complexity is no longer true for monotone complexity and continuous a priori probability. [ 13 ] [ 14 ] In the related theory of algorithmic randomness he proved that every sequence is Turing-reducible to a random one (a result now known as the Gacs–Kucera theorem, since it was independently proven by Antonin Kucera). [ 14 ] Later he (with coauthors) introduced the notion of algorithmic distance and proved its connection with conditional complexity. [ 15 ] [ 14 ] He was a pioneer of algorithmic statistics, [ 16 ] introduced one of the quantum versions of algorithmic complexity, [ 17 ] and studied the properties of algorithmic randomness for general spaces [ 18 ] [ 19 ] and general classes of measures. [ 20 ] Some of these results are covered in his surveys of algorithmic information theory. [ 21 ] [ 22 ] He also proved results on the boundary between classical and algorithmic information theory: the seminal example that shows the difference between common and mutual information (with János Körner ). 
[ 23 ] Together with Rudolf Ahlswede and Körner, he proved the blowing-up lemma . [ 24 ]
https://en.wikipedia.org/wiki/Peter_Gacs
Peter Malcolm Wallace Gill (born 9 November 1962) [ 1 ] is a New Zealand theoretical and computational chemist known for his contributions to density functional theory (DFT). He is an early and main contributor to the computational chemistry software Q-Chem and was the president of the company during 1998–2013. He is especially known for developing the PRISM algorithm for evaluating two-electron integrals and linear-scaling DFT, as well as a self-consistent field method for excited-state electronic structure . [ 2 ] [ 3 ] Gill was born in Auckland and received his BSc in 1983 and MSc in 1984 from the University of Auckland . [ 1 ] He received a PhD in 1988 from the Australian National University under the supervision of Leo Radom . During this time, he investigated hemi-bonding and the convergence of perturbation theory in quantum chemistry . [ 3 ] [ 4 ] After graduation, he conducted postdoctoral work with John Pople at Carnegie Mellon University from 1988 to 1993. Following this stint, Gill accepted a lectureship at Massey University in 1993. He became a lecturer at the University of Cambridge in 1996. In 1999, Gill became the inaugural chair of theoretical chemistry at the University of Nottingham . He moved to Australia and became a professor at the Australian National University in 2004 and later moved to the University of Sydney in 2019 as the Schofield Chair in Theoretical Chemistry. [ 3 ] In 2001, Gill wrote an essay pronouncing the demise of density functional theory thanks to the rise of hybrid functionals for exchange interactions between electrons. [ 5 ] [ 6 ] Gill is the president of the World Association of Theoretical and Computational Chemists (WATOC) and received the Dirac Medal in 1999 [ 7 ] and the Schrödinger Medal in 2011 from WATOC. [ 1 ] In 2013, Gill received the Fukui Medal from APATCC. [ 1 ] Gill was elected a Fellow of the Australian Academy of Science in 2014 [ 2 ] and received the David Craig Medal from the Australian Academy of Science in 2019. [ 8 ] In 2015, Gill was inducted into the International Academy of Quantum Molecular Science. [ 9 ]
https://en.wikipedia.org/wiki/Peter_Gill_(chemist)
Peter H. Gleick ( / ɡ l ɪ k / ; born 1956) is an American scientist working on issues related to the environment . [ 1 ] He works at the Pacific Institute in Oakland, California , which he co-founded in 1987. In 2003 he was awarded a MacArthur Fellowship for his work on water resources. Among the issues he has addressed are conflicts over water resources, [ 2 ] water and climate change , [ 3 ] development, and human health. [ 4 ] In 2006 he was elected to the U.S. National Academy of Sciences . Gleick received the International Water Resources Association (IWRA) Ven Te Chow Memorial Award in 2011, [ 5 ] and that same year he and the Pacific Institute were awarded the first U.S. Water Prize. In 2014, The Guardian newspaper listed Gleick as one of the world's top 10 "water tweeters." [ 6 ] In 2018, Gleick received the Carl Sagan Prize for Science Popularization . [ 7 ] In 2019, Boris Mints Institute of Tel Aviv University awarded Gleick its annual BMI Prize as "an exceptional individual who has devoted his/her research and academic life to the solution of a strategic global challenge." [ 8 ] In 2023, he was elected to the American Academy of Arts and Sciences . [ 9 ] Gleick received a B.S. from Yale University and an M.S. and Ph.D. in Energy and Resources from the University of California, Berkeley , with a focus on hydroclimatology. His dissertation was the first to model the regional impact of climate change on water resources. [ 10 ] [ 11 ] [ 12 ] Gleick produced some of the earliest work on the links between environmental issues, especially water and climate change, and international security , identifying a long history of conflicts over water resources and the use of water as both a weapon and target of war. [ 13 ] [ 14 ] [ 15 ] He also pioneered the concepts of the soft water path , [ 16 ] and peak water . [ 17 ] [ 18 ] Gleick worked as the Deputy Assistant for Energy and the Environment to the Governor of California from 1980 to 1982. [ 19 ] In 2003, he was awarded a MacArthur Fellowship for his work on water resources, [ citation needed ] and in 2006 he was elected to the U.S. National Academy of Sciences . [ citation needed ] His 2010, book Bottled and Sold: The Story Behind Our Obsession with Bottled Water , published by Island Press, won the Nautilus Book Award in the Conscious Media/Journalism/Investigative Reporting category. [ 20 ] [ 21 ] In 2011, Gleick received the International Water Resources Association (IWRA) Ven Te Chow Memorial Award. [ 5 ] Also in 2011, Dr. Gleick and the Pacific Institute were awarded the first U.S. Water Prize. [ 22 ] In 2012, Oxford University Press published a book written by Gleick and colleagues: "A 21st Century U.S. Water Policy," [ 23 ] and he was named one of 25 "Water Heroes" by Xylem. [ 24 ] In 2013, Gleick was honored with a Lifetime Achievement Award by the Silicon Valley Water Conservation Awards. [ 25 ] In early 2013, Gleick launched a new blog at National Geographic ScienceBlogs entitled "Significant Figures." [ 26 ] He was also a regular contributor to Huffington Post Green, [ 27 ] and now most of these essays can be found at his personal website. 
[ 28 ] Gleick has also been featured in a wide range of water-related documentary films, including River's End: California's Latest Water War , [ 29 ] Jim Thebaut's documentary "Running Dry", [ 30 ] the 2004 German documentary series "Der durstige Planet," [ 31 ] Irena Salina 's feature documentary Flow: For Love of Water , [ 32 ] accepted for the 2008 Sundance Film Festival , the ABC News documentary "Earth2100". [ 33 ] Jessica Yu and Elise Pearlstein's 2011 feature documentary Last Call at the Oasis from Participant Media , [ 34 ] and Pumped Dry: The Global Crisis of Vanishing Groundwater (A USA Today Network Production) USA Today . [ 35 ] He served on the scientific advisory boards of Thirst , [ citation needed ] Grand Canyon Adventure: River at Risk , [ citation needed ] and other water-related films. [ citation needed ] Peter Gleick's research addresses the cross-disciplinary connections among global environmental issues, with a focus on freshwater and climate change. In 1987, with two colleagues, Gleick started the Pacific Institute for Studies in Development, Environment, and Security, an independent non-profit policy research center currently located in Oakland , California. The mission of the Institute is "The Pacific Institute creates and advances solutions to the world's most pressing water challenges." [ 36 ] Gleick currently serves as the Institute's President Emeritus, [ 37 ] [ 38 ] having been succeeded as President by Jason Morrison. [ 39 ] Gleick’s Ph.D dissertation from the University of California, Berkeley, and his early research, focused on the impacts of human-caused climate change for freshwater resources. He was the first to link the output of large-scale general circulation models of the climate with a detailed regional hydrologic model to evaluate how changes in temperature and precipitation would alter streamflow, snowpack, and soil moisture, with a focus on the Sacramento River basin in California. [ 40 ] [ 41 ] Among other results, this work was the first to call attention to the risks that rising temperatures would lead to accelerated snowmelt and a shift to earlier runoff in mountainous areas, leading to increased winter flood risk and reduced spring and summer runoff. [ 42 ] Many of the impacts anticipated by this early work have now been observed. [ 43 ] [ 44 ] Gleick also served as co-lead author of the Water Sector Report of the first National Climate Assessment , published in 2000. [ 45 ] The National Climate Assessment (NCA) is a United States government interagency ongoing effort [ 46 ] on climate change science conducted under the auspices of the Global Change Research Act of 1990 . [ 47 ] [ 48 ] The NCA is a major product [ 49 ] of the U.S. Global Change Research Program (USGCRP) which coordinates a team of experts and receives input from a Federal Advisory Committee. As a post-doctoral fellow in 1987 and 1988 at the University of California, Berkeley, Gleick published some of the earliest work addressing the risks of environmental factors for national and international security, including both climate change and water resources. Up until this time, most academic work on international security was linked to realpolitik and superpower relationships between the United States and the Soviet Union. In the 1980s, tensions between the superpowers shifted after the collapse of the Soviet Union. 
Simultaneously, there was growing concern about a far broader range of threats to peace, including environmental threats associated with the political implications of resource use or large-scale pollution. By the mid-1980s, this field of study was becoming known as "environmental security", and it is now widely acknowledged that environmental factors play both direct and indirect roles in political disputes and violent conflicts. Prominent early researchers in the field include Norman Myers , Jessica Tuchman Mathews , Michael Renner, Richard Ullman, Arthur Westing , Michael Klare , Thomas Homer-Dixon , and Geoffrey Dabelko . Gleick's 1989 paper in the journal Climatic Change addressed how climate changes could affect regional and global tensions over global food production, access to strategic minerals in the Arctic, and freshwater resources, [ 50 ] and his 1993 paper in the journal International Security focused on the threat of violence over water resources. [ 51 ] He has continued to focus on these issues and created and maintains the Water Conflict Chronology, a comprehensive online database of violence associated with water resources, published by the Pacific Institute . [ 52 ] This database goes back nearly 6,000 years, with more than 1,600 entries identifying where water resources or systems have been the trigger, casualty, or weapon of violence. This work has been recognized by military and intelligence community analysts, and Gleick has briefed political and military leaders and lectured at the U.S. Army War College and National War College in Washington, D.C. [ 53 ] Gleick also did some of the earliest work defining a human right to water . In the 20th century, the early focus of human rights law was on the political and civil rights protected by the 1948 Universal Declaration of Human Rights . By the 1960s, however, scholars and human rights experts were calling attention to economic, social, and cultural rights as well, with the 1966 International Covenant on Economic, Social and Cultural Rights (ICESCR). While neither of these declarations addressed water, by the 1990s, there was growing concern about the failure to provide safe water and sanitation for hundreds of millions, and scholars were calling for explicit recognition of a human right to water. Two early efforts to define the human right to water came from law professor Stephen McCaffrey of the University of the Pacific in 1992 [ 54 ] and Gleick in 1998. [ 55 ] McCaffrey stated that "Such a right could be envisaged as part and parcel of the right to food or sustenance, the right to health , or most fundamentally, the right to life." [ 54 ] Gleick argued that "access to a basic water requirement is a fundamental human right implicitly and explicitly supported by international law, declarations, and State practice." [ 55 ] A 1996 paper from Gleick argued for defining and quantifying a basic water requirement of 50 liters of water per person per day for drinking, cooking, cleaning, and sanitation (see the brief worked example at the end of this article), [ 56 ] and the United Nations Committee on Economic, Social and Cultural Rights cited this work in General Comment 15, drafted in 2002, which provided the clearest definition of the human right to water up to that point.
[ 57 ] General Comment 15 was a non-binding interpretation that access to water was a condition for the enjoyment of the right to an adequate standard of living , inextricably related to the right to the highest attainable standard of health, and therefore a human right. It stated: "The human right to water entitles everyone to sufficient, safe, acceptable, physically accessible and affordable water for personal and domestic uses." [ 57 ] The UN General Assembly formally adopted the human right to water and sanitation in General Assembly Resolution 64/292 on 28 July 2010. [ 58 ] That Resolution recognized the right of every human being to have access to sufficient, safe, and affordable water for personal and domestic uses. In September 2010, the UN Human Rights Council adopted a resolution recognizing that the human right to water and sanitation forms part of the right to an adequate standard of living . [ 59 ] Gleick's work on basic water requirements and human rights was also used in the Mazibuko v. City of Johannesburg court case in South Africa addressing the human right to water in Phiri, one of the oldest areas of the Soweto township. [ 60 ] The Pacific Institute contributed legal testimony for this case based on Gleick's work and the work of the Centre for Applied Legal Studies (CALS) of the University of the Witwatersrand in Johannesburg, South Africa, and the Institute was acknowledged with a 2008 Business Ethics Network BENNY Award. [ 61 ] [ 62 ] Gleick is the editor of The World's Water , [ 4 ] a biennial series on the state of the world's water published by Island Press , Washington, D.C. He regularly provides testimony to the United States Congress and state legislatures, and has published many scientific articles. The ninth volume of "The World's Water" was released in early February 2018. [ 63 ] He serves as a major source of information on water and climate issues for the media, and has been featured on CNBC, CNN, Fox Business, NPR, and Fresh Air with Terry Gross , [ 64 ] in articles in The New Yorker , [ 65 ] and in many other outlets. Gleick lectures dozens of times a year on global water resource challenges and solutions, climate science and policy, and the integrity of science. In 2008, he presented the Abel Wolman Distinguished Lecture at the United States National Academy of Sciences . [ 66 ] He was a 2009 Keynote Lecturer at the Nobel Conference at Gustavus Adolphus College. In 2014, Gleick published a peer-reviewed article in the American Meteorological Society journal "Weather, Climate, and Society" (WCAS) that addressed the role of drought, climate change, and water management decisions in influencing the civil war in Syria. [ 67 ] This article was the "most read" WCAS article for 2014. [ 68 ] In September 2014, Gleick gave a keynote address at the "Global Climate Negotiations: Lessons from California" Symposium, co-hosted by the USC Schwarzenegger Institute with the California Air Resources Board and the R20 Regions of Climate Action (R20) in Sacramento, which highlighted the policies adopted by the state of California to address the impacts of climate change. [ 69 ] [ 70 ] In February 2015, Gleick's work on the "Water-Energy Nexus" was highlighted in an invited keynote at the Georgetown University 2015 Annual Symposium of the Center for Contemporary Arab Studies.
[ 71 ] Other recent lectures include a keynote at the 2017 Symposium on the Human Right to Water in November 2017 at McGeorge School of Law , [ 72 ] a keynote “The Beacon of Science in a Fact-Free Fog” at the 2019 SkeptiCal Conference, [ 73 ] and a 2019 presentation at the World Bank’s Water Week on “Water, Climate, and Security: Building Resilience in a Fragile World.” [ 74 ] In 2023, Gleick released a new book “The Three Ages of Water,” published by PublicAffairs/Hachette, receiving favorable reviews from David Wallace-Wells , Elizabeth Kolbert , Jerry Brown , and Greta Thunberg . [ 75 ] On February 20, 2012, Gleick announced he was responsible for the unauthorized distribution of documents from The Heartland Institute in mid-February. Gleick reported he had received "an anonymous document in the mail describing what appeared to be details of the Heartland Institute's climate program strategy", and in trying to verify the authenticity of the document, had "solicited and received additional materials directly from the Heartland Institute under someone else's name". [ 76 ] Responding to the leak, The Heartland Institute said one of the documents released, a two-page 'Strategy Memo', had been forged. [ 77 ] Gleick denied forging the document. Gleick described his actions as "a serious lapse of my own and professional judgment and ethics" and said that he "deeply regret[ted his] own actions in this case" and "offer[ed his] personal apologies to all those affected". He stated that "My judgment was blinded by my frustration with the ongoing efforts – often anonymous, well-funded, and coordinated – to attack climate science and scientists and prevent this debate, and by the lack of transparency of the organizations involved." [ 76 ] [ 78 ] On February 24 he wrote to the board of the Pacific Institute requesting a "temporary short-term leave of absence" from the Institute. [ 79 ] [ 80 ] The Board of Directors stated it was "deeply concerned regarding recent events" involving Gleick and the Heartland documents, and appointed a new Acting Executive Director on February 27. [ 81 ] Gleick was reinstated following an investigation, in which the institute found no evidence to support charges of forgery and "supported what Dr. Gleick has stated publicly regarding his interaction with the Heartland Institute." [ 82 ] [ 83 ]
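As a rough, illustrative calculation of what the basic water requirement discussed above implies at scale (the 50 liters per person per day figure is Gleick's; the population size is an arbitrary assumption chosen only for illustration):

50\ \mathrm{L/person/day} \times 10^{6}\ \mathrm{persons} = 5\times 10^{7}\ \mathrm{L/day} = 5\times 10^{4}\ \mathrm{m^{3}/day} \approx 1.8\times 10^{7}\ \mathrm{m^{3}/year}

In other words, meeting only this basic domestic requirement for a city of one million people takes on the order of 18 million cubic metres of water per year, before any agricultural or industrial uses are counted.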
https://en.wikipedia.org/wiki/Peter_Gleick
Peter Hannaford AC (born 15 July 1939) is an Australian academic and university professor. [ 1 ] He is the Director of the Centre for Atom Optics and Ultrafast Spectroscopy at Swinburne University of Technology in Melbourne , and winner of the Walter Boas Medal in 1985. [ 2 ] Hannaford studied at the University of Melbourne , where he earned his Bachelor of Science (BSc) in 1961, his Master of Science (MSc) in 1963, and his Doctor of Philosophy (Ph.D.) in 1968. [ 2 ] In 1964, Hannaford was named physics tutor at Ormond College at the University of Melbourne. In 1967, he was named research scientist at the CSIRO ( Commonwealth Scientific and Industrial Research Organization ), Division of Chemical Physics. From 1971–83, he was a senior research scientist and later principal research scientist (1974) at the CSIRO Division of Chemical Physics. [ citation needed ] From 1972–73, he was a guest scientist at the University of Reading (JJ Thomson Physical Laboratory) in the UK. From 1981–82, he was a Science Research Council senior research fellow at the University of Reading. In 1983, he was a senior principal research scientist at CSIRO, Division of Chemical Physics. In 1984, he was a William Evans Visiting Fellow at the University of Otago in New Zealand . In 1985, he was a Member of the Australian Academy of Science National Committee for Spectroscopy. In 1989, he was a Royal Society Guest Research Fellow and Visiting Fellow at Christ Church College , University of Oxford , UK. From 1989–2001, he was the chief research scientist at the CSIRO Division of Materials Science and Technology in Clayton, Victoria. From 1990–2000, he was a Professorial Associate at the University of Melbourne . In 1991, he was an Australian Academy of Science-Royal Society Exchange Fellow in Oxford, England. [ 2 ] In 1992, he was a guest scientist at the Max Planck Institute of Quantum Optics in Garching , Germany. From 1993–2003, he was the chair of the National Committee for Spectroscopy of the Australian Academy of Science . In 1997, he was a professorial fellow at Swinburne University of Technology in Hawthorn, Victoria . In 1999, he was director of the Centre for Atom Optics and Ultrafast Spectroscopy (CAOUS) at Swinburne University. From 1999–2003, he was a guest scientist at the European Laboratory for Non-Linear Spectroscopy (LENS) at the Università degli Studi di Firenze (Italy). In 2000, he was a guest professor at the University of Innsbruck (Austria). In 2002, he was a member of Commission C15 (Atomic, Molecular and Optical Physics) for the International Union of Pure and Applied Physics . In 2003, he was named a university distinguished professor at Swinburne University of Technology . [ 2 ] Hannaford is the institutional director of the Australian Research Council Centre of Excellence for Quantum-Atom Optics at Swinburne University of Technology . While at CSIRO in the 1970s to the 1990s, he pioneered new techniques in high-resolution and time-resolved laser spectroscopy, which have been important for characterization of the spectroscopic properties of a wide range of atoms. He has published over 150 papers in scientific journals and books. [ 2 ] His current projects include: BEC on a chip, magnetic lattices , quantum coherence , molecular BEC, high harmonic generation and ultrafast spectroscopy. [ 3 ]
https://en.wikipedia.org/wiki/Peter_Hannaford
Peter J. H. Scott FRSC CChem (born July 27, 1979) is a British and American chemist and radiochemist who is the Paul L. Carson Collegiate Professor of Radiology at the University of Michigan in the United States. He is also a professor of radiology, professor of pharmacology and professor of medicinal chemistry, and a core member of the Rogel Cancer Center . [ 1 ] [ 2 ] He is Chief of Nuclear Medicine and director of the University of Michigan Positron Emission Tomography ( PET ) Center, [ 3 ] and runs a research group developing new radiochemistry methodology and novel PET radiotracers . [ 4 ] Peter Scott was born and grew up in North East England and attended Whitley Bay High School . He received his undergraduate degree with first class honors in medicinal and pharmaceutical chemistry from Loughborough University in 2001, after conducting research with Raymond Jones. He subsequently obtained his PhD in organic chemistry from Durham University in 2005, where he was a member of Ustinov College , under the mentorship of Patrick G. Steel. Scott then moved to the United States to undertake postdoctoral research in organometallic chemistry at SUNY Buffalo under Huw Davies , and PET radiochemistry at the University of Michigan with Michael Kilbourn. Scott runs a research group developing new metal-catalyzed methods for incorporating fluorine-18 and carbon-11 into bioactive molecules as well as novel PET radiotracers for imaging neurodegenerative disorders . His methodology work aims to improve the synthesis of PET radiotracers and he has an active collaboration with Prof. Melanie Sanford's group [ 5 ] that is funded by NIBIB . [ 6 ] Together they have developed methods for the Cu-mediated radiofluorination [ 7 ] and radiocyanation of (mesityl)(aryl)iodonium salts, [ 8 ] boronic acids [ 9 ] [ 10 ] and stannanes, [ 10 ] [ 11 ] as well as new methods for radiofluorination of C-H bonds [ 12 ] [ 13 ] [ 14 ] and aryl halides. [ 15 ] Scott has also introduced methods for green radiochemistry, [ 16 ] for which he received the Michigan Green Chemistry Governor's Award in 2014. [ 17 ] In 2019, Prof. Scott was elected as a Fellow of the Royal Society of Chemistry ( FRSC ), [ 18 ] and received a Distinguished Investigator Award from the Academy for Radiology & Biomedical Imaging Research. [ 19 ] In 2021, he was honored as a Fellow of the Society of Nuclear Medicine and Molecular Imaging , [ 20 ] and was also recognized by SNMMI in 2023 with the Sam Gambhir Trailblazer Award, [ 21 ] created to honor the legacy and memory of the late Sanjiv Sam Gambhir . 1. Linker Strategies in Solid-phase Organic Synthesis (Editor, 2009) [ 22 ] 2. Solid-Phase Organic Syntheses, Volume 2: Solid-Phase Palladium Chemistry (Wiley Series on Solid-Phase Organic Syntheses) (Editor, 2012) [ 23 ] 3. Radiochemical Syntheses: Radiopharmaceuticals for Positron Emission Tomography, Volume 1 (Editor, 2012) [ 24 ] 4. Radiochemical Syntheses: Further Radiopharmaceuticals for Positron Emission Tomography and New Strategies for Their Production, Volume 2 (Editor, 2015) [ 25 ] 5. Handbook of Radiopharmaceuticals (2nd Edition): Methodology and Applications (Editor, 2021) [ 26 ] 6. Production and Quality Control of Fluorine-18 Labelled Radiopharmaceuticals (co-authored with International Atomic Energy Agency , 2021) [ 27 ]
https://en.wikipedia.org/wiki/Peter_J._H._Scott
Peter Koch (October 15, 1920 – February 14, 1998) was an American engineer and wood scientist who was considered an expert in the field of wood technology by his peers. [ 1 ] From 1963 to 1982, Koch led a team of US Forest Service scientists in forest products utilization research specific to forests of the southeastern US . Accomplishments by Koch and his research team included eight US patents plus hundreds of research publications. [ 1 ] Peter Koch was the youngest of three sons born to Elers and Gerda (Heiberg-Jurgensen) Koch in Missoula, Montana. [ 3 ] In 1942, he graduated from Montana State College of Agriculture and Engineering at Bozeman with a degree in mechanical engineering. [ 3 ] After graduation, Koch enlisted in the United States Army Air Corps as a pilot, mostly flying bombers over the Hump into China from 1942 to 1946, attaining the rank of captain. [ 3 ] [ 1 ] From 1946 to 1952, Koch worked in Washington state at the Stetson-Ross Machine Company – a company that designed lumber-processing machinery. [ 3 ] [ 4 ] In 1950, Koch married Doris Ann Hagen. In 1952, he enrolled in graduate school at the University of Washington and received a PhD in wood technology in 1954. [ 3 ] [ 1 ] Afterwards, Koch taught at Michigan State University (1955–1957) and was vice-president of a hardwood lumber producer – the Champlin Company – in Rochester, New Hampshire (1957–1963). [ 5 ] [ 6 ] By the 1960s, there was concern among timber industries in the South about the lack of forest product utilization research into the use of the smaller trees that had replaced the virgin pine forests. To address that concern, the US Forest Service recruited Peter Koch in 1963 to head a newly formed wood utilization research program at the Southern Forest Experiment Station in Pineville, Louisiana . [ 1 ] One of the first staff members enlisted by Koch was Chung-Yun Hse , a young graduate student of Taiwanese descent, to focus on adhesives for gluing southern pine plywood. [ 1 ] During his tenure, Koch and his staff of scientists generated numerous technological advancements. In 1982, Koch returned to Montana and served as chief wood scientist at the Forest Service Intermountain Research Station in Missoula until 1985. [ 3 ] In 1985, Koch established his own corporation – Wood Science Laboratory, Inc. – in Corvallis, Montana . [ 7 ] In 1996, Koch produced his last publication – a thousand-page tome on lodgepole pine based on more than a decade of research. [ 3 ] Peter Koch died on February 14, 1998, in Missoula, Montana. [ 3 ]
https://en.wikipedia.org/wiki/Peter_Koch_(wood_scientist)
Peter Koellner is professor of philosophy at Harvard University . He received his Ph.D. from MIT in 2003. His main areas of research are mathematical logic , specifically set theory , and philosophy of mathematics , philosophy of physics , analytic philosophy , and philosophy of language . [ 1 ] In 2008 Koellner was awarded a Kurt Gödel Centenary Research Prize Fellowship. Currently, Koellner serves on the American Philosophical Association 's Advisory Committee to the Eastern Division Program Committee in the area of Logic. [ 2 ] According to a review by Pierre Matet on Zentralblatt MATH , his joint paper with Hugh Woodin Incompatible Ω-Complete Theories contains an illuminating discussion of the issues involved, which makes it recommended reading for anyone interested in modern set theory. [ 3 ]
https://en.wikipedia.org/wiki/Peter_Koellner
Peter Manfred Gruber (28 August 1941, Klagenfurt – 7 March 2017, Vienna ) was an Austrian mathematician working in geometric number theory [ 1 ] as well as in convex and discrete geometry. [ 2 ] [ 3 ] Gruber obtained his PhD at the University of Vienna in 1966, under the supervision of Nikolaus Hofreiter . [ 4 ] From 1971, he was Professor at the University of Linz , and from 1976, at the TU Wien . [ 1 ] He was a member of the Austrian Academy of Sciences , [ 1 ] [ 5 ] a foreign member of the Russian Academy of Sciences, [ 1 ] [ 6 ] and a corresponding member of the Bavarian Academy of Sciences and Humanities . [ 1 ] [ 7 ] His past doctoral students include Monika Ludwig . [ 4 ]
https://en.wikipedia.org/wiki/Peter_M._Gruber
Sir Peter Mansfield FRS [ 1 ] [ 2 ] (9 October 1933 – 8 February 2017) [ 3 ] was an English physicist who was awarded the 2003 Nobel Prize in Physiology or Medicine , shared with Paul Lauterbur , for discoveries concerning Magnetic Resonance Imaging (MRI). Mansfield was a professor at the University of Nottingham . [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] Mansfield was born in Lambeth , London on 9 October 1933, to Sidney George (b. 1904, d. 1966) and Lillian Rose Mansfield (b. 1905, d. 1984; née Turner). Mansfield was the youngest of three sons; his older brothers were Conrad (b. 1925) and Sidney (b. 1927). [ 4 ] Mansfield grew up in Camberwell . During World War II he was evacuated from London, initially to Sevenoaks and then twice to Torquay , Devon, where he was able to stay with the same family on both occasions. [ 4 ] On returning to London after the war he was told by a school master to take the 11+ exam. Having never heard of the exam before, and having no time to prepare, Mansfield failed to gain a place at the local grammar school. His mark was, however, high enough for him to go to a Central School in Peckham . At the age of 15 he was told by a careers teacher that science wasn't for him. He left school shortly afterwards to work as a printer's assistant. At the age of 18, having developed an interest in rocketry, Mansfield took up a job with the Rocket Propulsion Department of the Ministry of Supply in Westcott, Buckinghamshire . Eighteen months later he was called up for National Service . After serving in the army for two years, Mansfield returned to Westcott and started studying for A-levels at night school. Two years later he was admitted to study physics at Queen Mary College, University of London . Mansfield graduated with a BSc from Queen Mary in 1959. His final-year project, supervised by Jack Powles, was to construct a portable, transistor-based spectrometer to measure the Earth's magnetic field . Towards the end of this project Powles offered Mansfield a position in his NMR (Nuclear Magnetic Resonance) research group. Powles' interest was in studying molecular motion, mainly in liquids. Mansfield's project was to build a pulsed NMR spectrometer to study solid polymer systems. He received his PhD in 1962; his thesis was titled Proton magnetic resonance relaxation in solids by transient methods . [ 10 ] Following his PhD, Mansfield was invited to carry out postdoctoral research with Charlie Slichter at the University of Illinois at Urbana–Champaign , where he carried out an NMR study of doped metals. In 1964, Mansfield returned to England to take up a place as a lecturer at Nottingham University where he could continue his studies in multiple-pulse NMR. He was successively appointed Senior Lecturer in 1968 and Reader in 1970. During this period his team developed the MRI equipment with the help of grants from the Medical Research Council . It was not until the 1970s, with Paul Lauterbur 's and Mansfield's developments, that NMR could be used to produce images of the body. In 1979 Mansfield was appointed Professor in the Department of Physics, a post he held until his retirement in 1994. Mansfield is credited with inventing 'slice selection' for MRI - i.e. the method by which a localised axial slice of a subject can be selectively imaged, rather than the entire subject [ 11 ] - and with understanding how the radio signals from MRI can be mathematically analysed, making interpretation of the signals into a useful image a possibility.
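A minimal sketch of the physics behind slice selection, assuming a linear magnetic field gradient along the slice axis; the gradient strength and RF bandwidth in the numerical example are illustrative assumptions, not figures taken from Mansfield's work:

f(z) = \frac{\gamma}{2\pi}\left(B_0 + G_z z\right), \qquad \Delta z = \frac{\Delta f}{(\gamma/2\pi)\, G_z}

For protons, \gamma/2\pi \approx 42.6 MHz/T, so with a gradient G_z = 10 mT/m an RF pulse of bandwidth \Delta f = 1 kHz excites a slice of thickness \Delta z \approx 1000 / (42.6 \times 10^{6} \times 0.01) m \approx 2.3 mm. Only spins whose Larmor frequency falls within the pulse bandwidth are excited, which is why a single localised slice, rather than the entire subject, is imaged.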
He is also credited with showing how fast imaging could be made possible by developing the MRI protocol called echo-planar imaging . Echo-planar imaging allows T2*-weighted images to be collected many times faster than previously possible. It has also made functional magnetic resonance imaging (fMRI) feasible. Whilst working at Nottingham University, Mansfield tested the first full-body prototype, installed just before Christmas 1978. Mansfield was so keen that he volunteered to test it himself and produced the first scan of a live patient. [ 12 ] The prototype machine is now an exhibit in the Medical Section of the Science Museum . [ 13 ] Mansfield married Jean Margaret Kibble (b. 1935) on 1 September 1962. [ 16 ] He had two daughters. Mansfield died in Nottingham on 8 February 2017, aged 83. [ 17 ]
https://en.wikipedia.org/wiki/Peter_Mansfield
Peter Nemes is a Hungarian-American chemist who is active in the fields of bioanalytical chemistry, mass spectrometry , cell/developmental biology, neuroscience, and biochemistry . Nemes has been an associate professor at the University of Maryland, College Park (UMD) since January 2018. Prior to his appointment there, he was an assistant professor at the Department of Chemistry at George Washington University (Washington, DC) , where he taught bioanalytical chemistry . [ 1 ] Nemes graduated summa cum laude with a Master of Science (M.Sc.) from the Eötvös Loránd University in 2004. His original thesis research was conducted in the Department of Mass Spectrometry at the Hungarian Academy of Sciences, Budapest, Hungary. Under the mentorship of Vekey Karoly, Nemes studied the formation of amino acid clusters in the gas phase upon electrospray ionization, 1 such as the magic serine clusters that preferentially incorporate amino acids and sugars of certain chirality matching those enriched on Earth. During his PhD in Akos Vertes ' laboratory at the Department of Chemistry, The George Washington University (GWU; Washington, DC) between 2005 and 2009, he established the significance of spraying modes/regimes during electrospray ionization (ESI) mass spectrometry (MS) in efficient and soft ion generation 2 and in the conformational state of proteins 3 . He also invented and patented laser ablation electrospray ionization (LAESI) 4 mass spectrometry for in situ and in vivo analysis 4 of tissues and single cells 5 and for 2- and 3-dimensional molecular imaging MS 6 of biological samples at ambient conditions. He completed postdoctoral training in analytical neuroscience with Jonathan V. Sweedler at the University of Illinois Urbana–Champaign , IL. There, he developed capillary electrophoresis ESI MS instruments 7 and built a unique matrix-assisted laser desorption ionization (MALDI)–C 60 secondary ionization mass spectrometry (SIMS) dual-ion-source mass spectrometer 8 to enable the analysis of small and large molecules in single cells. 9 In 2011, Nemes became a Staff Fellow and then also Laboratory Leader at the US Food and Drug Administration (2011–2013). As an independent investigator, Nemes developed a high-throughput approach 10 based on Direct analysis in real time to enable the rapid differentiation of heparin from other glycosaminoglycans , including adulterated products confiscated by the FDA during the 2008 heparin crisis . Nemes also established the mass spectrometry facility at the White Oak Headquarters of the US FDA with several mass spectrometers to support regulatory science. Since becoming a professor in 2013, Nemes has conducted research at the interface of bioanalytical instrumental chemistry and neurodevelopmental biology and has taught courses in analytical chemistry and mass spectrometry. In Fall 2013, Nemes became an assistant professor at the Department of Chemistry at George Washington University (Washington, DC) , where he taught analytical chemistry . [ 1 ] In January 2018, Nemes became an associate professor at the Department of Chemistry & Biochemistry, the University of Maryland, College Park (UMD), where he has been teaching instrumental analytical chemistry and biological mass spectrometry. Research in the Nemes Laboratory develops ultrasensitive and microanalytical platforms for high-resolution MS to study metabolic and proteomic processes with implications in cell and neurodevelopmental biology and health research.
Using custom-built single-cell MS instruments, his research group has uncovered previously unknown metabolomic 11 and proteomic 12 differences between single embryonic cells that are fated to give rise to different types of tissues during vertebrate development. Their highly sensitive bottom-up proteomic approach enabled the detection of intra-cell-type heterogeneity in the embryo. 13 Further, the group has also discovered molecules that are able to alter normal cell fate decisions in the embryo. 11 The investigators next developed microprobe technologies that enabled the direct, in vivo analysis of these small 14 and large 15 molecules in cells in X. laevis embryos undergoing normal development. These results challenge basic understanding of molecular processes that are necessary for normal embryonic body and brain development and have important implications for efforts to understand, promote, and protect the health of humans and animals. 16 Nemes has authored 46 peer-reviewed publications and 6 book chapters, and has given ~200 presentations. In 2015, Nemes was named a Beckman Young Investigator by the Arnold and Mabel Beckman Foundation and received the Arthur F. Findeis Award for Achievements by a Young Analytical Chemist from the Division of Analytical Chemistry of the American Chemical Society . In 2017, he received the DuPont Young Professor Award, the Robert J. Cotter New Investigator Award from the US Human Proteome Organization , and a Research Award from the American Society for Mass Spectrometry (ASMS). In 2018, Nemes was awarded the Georges Guiochon Faculty Fellowship from HPLC, Inc. Research in the Nemes Lab has been continuously funded by professional societies, companies, and federal funding agencies. Nemes holds a CAREER award from the Directorate of Biological Research of the National Science Foundation (NSF) and an Outstanding Research Award (R35) from the National Institute of General Medical Sciences (NIGMS). 1 P. Nemes, G. Schlosser, and K. Vekey*, Amino acid cluster formation studied by electrospray ionization mass spectrometry, J. Mass Spectrom. 2005, 40, 43, https://doi.org/10.1002/jms.771 2 P. Nemes, I. Marginean, and A. Vertes*, Spraying mode effect on droplet formation and ion chemistry in electrosprays, Anal. Chem. 2007, 79, 3105, https://doi.org/10.1021/ac062382i 3 P. Nemes , S. Goyal, and A. Vertes*, Conformational and noncovalent complexation changes in proteins during electrospray ionization, Anal. Chem. 2008, 80 , 387 – 395, https://doi.org/10.1021/ac0714359 4 P. Nemes and A. Vertes*, Laser ablation electrospray ionization for atmospheric pressure, in vivo, and imaging mass spectrometry, Anal. Chem. 2007, 79, 8098, https://doi.org/10.1021/ac071181r 5 B. Shrestha, P. Nemes, and A. Vertes*, Ablation and analysis of small cell populations and single cells by consecutive laser pulses, Appl. Phys. A 2010, https://doi.org/10.1007/s00339-010-5781-2 6 P. Nemes , A. A. Barton, and A. Vertes*, Three-dimensional imaging of metabolites in tissues under native conditions by laser ablation electrospray ionization mass spectrometry, Anal. Chem. 2009, 81, 6668, https://doi.org/10.1021/ac900745e 7 P. Nemes, S. S. Rubakhin, J. Aerts, and J. V. Sweedler*, Qualitative and quantitative metabolomic investigation of single neurons by capillary electrophoresis electrospray ionization mass spectrometry, Nat. Protoc. 2013, 8, 783, https://doi.org/10.1038/nprot.2013.035 8 E. J. Lanni, S. J. B. Dunham, P. Nemes, S. S. Rubakhin, J. V.
Sweedler*, Biomolecular imaging with a C 60 -SIMS/MALDI dual ion source hybrid mass spectrometer: Instrumentation, matrix enhancement and single cell analysis, J. Am. Soc. Mass Spectrom. 2014, 11, 1897–1907, https://doi.org/10.1007/s13361-014-0978-9 9 S. S. Rubakhin, E. V. Romanova, P. Nemes , and J. V. Sweedler, Profiling metabolites and peptides in single cells, Nat. Methods 2011, 8, S20 – S29, https://doi.org/10.1038/nmeth.1549 10 P. Nemes*, W. J. Hoover, and D. A. Keire, High-throughput differentiation of heparin from other glycosaminoglycans by pyrolysis mass spectrometry, Anal. Chem. 2013, 85, 7405–7412, https://doi.org/10.1021/ac401318q 11 R. M. Onjiko, S. A. Moody, and P. Nemes*, Single-cell mass spectrometry reveals small molecules that affect cell fates in the 16-cell embryo, Proc. Natl. Acad. Sci. USA 2015, 112, 6545, https://doi.org/10.1073/pnas.1423682112 12 C. Lombard-Banek, S. A. Moody, and P. Nemes*, Single-cell mass spectrometry for discovery proteomics: quantifying translational cell heterogeneity in the 16-cell frog ( Xenopus ) embryo, Angew. Chem. Int. Ed. 2016, 55, 2454, https://doi.org/10.1002/anie.201510411 13 C. Lombard-Banek, Sally A. Moody, and P. Nemes * , Label-free quantification of proteins in single embryonic cells with neural fate in the cleavage-stage frog ( Xenopus laevis ) embryo using capillary electrophoresis electrospray ionization high resolution mass spectrometry (CE-ESI-HRMS), Mol. Cell. Prot. 2016, 15, 2756, https://doi.org/10.1074/mcp.M115.057760 14 R. M. Onjiko, E. P. Portero, S. A. Moody, and P. Nemes*, In situ microprobe single-cell capillary electrophoresis mass spectrometry: Metabolic reorganization in single differentiating cells in the live vertebrate ( X. laevis ) embryo, Anal. Chem. 2017, 89, 7069, https://doi.org/10.1021/acs.analchem.7b00880 15 C. Lombard-Banek, S. A. Moody, M. Chiara Manzini, and P. Nemes*, Microsampling capillary electrophoresis mass spectrometry enables single-cell proteomics in complex tissues: developing cell clones in live Xenopus laevis and zebrafish embryos, Anal. Chem. 2019, 91, 4797, https://doi.org/10.1021/acs.analchem.9b00345 16 C. Lombard-Banek, E. P. Portero, R. M. Onjiko, and P. Nemes*, New-generation mass spectrometry expands the toolbox of cell and developmental biology, genesis 2016, 55, e23012, https://doi.org/10.1002/dvg.23012
https://en.wikipedia.org/wiki/Peter_Nemes
The Peter P. Chen Award is presented annually to honor one individual for their contributions to the field of conceptual modeling . Named after the computer scientist Peter Chen , the award was started in 2008 by the publisher Elsevier as a means of celebrating the 25th anniversary of the journal Data & Knowledge Engineering . It is presented at the Entity Relationship (ER) International Conference on Conceptual Modeling. Winners are given a plaque and a cash prize, and are invited to give a keynote speech. [ 1 ] There are five criteria for selecting the winner: research, how the nominee has contributed to advancing the field of conceptual modeling; service, organizational contributions to related meetings, conferences, and editorial boards; education, mentoring of doctoral students in the field; contribution to practice, contributions to technology transfer, commercialization, and industrial projects; and international reputation. [ 2 ] The selection committee is composed of the Steering Committee chair, two Program Committee members who have been appointed by the Steering Committee chair, and the recipients from the last two years. [ 3 ]
https://en.wikipedia.org/wiki/Peter_P._Chen_Award
Peter Howard Rheinstein (born September 7, 1943) is an American physician , lawyer , author , and administrator (both private and governmental). He was an official of the Food and Drug Administration (FDA) from 1974 to 1999. Rheinstein, a General Motors Scholar, received a B.A. with high honors from Michigan State University in 1963, an M.S. in mathematics from Michigan State University in 1964, an M.D. from Johns Hopkins University in 1967, and a J.D. from the University of Maryland School of Law in 1973. At Michigan State University , Rheinstein was noted for his facility in mathematics. [ 2 ] [ 3 ] Rheinstein was director of the Drug Advertising and Labeling Division, Food and Drug Administration, Rockville (1974–1982). [ 4 ] [ 5 ] He was acting deputy director of the Office of Drugs (1982–83), acting director of the Office of Drugs (1983–84), director of the Office of Drug Standards (1984–90), [ 6 ] [ 7 ] and director of the medicine staff in the Office of Health Affairs (1990–99). [ 8 ] [ 9 ] While at the FDA, Rheinstein developed precedents for Food and Drug Administration regulation of prescription drug promotion, initiated FDA's first patient medication information program, implemented the Drug Price Competition and Patent Term Restoration Act of 1984, and authored medication goals for Healthy People 2000 and 2010. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] Judy Woodruff interviewed Rheinstein about generic drug safety on the MacNeil/Lehrer NewsHour on December 11, 1985. [ 16 ] Stone Phillips interviewed Rheinstein about drug labeling on Dateline NBC on March 31, 1992. [ 17 ] From 1999 to 2004, Rheinstein was senior vice president for medical and clinical affairs at Cell Works, Inc., in Baltimore. Among other projects, Cell Works wanted to develop a blood test for anthrax, similar to a system for cancer cells it produced. "It's something that companies like ours can incorporate into our diagnostic technology," Rheinstein told the Washington Times . Biodefense projects "create new technologies, the spin-offs of which can be commercialized into some pretty good things." [ 18 ] In 2000 Rheinstein became president of Severn Health Solutions in Severna Park, Maryland. In 2010 Rheinstein was named president of the Academy of Physicians in Clinical Research [ 19 ] and in 2011 was named chairman of the American Board of Legal Medicine . Rheinstein was named chairman of the United States Adopted Names Council in 2012. Rheinstein is a member of Phi Kappa Phi [ 20 ] and vice president of the Intercultural Friends Foundation. [ 21 ] Rheinstein is publisher of Discovery Medicine and chairman of the MedData Foundation. [ 22 ] He is past president of the Academy of Medicine of Washington, DC. [ 23 ] Sarah Gonzalez interviewed Rheinstein for Planet Money , This Is Your Brain on Drug Ads , on September 8, 2021. [ 24 ]
https://en.wikipedia.org/wiki/Peter_Rheinstein
Peter Rona , born Peter Rosenfeld (13 May 1871, Budapest – February or March 1945), was a Hungarian-German Jewish physician and physiologist . [ 1 ] This article about a biochemist is a stub . You can help Wikipedia by expanding it . This article about a German scientist is a stub . You can help Wikipedia by expanding it . This article about a Hungarian scientist is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Peter_Rona_(physician)
Peter Michael Rosenthal (June 1, 1941 – May 25, 2024) was an American-Canadian mathematician, lawyer, and activist who was Professor of Mathematics at the University of Toronto , [ 5 ] and an adjunct professor of Law at the University of Toronto Law School . [ 6 ] [ 7 ] Rosenthal grew up in a Jewish family in Flushing , Queens , New York with his parents, Harold (1913–1983) and Esther (1914–1985), and two younger brothers, Erik and Walter. [ 8 ] [ 9 ] Rosenthal described himself as a " red diaper baby ". [ 4 ] His father was a high school math teacher and his mother was a left-wing activist [ 4 ] who had been a member of the Communist Party in her youth. [ 10 ] His maternal grandmother, Sonia, had immigrated to New York from Russia after the failed 1905 Russian Revolution and was a supporter of the Bolsheviks . [ 11 ] Rosenthal himself was also a committed activist and in 1960 participated in protests at the Woolworth 's in Flushing in solidarity with the sit-ins at Woolworth's in Greensboro, North Carolina protesting racial segregation. [ 10 ] Rosenthal had poor grades in high school and barely graduated, but after nearly failing in college due to the time he spent attending civil rights and anti-nuclear protests, he began to focus on his studies at Queens College , excelling in math. [ 4 ] Erik Rosenthal is an emeritus professor of mathematics at the University of New Haven . [ 12 ] Their youngest brother, Walter (Wally) Rosenthal, is a community activist and trade unionist in New York City who taught at York College after retiring from the United States Postal Service . Both Erik and Wally were civil rights and anti-war activists in the 1960s. [ 13 ] Rosenthal graduated from Queens College, City University of New York with a B.S. in Mathematics in 1962. [ 14 ] In 1963 he obtained an MA in Mathematics and in 1967 a Ph.D. in Mathematics from the University of Michigan ; [ 14 ] his Ph.D. thesis advisor was Paul Halmos . [ 15 ] His thesis, "On lattices of invariant subspaces" [ 15 ] concerns operators on Hilbert space , and most of his subsequent research was in operator theory and related fields. Much of his work was related to the invariant subspace problem , the still-unsolved problem of the existence of invariant subspaces for bounded linear operators on Hilbert space. He made substantial contributions to the development of reflexive and reductive operator algebras and to the study of lattices of invariant subspaces, composition operators on the Hardy-Hilbert space and linear operator equations. His publications include many with his long-time collaborator Heydar Radjavi, [ 4 ] including the book Invariant Subspaces (Springer-Verlag, 1973; second edition 2003). In 1967, Rosenthal moved to Canada to accept an assistant professorship at the University of Toronto where he remained for the rest of his career, eventually becoming a full professor and retiring as a professor emeritus . [ 3 ] Rosenthal supervised the Ph.D. theses of fifteen students [ 15 ] and the research work of a number of post-doctoral fellows. In parallel with his career in mathematics, Rosenthal pursued a career in law. While teaching at the University of Toronto in 1969, Rosenthal was arrested while giving a speech at an anti-Vietnam War demonstration outside of the US consulate in Toronto . [ 4 ] Representing himself in court, he was acquitted of obstructing police but convicted of causing a disturbance, but was able to have his conviction overturned on appeal. 
With his newfound interest in the law, Rosenthal began volunteering as a paralegal representing friends and activists who had been arrested and charged with minor criminal offences at protests or for civil disobedience or other activist-related offences, particularly related to civil rights or anti-racist activity. [ 16 ] [ 8 ] Rosenthal was threatened by the Law Society of Upper Canada for practicing law without a license, and he hired Charles Roach to represent him before the law society. The law society abandoned its action after Roach filed a motion to move the disciplinary proceeding to court. [ 4 ] In the 1980s, Rosenthal worked with Roach representing 21 peace activists who had been charged in relation to protests against Litton Industries and their work on manufacturing components for cruise missiles , with Rosenthal arguing that Litton executives were endangering the safety of Canadians through the company's products. [ 8 ] Rosenthal was also involved in a campaign to protest an invitation to South African ambassador Glenn Babb to speak at the University of Toronto in defence of South Africa's apartheid regime. Rosenthal was one of four University of Toronto professors who sought an injunction to stop Babb, along with a declaration by the court that apartheid was a crime against humanity . While this effort was unsuccessful, it helped lead to a later decision by the university to divest from South Africa . [ 8 ] Roach encouraged Rosenthal to go to law school so that he could represent clients in more serious cases, and he was admitted to the University of Toronto Law School in 1987 at the age of 46. [ 17 ] [ 4 ] He went on to obtain an LL.B. in 1990 and was called to the Ontario bar in 1992. [ 6 ] Rosenthal joined Roach's firm as a partner. [ 8 ] He was a major figure in the Toronto legal community, and was profiled by Toronto Life , [ 17 ] The Globe and Mail , [ 18 ] and the Toronto Star . [ 4 ] In 2006, Now Magazine named Rosenthal Toronto's "Best activist lawyer". [ 19 ] In May 2016, he was awarded a Law Society Medal by the Law Society of Upper Canada. [ 20 ] Rosenthal provided legal services for various leftist causes and marginalized clients for free. He was also active in civil litigation, suing police and public officials, [ 16 ] and participated in inquests into the police shootings of several Black men, representing the families of the deceased. [ 8 ] Rosenthal represented Miguel Figueroa , the leader of the Communist Party of Canada , in the case Figueroa v. Canada before the Supreme Court of Canada . [ 18 ] The court ruled in Figueroa's favor, striking down a law that prohibited small political parties from obtaining the same tax benefits as large parties. Rosenthal represented many activists who faced charges as a result of political protests, including Shawn Brant , John Clarke and the Ontario Coalition Against Poverty , Vicki Monague of Stop Dump Site 41, Dudley Laws and the Black Action Defence Committee , and Jaggi Singh and others arrested at the 2010 G20 Toronto summit protests , and wrote articles about some of those cases. [ 21 ] In 2006, Rosenthal represented Indigenous activists at the Ipperwash Crisis and cross-examined former Premier of Ontario Mike Harris over allegedly saying "I want the fucking Indians out of the park." [ 22 ] Rosenthal married his first wife, Helen Black (1942–2017), in 1960 when he was 19 and she was 18. Both of them were social activists and would become mathematicians at the University of Toronto. [ 3 ] They divorced in 1979, but remained friends.
[ 3 ] Rosenthal married his second wife, Carol Kitai, a medical doctor, in 1985. [ 4 ] Rosenthal was a lifelong Marxist and political activist. He was a red diaper baby ; his mother was active in the civil rights and anti-war movements. [ 4 ] [ 16 ] Rosenthal told the Globe and Mail : "I regard myself as a Marxist, but not one affiliated with any particular parties... I have a very strong hatred of racism and the grotesque economic inequalities such as exist in the world. It is very deeply embedded in my bones." [ 16 ] Rosenthal died in Toronto on May 25, 2024, at the age of 82. [ 1 ] He had suffered from heart disease and Parkinson’s disease , and died due to complications from COVID-19 . [ 8 ] The song "A Little Rain (A Song for Pete)" (2016), by the alternative rock band the Arkells , was inspired by Rosenthal. It was written by Arkells' lead singer Max Kerman, a friend of Rosenthal and his family. [ 11 ]
https://en.wikipedia.org/wiki/Peter_Rosenthal
Peter Richard Schreiner (born 17 November 1965 in Nuremberg , Germany) is a German chemist who is a professor at Justus Liebig University Giessen . As of 2022, his h-index is 73. [ 1 ] Schreiner studied at the University of Erlangen-Nuremberg , where he received his diploma in 1992 (with Paul von Ragué Schleyer ). [ 2 ] [ 3 ] He obtained his doctorate in organic chemistry in 1995 from the University of Georgia . [ 3 ] From 1996 to 1999 he was a Liebig Fellow at the University of Göttingen . While there he received the ADUC Prize [ 4 ] for his work. [ 5 ] From 1999 to 2002, he was an associate professor of Organic Chemistry at the University of Georgia . [ 5 ] Since 2002 he has been a professor of Organic Chemistry at the University of Giessen. From 2012 to 2015 he was vice president for Research and Promotion of Young Researchers at the University of Giessen . From 2006 to 2009 he was Dean of the Faculty of Biology and Chemistry. He has been a visiting professor at Eötvös Loránd University in Budapest, at the Technion in Haifa, at the University of Bordeaux, and at Stanford University. [ 5 ] Schreiner was German Chemical Society (GDCh) President from 2020 to 2021. [ 6 ] [ 7 ] [ 8 ] His research interests include organocatalysis, nanodiamonds (diamondoids), green chemistry, organic electronics, matrix isolation of reactive intermediates such as carbenes, and computational chemistry . He discovered the mechanism of tunnelling control of chemical reactions and demonstrated its prevalence, thus establishing a third driver of chemical reactions besides thermodynamic control (the energetically most favorable product) and kinetic control (the lowest barrier) (published in Science, 2011). He is one of the pioneers of organocatalysis, in which metal-containing catalysts are replaced by more environmentally friendly, customized organic catalysts. Schreiner found a way to integrate nanodiamonds, which occur naturally in natural gas and petroleum and have nanoscale dimensions, into coatings. In 1997 he helped develop thiourea organocatalysis . Schreiner has been a member of the Academy of Sciences Leopoldina since 2013. [ 2 ] In 2015 he was elected to the North Rhine-Westphalian Academy of Sciences and Arts. He is an honorary member of the Polish and Israeli Chemical Societies. In 2003 he received the Dirac Medal of the World Association of Theoretical and Computational Chemists and the Science Prize of the German Technion Society. In 2017 he was awarded the Adolf von Baeyer Medal of the GDCh. From 1995 to 1996 he was Project Coordinator of the Encyclopedia of Computational Chemistry. Since 2011 he has been Associate Editor of the Beilstein Journal of Organic Chemistry, since 2000 Editor of the Journal of Computational Chemistry, and since 2008 Principal Editor of the review journal WIREs Computational Molecular Science .
https://en.wikipedia.org/wiki/Peter_Schreiner_(chemist)
Peter Schwerdtfeger (born 1 September 1955) is a German scientist. He holds a chair in theoretical chemistry at Massey University in Auckland , New Zealand, serves as director of the Centre for Theoretical Chemistry and Physics, is the head of the New Zealand Institute for Advanced Study, and is a former president of the Alexander von Humboldt Foundation . Schwerdtfeger took his first degree in chemical engineering at Aalen University in 1976, after finishing a degree as a chemical-technical assistant at the Institute Dr. Flad in Stuttgart in 1973. He studied chemistry, physics and mathematics at Stuttgart University, where he received his PhD in theoretical chemistry in 1986. He received a Feodor-Lynen fellowship of the Alexander von Humboldt Foundation to join the chemistry department and later the School of Engineering at the University of Auckland in 1987. After a two-year research fellowship at the Research School of Chemistry ( Australian National University ), he returned to Auckland University in 1991 for a lectureship in chemistry. [ citation needed ] He received his habilitation and venia legendi (Privatdozent) in 1995 from the Philipps University of Marburg . He held a personal chair in physical chemistry for five years until moving to Massey University Albany in 2004, where he established the Centre for Theoretical Chemistry and Physics. [ citation needed ] He became a founding member of the New Zealand Institute for Advanced Study in 2007. [ citation needed ] In 2007 he received the Royal Society Australasian Chemistry Lectureship, and was the Källen Lecturer in Physics at Lund University (Sweden) in 2015. [ citation needed ] From 2017 to 2018 he was a member of the Centre for Advanced Study at the Norwegian Academy of Science and Letters . [ citation needed ] He has published 350 papers in international journals. He was awarded eight consecutive Marsden awards by the Royal Society of New Zealand . [ citation needed ] One of Schwerdtfeger's notable doctoral students is Patricia Hunt , a professor at Victoria University of Wellington . [ 1 ]
https://en.wikipedia.org/wiki/Peter_Schwerdtfeger
Peter Trefonas (born 1958) is a retired DuPont Fellow (a senior scientist) at DuPont , where he worked on the development of electronic materials . He is known for innovations in the chemistry of photolithography , particularly the development of anti-reflective coatings and polymer photoresists that are used to create circuitry for computer chips. This work has supported the patterning of smaller features during the lithographic process, increasing miniaturization and microprocessor speed. [ 1 ] [ 2 ] Peter Trefonas is a son of Louis Marco Trefonas, also a chemist, and Gail Thames. [ 3 ] He was inspired by Star Trek and the writings of Isaac Asimov , and created his own chemistry lab at home. [ 1 ] Trefonas attended the University of New Orleans , receiving his Bachelor of Science in chemistry in 1980. [ 4 ] While an undergraduate, Trefonas earned money by writing video games for early personal computers . These included Worm , a clone of the 1976 arcade video game Blockade , and a clone of the arcade game Hustle (1977), which itself was based on Blockade . Worm was the first of what would become many games in the snake video game genre for home computers. [ 5 ] [ 6 ] Trefonas also wrote a game based on Dungeons & Dragons . [ 7 ] Trefonas studied at the University of Wisconsin-Madison with Robert West , [ 4 ] completing a Ph.D. in inorganic chemistry in late 1984. [ 1 ] Trefonas became interested in electronic materials after working with West and chip makers from IBM to create organosilicon bilayer photoresists. [ 1 ] His thesis topic was "Synthesis, properties and chemistry of organosilane and organogermane high polymers" (1985). [ 8 ] Trefonas joined MEMC Electronic Materials in late 1984. In 1986, he and others co-founded Aspect Systems Inc., utilizing photolithography technology acquired from MEMC. [ 1 ] Trefonas worked at Aspect from 1986 to 1989. Then, through a succession of company acquisitions, he moved to the Shipley Company (1990–2000), Rohm and Haas (1997–2008), The Dow Chemical Company (2008–2019), and finally DuPont (from 2019). [ 9 ] [ 10 ] [ 11 ] [ 1 ] Trefonas has published at least 137 journal articles and technical publications. He has received 132 US patents. [ 12 ] [ 13 ] Throughout his career, Trefonas has focused on materials science and the chemistry of photolithography . By understanding the chemistry of photoresists used in lithography, he has been able to develop anti-reflective coatings and polymer photoresists that support finely tuned etching used in the production of integrated circuits . These materials and techniques make it possible to fit more circuits into a given area. [ 13 ] [ 2 ] Over time, lithographic technologies have developed to use ever smaller wavelengths of light. Trefonas has helped to overcome a number of apparent limits to the feature sizes that are achievable, developing photoresists that are responsive to 436 nm and 365 nm ultraviolet light and to deep-ultraviolet light at wavelengths as short as 193 nm. [ 14 ] [ 15 ] In 1989, Trefonas and others at Aspect Systems Inc. reported on extensive studies of polyfunctional photosensitive groups in positive photoresists. They studied diazonaphthoquinone (DNQ), a chemical compound used for dissolution inhibition of novolak resin in photomask creation. They mathematically modeled effects, predicted possible optimizations, and experimentally verified their predictions.
They found that chemically bonding together three DNQ molecules, creating a new molecule containing three dissolution inhibitors in a single molecule, led to better feature contrast, with better resolution and miniaturization. [ 16 ] These modified DNQs became known as "polyfunctional photoactive components" (PACs). This approach, which they termed polyphotolysis, [ 17 ] [ 18 ] [ 19 ] has also been referred to as the "Trefonas Effect." [ 14 ] [ 20 ] The technology of trifunctional diazonaphthoquinone PACs has become the industry standard in positive photoresists. [ 20 ] Their mechanism has been elucidated and relates to a cooperative behavior of each of the three DNQ units in the new trifunctional dissolution inhibitor molecule. Phenolic strings from the acceptor groups of PACs that are severed from their anchors may reconnect to living strings, replacing two shorter polarized strings with one longer polarized string. [ 21 ] Trefonas has also been a leader in the development of fast-etch organic Bottom Antireflective Coatings (BARCs). [ 22 ] BARC technology minimizes the reflection of light from the substrate when imaging the photoresist. Light that is used to form the latent image in the photoresist film can reflect back from the substrate and compromise feature contrast and profile shape. Controlling interference from reflected light results in the formation of a sharper pattern with less variability and a larger process window. [ 23 ] In 2014, Trefonas and others at Dow were named Heroes of Chemistry by the American Chemical Society , for the development of Fast Etch Organic Bottom Antireflective Coatings (BARCs). [ 22 ] In 2016, Trefonas was recognized with the SCI Perkin Medal for outstanding contributions to industrial chemistry. In 2018, Trefonas was named a Fellow of SPIE for "achievements in design for manufacturing & compact modeling." Peter Trefonas was elected to the National Academy of Engineering in 2018 for the "invention of photoresist materials and microlithography methods underpinning multiple generations of microelectronics". In 2019, DuPont recognized Trefonas with its top honor, the Lavoisier Medal , for "commercialized electronic chemicals which enabled customers to manufacture integrated circuits with higher density and faster speeds".
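A minimal sketch of the optics behind such anti-reflective layers, using the normal-incidence Fresnel formula and an idealized quarter-wave condition; the wavelength and refractive index below are illustrative assumptions, not values reported for any actual BARC formulation:

R = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^{2}, \qquad d_{\mathrm{BARC}} \approx \frac{\lambda}{4\, n_{\mathrm{BARC}}}

For example, at \lambda = 193 nm with n_{BARC} \approx 1.7, the quarter-wave thickness is roughly 193 / (4 \times 1.7) \approx 28 nm. Choosing the layer's thickness and optical constants so that light reflected from the substrate interferes destructively with light reflected at the top of the layer (and is partly absorbed on the way) lowers the effective reflectivity R seen by the resist, which is the effect described in the text above.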
https://en.wikipedia.org/wiki/Peter_Trefonas
Peter W. Heller (born 5 September 1957) is a former deputy mayor of the City of Freiburg im Breisgau in Germany, economist, investor and venture philanthropist. Peter W. Heller was born in Frankfurt am Main , the son of Wolfgang Heller, an entrepreneur in the printing industry, and Ingeborg Heller. He studied economics and philosophy at the University of St. Gallen , the University of Lausanne (Switzerland) and the University of Freiburg (Germany), where he obtained his PhD (Dr. rer. pol.) in 1988 with a thesis on "The Problem of Environmental Degradation in Economic Theory". In 1988, he married Micaela Heller. They have two children, Julia (born 1989) and Jakob (born 1991), and two grandchildren, Romeo (born 2019) and Milo (born 2023). In 1984, Heller was elected to the City Council of Freiburg and served on the council's committees for the municipal budget, environment, and culture. He left the council in 1990 after his election as deputy mayor. In 1990, the council of the City of Freiburg im Breisgau elected him as deputy mayor to head the local administration's environmental protection department, [ 1 ] the first of its kind in southern Germany, a position he held until 1996. It comprised the offices of municipal environmental law and regulations, waste management, urban park and forestry services, local energy management and climate policy. In 1993, the executive committee of ICLEI – Local Governments for Sustainability elected him as chairman, a role he held until 1997. ICLEI is a major global network and agency of local governments committed to environmental protection and local sustainability. [ 2 ] In 1997, Heller founded the investment company forseo GmbH, which holds equity positions in companies in the solar PV, wind power, and energy efficiency industries. He co-founded solar PV and wind power development companies in Chile, Brazil and France, and has been a member of several supervisory and advisory boards. Heller is the co-founder and executive director of the Canopus Foundation , a private non-profit organisation which became one of the pioneers of venture philanthropy in Germany. Since 2000 the foundation has been providing grants and business development assistance to social enterprises active in the field of rural energy access for low-income communities in developing countries. In 2001 he co-founded the "Basel Agency for Sustainable Energy (BASE)". Until 2016, the Canopus Foundation coordinated the international Solar for All initiative, founded jointly with Ashoka in 2008. [ 3 ] [ 4 ] Since 2014 the foundation has been promoting organisations in the field of new economic thinking that work towards more plurality in economic science, and encourages deliberation about the preconditions of a sustainable economy and society. In 2024 he was appointed to the advisory board of the Thales Academy in Freiburg im Breisgau.
https://en.wikipedia.org/wiki/Peter_W._Heller
Peters four-step chemistry is a systematically reduced mechanism for methane combustion , named after Norbert Peters , who derived it in 1985. [ 1 ] [ 2 ] [ 3 ] The mechanism consists of four global reactions (a reconstruction is sketched below). [ 4 ] The mechanism distinguishes four different regimes in which the individual reactions take place. The third reaction, known as the radical consumption layer, where most of the heat is released, and the first reaction, known as the fuel consumption layer, occur in a narrow region at the flame. The fourth reaction forms the hydrogen oxidation layer, whose thickness is much larger than that of the former two layers. Finally, the carbon monoxide oxidation layer, corresponding to the second reaction, is the largest of them all and oxidizes very slowly. [ 5 ] [ 6 ] A three-step mechanism was derived in 1987 by Peters and Forman A. Williams by assuming a steady-state approximation for the hydrogen radical; it is included in the sketch below as well. [ 7 ]
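The global reactions themselves are not reproduced in the text above. The following is a reconstruction of the form in which Peters' four-step mechanism and the Peters–Williams three-step mechanism are commonly quoted in the combustion literature; the exact grouping and notation are assumptions here and may differ slightly from the cited sources.

\begin{align}
\text{I}:\quad & \mathrm{CH_4 + 2\,H + H_2O \;\rightarrow\; CO + 4\,H_2} && \text{(fuel consumption)}\\
\text{II}:\quad & \mathrm{CO + H_2O \;\rightleftharpoons\; CO_2 + H_2} && \text{(CO oxidation)}\\
\text{III}:\quad & \mathrm{2\,H + M \;\rightarrow\; H_2 + M} && \text{(radical consumption)}\\
\text{IV}:\quad & \mathrm{O_2 + 3\,H_2 \;\rightleftharpoons\; 2\,H + 2\,H_2O} && \text{(hydrogen oxidation)}
\end{align}

Assuming the hydrogen radical H to be in steady state reduces these four steps to the three-step mechanism:

\begin{align}
\text{I}':\quad & \mathrm{CH_4 + O_2 \;\rightarrow\; CO + H_2 + H_2O}\\
\text{II}':\quad & \mathrm{CO + H_2O \;\rightleftharpoons\; CO_2 + H_2}\\
\text{III}':\quad & \mathrm{O_2 + 2\,H_2 \;\rightarrow\; 2\,H_2O}
\end{align}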
https://en.wikipedia.org/wiki/Peters_four-step_chemistry
In mathematics, the Peters polynomials s_n ( x ) are polynomials studied by Peters ( 1956 , 1956b ) given by a generating function ( Roman 1984 , 4.4.6; Boas & Buck 1958 , p. 37); a commonly cited form of this generating function is sketched below. They are a generalization of the Boole polynomials . This polynomial -related article is a stub . You can help Wikipedia by expanding it .
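The generating function itself is missing from the text above. The form usually attributed to the Peters polynomials, with two parameters λ and μ, is reconstructed here from the standard umbral-calculus literature; it is an assumption of this note, not a quotation of the sources cited above:

\sum_{n=0}^{\infty} s_n(x;\lambda ,\mu )\,\frac{t^{n}}{n!} \;=\; \bigl(1+(1+t)^{\lambda }\bigr)^{-\mu }\,(1+t)^{x}.

With μ = 1 this reduces to the generating function of the Boole polynomials, which is consistent with the statement that the Peters polynomials generalize them.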
https://en.wikipedia.org/wiki/Peters_polynomials
The Petersen matrix is a comprehensive description of systems of biochemical reactions used to model reactors for pollution control (engineered decomposition ) as well as environmental systems . It has as many columns as the number of relevant components involved ( chemicals , pollutants , biomasses , gases ) and as many rows as the number of processes involved (biochemical reactions and physical degradation). One further column is added to hold the description of the kinetics of each transformation ( rate equation ). [ 1 ] [ 2 ] The mass conservation principle for each process is expressed in the rows of the matrix: if all components are included (none omitted), then for each process the stoichiometric coefficients in its row, combined with the density rates ρ̇ j of the individual components, balance to zero. This can also be seen as the process stoichiometric relation . Moreover, the rate of variation of each component under the simultaneous effect of all processes can easily be assessed by summing its column weighted by the reaction rates r i of the individual processes. As an example, consider a system of a third-order reaction followed by a Michaelis–Menten enzyme reaction, in which the reagents A and B combine to form the substrate S (S = AB_2), which with the help of the enzyme E is transformed into the product P. The production rates for each substance, the resulting Petersen matrix, and the system's rate equations are sketched below; the Petersen matrix can be used to write the system's rate equations in compact form.
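The production rates, the Petersen matrix, and the rate equations for this example are missing from the text above. The following is a reconstruction under explicitly assumed kinetics: the third-order reaction is taken as A + 2B → S with mass-action rate r_1 = k[A][B]^2, and the enzymatic step S → P is taken to follow Michaelis–Menten kinetics with rate r_2 = v_max[S]/(K_M + [S]); the symbols k, v_max and K_M are illustrative and not taken from the source.

\begin{align}
\text{Process 1 (third order):}\quad & A + 2B \rightarrow S, & r_1 &= k\,[A][B]^{2},\\
\text{Process 2 (Michaelis–Menten):}\quad & S \xrightarrow{\;E\;} P, & r_2 &= \frac{v_{\max }[S]}{K_{M}+[S]}.
\end{align}

With rows for the two processes and columns for the components A, B, S and P (plus the rate column), the Petersen matrix reads

\begin{array}{c|cccc|c}
 & A & B & S & P & \text{rate}\\\hline
1 & -1 & -2 & +1 & 0 & k\,[A][B]^{2}\\
2 & 0 & 0 & -1 & +1 & \dfrac{v_{\max }[S]}{K_{M}+[S]}
\end{array}

Summing each column weighted by the process rates gives the system's rate equations:

\frac{d[A]}{dt}=-r_1,\qquad \frac{d[B]}{dt}=-2\,r_1,\qquad \frac{d[S]}{dt}=r_1-r_2,\qquad \frac{d[P]}{dt}=r_2 .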
https://en.wikipedia.org/wiki/Petersen_matrix
In geometry , the Petersen–Morley theorem states that, if a , b , c are three general skew lines in space, if a ′ , b ′ , c ′ are the lines of shortest distance respectively for the pairs (b,c) , (c,a) and (a,b) , and if p , q and r are the lines of shortest distance respectively for the pairs (a,a ′ ) , (b,b ′ ) and (c,c ′ ) , then there is a single line meeting at right angles all of p , q , and r . The theorem is named after Johannes Hjelmslev (who published his work on this result under his original name Johannes Trolle Petersen) and Frank Morley .
https://en.wikipedia.org/wiki/Petersen–Morley_theorem
The Peterson olefination (also called the Peterson reaction ) is the chemical reaction of α-silyl carbanions ( 1 in diagram below) with ketones (or aldehydes ) to form a β-hydroxysilane ( 2 ) which eliminates to form alkenes ( 3 ). [ 1 ] Several reviews have been published. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] One attractive feature of the Peterson olefination is that it can be used to prepare either cis- or trans-alkenes from the same β-hydroxysilane. Treatment of the β-hydroxysilane with acid will yield one alkene, while treatment of the same β-hydroxysilane with base will yield the alkene of opposite stereochemistry. The action of base upon a β-hydroxysilane ( 1 ) results in a concerted syn elimination of ( 2 ) or ( 3 ) to form the desired alkene. The penta-coordinate silicate intermediate ( 3 ) is postulated, but no proof exists to date. Potassium alkoxides eliminate quickly, while sodium alkoxides generally require heating. Magnesium alkoxides only eliminate under extreme conditions. The order of reactivity of alkoxides, K > Na >> Mg, is consistent with higher electron density on oxygen , hence increasing the alkoxide nucleophilicity. The treatment of the β-hydroxysilane ( 1 ) with acid results in protonation and an anti elimination to form the desired alkene. When the α-silyl carbanion contains only alkyl , hydrogen , or electron-donating substituents , the stereochemical outcome of the Peterson olefination can be controlled, [ 7 ] because at low temperature the elimination is slow and the intermediate β-hydroxysilane can be isolated. Once isolated, the diastereomeric β-hydroxysilanes are separated. One diastereomer is treated with acid, while the other is treated with base, thus converting the material to an alkene with the required stereochemistry. [ 4 ] When the α-silyl carbanion contains electron-withdrawing substituents, the Peterson olefination directly forms the alkene. The intermediate β-hydroxysilane cannot be isolated as it eliminates in situ . The basic elimination pathway has been postulated in these cases. Unlike the Wittig reaction , Peterson-type olefinations tolerate nitriles . [ 8 ] Acidic elimination conditions are sometimes not feasible as the acid also promotes double bond isomerization . Additionally, elimination using sodium or potassium hydride may not be feasible due to incompatible functional groups . Chan et al. have found that acylation of the intermediate silylcarbinol with either acetyl chloride or thionyl chloride gives a β-silyl ester that will eliminate spontaneously at 25 °C, giving the desired alkene. [ 9 ] Corey and co-workers developed a method (sometimes dubbed the Corey-Peterson olefination [ 10 ] ) using a silylated imine to yield an α,β-unsaturated aldehyde from a carbonyl compound in one step. [ 11 ] For an example of its use in total synthesis, see the Kuwajima Taxol total synthesis .
https://en.wikipedia.org/wiki/Peterson_olefination
The reflected binary code ( RBC ), also known as reflected binary ( RB ) or Gray code after Frank Gray , is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit). For example, the representation of the decimal value "1" in binary would normally be " 001 ", and "2" would be " 010 ". In Gray code, these values are represented as " 001 " and " 011 ". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two. Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. [ 3 ] Many devices indicate position by closing and opening switches. If such a device uses natural binary codes , positions 3 and 4 are next to each other but all three bits of the binary representation differ: 011 for position 3 versus 100 for position 4. The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between these two states, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce , the transition might look like 011 — 001 — 101 — 100 . When the switches appear to be in position 001 , the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic , then the sequential system may store a false value. This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers , or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] single-distance , single-step , monostrophic [ 9 ] [ 10 ] [ 7 ] [ 8 ] or syncopic codes , [ 9 ] in reference to the Hamming distance of 1 between adjacent codes. In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code , or BRGC . Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. [ 11 ] [ 12 ] [ 13 ] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". [ 14 ] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process". In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off (... 11001100 ...); the next digit a pattern of 4 on, 4 off; and the i -th least significant bit a pattern of 2^i on, 2^i off. The most significant digit is an exception to this: for an n -bit Gray code, the most significant digit follows the pattern 2^(n−1) on, 2^(n−1) off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2^(n−2) places.
The four-bit version of this is shown below: For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ] In modern digital communications , Gray codes play an important role in error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise . Despite the fact that Stibitz described this code [ 11 ] [ 12 ] [ 13 ] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; [ 16 ] [ 17 ] one of those also lists "minimum error code" and "cyclic permutation code" among the names. [ 17 ] A 1954 patent application refers to "the Bell Telephone Gray code". [ 18 ] Other names include "cyclic binary code", [ 12 ] "cyclic progression code", [ 19 ] [ 12 ] "cyclic permuting binary" [ 20 ] or "cyclic permuted binary" (CPB). [ 21 ] [ 22 ] The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray . [ 13 ] [ 23 ] [ 24 ] [ 25 ] Reflected binary codes were applied to mathematical puzzles before they became known to engineers. The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle , a sequential mechanical puzzle mechanism described by the French Louis Gros in 1872. [ 26 ] [ 13 ] It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the French Édouard Lucas in 1883. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. [ 31 ] Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American . [ 32 ] The code also forms a Hamiltonian cycle on a hypercube , where each bit is seen as one dimension. When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to 5-unit code for his printing telegraph system, in 1875 [ 33 ] or 1876, [ 34 ] [ 35 ] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, [ 36 ] [ 37 ] [ 38 ] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. [ 13 ] This code became known as Baudot code [ 39 ] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. [ 40 ] [ 41 ] [ 38 ] About the same time, the German-Austrian Otto Schäffler [ de ] [ 42 ] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose, in 1874. [ 43 ] [ 13 ] Frank Gray , who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube -based apparatus. 
Filed in 1947, the method and apparatus were granted a patent in 1953, [ 14 ] and the name of Gray stuck to the codes. The " PCM tube " apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. [ 44 ] Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose. Gray codes are used in linear and rotary position encoders ( absolute encoders and quadrature encoders ) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others. For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals. Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking. In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position. Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms . [ 15 ] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties. Gray codes are also used in labelling the axes of Karnaugh maps since 1953 [ 45 ] [ 46 ] [ 47 ] as well as in Händler circle graphs since 1958, [ 48 ] [ 49 ] [ 50 ] [ 51 ] both graphical methods for logic circuit minimization . In modern digital communications , 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise . 
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies. If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves. A balanced Gray code can be constructed, [ 52 ] that flips every bit equally often. Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit. George R. Stibitz utilized a reflected binary code in a binary pulse counting device in 1941 already. [ 11 ] [ 12 ] [ 13 ] A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. [ 53 ] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used. Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, [ nb 1 ] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. 
[ 54 ] To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, [ 55 ] add one to it with a standard binary adder, and then convert the result back to Gray code. [ 56 ] Other methods of counting in Gray code are discussed in a report by Robert W. Doran , including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. [ 57 ] As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. [ 58 ] [ 59 ] The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0 , prefixing the entries in the reflected list with a binary 1 , and then concatenating the original list with the reversed list. [ 13 ] For example, generating the n = 3 list from the n = 2 list: The one-bit Gray code is G 1 = ( 0,1 ). This can be thought of as built recursively as above from a zero-bit Gray code G 0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating G n +1 from G n makes the following properties of the standard reflecting code clear: These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the n th Gray code is obtained by computing n ⊕ ⌊ n 2 ⌋ {\displaystyle n\oplus \left\lfloor {\tfrac {n}{2}}\right\rfloor } . Prepending a 0 bit leaves the order of the code words unchanged, prepending a 1 bit reverses the order of the code words. If the bits at position i {\displaystyle i} of codewords are inverted, the order of neighbouring blocks of 2 i {\displaystyle 2^{i}} codewords is reversed. For example, if bit 0 is inverted in a 3 bit codeword sequence, the order of two neighbouring codewords is reversed If bit 1 is inverted, blocks of 2 codewords change order: If bit 2 is inverted, blocks of 4 codewords reverse order: Thus, performing an exclusive or on a bit b i {\displaystyle b_{i}} at position i {\displaystyle i} with the bit b i + 1 {\displaystyle b_{i+1}} at position i + 1 {\displaystyle i+1} leaves the order of codewords intact if b i + 1 = 0 {\displaystyle b_{i+1}={\mathtt {0}}} , and reverses the order of blocks of 2 i + 1 {\displaystyle 2^{i+1}} codewords if b i + 1 = 1 {\displaystyle b_{i+1}={\mathtt {1}}} . Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code. A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit so it cannot be performed in parallel. 
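The C listings referred to in this article are not reproduced in the text; the following is a minimal illustrative sketch, not the original code, showing the operations just described: binary to Gray with a shift and XOR, the bit-serial Gray-to-binary reverse translation, and a Gray-code increment implemented by converting to binary, adding one, and converting back. The function names and the demonstration in main are assumptions of this sketch.

#include <stdio.h>

/* Binary to binary-reflected Gray code: each bit is XORed with the
   next higher bit of the input, i.e. num XOR (num >> 1). */
static unsigned binary_to_gray(unsigned num)
{
    return num ^ (num >> 1);
}

/* Gray code to binary: bit i of the result is the XOR of all Gray-code
   bits at position i and above (a prefix sum modulo two), computed here
   with a simple bit-serial loop over the shifted input. */
static unsigned gray_to_binary(unsigned gray)
{
    unsigned num = gray;
    for (unsigned mask = gray >> 1; mask != 0; mask >>= 1) {
        num ^= mask;
    }
    return num;
}

/* Increment a Gray-coded value in the way described in the text:
   convert to binary, add one with an ordinary adder, convert back. */
static unsigned gray_increment(unsigned gray)
{
    return binary_to_gray(gray_to_binary(gray) + 1u);
}

int main(void)
{
    /* Print the 4-bit binary-reflected Gray code sequence. */
    for (unsigned i = 0; i < 16u; i++) {
        unsigned g = binary_to_gray(i);
        printf("%2u -> %u%u%u%u\n", i,
               (g >> 3) & 1u, (g >> 2) & 1u, (g >> 1) & 1u, g & 1u);
    }
    /* Demonstrate the increment: the successor of Gray 1000 (decimal 15). */
    printf("next after 1000 is %u (binary 16, Gray 11000)\n", gray_increment(8u));
    return 0;
}

The bit-serial decoder mirrors the prefix-sum description above; the faster word-parallel variants mentioned later in the text replace the loop with a logarithmic number of shift-and-XOR steps.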
Assuming g i {\displaystyle g_{i}} is the i {\displaystyle i} th Gray-coded bit ( g 0 {\displaystyle g_{0}} being the most significant bit), and b i {\displaystyle b_{i}} is the i {\displaystyle i} th binary-coded bit ( b 0 {\displaystyle b_{0}} being the most-significant bit), the reverse translation can be given recursively: b 0 = g 0 {\displaystyle b_{0}=g_{0}} , and b i = g i ⊕ b i − 1 {\displaystyle b_{i}=g_{i}\oplus b_{i-1}} . Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two. To construct the binary-reflected Gray code iteratively, at step 0 start with the c o d e 0 = 0 {\displaystyle \mathrm {code} _{0}={\mathtt {0}}} , and at step i > 0 {\displaystyle i>0} find the bit position of the least significant 1 in the binary representation of i {\displaystyle i} and flip the bit at that position in the previous code c o d e i − 1 {\displaystyle \mathrm {code} _{i-1}} to get the next code c o d e i {\displaystyle \mathrm {code} _{i}} . The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... [ nb 2 ] See find first set for efficient algorithms to compute these values. The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. [ 60 ] [ 55 ] [ nb 1 ] On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set . If MASK is the constant binary string of ones ended with a single zero digit, then carryless multiplication of MASK with the grey encoding of x will always give either x or its bitwise negation. In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word). It is possible to construct binary Gray codes with n bits with a length of less than 2 n , if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle. [ 61 ] OEIS sequence A290772 [ 62 ] gives the number of possible Gray sequences of length 2 n that include zero and use the minimum number of bits. 0 → 000 1 → 001 2 → 002 10 → 012 11 → 011 12 → 010 20 → 020 21 → 021 22 → 022 100 → 122 101 → 121 102 → 120 110 → 110 111 → 111 112 → 112 120 → 102 121 → 101 122 → 100 200 → 200 201 → 201 202 → 202 210 → 212 211 → 211 212 → 210 220 → 220 221 → 221 There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n -ary Gray code , also known as a non-Boolean Gray code . As the name implies, this type of Gray code uses non- Boolean values in its encodings. For example, a 3-ary ( ternary ) Gray code would use the values 0,1,2. [ 31 ] The ( n , k )- Gray code is the n -ary Gray code with k digits. [ 63 ] The sequence of elements in the (3, 2)-Gray code is: 00,01,02,12,11,10,20,21,22. The ( n , k )-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively . An algorithm to iteratively generate the ( N , k )-Gray code is presented (in C ): There are other Gray code algorithms for ( n , k )-Gray codes. 
The ( n , k )-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, [ 63 ] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one. Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods. See also Skew binary number system , a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation. Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". [ 52 ] In balanced Gray codes , the number of changes in different coordinate positions are as close as possible. To make this more precise, let G be an R -ary complete Gray cycle having transition sequence ( δ k ) {\displaystyle (\delta _{k})} ; the transition counts ( spectrum ) of G are the collection of integers defined by λ k = | { j ∈ Z R n : δ j = k } | , for k ∈ Z n {\displaystyle \lambda _{k}=|\{j\in \mathbb {Z} _{R^{n}}:\delta _{j}=k\}|\,,{\text{ for }}k\in \mathbb {Z} _{n}} A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have λ k = R n n {\displaystyle \lambda _{k}={\tfrac {R^{n}}{n}}} for all k . Clearly, when R = 2 {\displaystyle R=2} , such codes exist only if n is a power of 2. [ 64 ] If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either 2 ⌊ 2 n 2 n ⌋ {\displaystyle 2\left\lfloor {\tfrac {2^{n}}{2n}}\right\rfloor } or 2 ⌈ 2 n 2 n ⌉ {\displaystyle 2\left\lceil {\tfrac {2^{n}}{2n}}\right\rceil } . [ 52 ] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. [ 65 ] For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced: [ 52 ] whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight: [ 52 ] We will now show a construction [ 66 ] and implementation [ 67 ] for well-balanced binary Gray codes which allows us to generate an n -digit balanced Gray code for every n . The main principle is to inductively construct an ( n + 2)-digit Gray code G ′ {\displaystyle G'} given an n -digit Gray code G in such a way that the balanced property is preserved. 
To do this, we consider partitions of G = g 0 , … , g 2 n − 1 {\displaystyle G=g_{0},\ldots ,g_{2^{n}-1}} into an even number L of non-empty blocks of the form { g 0 } , { g 1 , … , g k 2 } , { g k 2 + 1 , … , g k 3 } , … , { g k L − 2 + 1 , … , g − 2 } , { g − 1 } {\displaystyle \left\{g_{0}\right\},\left\{g_{1},\ldots ,g_{k_{2}}\right\},\left\{g_{k_{2}+1},\ldots ,g_{k_{3}}\right\},\ldots ,\left\{g_{k_{L-2}+1},\ldots ,g_{-2}\right\},\left\{g_{-1}\right\}} where k 1 = 0 {\displaystyle k_{1}=0} , k L − 1 = − 2 {\displaystyle k_{L-1}=-2} , and k L ≡ − 1 ( mod 2 n ) {\displaystyle k_{L}\equiv -1{\pmod {2^{n}}}} ). This partition induces an ( n + 2 ) {\displaystyle (n+2)} -digit Gray code given by If we define the transition multiplicities m i = | { j : δ k j = i , 1 ≤ j ≤ L } | {\displaystyle m_{i}=\left|\left\{j:\delta _{k_{j}}=i,1\leq j\leq L\right\}\right|} to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the ( n + 2)-digit Gray code induced by this partition the transition spectrum λ i ′ {\displaystyle \lambda '_{i}} is λ i ′ = { 4 λ i − 2 m i , if 0 ≤ i < n L , otherwise {\displaystyle \lambda '_{i}={\begin{cases}4\lambda _{i}-2m_{i},&{\text{if }}0\leq i<n\\L,&{\text{ otherwise }}\end{cases}}} The delicate part of this construction is to find an adequate partitioning of a balanced n -digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit i {\displaystyle i} transition and splitting another block at another digit i {\displaystyle i} transition produces a different Gray code with exactly the same transition spectrum λ i ′ {\displaystyle \lambda '_{i}} , so one may for example [ 65 ] designate the first m i {\displaystyle m_{i}} transitions at digit i {\displaystyle i} as those that fall between two blocks. Uniform codes can be found when R ≡ 0 ( mod 4 ) {\displaystyle R\equiv 0{\pmod {4}}} and R n ≡ 0 ( mod n ) {\displaystyle R^{n}\equiv 0{\pmod {n}}} , and this construction can be extended to the R -ary case as well. [ 66 ] Long run (or maximum gap ) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. [ 68 ] Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. [ 69 ] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one. We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube Q n = ( V n , E n ) {\displaystyle Q_{n}=(V_{n},E_{n})} into levels of vertices that have equal weight, i.e. V n ( i ) = { v ∈ V n : v has weight i } {\displaystyle V_{n}(i)=\{v\in V_{n}:v{\text{ has weight }}i\}} for 0 ≤ i ≤ n {\displaystyle 0\leq i\leq n} . These levels satisfy | V n ( i ) | = ( n i ) {\displaystyle |V_{n}(i)|=\textstyle {\binom {n}{i}}} . Let Q n ( i ) {\displaystyle Q_{n}(i)} be the subgraph of Q n {\displaystyle Q_{n}} induced by V n ( i ) ∪ V n ( i + 1 ) {\displaystyle V_{n}(i)\cup V_{n}(i+1)} , and let E n ( i ) {\displaystyle E_{n}(i)} be the edges in Q n ( i ) {\displaystyle Q_{n}(i)} . 
A monotonic Gray code is then a Hamiltonian path in Q n {\displaystyle Q_{n}} such that whenever δ 1 ∈ E n ( i ) {\displaystyle \delta _{1}\in E_{n}(i)} comes before δ 2 ∈ E n ( j ) {\displaystyle \delta _{2}\in E_{n}(j)} in the path, then i ≤ j {\displaystyle i\leq j} . An elegant construction of monotonic n -digit Gray codes for any n is based on the idea of recursively building subpaths P n , j {\displaystyle P_{n,j}} of length 2 ( n j ) {\displaystyle 2\textstyle {\binom {n}{j}}} having edges in E n ( j ) {\displaystyle E_{n}(j)} . [ 69 ] We define P 1 , 0 = ( 0 , 1 ) {\displaystyle P_{1,0}=({\mathtt {0}},{\mathtt {1}})} , P n , j = ∅ {\displaystyle P_{n,j}=\emptyset } whenever j < 0 {\displaystyle j<0} or j ≥ n {\displaystyle j\geq n} , and P n + 1 , j = 1 P n , j − 1 π n , 0 P n , j {\displaystyle P_{n+1,j}={\mathtt {1}}P_{n,j-1}^{\pi _{n}},{\mathtt {0}}P_{n,j}} otherwise. Here, π n {\displaystyle \pi _{n}} is a suitably defined permutation and P π {\displaystyle P^{\pi }} refers to the path P with its coordinates permuted by π {\displaystyle \pi } . These paths give rise to two monotonic n -digit Gray codes G n ( 1 ) {\displaystyle G_{n}^{(1)}} and G n ( 2 ) {\displaystyle G_{n}^{(2)}} given by G n ( 1 ) = P n , 0 P n , 1 R P n , 2 P n , 3 R ⋯ and G n ( 2 ) = P n , 0 R P n , 1 P n , 2 R P n , 3 ⋯ {\displaystyle G_{n}^{(1)}=P_{n,0}P_{n,1}^{R}P_{n,2}P_{n,3}^{R}\cdots {\text{ and }}G_{n}^{(2)}=P_{n,0}^{R}P_{n,1}P_{n,2}^{R}P_{n,3}\cdots } The choice of π n {\displaystyle \pi _{n}} which ensures that these codes are indeed Gray codes turns out to be π n = E − 1 ( π n − 1 2 ) {\displaystyle \pi _{n}=E^{-1}\left(\pi _{n-1}^{2}\right)} . The first few values of P n , j {\displaystyle P_{n,j}} are shown in the table below. These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O ( n ) time. The algorithm is most easily described using coroutines . Monotonic codes have an interesting connection to the Lovász conjecture , which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph Q 2 n + 1 ( n ) {\displaystyle Q_{2n+1}(n)} is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism ) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for n ≤ 15 {\displaystyle n\leq 15} , and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839 ‍ N , where N is the number of vertices in the middle-level subgraph. [ 70 ] Another type of Gray code, the Beckett–Gray code , is named for Irish playwright Samuel Beckett , who was interested in symmetry . His play " Quad " features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. [ 71 ] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. 
Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue , so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. [ 71 ] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth 's Art of Computer Programming . [ 13 ] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found. [ 72 ] Snake-in-the-box codes, or snakes , are the sequences of nodes of induced paths in an n -dimensional hypercube graph , and coil-in-the-box codes, [ 73 ] or coils , are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; [ 5 ] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension. Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding [ 74 ] [ 75 ] and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). [ 76 ] [ 77 ] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix , each column is a cyclic shift of the first column. [ 78 ] The name comes from their use with rotary encoders , where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1 . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts. If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC. 
For many years, Torsten Sillke [ 79 ] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders. Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. [ 74 ] Although it is not possible to distinguish 2^n positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most 2^n − 2n positions and that for prime n the limit is 2^n − 2 positions. [ 80 ] The authors went on to generate a 504-position single-track code of length 9 which they believe is optimal. Since this number is larger than 2^8 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors. An STGC for P = 30 and n = 5 is reproduced here: Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. [ 81 ] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size. The Gray code nature is useful (compared to chain codes , also called De Bruijn sequences ), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement that the device is capable of resolving. [ 82 ] Since this 30-degree example was added, there has been a lot of interest in examples with higher angular resolution. In 2008, Gary Williams, [ 83 ] based on previous work, [ 80 ] discovered a 9-bit single-track Gray code that gives a 1-degree resolution. This Gray code was used to design an actual device which was published on the site Thingiverse . This device [ 84 ] was designed by etzenseep (Florian Bauer) in September 2022. An STGC for P = 360 and n = 9 is reproduced here: Two-dimensional Gray codes are used in communication to minimize the number of bit errors between adjacent points in a quadrature amplitude modulation (QAM) constellation . In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits. [ 85 ] Two-dimensional Gray codes also have uses in location identification schemes, where the code would be applied to area maps such as a Mercator projection of the earth's surface and an appropriate cyclic two-dimensional distance function such as the Mannheim metric would be used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. [ 86 ] If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code will be an "excess Gray code". This code shows the property of counting backwards in those extracted bits if the original value is increased further. The reason for this is that Gray-encoded values do not show the overflow behaviour, known from classic binary encoding, when increasing past the "highest" value.
Example: The highest 3-bit Gray code, 7, is encoded as (0)100. Adding 1 results in number 8, encoded in Gray as 1100. The last 3 bits do not overflow and count backwards if you further increase the original 4 bit code. When working with sensors that output multiple, Gray-encoded values in a serial fashion, one should therefore pay attention whether the sensor produces those multiple values encoded in 1 single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected. The bijective mapping { 0 ↔ 00 , 1 ↔ 01 , 2 ↔ 11 , 3 ↔ 10 } establishes an isometry between the metric space over the finite field Z 2 2 {\displaystyle \mathbb {Z} _{2}^{2}} with the metric given by the Hamming distance and the metric space over the finite ring Z 4 {\displaystyle \mathbb {Z} _{4}} (the usual modular arithmetic ) with the metric given by the Lee distance . The mapping is suitably extended to an isometry of the Hamming spaces Z 2 2 m {\displaystyle \mathbb {Z} _{2}^{2m}} and Z 4 m {\displaystyle \mathbb {Z} _{4}^{m}} . Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in Z 2 2 {\displaystyle \mathbb {Z} _{2}^{2}} of ring-linear codes from Z 4 {\displaystyle \mathbb {Z} _{4}} . [ 87 ] [ 88 ] There are a number of binary codes similar to Gray codes, including: The following binary-coded decimal (BCD) codes are Gray code variants as well:
https://en.wikipedia.org/wiki/Petherick_code
petite (ρ–) is a mutant first discovered in the yeast Saccharomyces cerevisiae . Due to the defect in the respiratory chain, 'petite' yeast are unable to grow on media containing only non-fermentable carbon sources (such as glycerol or ethanol) and form small colonies when grown in the presence of fermentable carbon sources (such as glucose). The petite phenotype can be caused by the absence of, or mutations in, mitochondrial DNA (termed "cytoplasmic petites"), or by mutations in nuclear-encoded genes involved in oxidative phosphorylation. [ 1 ] [ 2 ] A neutral petite produces all wild-type progeny when crossed with wild type. petite mutations can be induced using a variety of mutagens, including DNA intercalating agents, as well as chemicals that can interfere with DNA synthesis in growing cells. [ 1 ] [ 2 ] Mutagens that create petites are implicated in increased rates of degenerative diseases and in the aging process. A mutation that produces small ( petite ) anaerobic-like colonies was first observed in the yeast Saccharomyces cerevisiae and described by Boris Ephrussi and his co-workers in 1949 in Gif-sur-Yvette, France. [ 3 ] The cells of petite colonies were smaller than those of wild-type colonies, but the term "petite" refers only to colony size and not the individual cell size. [ 3 ] Over 50 years ago, in a lab in France, Ephrussi et al. discovered a non-Mendelian inherited factor that is essential to respiration in the yeast Saccharomyces cerevisiae . S. cerevisiae without this factor, known as the ρ-factor, is characterized by the development of small colonies when compared to the wild-type yeast. [ 4 ] These smaller colonies were dubbed petite colonies. These petite mutants were observed to arise spontaneously at a rate of 0.1%–1.0% every generation. [ 4 ] [ 5 ] It was also found that treatment of wild-type S. cerevisiae with DNA-intercalating agents would produce this mutation more rapidly. [ 4 ] Schatz identified a region of the yeast's nuclear DNA that was associated with the mitochondria in 1964. Later, it was discovered that mutants without the ρ-factor had no mitochondrial DNA (called ρ 0 isolates), or that they possessed a difference in density or amount of the mitochondrial DNA (called ρ − isolates). The use of electron microscopy to view the DNA in the mitochondrial matrix helped to verify the existence of the mitochondrial genome. [ 4 ] [ 5 ] [ 6 ] S. cerevisiae has since become a useful model for aging. It has been shown that as yeast ages, it loses functional mitochondrial DNA, which leads to replicative senescence, or the inability to replicate further. [ 4 ] It has been suggested that there is a link between mitochondrial DNA loss and replicative life span (RLS), the number of times a cell can reproduce before it dies, as it has been found that an increase in RLS is established by the same changes in the genome that enhance the propagation of cells that do not contain mitochondrial DNA. Genetic screens for replicative-life-span-associated genes and pathways could be made easier and quicker by selecting genetic suppressors of the petite-negative mutants. [ 4 ] The petite is characterized by a deficiency in cytochromes (a, a3 + b) and a lack of respiratory enzymes which engage in respiration in mitochondria.
[ 7 ] Due to the defect in the respiratory chain pathway, 'petite' yeast is incapable of growing on media containing only non-fermentable carbon sources (such as glycerol or ethanol) and forms small colonies when grown in the presence of fermentable carbon sources (such as glucose). [ 8 ] The petite phenotype can be caused by the absence of mitochondrial DNA, by deletion mutations in mitochondrial DNA (termed "cytoplasmic petites"), or by mutations in nuclear-encoded genes involved in oxidative phosphorylation. Petite mutants can be generated in the laboratory by using high-efficiency treatments such as acriflavine, ethidium bromide, and other intercalating agents. [ 9 ] These agents break down mitochondrial DNA and cause its eventual loss: the longer the treatment time, the less mitochondrial DNA remains. After prolonged treatment, petites containing no detectable mitochondrial DNA were obtained. [ 7 ] This is a useful approach for illustrating the function of mitochondrial DNA in yeast growth. The inheritance pattern of genes located in cell organelles such as mitochondria, known as cytoplasmic inheritance, differs from that of nuclear genes. Petite mutants show extranuclear inheritance, and the inheritance pattern varies with the type of petite involved. Segregational petites (pet–) are created by nuclear mutations and exhibit Mendelian 1:1 segregation. [ 9 ] Neutral petites (rho–N), when crossed to wild type, yield all wild-type offspring; they have inherited normal mitochondrial DNA from the wild-type parent, which is replicated in the offspring. [ 3 ] In suppressive petites (rho–S), crosses between petite and wild type yield all petite offspring, showing "dominant" behavior that suppresses wild-type mitochondrial function. [ 3 ] Most petite mutants of S. cerevisiae are of the suppressive type; they differ from neutral petites by affecting the wild type, although both are mutations in mitochondrial DNA. The mitochondrial genome of yeast may well be the first eukaryotic genome to be understood in terms of both structure and function, and this should smooth the way to understanding the evolution of organelle genomes and their relationship with nuclear genomes. It is evident that Ephrussi's work not only opened the field of extrachromosomal genetics, but also provided a powerful incentive for the investigations which have followed up to this day. [ 3 ] Though S. cerevisiae has been extensively studied in this and other areas, it is difficult to say whether the molecular mechanisms of this process in the mitochondrial DNA are conserved across other yeast species. Other yeast species, such as Kluyveromyces lactis, Saccharomyces castellii, and Candida albicans, have all been shown to produce petite-negative mutants. Potentially, these yeasts have a different inheritance system for their mitochondrial genome than S. cerevisiae does. [ 4 ] [ 5 ] The frequency at which S. castellii spontaneously produces petites is similar to that of S. cerevisiae, with the mitochondrial DNA of those petites being highly altered via deletion and rearrangement. Suppressive petites are the most commonly observed spontaneous mutants in S. cerevisiae, whereas in S. castellii the most commonly observed spontaneous mutant is the neutral petite, further suggesting that the transmission of this mutation differs between species. [ 1 ] [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/Petite_mutation
The Petkau effect is an early counterexample to the linear-effect assumptions usually made about radiation exposure. It was found by Dr. Abram Petkau at the Atomic Energy of Canada Whiteshell Nuclear Research Establishment , Manitoba, and published in Health Physics in March 1972. [ 1 ] The term Petkau effect was coined by the Swiss nuclear hazards commentator Ralph Graeub in 1985 in his book Der Petkau-Effekt und unsere strahlende Zukunft (The Petkau Effect and our Radiating Future). [ 2 ] Petkau had been measuring, in the usual way, the radiation dose that would rupture a simulated artificial cell membrane . He found that 3500 rads delivered in 2 + 1 ⁄ 4 hours (26 rad/min = 15.5 Sv/h) would do it. [ 3 ] Then, almost by chance, Petkau repeated the experiment with much weaker radiation and found that 0.7 rad delivered in 11 + 1 ⁄ 2 hours (1 millirad/min = 0.61 mSv/h) also ruptured the membrane. This was counter to the prevailing assumption of a linear relationship between total dose or dose rate and the consequences. [ 4 ] The radiation was ionizing and produced negative oxygen ions (free radicals). Those ions were more damaging to the simulated membrane at lower concentrations than at higher ones (a somewhat counter-intuitive result in itself), because at higher concentrations they more readily recombine with each other instead of interfering with the membrane. The ion concentration correlated directly with the radiation dose rate, so the resulting membrane damage was a non-monotonic function of the dose rate. Petkau conducted further experiments with simulated cells in 1976 and found that the enzyme superoxide dismutase protected the cells from free radicals generated by ionizing radiation, preventing the effects seen in his earlier experiment. [ 5 ] [ 6 ] Petkau also discovered that superoxide dismutase was elevated in the leukocytes (white blood cells) of a sub-population of nuclear workers occupationally exposed to elevated radiation ( ca . 10 mSv in 6 months), further supporting the hypothesis that superoxide dismutase is a radioprotective agent. [ 7 ] Thus, Petkau's original 1972 experiment apparently revealed the potential effects of ionizing radiation on cells without natural radioprotective mechanisms in place.
https://en.wikipedia.org/wiki/Petkau_effect
Petko Stoyanov Dimitrov ( Bulgarian : Петко Стоянов Димитров ) (16 September 1944 – 29 April 2023) was a Bulgarian marine geologist and oceanographer at the Institute of Oceanology - Bulgarian Academy of Sciences in Varna . He was an early proponent of the Black Sea deluge hypothesis , which gained public notoriety at the end of the 20th century. Dimitrov was born on 16 September 1944 in the village of Novachene, Sofia Province . In 1969 he graduated from Sofia University "St. Kliment Ohridski" , Faculty of Geology and Geography, specializing in geology and geochemistry. From 1969 to 1975 he worked at the "Eleshnitsa" uranium mine as a deputy director. In 1975 he won a competition for a research associate position and joined the Institute of Oceanology - BAS. In 1979 he defended his dissertation on "Genesis of marine sediments in the peripheral region of the western part of the Black Sea shelf in the Quaternary" under the guidance of Academician Yastrebov and Prof. Aksenov at the Shirshov Institute of Oceanology , Moscow. [ 1 ]
https://en.wikipedia.org/wiki/Petko_Dimitrov
Petkovšek's algorithm (also Hyper ) is a computer algebra algorithm that computes a basis of hypergeometric term solutions of its input linear recurrence equation with polynomial coefficients . Equivalently, it computes a first-order right factor of linear difference operators with polynomial coefficients. This algorithm was developed by Marko Petkovšek in his 1992 PhD thesis. [ 1 ] The algorithm is implemented in all the major computer algebra systems. Let $\mathbb{K}$ be a field of characteristic zero. A nonzero sequence $y(n)$ is called hypergeometric if the ratio of two consecutive terms is rational , i.e. $y(n+1)/y(n) \in \mathbb{K}(n)$. The key concept of Petkovšek's algorithm is that this rational function has a specific representation, namely the Gosper-Petkovšek normal form . Let $r(n) \in \mathbb{K}(n)$ be a nonzero rational function. Then there exist monic polynomials $a, b, c \in \mathbb{K}[n]$ and $0 \neq z \in \mathbb{K}$ such that $$r(n) = z \, \frac{a(n)}{b(n)} \, \frac{c(n+1)}{c(n)}$$ and (1) $\gcd(a(n), b(n+k)) = 1$ for every nonnegative integer $k$, (2) $\gcd(a(n), c(n)) = 1$, and (3) $\gcd(b(n), c(n+1)) = 1$. This representation of $r(n)$ is called the Gosper-Petkovšek normal form. These polynomials can be computed explicitly. This construction of the representation is an essential part of Gosper's algorithm . [ 2 ] Petkovšek added conditions (2) and (3) to this representation, which make the normal form unique. [ 1 ] Using the Gosper-Petkovšek representation one can transform the original recurrence equation into a recurrence equation for a polynomial sequence $c(n)$. The other polynomials $a(n), b(n)$ can be taken as monic factors of the first coefficient polynomial $p_0(n)$ and of the shifted last coefficient polynomial $p_r(n-r+1)$, respectively. Then $z$ has to fulfill a certain algebraic equation . Taking each of the finitely many possible triples $(a(n), b(n), z)$ and computing the corresponding polynomial solution $c(n)$ of the transformed recurrence equation gives a hypergeometric solution if one exists. [ 1 ] [ 3 ] [ 4 ] In pseudocode formulations of the algorithm, the degree of a polynomial $p(n) \in \mathbb{K}[n]$ is denoted by $\deg(p(n))$ and the coefficient of $n^d$ is denoted by $\operatorname{coeff}(p(n), n^d)$. If one does not stop as soon as a solution is found, it is possible to combine all hypergeometric solutions to get a general hypergeometric solution of the recurrence equation, i.e. a generating set for the kernel of the recurrence equation in the linear span of hypergeometric sequences. [ 1 ] Petkovšek also showed how the inhomogeneous problem can be solved. He considered the case where the right-hand side of the recurrence equation is a sum of hypergeometric sequences. After grouping together certain hypergeometric sequences of the right-hand side, for each of those groups a certain recurrence equation is solved for a rational solution. These rational solutions can be combined to get a particular solution of the inhomogeneous equation. Together with the general solution of the homogeneous problem this gives the general solution of the inhomogeneous problem.
[ 1 ] The number of signed permutation matrices of size $n \times n$ can be described by the sequence $y(n)$, which is determined by the recurrence equation $$4(n+1)^2\,y(n) + 2\,y(n+1) + (-1)\,y(n+2) = 0$$ over $\mathbb{Q}$. Taking $a(n) = n+1$, $b(n) = 1$ as monic divisors of $p_0(n) = 4(n+1)^2$ and $p_2(n) = -1$ respectively, one gets $z = \pm 2$. For $z = 2$ the corresponding recurrence equation which is solved in Petkovšek's algorithm is $$4(n+1)^2\,c(n) + 4(n+1)\,c(n+1) - 4(n+1)(n+2)\,c(n+2) = 0.$$ This recurrence equation has the polynomial solution $c(n) = c_0$ for an arbitrary $c_0 \in \mathbb{Q}$. Hence $r(n) = 2(n+1)$ and $y(n) = 2^n\,n!$ is a hypergeometric solution. In fact it is (up to a constant) the only hypergeometric solution and describes the number of signed permutation matrices. [ 5 ] Given the sum $$a(n) = \sum_{k=0}^{n} \binom{n}{k}^2 \binom{n+k}{k}^2$$ coming from Apéry 's proof of the irrationality of $\zeta(3)$, Zeilberger 's algorithm computes the linear recurrence $$(n+2)^3\,a(n+2) - (2n+3)(17n^2 + 51n + 39)\,a(n+1) + (n+1)^3\,a(n) = 0.$$ Given this recurrence, the algorithm does not return any hypergeometric solution, which proves that $a(n)$ does not simplify to a hypergeometric term . [ 3 ]
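The signed-permutation-matrix example can be sanity-checked symbolically; the sketch below uses SymPy (an assumed dependency) to confirm that $y(n) = 2^n\,n!$ is a hypergeometric term satisfying the recurrence. It only verifies the stated solution and does not implement Petkovšek's algorithm itself.

```python
# Verify that y(n) = 2**n * n! is hypergeometric and solves
# 4*(n+1)**2*y(n) + 2*y(n+1) - y(n+2) = 0 (SymPy assumed available).
from sympy import symbols, factorial, simplify

n = symbols('n', integer=True, positive=True)

def y(k):
    return 2**k * factorial(k)

# Consecutive-term ratio simplifies to the rational function r(n) = 2*(n + 1),
# so y(n) is a hypergeometric term.
print(simplify(y(n + 1) / y(n)))

# The recurrence holds identically; check the first few values exactly.
for k in range(10):
    assert 4 * (k + 1)**2 * y(k) + 2 * y(k + 1) - y(k + 2) == 0
print("recurrence verified for n = 0..9")
```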
https://en.wikipedia.org/wiki/Petkovšek's_algorithm
The Petrenko-Kritschenko reaction is a classic multicomponent name reaction [ 1 ] that is closely related to the Robinson–Schöpf tropinone synthesis, but was published 12 years earlier. In the original publication [ 2 ] diethyl α-ketoglutarate, a derivative of acetonedicarboxylic acid , is used in combination with ammonia and benzaldehyde. The relative stereochemistry was not elucidated in the original publication, since structural analysis by X-ray crystallography or NMR was not available in those days. In the absence of ammonia or ammonium salts a 4-oxotetrahydropyran is formed. [ 3 ] In contrast to the Robinson synthesis, it does not employ dialdehydes like succinaldehyde or glutaraldehyde but simpler aldehydes like benzaldehyde . Therefore, the product of the reaction is not a bicyclic structure (see tropinone and pseudopelletierine ) but a 4-piperidone. The synthesis of tropinone can be seen as a variation of the Petrenko-Kritschenko reaction in which the two aldehyde functions are covalently linked in a single molecule. Apart from the Hantzsch synthesis, the Petrenko-Kritschenko reaction is one of the few examples in which a symmetric pyridine precursor can be obtained in a multicomponent ring-condensation reaction followed by an oxidation. Oxidation by chromium trioxide in acetic acid leads to a symmetrically substituted 4-pyridone, and decarboxylation yields the 3,5-unsubstituted derivative. [ 2 ] Acetoacetate can be used instead of diethyl α-ketoglutarate in the presence of indium salts. [ 4 ] The use of aniline was also reported in the original publication. [ 2 ] The product of this reaction shows a transoid configuration of the phenyl groups at C-2 and C-6. The reaction has been used to prepare precoccinellin , an alkaloid found in certain ladybugs . [ 1 ] When benzaldehyde is substituted with 2-pyridinecarboxaldehyde the reaction can be used to prepare precursors for bispidone ligands. [ 5 ] Essentially this method is based on two subsequent Petrenko-Kritschenko reactions. These ligands can be used to prepare compounds containing high-valent iron that are able to oxidize cyclohexane in the presence of hydrogen peroxide .
https://en.wikipedia.org/wiki/Petrenko-Kritschenko_piperidone_synthesis
Petri AG was an automotive parts company based in Germany . It was acquired by the Japanese company Takata Corporation in 2000, forming the European subsidiary Takata-Petri. In 2018, operations became part of the new Joyson Safety Systems as Joyson Safety Systems Aschaffenburg GmbH , after the 2017 bankruptcy of Takata and subsequent purchase by Key Safety Systems . [ 1 ] [ 2 ] Petri AG was founded in Aschaffenburg in 1899 by Richard Petri and started to produce steering wheels and interior trims. Richard Petri founded the first plant (Plant 01) in Aschaffenburg , which is now the European Headquarters of Takata-Petri . Later, Petri's firm expanded further and added other plants for the production of plastic parts, like the Plant 02 in Nilkheim, near Aschaffenburg and Plant 03 in Albertshausen, near Bad Kissingen . Steering wheel production is still located in Aschaffenburg. Other plastic components and interior trims are mostly produced in Albertshausen. Aside from these plants, Petri also founded some research and assembly plants later on. In 1981, Petri AG developed and produced the first airbag system in collaboration with Mercedes-Benz . In 2000, Japanese competitor Takata Corporation acquired Petri AG, forming the subsidiary Takata-Petri, renamed Takata AG in early 2012. [ 3 ] Takata AG was a global leader in the production of steering wheels and plastic parts, not only in the automotive industry. After the 2017 Takata bankruptcy, the major assets were acquired by Key Safety Systems which was renamed in 2018 to Joyson Safety Systems . [ 4 ] [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/Petri_AG
A Petri dish (alternatively known as a Petri plate or cell-culture dish ) is a shallow transparent lidded dish that biologists use to hold growth medium in which cells can be cultured , [ 1 ] [ 2 ] originally, cells of bacteria , fungi and small mosses . [ 3 ] The container is named after its inventor, German bacteriologist Julius Richard Petri . [ 4 ] [ 5 ] [ 6 ] It is the most common type of culture plate . The Petri dish is one of the most common items in biology laboratories and has entered popular culture . The term is sometimes written in lower case, especially in non-technical literature. [ 7 ] [ 8 ] What was later called the Petri dish was originally developed by German physician Robert Koch in his private laboratory in 1881, as a precursor method. Petri, as an assistant to Koch at Berlin University , made the final modifications in 1887 that are still used today. Penicillin , the first antibiotic, was discovered in 1929 when Alexander Fleming noticed that penicillium mold contaminating a bacterial culture in a Petri dish had killed the bacteria around it. The Petri dish was developed by German physician Julius Richard Petri (after whom the name is given) while working as an assistant to Robert Koch at Berlin University . Petri did not invent the culture dish himself; rather, it was a modified version of Koch's invention [ 9 ] which used an agar medium that was developed by Walther Hesse. [ 10 ] Koch had published a precursor dish in a booklet in 1881 titled " Zur Untersuchung von Pathogenen Organismen " ( Methods for the Study of Pathogenic Organisms ), [ 11 ] which has been known as the "Bible of Bacteriology". [ 12 ] [ 13 ] He described a new bacterial culture method that used a glass slide with agar and a container (basically a Petri dish, a circular glass dish of 20 × 5 cm with matching lid) which he called feuchte Kammer ("moist chamber"). A bacterial culture was spread on the glass slide, then placed in the moist chamber with a small wet paper. Bacterial growth was easily visible. [ 14 ] Koch publicly demonstrated his plating method at the Seventh International Medical Congress in London in August 1881. There, Louis Pasteur exclaimed, " C'est un grand progrès, Monsieur! " ("What a great progress, Sir!") [ 15 ] It was using this method that Koch discovered important pathogens of tuberculosis ( Mycobacterium tuberculosis ), anthrax ( Bacillus anthracis ), and cholera ( Vibrio cholerae ). For his research on tuberculosis, he was awarded the Nobel Prize in Physiology or Medicine in 1905. [ 16 ] His students also made important discoveries. Friedrich Loeffler discovered the bacteria of glanders ( Burkholderia mallei ) in 1882 and diphtheria ( Corynebacterium diphtheriae ) in 1884; and Georg Theodor August Gaffky , the bacterium of typhoid ( Salmonella enterica ) in 1884. [ 17 ] Petri made changes in how the circular dish was used. It is often asserted that Petri developed a new culture plate, [ 18 ] [ 19 ] [ 20 ] but this is incorrect. Instead of using a separate glass slide or plate on which culture media were placed, Petri directly placed media into the glass dish, eliminating unnecessary steps such as transferring the culture media and using the wet paper, and reducing the chance of contamination. [ 9 ] He published the improved method in 1887 as " Eine kleine Modification des Koch’schen Plattenverfahrens " ("A minor modification of the plating technique of Koch"). [ 6 ] Although it could have been named the "Koch dish", [ 14 ] the final method was given the eponymous name Petri dish. 
[ 21 ] Petri dishes are usually cylindrical , mostly with diameters ranging from 30 to 200 millimetres (1.2 to 7.9 in), [ 22 ] [ 23 ] and a height to diameter ratio ranging from 1:10 to 1:4. [ 24 ] Four sided versions are also available. [ 25 ] [ 26 ] Petri dishes were traditionally reusable and made of glass ; often of heat-resistant borosilicate glass for proper sterilization at 120–160 °C . [ 22 ] Since the 1960s, plastic dishes, usually disposable, are also common. [ 27 ] The dishes are often covered with a shallow transparent lid, resembling a slightly wider version of the dish itself. The lids of glass dishes are usually loose-fitting. [ 22 ] Plastic dishes may have close-fitting covers that delay the drying of the contents. [ 28 ] Alternatively, some glass or plastic versions may have small holes around the rim, or ribs on the underside of the cover, to allow for air flow over the culture and prevent water condensation . [ 29 ] Some Petri dishes, especially plastic ones, usually feature rings and/or slots on their lids and bases so that they are less prone to sliding off one another when stacked or sticking to a smooth surface by suction. [ 28 ] Small dishes may have a protruding base that can be secured on a microscope stage for direct examination. [ 30 ] Some versions may have grids printed on the bottom to help in measuring the density of cultures. [ 31 ] [ 25 ] [ 26 ] A microplate is a single container with an array of flat-bottomed cavities, each being essentially a small Petri dish. It makes it possible to inoculate and grow dozens or hundreds of independent cultures of dozens of samples at the same time. Besides being much cheaper and convenient than separate dishes, the microplate is also more amenable to automated handling and inspection. Some plates are separated into different mediums known as biplates, triplates, and quadplates. Petri dishes are widely used in biology to cultivate microorganisms such as bacteria , yeasts , and molds . It is most suited for organisms that thrive on a solid or semisolid surface. The culture medium is often an agar plate , a layer a few mm thick of agar or agarose gel containing whatever nutrients the organism requires (such as blood , salts , carbohydrates , amino acids ) and other desired ingredients (such as dyes , indicators , and medicinal drugs ). The agar and other ingredients are dissolved in warm water and poured into the dish and left to cool down. Once the medium solidifies, a sample of the organism is inoculated ("plated"). The dishes are then left undisturbed for hours or days while the organism grows, possibly in an incubator . They are usually covered, or placed upside-down, to lessen the risk of contamination from airborne spores . Virus or phage cultures require that a population of bacteria be grown in the dish first, which then becomes the culture medium for the viral inoculum. While Petri dishes are widespread in microbiological research, smaller dishes tend to be used for large-scale studies in which growing cells in Petri dishes can be relatively expensive and labor-intensive. [ 32 ] [ 33 ] Petri dishes can be used to visualize the location of contamination on surfaces, such as kitchen counters and utensils, [ 34 ] clothing, food preparation equipment, or animal and human skin. [ 35 ] [ 36 ] For this application, the Petri dishes may be filled so that the culture medium protrudes slightly above the edges of the dish to make it easier to take samples on hard objects. 
Shallow Petri dishes prepared in this way are called Replicate Organism Detection And Counting (RODAC) plates and are available commercially. [ 37 ] [ 38 ] Petri dishes are also used for cell cultivation of isolated cells from eukaryotic organisms, such as in immunodiffusion studies, on solid agar or in a liquid medium. Petri dishes may be used to observe the early stages of plant germination , and to grow plants asexually from isolated cells. Petri dishes may be convenient enclosures to study the behavior of insects and other small animals. Due to their large open surface, Petri dishes are effective containers to evaporate solvents and dry out precipitates , either at room temperature or in ovens and desiccators . Petri dishes also make convenient temporary storage for samples, especially liquid, granular, or powdered ones, and small objects such as insects or seeds. Their transparency and flat profile allows the contents to be inspected with the naked eye, magnifying glass , or low-power microscope without removing the lid. The Petri dish is one of a small number of laboratory equipment items whose name entered popular culture. It is often used metaphorically, e.g. for a contained community that is being studied as if they were microorganisms in a biology experiment, or an environment where original ideas and enterprises may flourish. [ 7 ] [ 8 ] [ 39 ] Unicode has a Petri dish emoji , " 🧫 ", which has the code point U+1F9EB ( HTML entity "&#129515;" or "&#x1F9EB;", UTF-8 "0xF0 0x9F 0xA7 0xAB"). [ 40 ]
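The Unicode details quoted above can be reproduced directly; the sketch below assumes a Python 3.8+ interpreter (for bytes.hex with a separator).

```python
# Reproduce the code point, UTF-8 bytes, and HTML entities of the Petri dish emoji.
emoji = chr(0x1F9EB)                                # U+1F9EB
print(emoji)                                        # 🧫
print(emoji.encode("utf-8").hex(" "))               # f0 9f a7 ab
print(f"&#{ord(emoji)};", f"&#x{ord(emoji):X};")    # &#129515; &#x1F9EB;
```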
https://en.wikipedia.org/wiki/Petri_dish
The Petrie Prize Lecture is an award given in alternate years by the Canadian Astronomical Society to an outstanding astrophysicist . The award commemorates the contributions to astrophysical research of the Canadian astronomer Robert M. Petrie . [ 1 ]
https://en.wikipedia.org/wiki/Petrie_Prize_Lecture
In geology , petrifaction or petrification (from Ancient Greek πέτρα ( pétra ) ' rock, stone ' ) is the process by which organic material becomes a fossil through the replacement of the original material and the filling of the original pore spaces with minerals . Petrified wood typifies this process, but all organisms, from bacteria to vertebrates, can become petrified (although harder, more durable matter such as bone, beaks, and shells survive the process better than softer remains such as muscle tissue, feathers, or skin). Petrification takes place through a combination of two similar processes: permineralization and replacement. These processes create replicas of the original specimen that are similar down to the microscopic level. [ 1 ] One of the processes involved in petrifaction is permineralization. The fossils created through this process tend to contain a large amount of the original material of the specimen. This process occurs when groundwater containing dissolved minerals (most commonly quartz , calcite , apatite (calcium phosphate), siderite (iron carbonate), and pyrite ), [ 2 ] fills pore spaces and cavities of specimens, particularly bone, shell or wood. [ 3 ] The pores of the organisms' tissues are filled when these minerals precipitate out of the water. Two common types of permineralization are silicification and pyritization. Silicification is the process in which organic matter becomes saturated with silica . A common source of silica is volcanic material. Studies have shown that in this process, most of the original organic matter is destroyed. [ 4 ] [ 5 ] Silicification most often occurs in two environments—either the specimen is buried in sediments of deltas and floodplains or organisms are buried in volcanic ash. Water must be present for silicification to occur because it reduces the amount of oxygen present and therefore reduces the deterioration of the organism by fungi, maintains organism shape, and allows for the transportation and deposition of silica. The process begins when a specimen is permeated with an aqueous silica solution. The cell walls of the specimen are progressively dissolved and silica is deposited into the empty spaces. In wood samples, as the process proceeds, cellulose and lignin, two components of wood, are degraded and replaced with silica. The specimen is transformed to stone (a process called lithification) as water is lost. For silicification to occur, the geothermic conditions must include a neutral to slightly acidic pH [ 6 ] and a temperature and pressure similar to shallow-depth sedimentary environments. Under ideal natural conditions, silicification can occur at rates approaching those seen in artificial petrification. [ 7 ] Pyritization is a process similar to silicification, but instead involves the deposition of iron and sulfur in the pores and cavities of an organism. Pyritization can result in both solid fossils as well as preserved soft tissues. In marine environments, pyritization occurs when organisms are buried in sediments containing a high concentration of iron sulfides. Organisms release sulfide, which reacts with dissolved iron in the surrounding water, when they decay. This reaction between iron and sulfides forms pyrite (FeS 2 ). Carbonate shell material of the organism is then replaced with pyrite due to a higher concentration of pyrite and a lower concentration of carbonate in the surrounding water. Pyritization occurs to a lesser extent in plants in clay environments. 
[ 3 ] Replacement, the second process involved in petrifaction, occurs when water containing dissolved minerals dissolves the original solid material of an organism, which is then replaced by minerals. This can take place extremely slowly, replicating the microscopic structure of the organism. The slower the rate of the process, the better defined the microscopic structure will be. The minerals commonly involved in replacement are calcite , silica , pyrite , and hematite . [ 3 ] Biotic remains preserved by replacement alone (as opposed to in combination with permineralization ) are rarely found, but these fossils present significance to paleontology because they tend to be more detailed. [ 8 ] [ unreliable fringe source? ] Not only are the fossils produced through the process of petrifaction used for paleontological study, but they have also been used as both decorative and informative pieces. Petrified wood is used in several ways. Slabs of petrified wood can be crafted into tabletops, or the slabs themselves are sometimes displayed in a decorative fashion. Also, larger pieces of the wood have been carved into sinks and basins. Other large pieces can also be crafted into chairs and stools. Petrified wood and other petrified organisms have also been used in jewelry, sculpture, clock-making, ashtrays and fruit bowls, and landscape and garden decorations. Petrified wood has also been used in construction. The Petrified Wood Gas Station, [ 9 ] located on Main St Lamar, Colorado , was built in 1932 and consists of walls and floors constructed from pieces of petrified wood. The structure, built by W.G. Brown, has since been converted to office space and a used car dealership. [ 10 ] Glen Rose, Texas provides even more examples of the use of fossilized wood in architecture. Beginning in the 1920s, the farmers of Somervell County, Texas began uncovering petrified trees. Local craftsmen and masons then built over 65 structures from this petrified wood, 45 of which were still standing as of June 2009. These structures include gas stations, flowerbeds, cottages, restaurants, fountains and gateposts. [ 11 ] Glen Rose, Texas is also noted for Dinosaur Valley State Park and the Glen Rose Formation , where fossilized dinosaur footprints from the Cretaceous period can be viewed. [ 12 ] Another example of the use of petrified wood in construction is the Agate House Pueblo in the Petrified Forest National Park in Arizona. Built by ancestral Pueblo people about 990 years ago, this eight-room building was constructed almost entirely out of petrified wood and is believed to have served as either a family home or meeting place. [ 13 ] Scientists attempted to artificially petrify organisms as early as the 18th century, when Girolamo Segato claimed to have supposedly "petrified" human remains. His methods were lost, but the bulk of his "pieces" are on display at the Museum of the Department of Anatomy in Florence , Italy. [ 14 ] More recent attempts have been both successful and documented, but should be considered as semi-petrifaction or incomplete petrifaction or at least as producing some novel type of wood composite, as the wood material remains to a certain degree; the constituents of wood (cellulose, lignins, lignans, oleoresins, etc.) 
have not been replaced by silicate, but have been infiltrated by specially formulated acidic solutions of aluminosilicate salts that gel in contact with wood matter and form a matrix of silicates within the wood after being left to react slowly for a given period of time in the solution or heat-cured for faster results. Hamilton Hicks of Greenwich, Connecticut , received a patent for his "recipe" for rapid artificial petrifaction of wood under US patent 4,612,050 in 1986. [ 15 ] Hicks' recipe consists of highly mineralized water and a sodium silicate solution combined with a dilute acid with a pH of 4.0-5.5. Samples of wood are penetrated with this mineral solution through repeated submersion and applications of the solution. Wood treated in this fashion is - according to the claims in the patent - incapable of being burned and acquires the features of petrified wood. Some uses of this product as suggested by Hicks include use by horse breeders who desire fireproof stables constructed of nontoxic material that would also be resistant to chewing of the wood by horses. [ 16 ] In 2005 scientists at the Pacific Northwest National Laboratory (PNNL) reported that they had successfully petrified wood samples artificially. Unlike natural petrification, though, they infiltrated samples in acidic solutions, diffused them internally with titanium and carbon and fired them in a high-temperature oven (circa 1400 °C) in an inert atmosphere to yield a man-made ceramic matrix composite of titanium carbide and silicon carbide still showing the initial structure of wood. Future uses could see these artificially petrified wood-ceramic materials eventually used in the tool industry. Other vegetal matter could be treated in a similar process and yield abrasive powders. [ 17 ]
https://en.wikipedia.org/wiki/Petrifaction
The Neogen Petrifilm plate is an all-in-one plating system made by the Food Safety Division of the Neogen Corporation . They are heavily used in many microbiology -related industries and fields to culture various micro-organisms and are meant to be a more efficient method for detection and enumeration compared to conventional plating techniques. A majority of their use is for the testing of foodstuffs. Petrifilm plates are designed to be as accurate as conventional plating methods. Ingredients usually vary from plate to plate depending on what micro-organism is being cultured, but generally a Petrifilm comprises a cold-water-soluble gelling agent, nutrients, and indicators for activity and enumeration. A typical Petrifilm plate has a 10 cm (H) × 7.5 cm (W) bottom film which contains a foam barrier accommodating the plating surface, the plating surface itself (a circular area of about 20 cm 2 ), and a top film which encloses the sample within the Petrifilm. A 1 cm × 1 cm yellow grid is printed on the back of the plate to assist enumeration. A plastic “spreader” is also used to spread the inoculum evenly. Petrifilm plates have become widely used because of their cost-effectiveness, simplicity, convenience, and ease of use. For example, conventional plating would require preparing agar for pour plating, or using agar plates and inoculum loops for streak plating; but for Petrifilm plates, the medium is completely housed in a single unit so that only the sample has to be added, which saves time. For incubation, Petrifilm plates can be safely stacked and incubated just like Petri dishes. Since they are paper thin, more plates can be stacked together than Petri dishes (although it is recommended that Petrifilms be stacked no higher than 20). For enumeration, Petrifilm plates can be used on any colony counter, just like a Petri dish. Various enumeration experiments have shown very little or no variance between counts obtained through Petrifilm and standard agar counts. [ 1 ] [ 2 ] In some cases, Petrifilms were more sensitive in detection than standard microbiology methods, although higher sensitivity could possibly lead to an increased risk of false-positive results. [ 3 ] First, a sample must be prepared through standard weighing and serial dilution, with stomaching and pH adjustment if necessary. Next, to inoculate, the top layer is lifted to expose the plating surface, and with a pipette, 1 mL of the diluted sample is added. The top film is then slowly rolled down and the “spreader” is used for even distribution. It takes a minute for gelling to occur. After the required incubation period, colonies appear either as splotches, spots surrounded by bubbles, or a combination of both, differing from micro-organism to micro-organism. Enumeration can be done on a standard colony counter. Picking out individual colonies for interpretation can also be done because the top film can be lifted quite effortlessly to expose the gel. Unfortunately, if a sample is too dark in colour (e.g. chocolate or hot chocolate), enumeration becomes more difficult or impossible, since the stained colonies are less visible.
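As an illustration of the enumeration step, a plate count is usually converted back to a concentration in the original sample. The helper below is a sketch with a hypothetical count and dilution factor (not taken from the article); it assumes the standard 1 mL inoculation volume described above.

```python
# Convert a Petrifilm colony count to colony-forming units per mL of the undiluted sample.
def cfu_per_ml(colonies_counted, dilution_factor, volume_plated_ml=1.0):
    """CFU/mL = colonies x dilution factor / volume plated (in mL)."""
    return colonies_counted * dilution_factor / volume_plated_ml

# e.g. 42 colonies on a plate inoculated with 1 mL of a 1:1000 dilution
print(cfu_per_ml(42, 1000))   # -> 42000.0 CFU/mL in the original sample
```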
https://en.wikipedia.org/wiki/Petrifilm
Petrobactin is a bis- catechol siderophore found in M. hydrocarbonoclasticus , A. macleodii , and the anthrax -producing B. anthracis . [ 2 ] Like other siderophores, petrobactin is a highly specific iron(III) transport ligand, contributing to the marine microbial uptake of environmental iron . [ 2 ] [ 3 ] The iron- chelated petrobactin complex readily undergoes a photolytic oxidative decarboxylation due to its α-hydroxy carboxylate group, converting iron(III) to the more biologically useful iron(II) . [ 4 ] Like other siderophores, petrobactin is also secreted by an animal-pathogenic bacterium: B. anthracis uses petrobactin to acquire iron from its host. Interestingly, while the 3,4-catecholate ends of petrobactin do not improve iron(III) affinity relative to hydroxamate ends, they speed up iron removal from human diferric transferrin . [ 5 ] Petrobactin in its ferric and iron-free forms is bound selectively by YclQ (a component of the transporter encoded by the yclNOPQ operon in Bacillus subtilis ), as are petrobactin's precursor protocatechuic acid and the ferric petrobactin photoproduct. The yclNOPQ operon is required for the utilization of petrobactin, and yclNOPQ orthologs likely contribute to the pathogenicity of Bacilli . [ 6 ] In B. anthracis, petrobactin is produced by a nonribosomal peptide synthetase-independent siderophore (NIS) synthetase pathway. [ 7 ] The enzymes used are anthrax siderophore biosynthesis (Asb) A through F, in alphabetical order. These gene clusters are identical to those used in the M. hydrocarbonoclasticus biosynthesis of petrobactin. In A. macleodii only the first three genes, AsbA through AsbC, are identical to B. anthracis; these are followed by a longer AsbD and AsbF, then two hypothetical protein domains and a PepSY domain-containing gene. A. macleodii ends its sequence with AsbE. [ 2 ] The biosynthesis of petrobactin in B. anthracis can progress in the order AsbA–AsbB–AsbE–AsbE or AsbA–AsbE–AsbB–AsbE. [ 8 ] If the enzymatic reactions in this pathway proceed in the usual way, in the AsbA and AsbB steps the phosphorylation of a carboxylic acid forms an acylphosphate intermediate, which is then dephosphorylated by a primary amine in spermidine . In the AsbE step the lone pair of electrons on a primary amine allows for a nucleophilic attack on the electrophilic hydroxyl carbon. The sulfur on AsbE is protonated to form a thiol and the amide nitrogen is deprotonated. [ 9 ] The dehydration of 3-dehydroshikimic acid might proceed as a modified, enzyme-catalyzed dienol benzene rearrangement and reduction, leading to aromatization of the ring.
https://en.wikipedia.org/wiki/Petrobactin
The Petrochemical Heritage Award was established in 1997, "to recognize individuals who made outstanding contributions to the petrochemical community ." [ 1 ] The award is intended to inspire achievement and to promote public understanding. The award winner is chosen annually by the Founders Club and the Science History Institute (formerly the Chemical Heritage Foundation). The award is traditionally presented at the International Petrochemical Conference hosted by the American Fuel and Petrochemical Manufacturers (AFPM), formerly known as NPRA, the National Petrochemical & Refiners Association. [ 2 ] The following people have received the Petrochemical Heritage Award: [ 2 ]
https://en.wikipedia.org/wiki/Petrochemical_Heritage_Award
The petrochemical industry is concerned with the production and trade of petrochemicals . [ according to whom? ] A major part is constituted by the plastics (polymer) industry . [ according to whom? ] It directly interfaces with the petroleum industry , especially the downstream sector. [ according to whom? ] The top global petrochemical companies based on different KPIs: This industry -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Petrochemical_industry
The emergence of oil production in the territory now known as Romania dates back to 1857, [ 1 ] with oil facilities gaining strategic military significance in 1916 during World War I . Throughout World War II , the Kingdom of Romania was the largest oil producer in Europe after the USSR, whose primary oil source was located in Azerbaijan. The oil extracted from Romania played a pivotal role in Axis military operations, a fact underscored in Adolf Hitler's 1942 conversation with Gustaf Emil Mannerheim . [ 2 ] The Romanian petrochemical industry, particularly centered around Ploiești, became a focal point for Allied bombing raids, notably during Operation Tidal Wave . [ 3 ] The Soviet Red Army later occupied the Romanian oilfields in August 1944. Post-World War II, extensive reconstruction and expansion initiatives were undertaken under the communist regime. Following the events of 1989, a significant portion of the industry underwent privatization. Present-day Romania boasts significant oil-refining capabilities, demonstrating a notable interest in the Central Asia-Europe pipelines while actively cultivating relations with select Arab States of the Persian Gulf. With a total of 10 refineries and an overall refining capacity of approximately 504,000 barrels per day (80,100 cubic meters per day), Romania stands as the leading nation in the eastern European region in terms of refining industry scale. [ 4 ] Romania's extensive refining capacity surpasses its domestic demand for refined petroleum products, enabling the country to engage in substantial exports of various oil products and petrochemicals, including but not limited to lubricants, bitumen, and fertilizers, distributed across the eastern European region. By 2017, the number of refineries still capable of production had dwindled to just five, with overall capacity declining to 13.7 million metric tons per year. [ 5 ] This is an incomplete list of oil refineries in Romania : Dormant refineries: Closed refineries: Romania has closed down the majority of its petrochemical processing platforms. Those remaining are:
https://en.wikipedia.org/wiki/Petrochemical_industry_in_Romania
Petrol piracy , also sometimes called oil piracy or petro-piracy , is an act of piracy that specifically involves petroleum resources , or their transportation, consumption, and regulation. It should not be confused with the term oil war : although both involve petroleum , petrol piracy always involves at least one of the aggressors being ship- or boat-borne. [ 1 ] Although it may seem less prevalent in modern society due to plummeting oil prices and lower attack rates, a number of specific incidents have still occurred, and since the start of COVID-19 there has been an unprecedented resurgence in piracy incidents, petrol piracy included. [ 2 ] In contrast to traditional piracy, petroleum ships are generally targeted over merchant ships, as this serves as a means to fight back against 'resource control' within the region. [ 3 ] The most recent copy of the IMB's piracy report shows signs of piracy activity doubling in areas with previously very low numbers. [ 2 ] This is attributed to a stronger economic downturn than usual as a result of COVID-19 . [ 7 ] Current hot-spots include areas like the Gulf of Aden and the West African nation of Guinea , a focal point for illicit petroleum due to its geographical positioning in relation to several sources of oil along the coast. [ 8 ]
https://en.wikipedia.org/wiki/Petrol_piracy
Petroleomics is the identification of the totality of the constituents of naturally occurring petroleum and crude oil using high resolution mass spectrometry . [ 1 ] [ 2 ] [ 3 ] In addition to mass determination, petroleomic analysis sorts the chemical compounds into heteroatom class ( nitrogen , oxygen and sulfur ), type ( degree of unsaturation ), and carbon number. [ 4 ] The name is a combination of petroleum and - omics (collective chemical characterization and quantification ). Mass spectrometry characterization of petroleum has been performed since the first commercial mass spectrometers were introduced in the 1940s. [ 5 ] [ 6 ] Early mass spectrometry was limited to relatively low molecular weight nonpolar species accessed mainly by electron ionization with mass analysis on sector mass spectrometers . By the end of the 20th century, separations combined with mass spectrometric techniques such as gas chromatography-mass spectrometry and liquid chromatography mass spectrometry have characterized petroleum distillates such as gasoline , diesel , and gas oil . [ 7 ] The first petroleum analysis with electrospray ionization was demonstrated in 2000 by Zhan and Fenn , who studied the polar species in petroleum distillates with low-resolution MS. [ 8 ] Electrospray ionization was coupled with high-resolution FT-ICR by Marshall and coworkers. [ 1 ] To date, many studies on petroleomic analysis of crude oils have been published. Most work has been done by the group of Marshall at the National High Magnetic Field Laboratory (NHMFL) and Florida State University . [ 2 ] Ionization of nonpolar petroleum components can be achieved by field desorption ionization and atmospheric pressure photoionization (APPI). [ 9 ] Field desorption FT-ICR MS has enabled the identification of a large number of nonpolar components in crude oils that are not accessible by electrospray, such as benzo - and dibenzothiophenes , furans , cycloalkanes , and polycyclic aromatic hydrocarbons (PAHs). A drawback of field desorption is that it is slow, mainly due to the need to ramp the current to the emitter in order to volatilize and ionize molecules. APPI can ionize both polar and nonpolar species, [ 10 ] and an APPI spectrum can be generated in just a few seconds. However, APPI ionizes a broad range of compound classes and produces both protonated and molecular ion peaks, resulting in a complex mass spectrum. [ 2 ] High mass resolution data analysis is usually undertaken by converting the mass spectra to the Kendrick mass scale, in which the mass of a methylene unit is set to exactly 14 (CH 2 = 14.0000 instead of 14.01565 daltons ). [ 11 ] This rescaling of the data aids in the identification of homologous series according to alkylation, class (number of heteroatoms), and type (double bond equivalent, DBE, also called rings plus double bonds or degree of unsaturation). The scaled data is then used to obtain the Kendrick mass defect (KMD), which is given by $$\text{KMD} = \text{nominal Kendrick mass} - \text{Kendrick mass},$$ where the nominal Kendrick mass is the Kendrick mass rounded to the nearest integer. The double bond equivalent (DBE) is calculated according to $$\text{DBE} = C - \frac{H}{2} - \frac{X}{2} + \frac{N}{2} + 1,$$ where C = number of carbons, H = number of hydrogens, X = number of halogens and N = number of nitrogens. [ 12 ] Compounds with the same DBE have the same mass defect. Therefore, Kendrick normalization yields a set of series with identical mass defect that appear as horizontal rows in a plot of DBE versus Kendrick mass. The data can also be plotted as a 3D heat-map to indicate the relative intensity of the mass spectral peaks. 
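The Kendrick rescaling, mass defect, and DBE calculations described above are simple enough to script; the sketch below uses an arbitrary example mass (not from the article) to show that two species differing by one CH2 unit share the same Kendrick mass defect.

```python
# Kendrick mass, Kendrick mass defect (KMD), and double bond equivalents (DBE),
# following the definitions given above.
def kendrick_mass(iupac_mass):
    """Rescale so a CH2 unit weighs exactly 14.0000 instead of 14.01565 Da."""
    return iupac_mass * 14.00000 / 14.01565

def kendrick_mass_defect(iupac_mass):
    """KMD = nominal (integer-rounded) Kendrick mass minus exact Kendrick mass."""
    km = kendrick_mass(iupac_mass)
    return round(km) - km

def dbe(c, h, n=0, x=0):
    """DBE = C - H/2 - X/2 + N/2 + 1 (oxygen and sulfur do not enter the formula)."""
    return c - h / 2 - x / 2 + n / 2 + 1

mass = 226.1722  # an arbitrary illustrative mass in daltons
print(round(kendrick_mass_defect(mass), 4),
      round(kendrick_mass_defect(mass + 14.01565), 4))  # identical KMD for CH2 homologues
print(dbe(10, 8))  # naphthalene, C10H8 -> 7.0 rings plus double bonds
```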
From the Kendrick plot, the species with peaks in the mass spectrum can be sorted into compound classes by the number of nitrogen, oxygen and sulfur heteroatoms. The data can also be represented with a Van Krevelen diagram . [ 13 ]
https://en.wikipedia.org/wiki/Petroleomics
Petroleum , also known as crude oil or simply oil , is a naturally occurring, yellowish-black liquid chemical mixture found in geological formations , consisting mainly of hydrocarbons . [ 1 ] The term petroleum refers both to naturally occurring unprocessed crude oil, as well as to petroleum products that consist of refined crude oil. Petroleum is a fossil fuel formed over millions of years from anaerobic decay of organic materials from buried prehistoric organisms , particularly planktons and algae , and 70% of the world's oil deposits were formed during the Mesozoic . [ 2 ] Conventional reserves of petroleum are primarily recovered by drilling , which is done after a study of the relevant structural geology , analysis of the sedimentary basin , and characterization of the petroleum reservoir . There are also unconventional reserves such as oil sands and oil shale which are recovered by other means such as fracking . Once extracted, oil is refined and separated, most easily by distillation , into innumerable products for direct use or use in manufacturing . Petroleum products include fuels such as gasoline (petrol), diesel , kerosene and jet fuel ; bitumen , paraffin wax and lubricants ; reagents used to make plastics ; solvents , textiles , refrigerants , paint , synthetic rubber , fertilizers , pesticides , pharmaceuticals , and thousands of other petrochemicals . Petroleum is used in manufacturing a vast variety of materials essential for modern life, [ 3 ] and it is estimated that the world consumes about 100 million barrels (16 million cubic metres ) each day. Petroleum production played a key role in industrialization and economic development , [ 4 ] especially after the Second Industrial Revolution . Some petroleum-rich countries, known as petrostates , gained significant economic and international influence during the latter half of the 20th century due to their control of oil production and trade. Petroleum is a non-renewable resource , and exploitation can be damaging to both the natural environment , climate system and human health (see Health and environmental impact of the petroleum industry ). Extraction , refining and burning of petroleum fuels reverse the carbon sink and release large quantities of greenhouse gases back into the Earth's atmosphere , so petroleum is one of the major contributors to anthropogenic climate change . Other negative environmental effects include direct releases, such as oil spills , as well as air and water pollution at almost all stages of use. Oil access and pricing have also been a source of domestic and geopolitical conflicts, leading to state-sanctioned oil wars , diplomatic and trade frictions , energy policy disputes and other resource conflicts . Production of petroleum is estimated to reach peak oil before 2035 [ 5 ] as global economies lower dependencies on petroleum as part of climate change mitigation and a transition towards more renewable energy and electrification . [ 6 ] The word petroleum comes from Medieval Latin petroleum (literally 'rock oil'), which comes from Latin petra 'rock' (from Greek pétra πέτρα ) and oleum 'oil' (from Greek élaion ἔλαιον ). [ 7 ] [ 8 ] The origin of the term stems from monasteries in southern Italy where it was in use by the end of the first millennium as an alternative for the older term " naphtha ". 
[ 9 ] After that, the term was used in numerous manuscripts and books, such as in the treatise De Natura Fossilium , published in 1546 by the German mineralogist Georg Bauer , also known as Georgius Agricola. [ 10 ] After the advent of the oil industry, during the second half of the 19th century, the term became commonly known for the liquid form of hydrocarbons. Petroleum, in one form or another, has been used since ancient times. More than 4300 years ago, bitumen was mentioned when the Sumerians used it to make boats. A tablet of the legend of the birth of Sargon of Akkad mentions a basket which was closed by straw and bitumen. More than 4000 years ago, according to Herodotus and Diodorus Siculus , asphalt was used in the construction of the walls and towers of Babylon ; there were oil pits near Ardericca and Babylon, and a pitch spring on Zakynthos . [ 11 ] Great quantities of it were found on the banks of the river Issus , one of the tributaries of the Euphrates . Ancient Persian tablets indicate the medicinal and lighting uses of petroleum in the upper levels of their society. The use of petroleum in ancient China dates back to more than 2000 years ago. The I Ching , one of the earliest Chinese writings, cites that oil in its raw state, without refining, was first discovered, extracted, and used in China in the first century BCE. In addition, the Chinese were the first to record the use of petroleum as fuel as early as the fourth century BCE. [ 12 ] [ 13 ] [ 14 ] By 347 CE, oil was produced from bamboo-drilled wells in China. [ 15 ] [ 16 ] In the 7th century, petroleum was among the essential ingredients for Greek fire , an incendiary projectile weapon that was used by Byzantine Greeks against Arab ships, which were then attacking Constantinople . [ 17 ] Crude oil was also distilled by Persian chemists , with clear descriptions given in Arabic handbooks such as those of Abu Bakr al-Razi (Rhazes). [ 18 ] The streets of Baghdad were paved with tar , derived from petroleum that became accessible from natural fields in the region. In the 9th century, oil fields were exploited in the area around modern Baku , Azerbaijan . These fields were described by the Persian geographer Abu Bakr al-Razi in the 10th century, and by Marco Polo in the 13th century, who described the output of those wells as hundreds of shiploads. [ 19 ] Arab and Persian chemists also distilled crude oil to produce flammable products for military purposes. Through Islamic Spain , distillation became available in Western Europe by the 12th century. [ 20 ] It has also been present in Romania since the 13th century, being recorded as păcură. [ 21 ] Sophisticated oil pits, 4.5 to 6 metres (15 to 20 ft) deep, were dug by the Seneca people and other Iroquois in Western Pennsylvania as early as 1415–1450. The French General Louis-Joseph de Montcalm encountered Seneca using petroleum for ceremonial fires and as a healing lotion during a visit to Fort Duquesne in 1750. [ 22 ] Early British explorers to Myanmar documented a flourishing oil extraction industry based in Yenangyaung that, in 1795, had hundreds of hand-dug wells under production. [ 23 ] Merkwiller-Pechelbronn is said to be the first European site where petroleum has been explored and used. The still active Erdpechquelle, a spring where petroleum appears mixed with water has been used since 1498, notably for medical purposes. There was activity in various parts of the world in the mid-19th century. 
A group directed by Major Alexeyev of the Bakinskii Corps of Mining Engineers hand-drilled a well in the Baku region of Bibi-Heybat in 1846. [ 24 ] There were engine-drilled wells in West Virginia in 1859, the same year as Drake's well. [ 25 ] An early commercial well was hand dug in Poland in 1853, and another in nearby Romania in 1857. At around the same time the world's first, small, oil refinery was opened at Jasło in Poland (then Austria), with a larger one opened at Ploiești in Romania shortly after. Romania (then being a vassal of the Ottoman Empire) is the first country in the world to have had its annual crude oil output officially recorded in international statistics: 275 tonnes for 1857. [ 26 ] [ 27 ] In 1858, Georg Christian Konrad Hunäus found a significant amount of petroleum while drilling for lignite in Wietze , Germany. Wietze later provided about 80% of German consumption in the Wilhelmine Era. [ 28 ] The production stopped in 1963, but Wietze has hosted a Petroleum Museum since 1970. [ 29 ] Oil sands have been mined since the 18th century. [ 30 ] In Wietze in lower Saxony, natural asphalt/bitumen has been explored since the 18th century. [ 31 ] Both in Pechelbronn as in Wietze, the coal industry dominated the petroleum technologies. [ 32 ] Chemist James Young in 1847 noticed a natural petroleum seepage in the coal mine at riddings Alfreton , Derbyshire from which he distilled a light thin oil suitable for use as lamp oil, at the same time obtaining a more viscous oil suitable for lubricating machinery. In 1848, Young set up a small business refining crude oil. [ 33 ] Young eventually succeeded, by distilling cannel coal at low heat, in creating a fluid resembling petroleum, which when treated in the same way as the seep oil gave similar products. Young found that by slow distillation he could obtain several useful liquids from it, one of which he named "paraffine oil" because at low temperatures it congealed into a substance resembling paraffin wax. [ 33 ] The production of these oils and solid paraffin wax from coal formed the subject of his patent dated October 17, 1850. In 1850, Young & Meldrum and Edward William Binney entered into partnership under the title of E.W. Binney & Co. at Bathgate in West Lothian and E. Meldrum & Co. at Glasgow; their works at Bathgate were completed in 1851 and became the first truly commercial oil-works in the world with the first modern oil refinery. [ 34 ] [ clarification needed ] The world's first oil refinery was built in 1856 by Ignacy Łukasiewicz in Austria. [ 35 ] His achievements also included the discovery of how to distill kerosene from seep oil, the invention of the modern kerosene lamp (1853), the introduction of the first modern street lamp in Europe (1853), and the construction of the world's first modern oil "mine" (1854). [ 36 ] at Bóbrka , near Krosno (still operational as of 2020). The demand for petroleum as a fuel for lighting in North America and around the world quickly grew. [ 37 ] The first oil well in the Americas was drilled in 1859 by Edwin Drake at what is now called the Drake Well in Cherrytree Township, Pennsylvania . There also was a company associated with it, and it sparked a major oil drilling boom. [ 38 ] The first commercial oil well in Canada became operational in 1858 at Oil Springs, Ontario (then Canada West ). [ 39 ] Businessman James Miller Williams dug several wells between 1855 and 1858 before discovering a rich reserve of oil four metres below ground. 
[ 40 ] [ specify ] Williams extracted 1.5 million litres of crude oil by 1860, refining much of it into kerosene lamp oil. Williams's well became commercially viable a year before Drake's Pennsylvania operation and could be argued to be the first commercial oil well in North America. [ 41 ] The discovery at Oil Springs touched off an oil boom which brought hundreds of speculators and workers to the area. Advances in drilling continued into 1862 when local driller Shaw reached a depth of 62 metres using the spring-pole drilling method. [ 42 ] On January 16, 1862, after an explosion of natural gas , Canada's first oil gusher came into production, shooting into the air at a recorded rate of 480 cubic metres (3,000 bbl) per day. [ 43 ] By the end of the 19th century the Russian Empire, particularly the Branobel company in Azerbaijan , had taken the lead in production. [ 44 ] Access to oil was and still is a major factor in several military conflicts of the 20th century, including World War II , during which oil facilities were a major strategic asset and were extensively bombed . [ 45 ] The German invasion of the Soviet Union included the goal to capture the Baku oilfields , as it would provide much-needed oil supplies for the German military which was suffering from blockades. [ 46 ] Oil exploration in North America during the early 20th century later led to the U.S. becoming the leading producer by mid-century. As petroleum production in the U.S. peaked during the 1960s, the United States was surpassed by Saudi Arabia and the Soviet Union in total output. [ 47 ] [ 48 ] [ 49 ] In 1973 , Saudi Arabia and other Arab nations imposed an oil embargo against the United States, United Kingdom, Japan and other Western nations which supported Israel in the Yom Kippur War of October 1973. [ 50 ] The embargo caused an oil crisis . This was followed by the 1979 oil crisis , which was caused by a drop in oil production in the wake of the Iranian Revolution and caused oil prices to more than double. The two oil price shocks had many short- and long-term effects on global politics and the global economy. [ 51 ] They led to sustained reductions in demand as a result of substitution to other fuels, especially coal and nuclear, and improvements in energy efficiency , facilitated by government policies. [ 52 ] High oil prices also induced investment in oil production by non-OPEC countries, including Prudhoe Bay in Alaska, the North Sea offshore fields of the United Kingdom and Norway, the Cantarell offshore field of Mexico, and oil sands in Canada. [ 53 ] About 90 percent of vehicular fuel needs are met by oil. Petroleum also makes up 40 percent of total energy consumption in the United States , but is responsible for only one percent of electricity generation. [ 54 ] Petroleum's worth as a portable, dense energy source powering the vast majority of vehicles and as the base of many industrial chemicals makes it one of the world's most important commodities . The top three oil-producing countries as of 2018 are the United States, Russia , and Saudi Arabia . [ 55 ] In 2018, due in part to developments in hydraulic fracturing and horizontal drilling , the United States became the world's largest producer. [ 56 ] About 80 percent of the world's readily accessible reserves are located in the Middle East , with 62.5 percent coming from the Arab five: Saudi Arabia , United Arab Emirates , Iraq , Qatar , and Kuwait . 
A large portion of the world's total oil exists as unconventional sources, such as bitumen in Athabasca oil sands and extra heavy oil in the Orinoco Belt . While significant volumes of oil are extracted from oil sands, particularly in Canada, logistical and technical hurdles remain, as oil extraction requires large amounts of heat and water, making its net energy content quite low relative to conventional crude oil. Thus, Canada's oil sands are not expected to provide more than a few million barrels per day in the foreseeable future. [ 57 ] [ 58 ] [ 59 ] Petroleum consists of a variety of liquid, gaseous, and solid components. Lighter hydrocarbons are the gases methane , ethane , propane and butane . Otherwise, the bulk of the liquid and solids are largely heavier organic compounds, often hydrocarbons (C and H only). The proportion of light hydrocarbons in the petroleum mixture varies among oil fields . [ 60 ] An oil well produces predominantly crude oil. Because the pressure is lower at the surface than underground, some of the gas will come out of solution and be recovered (or burned) as associated gas or solution gas . A gas well produces predominantly natural gas . However, because the underground temperature is higher than at the surface, the gas may contain heavier hydrocarbons such as pentane, hexane , and heptane (" natural-gas condensate ", often shortened to condensate. ) Condensate resembles gasoline in appearance and is similar in composition to some volatile light crude oils . [ 61 ] [ 62 ] The hydrocarbons in crude oil are mostly alkanes , cycloalkanes and various aromatic hydrocarbons , while the other organic compounds contain nitrogen , oxygen , and sulfur , and traces of metals such as iron, nickel, copper and vanadium . Many oil reservoirs contain live bacteria. [ 63 ] The exact molecular composition of crude oil varies widely from formation to formation but the proportion of chemical elements varies over fairly narrow limits as follows: [ 64 ] Four different types of hydrocarbon appear in crude oil. The relative percentage of each varies from oil to oil, determining the properties of each oil. [ 60 ] The alkanes from pentane (C 5 H 12 ) to octane (C 8 H 18 ) are refined into gasoline, the ones from nonane (C 9 H 20 ) to hexadecane (C 16 H 34 ) into diesel fuel , kerosene and jet fuel . Alkanes with more than 16 carbon atoms can be refined into fuel oil and lubricating oil . At the heavier end of the range, paraffin wax is an alkane with approximately 25 carbon atoms, while asphalt has 35 and up, although these are usually cracked in modern refineries into more valuable products. The lightest fraction, the so-called petroleum gases are subjected to diverse processing depending on cost. These gases are either flared off , sold as liquefied petroleum gas , or used to power the refinery's own burners. During the winter, butane (C 4 H 10 ), is blended into the gasoline pool at high rates, because its high vapour pressure assists with cold starts. The aromatic hydrocarbons are unsaturated hydrocarbons that have one or more benzene rings . They tend to burn with a sooty flame, and many have a sweet aroma. Some are carcinogenic . These different components are separated by fractional distillation at an oil refinery to produce gasoline, jet fuel, kerosene, and other hydrocarbon fractions. The components in an oil sample can be determined by gas chromatography and mass spectrometry . 
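As a rough illustration of the carbon-number ranges just described, the sketch below maps an alkane's carbon count to the refinery fraction named in the text; the cut-offs are simplified, and real refinery fractions overlap and depend on the process.

```python
# Approximate mapping from alkane carbon number to refined product, per the ranges above.
def fraction_for_alkane(carbon_atoms):
    if carbon_atoms <= 4:
        return "petroleum gases (methane to butane)"
    if carbon_atoms <= 8:
        return "gasoline (pentane to octane)"
    if carbon_atoms <= 16:
        return "diesel fuel, kerosene and jet fuel (nonane to hexadecane)"
    return "fuel oil and lubricating oil (paraffin wax ~C25, asphalt C35 and up)"

for c in (3, 6, 12, 20, 30, 40):
    print(c, "->", fraction_for_alkane(c))
```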
[ 66 ] Due to the large number of co-eluted hydrocarbons within oil, many cannot be resolved by traditional gas chromatography. This unresolved complex mixture (UCM) of hydrocarbons is particularly apparent when analysing weathered oils and extracts from tissues of organisms exposed to oil. Crude oil varies greatly in appearance depending on its composition. It is usually black or dark brown (although it may be yellowish, reddish, or even greenish). In the reservoir it is usually found in association with natural gas, which being lighter forms a "gas cap" over the petroleum, and saline water which, being heavier than most forms of crude oil, generally sinks beneath it. Crude oil may also be found in a semi-solid form mixed with sand and water, as in the Athabasca oil sands in Canada, where it is usually referred to as crude bitumen . In Canada, bitumen is considered a sticky, black, tar-like form of crude oil which is so thick and heavy that it must be heated or diluted before it will flow. [ 67 ] Venezuela also has large amounts of oil in the Orinoco oil sands , although the hydrocarbons trapped in them are more fluid than in Canada and are usually called extra heavy oil . These oil sands resources are called unconventional oil to distinguish them from oil which can be extracted using traditional oil well methods. Between them, Canada and Venezuela contain an estimated 3.6 trillion barrels (570 × 10 ^ 9 m 3 ) of bitumen and extra-heavy oil, about twice the volume of the world's reserves of conventional oil. [ 68 ] Petroleum is a fossil fuel derived from fossilized organic materials , such as zooplankton and algae . [ 71 ] [ 72 ] Vast amounts of these remains settled to sea or lake bottoms where they were covered in stagnant water (water with no dissolved oxygen ) or sediments such as mud and silt faster than they could decompose aerobically . Approximately 1 m below this sediment, water oxygen concentration was low, below 0.1 mg/L, and anoxic conditions existed. Temperatures also remained constant. [ 72 ] As further layers settled into the sea or lake bed, intense heat and pressure built up in the lower regions. This process caused the organic matter to change, first into a waxy material known as kerogen , found in various oil shales around the world, and then with more heat into liquid and gaseous hydrocarbons via a process known as catagenesis . Formation of petroleum occurs from hydrocarbon pyrolysis in a variety of mainly endothermic reactions at high temperatures or pressures, or both. [ 72 ] [ 73 ] These phases are described in detail below. In the absence of plentiful oxygen, aerobic bacteria were prevented from decaying the organic matter after it was buried under a layer of sediment or water. However, anaerobic bacteria were able to reduce sulfates and nitrates among the matter to H 2 S and N 2 respectively by using the matter as a source for other reactants. Due to such anaerobic bacteria, at first, this matter began to break apart mostly via hydrolysis : polysaccharides and proteins were hydrolyzed to simple sugars and amino acids respectively. These were further anaerobically oxidized at an accelerated rate by the enzymes of the bacteria: e.g., amino acids went through oxidative deamination to imino acids , which in turn reacted further to ammonia and α-keto acids . Monosaccharides in turn ultimately decayed to CO 2 and methane . The anaerobic decay products of amino acids, monosaccharides, phenols and aldehydes combined into fulvic acids .
Fats and waxes were not extensively hydrolyzed under these mild conditions. [ 72 ] Some phenolic compounds produced from previous reactions worked as bactericides and the Actinomycetales order of bacteria also produced antibiotic compounds (e.g., streptomycin ). Thus the action of anaerobic bacteria ceased at about 10 m below the water or sediment. The mixture at this depth contained fulvic acids, unreacted and partially reacted fats and waxes, slightly modified lignin , resins and other hydrocarbons. [ 72 ] As more layers of organic matter settled into the sea or lake bed, intense heat and pressure built up in the lower regions. [ 73 ] As a consequence, compounds of this mixture began to combine in poorly understood ways to kerogen . Combination happened in a similar fashion as phenol and formaldehyde molecules react to urea-formaldehyde resins, but kerogen formation occurred in a more complex manner due to a bigger variety of reactants. The total process of kerogen formation from the beginning of anaerobic decay is called diagenesis , a word that means a transformation of materials by dissolution and recombination of their constituents. [ 72 ] Kerogen formation continued to a depth of about 1 km from the Earth's surface where temperatures may reach around 50 °C . Kerogen formation represents a halfway point between organic matter and fossil fuels : kerogen can be exposed to oxygen, oxidize and thus be lost, or it could be buried deeper inside the Earth's crust and be subjected to conditions which allow it to slowly transform into fossil fuels like petroleum. The latter happened through catagenesis in which the reactions were mostly radical rearrangements of kerogen. These reactions took thousands to millions of years and no external reactants were involved. Due to the radical nature of these reactions, kerogen reacted towards two classes of products: those with low H/C ratio ( anthracene or products similar to it) and those with high H/C ratio ( methane or products similar to it); i.e., carbon-rich or hydrogen-rich products. Because catagenesis was closed off from external reactants, the resulting composition of the fuel mixture was dependent on the composition of the kerogen via reaction stoichiometry . Three types of kerogen exist: type I (algal), II (liptinic) and III (humic), which were formed mainly from algae , plankton and woody plants (this term includes trees , shrubs and lianas ) respectively. [ 72 ] Catagenesis was pyrolytic despite the fact that it happened at relatively low temperatures (when compared to commercial pyrolysis plants) of 60 to several hundred °C. Pyrolysis was possible because of the long reaction times involved. Heat for catagenesis came from the decomposition of radioactive materials of the crust, especially 40 K , 232 Th , 235 U and 238 U . The heat varied with geothermal gradient and was typically 10–30 °C per km of depth from the Earth's surface. Unusual magma intrusions, however, could have created greater localized heating. [ 72 ] Geologists often refer to the temperature range in which oil forms as an "oil window" . [ 74 ] [ 75 ] [ 72 ] Below the minimum temperature oil remains trapped in the form of kerogen. Above the maximum temperature the oil is converted to natural gas through the process of thermal cracking . Sometimes, oil formed at extreme depths may migrate and become trapped at a much shallower level. The Athabasca oil sands are one example of this. 
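As a rough numerical illustration of the "oil window" idea and the 10–30 °C per km geothermal gradients mentioned above, the sketch below estimates the depth interval over which a given temperature window is reached. The window limits (roughly 60–120 °C) and the surface temperature used here are assumed illustrative values, not figures from this article; actual oil-window temperatures depend on the kerogen type and burial history.

```python
# Illustrative sketch: estimate the depth range of the "oil window" from an
# assumed geothermal gradient. T(depth) = T_surface + gradient * depth, so
# depth = (T - T_surface) / gradient. All numbers below are assumptions made
# for illustration, not values taken from the article.

def window_depths_km(t_surface_c: float, gradient_c_per_km: float,
                     t_min_c: float = 60.0, t_max_c: float = 120.0):
    """Return (shallowest, deepest) depth in km at which the window is reached."""
    top = (t_min_c - t_surface_c) / gradient_c_per_km
    bottom = (t_max_c - t_surface_c) / gradient_c_per_km
    return top, bottom

if __name__ == "__main__":
    for gradient in (10.0, 20.0, 30.0):  # degrees C per km, the range quoted above
        top, bottom = window_depths_km(t_surface_c=15.0, gradient_c_per_km=gradient)
        print(f"{gradient:.0f} C/km: oil window roughly {top:.1f}-{bottom:.1f} km deep")
```

With a steep 30 °C/km gradient the assumed window sits at roughly 1.5–3.5 km, while a gentle 10 °C/km gradient pushes it several kilometres deeper, which is why the same source rock can be immature in one basin and gas-prone in another.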
[ 72 ] An alternative mechanism to the one described above was proposed by Russian scientists in the mid-1850s, the hypothesis of abiogenic petroleum origin (petroleum formed by inorganic means), but this is contradicted by geological and geochemical evidence. [ 76 ] Abiogenic sources of oil have been found, but never in commercially profitable amounts. "The controversy isn't over whether abiogenic oil reserves exist," said Larry Nation of the American Association of Petroleum Geologists. "The controversy is over how much they contribute to Earth's overall reserves and how much time and effort geologists should devote to seeking them out." [ 77 ] Three conditions must be present for oil reservoirs to form: a source rock rich in hydrocarbon material buried deeply enough for subterranean heat to cook it into oil, a porous and permeable reservoir rock in which it can accumulate, and a cap rock (seal) or other mechanism that prevents the oil from escaping to the surface. The reactions that produce oil and natural gas are often modeled as first order breakdown reactions, where kerogen is broken down to oil and natural gas by a set of parallel reactions, and oil eventually breaks down to natural gas by another set of reactions. The latter set is regularly used in petrochemical plants and oil refineries . Petroleum has mostly been recovered by oil drilling (natural petroleum springs are rare). Drilling is carried out after studies of structural geology (at the reservoir scale), sedimentary basin analysis, and reservoir characterisation (mainly in terms of the porosity and permeability of geologic reservoir structures). [ 78 ] [ 79 ] Wells are drilled into oil reservoirs to extract the crude oil. "Natural lift" production methods that rely on the natural reservoir pressure to force the oil to the surface are usually sufficient for a while after reservoirs are first tapped. In some reservoirs, such as in the Middle East, the natural pressure is sufficient over a long time. The natural pressure in most reservoirs, however, eventually dissipates. Then the oil must be extracted using " artificial lift " means. Over time, these "primary" methods become less effective and "secondary" production methods may be used. A common secondary method is "waterflood" or injection of water into the reservoir to increase pressure and force the oil to the drilled shaft or "wellbore." Eventually "tertiary" or "enhanced" oil recovery methods may be used to increase the oil's flow characteristics by injecting steam, carbon dioxide and other gases or chemicals into the reservoir. In the United States, primary production methods account for less than 40 percent of the oil produced on a daily basis, secondary methods account for about half, and tertiary recovery the remaining 10 percent. Extracting oil (or "bitumen") from oil/tar sand and oil shale deposits requires mining the sand or shale and heating it in a vessel or retort, or using "in-situ" methods of injecting heated liquids into the deposit and then pumping the liquid back out saturated with oil. Oil-eating bacteria biodegrade oil that has escaped to the surface. Oil sands are reservoirs of partially biodegraded oil still in the process of escaping and being biodegraded, but they contain so much migrating oil that, although most of it has escaped, vast amounts are still present—more than can be found in conventional oil reservoirs. The lighter fractions of the crude oil are destroyed first, resulting in reservoirs containing an extremely heavy form of crude oil, called crude bitumen in Canada, or extra-heavy crude oil in Venezuela . These two countries have the world's largest deposits of oil sands.
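The first-order breakdown model mentioned above (kerogen breaking down to oil and gas in parallel, and oil cracking further to gas) can be written out in a few lines. The sketch below integrates such a scheme with a simple Euler loop; the rate constants are arbitrary illustrative values, not calibrated figures, and a real basin model would make the rates temperature-dependent (for example via an Arrhenius law) and track many parallel reactions.

```python
# Minimal sketch of the first-order breakdown scheme described above:
#   kerogen -> oil   (rate k1)
#   kerogen -> gas   (rate k2)
#   oil     -> gas   (rate k3)
# The rate constants below are arbitrary illustrative values (per Myr).

def generate(kerogen0=1.0, k1=0.03, k2=0.01, k3=0.005, dt=0.1, steps=5000):
    kerogen, oil, gas = kerogen0, 0.0, 0.0
    for _ in range(steps):
        d_kerogen = -(k1 + k2) * kerogen * dt
        d_oil = (k1 * kerogen - k3 * oil) * dt
        d_gas = (k2 * kerogen + k3 * oil) * dt
        kerogen += d_kerogen
        oil += d_oil
        gas += d_gas
    return kerogen, oil, gas

if __name__ == "__main__":
    k, o, g = generate()  # 5000 steps of 0.1 Myr = 500 Myr of simulated time
    print(f"after 500 Myr: kerogen={k:.3f}, oil={o:.3f}, gas={g:.3f}")
```

Because the three increments sum to zero at every step, total mass is conserved; over long times the model predicts the behaviour the text describes, with oil first accumulating and then being converted to gas.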
[ 80 ] On the other hand, oil shales are source rocks that have not been exposed to heat or pressure long enough to convert their trapped hydrocarbons into crude oil. Technically speaking, oil shales are not always shales and do not contain oil, but are fined-grain sedimentary rocks containing an insoluble organic solid called kerogen . The kerogen in the rock can be converted into crude oil using heat and pressure to simulate natural processes. The method has been known for centuries and was patented in 1694 under British Crown Patent No. 330 covering, "A way to extract and make great quantities of pitch, tar, and oil out of a sort of stone." Although oil shales are found in many countries, the United States has the world's largest deposits. [ 81 ] The petroleum industry generally classifies crude oil by the geographic location it is produced in (e.g., West Texas Intermediate , Brent , or Oman ), its API gravity (an oil industry measure of density), and its sulfur content. Crude oil may be considered light if it has low density, heavy if it has high density, or medium if it has a density between that of light and heavy . [ 82 ] Additionally, it may be referred to as sweet if it contains relatively little sulfur or sour if it contains substantial amounts of sulfur. [ 83 ] The geographic location is important because it affects transportation costs to the refinery. Light crude oil is more desirable than heavy oil since it produces a higher yield of gasoline, while sweet oil commands a higher price than sour oil because it has fewer environmental problems and requires less refining to meet sulfur standards imposed on fuels in consuming countries. Each crude oil has unique molecular characteristics which are revealed by the use of crude oil assay analysis in petroleum laboratories. [ 84 ] Barrels from an area in which the crude oil's molecular characteristics have been determined and the oil has been classified are used as pricing references throughout the world. Some of the common reference crudes are: [ 85 ] There are declining amounts of these benchmark oils being produced each year, so other oils are more commonly what is actually delivered. While the reference price may be for West Texas Intermediate delivered at Cushing, the actual oil being traded may be a discounted Canadian heavy oil – Western Canadian Select – delivered at Hardisty , Alberta , and for a Brent Blend delivered at Shetland, it may be a discounted Russian Export Blend delivered at the port of Primorsk . [ 88 ] Once extracted, oil is refined and separated, most easily by distillation , into numerous products for direct use or use in manufacturing, such as gasoline (petrol), diesel and kerosene to asphalt and chemical reagents ( ethylene , propylene , butene , acrylic acid , para-xylene [ 89 ] ) used to make plastics , pesticides and pharmaceuticals . [ 90 ] In terms of volume, most petroleum is converted into fuels for combustion engines. In terms of value, petroleum underpins the petrochemical industry, which includes many high value products such as pharmaceuticals and plastics. Petroleum is used mostly, by volume, for refining into fuel oil and gasoline, both important primary energy sources. 84% by volume of the hydrocarbons present in petroleum is converted into fuels, including gasoline, diesel, jet, heating, and other fuel oils, and liquefied petroleum gas . 
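The light/heavy and sweet/sour classification described above can be expressed as a small helper. API gravity is computed from specific gravity as 141.5/SG − 131.5; the cutoffs used below (light above about 31.1° API, heavy below about 22.3° API, sweet below roughly 0.5% sulfur) are commonly cited industry conventions rather than values given in this article, and exact boundaries vary by agency.

```python
# Hedged sketch: classify a crude oil as light/medium/heavy and sweet/sour.
# API gravity = 141.5 / SG - 131.5 (SG at 60 F relative to water). The numeric
# cutoffs are common industry conventions assumed here for illustration, not
# figures taken from the article; boundaries differ between agencies.

def api_gravity(specific_gravity: float) -> float:
    return 141.5 / specific_gravity - 131.5

def classify(specific_gravity: float, sulfur_wt_pct: float) -> str:
    api = api_gravity(specific_gravity)
    if api > 31.1:
        density_class = "light"
    elif api >= 22.3:
        density_class = "medium"
    else:
        density_class = "heavy"
    sulfur_class = "sweet" if sulfur_wt_pct < 0.5 else "sour"
    return f"{api:.1f} deg API, {density_class}, {sulfur_class}"

if __name__ == "__main__":
    # Example values loosely in the range of a light sweet benchmark crude.
    print(classify(specific_gravity=0.827, sulfur_wt_pct=0.24))
```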
[ 91 ] Due to its high energy density , easy transportability and relative abundance , oil has become the world's most important source of energy since the mid-1950s. Petroleum is also the raw material for many chemical products, including pharmaceuticals , solvents , fertilizers , pesticides , and plastics; the 16 percent not used for energy production is converted into these other materials. Petroleum is found in porous rock formations in the upper strata of some areas of the Earth's crust . There is also petroleum in oil sands (tar sands) . Known oil reserves are typically estimated at 190 km 3 (1.2 trillion (short scale) barrels ) without oil sands, [ 92 ] or 595 km 3 (3.74 trillion barrels) with oil sands. [ 93 ] Consumption is currently around 84 million barrels (13.4 × 10 ^ 6 m 3 ) per day, or 4.9 km 3 per year, yielding a remaining oil supply of only about 120 years, if current demand remains static. [ 94 ] More recent studies, however, put the number at around 50 years. [ 95 ] [ 96 ] Closely related to fuels for combustion engines are Lubricants , greases , and viscosity stabilizers. All are derived from petroleum. Many pharmaceuticals are derived from petroleum, albeit via multistep processes. [ citation needed ] Modern medicine depends on petroleum as a source of building blocks, reagents , and solvents . [ 97 ] Similarly, virtually all pesticides - insecticides , herbicides, etc. - are derived from petroleum. Pesticides have profoundly affected life expectancies by controlling disease vectors and by increasing yields of crops. Like pharmaceuticals, pesticides are in essence petrochemicals. Almost all plastics and synthetic polymers are derived from petroleum, which is the source of monomers. Alkenes (olefins) are one important class of these precursor molecules. The petroleum industry , also known as the oil industry, includes the global processes of exploration , extraction , refining , transportation (often by oil tankers and pipelines ), and marketing of petroleum products . The largest volume products of the industry are fuel oil and gasoline (petrol). Petroleum is also the raw material for many chemical products , including pharmaceuticals , solvents , fertilizers , pesticides , synthetic fragrances , and plastics . The industry is usually divided into three major components: upstream , midstream , and downstream . Upstream regards exploration and extraction of crude oil, midstream encompasses transportation and storage of crude, and downstream concerns refining crude oil into various end products . Petroleum is vital to many industries, and is necessary for the maintenance of industrial civilization in its current configuration, making it a critical concern for many nations. Oil accounts for a large percentage of the world's energy consumption , ranging from a low of 32% for Europe and Asia, to a high of 53% for the Middle East. In the 1950s, shipping costs made up 33 percent of the price of oil transported from the Persian Gulf to the United States, [ 102 ] but due to the development of supertankers in the 1970s, the cost of shipping dropped to only 5 percent of the price of Persian oil in the US. [ 102 ] Due to the increase in the value of crude oil during the last 30 years, the share of the shipping cost on the final cost of the delivered commodity was less than 3% in 2010. Crude oil is traded as a future on both the NYMEX and ICE exchanges. 
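The "about 120 years" figure quoted above follows from simple division of reserve volume by annual consumption. The sketch below reproduces that arithmetic from the numbers given in the text (595 km³ with oil sands, 190 km³ without, and roughly 84 million barrels per day), using the standard conversion of about 0.159 m³ per barrel; it is a static ratio that ignores any growth or decline in demand.

```python
# Reserves-to-consumption arithmetic using the figures quoted in the text.
# A static ratio only: it assumes demand stays constant, as the text notes.

M3_PER_BARREL = 0.158987  # standard conversion, ~0.159 m^3 per barrel

def annual_consumption_km3(barrels_per_day: float) -> float:
    return barrels_per_day * M3_PER_BARREL * 365 / 1e9  # m^3 -> km^3

def years_remaining(reserves_km3: float, consumption_km3_per_year: float) -> float:
    return reserves_km3 / consumption_km3_per_year

if __name__ == "__main__":
    consumption = annual_consumption_km3(84e6)       # ~4.9 km^3 per year
    print(f"annual consumption ~ {consumption:.1f} km^3")
    print(f"with oil sands   : {years_remaining(595, consumption):.0f} years")
    print(f"without oil sands: {years_remaining(190, consumption):.0f} years")
```

Running the sketch gives roughly 120 years with oil sands included and under 40 years without them, which is why, as noted above, later studies that weigh accessibility and demand growth arrive at shorter horizons.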
[ 105 ] Futures contracts are agreements in which buyers and sellers agree to purchase and deliver specific amounts of physical crude oil on a given date in the future. A contract covers any multiple of 1000 barrels and can be purchased up to nine years into the future. [ 106 ] According to the US Energy Information Administration (EIA) estimate for 2021, the world consumes 97.26 million barrels of oil each day. [ 108 ] Petroleum consumption in 2011 is tabulated by country in the source article, in thousand barrels (1,000 bbl) per day and in thousand cubic metres (1,000 m 3 ) per day, alongside population data (source: US Energy Information Administration). [ 109 ] [ 110 ] [ 111 ] [ 112 ] In petroleum industry parlance, production refers to the quantity of crude extracted from reserves, not the literal creation of the product. Net exporters in 2011, 2009 and 2006 are likewise tabulated in thousand bbl / d and thousand m 3 /d (source: US Energy Information Administration). [ 115 ] Canadian statistics are complicated by the fact that Canada is both an importer and exporter of crude oil, and refines large amounts of oil for the U.S. market. It is the leading source of U.S. imports of oil and products, averaging 2,500,000 bbl/d (400,000 m 3 /d) in August 2007. [ 116 ] Total world production/consumption (as of 2005) is approximately 84 million barrels per day (13,400,000 m 3 /d). Net importers in 2011, 2009 and 2006 are tabulated in thousand bbl / d and thousand m 3 /d, together with countries whose oil production is 10% or less of their consumption (sources: US Energy Information Administration; [ 117 ] CIA World Factbook [ failed verification ] ). As of 2018 [update] , about a quarter of annual global greenhouse gas emissions is the carbon dioxide from burning petroleum (plus methane leaks from the industry). [ 119 ] [ 120 ] [ a ] Along with the burning of coal, petroleum combustion is the largest contributor to the increase in atmospheric CO 2 . [ 121 ] [ 122 ] Atmospheric CO 2 has risen over the last 150 years to current levels of over 415 ppmv , [ 123 ] from the 180–300 ppmv of the prior 800 thousand years . [ 124 ] [ 125 ] [ 126 ] The rise in Arctic temperature has reduced the minimum Arctic ice pack to 4,320,000 km 2 (1,670,000 sq mi), a loss of almost half since satellite measurements started in 1979. [ 127 ] Ocean acidification is the increase in the acidity of the Earth's oceans caused by the uptake of carbon dioxide (CO 2 ) from the atmosphere. The saturation state of calcium carbonate decreases with the uptake of carbon dioxide in the ocean. [ 128 ] This increase in acidity inhibits all marine life—having a greater effect on smaller organisms as well as shelled organisms (see scallops ). [ 129 ] Oil extraction is simply the removal of oil from the reservoir (oil pool). There are many methods of extracting oil from reservoirs, for example mechanical shaking, [ 130 ] water-in-oil emulsion, and specialty chemicals called demulsifiers that separate the oil from water. Oil extraction is costly and often environmentally damaging. Offshore exploration and extraction of oil disturb the surrounding marine environment. [ 131 ] Crude oil and refined fuel spills from tanker ship accidents have damaged natural ecosystems and human livelihoods in Alaska , the Gulf of Mexico , the Galápagos Islands , France and many other places .
The quantity of oil spilled during accidents has ranged from a few hundred tons to several hundred thousand tons (e.g., Deepwater Horizon oil spill , SS Atlantic Empress , Amoco Cadiz ). Smaller spills have already proven to have a great impact on ecosystems, such as the Exxon Valdez oil spill . Oil spills at sea are generally much more damaging than those on land, since they can spread for hundreds of nautical miles in a thin oil slick which can cover beaches with a thin coating of oil. This can kill sea birds, mammals, shellfish, and other organisms it coats. Oil spills on land are more readily containable if a makeshift earth dam can be rapidly bulldozed around the spill site before most of the oil escapes, and land animals can avoid the oil more easily. Control of oil spills is difficult, requires ad hoc methods, and often a large amount of manpower. The dropping of bombs and incendiary devices from aircraft on the SS Torrey Canyon wreck produced poor results; [ 132 ] modern techniques would include pumping the oil from the wreck, like in the Prestige oil spill or the Erika oil spill. [ 133 ] Though crude oil is predominantly composed of various hydrocarbons, certain nitrogen heterocyclic compounds, such as pyridine , picoline , and quinoline are reported as contaminants associated with crude oil, as well as facilities processing oil shale or coal, and have also been found at legacy wood treatment sites. These compounds have a very high water solubility, and thus tend to dissolve and move with water. Certain naturally occurring bacteria, such as Micrococcus , Arthrobacter , and Rhodococcus have been shown to degrade these contaminants. [ 134 ] Because petroleum is a naturally occurring substance, its presence in the environment does not need to be the result of human causes such as accidents and routine activities ( seismic exploration, drilling , extraction, refining and combustion). Phenomena such as seeps [ 135 ] and tar pits are examples of areas that petroleum affects without man's involvement. A tarball is a blob of crude oil (not to be confused with tar , which is a human-made product derived from pine trees or refined from petroleum) which has been weathered after floating in the ocean. Tarballs are an aquatic pollutant in most environments, although they can occur naturally, for example in the Santa Barbara Channel of California [ 136 ] [ 137 ] or in the Gulf of Mexico off Texas. [ 138 ] Their concentration and features have been used to assess the extent of oil spills . Their composition can be used to identify their sources of origin, [ 139 ] [ 140 ] and tarballs themselves may be dispersed over long distances by deep sea currents. [ 137 ] They are slowly decomposed by bacteria, including Chromobacterium violaceum , Cladosporium resinae , Bacillus submarinus , Micrococcus varians , Pseudomonas aeruginosa , Candida marina and Saccharomyces estuari . [ 136 ] James S. Robbins has argued that the advent of petroleum-refined kerosene saved some species of great whales from extinction by providing an inexpensive substitute for whale oil , thus eliminating the economic imperative for open-boat whaling , [ 141 ] but others say that fossil fuels increased whaling with most whales being killed in the 20th century. [ 142 ] In 2018 road transport used 49% of petroleum, aviation 8%, and uses other than energy 17%. [ 143 ] Electric vehicles are the main alternative for road transport and biojet for aviation. 
[ 144 ] [ 145 ] [ 146 ] Single-use plastics have a high carbon footprint and may pollute the sea, but as of 2022 the best alternatives are unclear. [ 147 ] Control of petroleum production has been a significant driver of international relations during much of the 20th and 21st centuries. [ 148 ] Organizations like OPEC have played an outsized role in international politics. Some historians and commentators have called this the " Age of Oil ". [ 148 ] With the rise of renewable energy and efforts to address climate change , some commentators expect a realignment of international power away from petrostates . [ citation needed ] "Oil rents" have been described in the political science literature as connected with corruption. [ 149 ] A 2011 study suggested that increases in oil rents increased corruption in countries with heavy government involvement in the production of oil. The study found that increases in oil rents "significantly deteriorates political rights". The investigators say that oil exploitation gave politicians "an incentive to extend civil liberties but reduce political rights in the presence of oil windfalls to evade redistribution and conflict". [ 150 ] Petroleum production has been linked with conflict for many years, leading to thousands of deaths. [ 151 ] Petroleum deposits are concentrated in relatively few countries, mainly Russia and parts of the Middle East. [ 152 ] [ 153 ] Conflicts may start when some countries refuse to cut oil production and other countries respond by increasing their own production, causing a price war, as experienced during the 2020 Russia–Saudi Arabia oil price war . [ 154 ] Other conflicts arise when countries seek control of petroleum resources or of oil-bearing territory, as in the Iran–Iraq War . [ 155 ] The Organization of the Petroleum Exporting Countries ( OPEC / ˈ oʊ p ɛ k / OH -pek ) is an organization enabling the co-operation of leading oil-producing and oil-dependent countries in order to collectively influence the global oil market and maximize profit . It was founded on 14 September 1960 in Baghdad by the first five members: Iran , Iraq , Kuwait , Saudi Arabia , and Venezuela . The organization, which currently comprises 12 member countries, accounted for 38 percent of global oil production , according to a 2022 report. [ 156 ] [ 157 ] Additionally, it is estimated that 79.5 percent of the world's proven oil reserves are located within OPEC nations, with the Middle East alone accounting for 67.2 percent of OPEC's total reserves. [ 158 ] [ 159 ] In a series of steps in the 1960s and 1970s, OPEC restructured the global system of oil production in favor of oil-producing states and away from an oligopoly of dominant Anglo-American oil firms (the " Seven Sisters "). [ 160 ] In the 1970s, restrictions in oil production led to a dramatic rise in oil prices with long-lasting and far-reaching consequences for the global economy. Since the 1980s, OPEC has had a limited impact on world oil-supply and oil-price stability, as there is frequent cheating by members on their commitments to one another, and as member commitments reflect what they would do even in the absence of OPEC. [ 161 ] The formation of OPEC marked a turning point toward national sovereignty over natural resources . OPEC decisions have come to play a prominent role in the global oil market and in international relations .
Economists have characterized OPEC as a textbook example of a cartel [ 162 ] (a group whose members cooperate to reduce market competition ) but one whose consultations may be protected by the doctrine of state immunity under international law. [ 163 ] Consumption in the twentieth and twenty-first centuries has been driven largely by automobile sector growth. The 1985–2003 oil glut even fueled the sales of low fuel economy vehicles in OECD countries. The 2008 economic crisis seems to have had some impact on the sales of such vehicles; still, in 2008 oil consumption showed a small increase. In 2016 Goldman Sachs predicted lower demand for oil due to concerns about emerging economies, especially China. [ 166 ] The BRICS (Brazil, Russia, India, China, South Africa) countries might also play a role, as China briefly had the largest automobile market in December 2009. [ 167 ] In the long term, uncertainties linger; OPEC believes that the OECD countries will push low consumption policies at some point in the future; when that happens, it will curb oil sales, and both OPEC and the Energy Information Administration (EIA) kept lowering their 2020 consumption estimates during the past five years. [ 168 ] A detailed review of International Energy Agency oil projections has revealed that revisions of world oil production, price and investments have been motivated by a combination of demand and supply factors. [ 169 ] Altogether, non-OPEC conventional projections have been fairly stable over the last 15 years, while downward revisions were mainly allocated to OPEC. Upward revisions are primarily a result of US tight oil . Production will also face an increasingly complex situation; while OPEC countries still have large reserves at low production prices, newly found reservoirs often lead to higher prices; offshore giants such as Tupi , Guara and Tiber demand high investments and ever-increasing technological abilities. Subsalt reservoirs such as Tupi were unknown in the twentieth century, mainly because the industry was unable to probe them. Enhanced Oil Recovery (EOR) techniques (example: DaQing , China [ 170 ] ) will continue to play a major role in increasing the world's recoverable oil. The expected availability of petroleum resources has always been around 35 years or even less since the start of modern exploration. The oil constant , an insider pun in the German industry, refers to that effect. [ 171 ] A growing number of divestment campaigns from major funds, pushed by newer generations who question the sustainability of petroleum, may hinder the financing of future oil prospecting and production. [ 172 ] Peak oil is a term applied to the projection that future petroleum production, whether for individual oil wells, entire oil fields, whole countries, or worldwide production, will eventually peak and then decline at a similar rate to the rate of increase before the peak as these reserves are exhausted. [ citation needed ] [ 173 ] The peak of oil discoveries was in 1965, and oil production per year has surpassed oil discoveries every year since 1980. [ 174 ] It is difficult to predict the oil peak in any given region, due to the lack of knowledge and/or transparency in the accounting of global oil reserves. [ 175 ] Based on available production data, proponents have previously predicted the peak for the world to be in the years 1989, 1995, or 1995–2000.
Some of these predictions date from before the recession of the early 1980s, and the consequent lowering in global consumption, the effect of which was to delay the date of any peak by several years. Just as the 1971 U.S. peak in oil production was only clearly recognized after the fact, a peak in world production will be difficult to discern until production clearly drops off. [ 176 ] In 2020, according to BP's Energy Outlook 2020 , peak oil had been reached, due to the changing energy landscape coupled with the economic toll of the COVID-19 pandemic . While there has been much focus historically on peak oil supply, the focus is increasingly shifting to peak demand as more countries seek to transition to renewable energy. The GeGaLo index of geopolitical gains and losses assesses how the geopolitical position of 156 countries may change if the world fully transitions to renewable energy resources. Former oil exporters are expected to lose power, while the positions of former oil importers and countries rich in renewable energy resources is expected to strengthen. [ 177 ] Unconventional oil is petroleum produced or extracted using techniques other than the conventional methods. The calculus for peak oil has changed with the introduction of unconventional production methods. In particular, the combination of horizontal drilling and hydraulic fracturing has resulted in a significant increase in production from previously uneconomic plays. [ 178 ] Certain rock strata contain hydrocarbons but have low permeability and are not thick from a vertical perspective. Conventional vertical wells would be unable to economically retrieve these hydrocarbons. Horizontal drilling, extending horizontally through the strata, permits the well to access a much greater volume of the strata. Hydraulic fracturing creates greater permeability and increases hydrocarbon flow to the wellbore. On Saturn's largest moon, Titan , lakes of liquid hydrocarbons comprising methane, ethane, propane and other constituents, occur naturally. Data collected by the space probe Cassini–Huygens yield an estimate that the visible lakes and seas of Titan contain about 300 times the volume of Earth's proven oil reserves. [ 179 ] [ 180 ] Drilled samples from the surface of Mars taken in 2015 by the Curiosity rover's Mars Science Laboratory have found organic molecules of benzene and propane in 3-billion-year-old rock samples in Gale Crater . [ 181 ]
https://en.wikipedia.org/wiki/Petroleum
Petroleum Remediation Product (PRP) is a registered trade name of United Remediation Technology for a line of biodegradable wax-based hydrocarbon adsorbents and bioremediation agents. PRP was created in the 1990s by NASA ’s Jet Propulsion Laboratory and has been used to assist in remediating oil spills such as the 2010 Deepwater Horizon oil spill . [ 1 ] [ 2 ] PRP is a powder composed of microscopic hollow spheres of wax up to 150 microns in size. [ 3 ] PRP is a loose dry powder composed of hollow microspheres of beeswax , soy wax, and other natural waxes. PRP microspheres range in size from 10 microns to 150 microns. [ 4 ] The waxes that comprise PRP are natural hydrocarbons, making them oleophilic (having a strong affinity for oils rather than water) and hydrophobic . [ 5 ] The high surface area and oleophilic properties of PRP allow the microspheres to adsorb at least twice PRP’s weight in hydrocarbons such as crude oil or diesel . [ 6 ] PRP stimulates the growth of hydrocarbon-metabolizing microorganisms such as Yarrowia lipolytica that are included along with additional nutrients in the wax microspheres. [ 7 ] In scientific studies PRP has been found to biodegrade hydrocarbons several times faster than natural attenuation . [ 8 ] In the early 1980s, NASA engineers and researchers from the Jet Propulsion Laboratory (JPL) as well as the Marshall Space Flight Center were researching methods for the creation of hollow, spherical, latex microcapsules capable of containing live cells for use in time-released antibiotics or targeted doses of medication. Due to Earth's gravity, NASA’s initial experimentation failed to produce spherical microcapsules larger than 10 microns. Subsequent experiments aboard the Space Shuttle yielded microspheres up to 30 microns in size. [ 3 ] [ 9 ] Later, in the early 1990s, independent researchers proposed that beeswax and other natural waxes could be used instead of latex, and that they may have oil-adsorbing properties. This led to NASA’s earlier experiments being refined by independent researchers and the Pittsburgh-based company “PetrolRem” in partnership with JPL and Marshall Space Flight Center scientists. [ 1 ] These experiments improved on earlier NASA techniques and developed proprietary methods to counteract the effects of gravity, yielding microspheres up to 500 microns in size. In the mid-1990s PRP was tested and evaluated by the National Environmental Technology Applications Corporation in partnership with the EPA and the University of Pittsburgh and found to be “capable of significantly accelerating the natural rate of diesel oil degradation in near-environmental conditions.” PRP microspheres were found to be highly oleophilic as well as hydrophobic, making the powder well suited to hydrocarbon spills that contaminate water, such as ocean spills, mangroves and marshes, and groundwater aquifers. As a result, the powder was named “Petroleum Remediation Product.” At the time PRP was the only biological product approved for use in the Chesapeake Bay. [ 10 ] In 1994 PRP was officially recognized as a NASA spinoff technology . [ 7 ] After the 1989 Exxon Valdez oil spill, multiple PRP-based products were developed to assist in the later remediation of contaminated sites, such as PRP-filled containment booms, bilge socks, and a PRP slurry that could be sprayed from a hydroseeder . [ 9 ] In 2004 PetrolRem and its assets, including PRP, were acquired by Universal Remediation Inc.
Universal Remediation broadened production and availability of PRP, making it accessible to more clients in more varied forms. Universal Remediation also developed a variant of PRP for use on hard surfaces, called “Oil Buster.” “Oil Buster” was initially used primarily by the railroad industry to remediate tracks that were saturated with diesel fuel and oil. [ 1 ] [ 11 ] Universal Remediation also developed the Wellboom, a thin, weighted, PRP-filled sock that could be used in petroleum storage facilities, gasoline stations, and to remediate groundwater contamination. [ 1 ] In 2007 PRP was featured in the History Channel ’s “ Modern Marvels ” episode “It came from outer space”. [ 12 ] In 2008 PRP was inducted into the Space Foundation 's Space Technology Hall of Fame. [ 13 ] In 2010 PRP and its derivative products were used to assist in the remediation of the Deepwater Horizon spill. [ 2 ] [ 9 ] In 2019 Universal Remediation and its assets, including PRP, were acquired by United Remediation Technology, LLC. [ 14 ]
https://en.wikipedia.org/wiki/Petroleum_Remediation_Product
Petroleum benzine is a hydrocarbon -based solvent mixture that is classified by its physical properties (e.g. boiling point, vapor pressure) rather than a specific chemical composition. The chemical composition of a petroleum distillate can be modified to result in a solvent with a reduced concentration of unsaturated hydrocarbons , i.e. alkenes , by hydrotreating and/or reduced aromatics, e.g. benzene , toluene , xylene , by several dearomatization methods. [ 1 ] The most important distinction amongst the various hydrocarbon solvents may be their boiling/distillation ranges (and, by association, volatility , flash point , etc.) and aromatic content. [ 2 ] Given the toxicity/ carcinogenicity of some aromatic hydrocarbons , most notably benzene, the aromatic content of petroleum distillate solvents, which would typically be in the 10-25% (w/w) range for most petroleum fractions, can be advantageously reduced when their unique solvation properties are not required, and a less odorous, lower toxicity solvent is desired, especially when present in consumer products. Petroleum benzine appears synonymous with petroleum spirit. Petroleum spirit is generally considered to be the fractions between the very lightest hydrocarbons, petroleum ether, and the heavier distillates, mineral spirits. For example, petroleum benzine with a boiling range of 36 - 83 °C sold by EMD Millipore under CAS-No. 64742-49-0 is identified in the product MSDS as hydrotreated light petroleum distillates comprising ≥ 90% C5-C7 hydrocarbons, n-alkanes, isoalkanes , and < 5% n-hexane , while Santa Cruz Biotechnology sells a petroleum ether product under the same CAS-No. According to their corresponding MSDS, most commercially offered petroleum benzine solvents consist of paraffins ( alkanes ) with chain lengths of C5 to C9 ( i.e. n-pentane to n-nonane and their isomers), cycloparaffins ( cyclopentane , cyclohexane , ethylcyclopentane, etc.) and aromatic hydrocarbons ( benzene , toluene , xylene , etc.). The Toxic Substances Control Act Definition 2008 describes petroleum benzine as "a complex combination of hydrocarbons obtained by treating a petroleum fraction with hydrogen in the presence of a catalyst . It consists of hydrocarbons having carbon numbers predominantly in the range of C4 through C11 and boiling in the range of approximately -20°C to 190°C." Beginning in the 1960s and 70s, the high incidence rate of polyneuropathy amongst industrial workers chronically exposed to petroleum benzine and other hydrocarbon solvents prompted investigations into the safety of chronic exposure to petroleum distillates. [ 3 ] [ 4 ] [ 5 ] Many of the cases of polyneuropathy amongst workers chronically exposed to vapors of petroleum benzine and similar solvents have been attributed to the n-hexane component of these mixtures [ citation needed ] . Using an animal model (Wistar-strain male rats), [ 6 ] it was reported that chronic exposure (12 h a day for 24 weeks) to hydrocarbon solvent vapors conspicuously impaired peripheral nerve function in the 500 ppm n-hexane group, slightly impaired in the 200 ppm n-hexane group and petroleum benzine II group (containing 500 ppm n-hexane), and barely impaired in the petroleum benzine I group (containing 200 ppm n-hexane). These results suggest that some components in petroleum benzine are likely to antagonize the neurotoxic effects of n-hexane to the peripheral nerves, possibly by inhibiting the oxidation of n-hexane to its more toxic metabolites 2-hexanone and 2,5-hexanedione [ citation needed ] . 
Depressed body weight gains amongst the exposed groups compared to the control group were observed in the order: petroleum benzine II > petroleum benzine I (containing 200 ppm n-hexane) >> 500 ppm n-hexane > 200 ppm n-hexane. These results suggest that other components found in petroleum benzine may have additive, synergistic or potentiative effects on the biological effects of n-hexane. [ 7 ] Namely, 1000 ppm n-hexane, 3000 ppm n-heptane, and 1000 ppm toluene were reported to have depressing effects on the body weight gain of rats. [ 8 ]
https://en.wikipedia.org/wiki/Petroleum_benzine
Petroleum coke , abbreviated coke , pet coke or petcoke , is a final carbon -rich solid material that derives from oil refining , and is one type of the group of fuels referred to as cokes . Petcoke is the coke that, in particular, derives from a final cracking process—a thermo-based chemical engineering process that splits long chain hydrocarbons of petroleum into shorter chains—that takes place in units termed coker units . [ 1 ] (Other types of coke are derived from coal .) Stated succinctly, coke is the " carbonization product of high-boiling hydrocarbon fractions obtained in petroleum processing (heavy residues)". [ 1 ] Petcoke is also produced in the production of synthetic crude oil (syncrude) from bitumen extracted from Canada's oil sands and from Venezuela's Orinoco oil sands . [ 2 ] [ 3 ] In petroleum coker units, residual oils from other distillation processes used in petroleum refining are treated at a high temperature and pressure leaving the petcoke after driving off gases and volatiles, and separating off remaining light and heavy oils. These processes are termed "coking processes", and most typically employ chemical engineering plant operations for the specific process of delayed coking . This coke can either be fuel grade (high in sulfur and metals) or anode grade (low in sulfur and metals). The raw coke directly out of the coker is often referred to as green coke . [ 1 ] In this context, "green" means unprocessed. The further processing of green coke by calcining in a rotary kiln removes residual volatile hydrocarbons from the coke. The calcined petroleum coke can be further processed in an anode baking oven to produce anode coke of the desired shape and physical properties. The anodes are mainly used in the aluminium and steel industry. Petcoke is over 80% carbon and emits 5% to 10% more carbon dioxide (CO 2 ) than coal on a per-unit-of-energy basis when it is burned. As petcoke has a higher energy content, petcoke emits between 30% and 80% more CO 2 than coal per unit of weight. [ 3 ] The difference between coal and coke in CO 2 production per unit of energy produced depends upon the moisture in the coal, which increases the CO 2 per unit of energy – heat of combustion – and on the volatile hydrocarbons in coal and coke, which decrease the CO 2 per unit of energy. There are at least three basic types of petroleum coke: needle coke, sponge coke, and shot coke. Different types of petroleum coke have different microstructures due to differences in operating variables and nature of feedstock. Significant differences are also to be observed in the properties of the different types of coke, particularly ash and volatile matter contents. [ 4 ] Needle coke, also called acicular coke, is a highly crystalline petroleum coke used in the production of electrodes for the steel and aluminium industries and is particularly valuable because the electrodes must be replaced regularly. Needle coke is produced exclusively from either fluid catalytic cracking (FCC) decant oil or coal tar pitch. Calcining, in which raw petcoke is heated and refined, eliminates much of the volatile component of the material. Petcoke refined in this way usually does not release its heavy metals as volatiles or emissions. [ 5 ] Depending on the petroleum feed stock used, the percentage of carbon in petcoke can be as high as 98-99%. This creates a carbon-based compound containing hydrogen in concentrations between 3.0 – 4.0%.
Raw (or green) coke contains between 0.1 – 0.5% nitrogen and 0.2 – 6.0% sulfur which become emissions when coke is calcined. [ 5 ] Through thermal processing the composition in weight is reduced with the volatile matter and sulfur being emitted. [ 6 ] This process ends in the honeycomb petcoke which according to the name giving is a solid carbon structure with holes in it. [ 6 ] (Calcined @ 2375 °F = 1300 °C) [ 5 ] Fuel-grade coke is classified as either sponge coke or shot coke morphology. While oil refiners have been producing coke for over 100 years, the mechanisms that cause sponge coke or shot coke to form are not well understood and cannot be accurately predicted. In general, lower temperatures and higher pressures promote sponge coke formation. Additionally, the amount of heptane insolubles present and the fraction of light components in the coker feed contribute. While its high heat and low ash content make it a decent fuel for power generation in coal-fired boilers , petroleum coke is high in sulfur and low in volatile content, and this poses environmental (and technical) problems with its combustion. Its gross calorific value (HHV) is nearly 8000 Kcal/kg which is twice the value of average coal used in electricity generation. [ 5 ] A common choice of sulfur recovering unit for burning petroleum coke is the SNOX Flue gas desulfurisation technology, [ 7 ] which is based on the well-known WSA Process . Fluidized bed combustion is commonly used to burn petroleum coke. Gasification is increasingly used with this feedstock (often using gasifiers placed in the refineries themselves). Calcined petroleum coke (CPC) is the product from calcining petroleum coke. This coke is the product of the coker unit in a crude oil refinery . The calcined petroleum coke is used to make anodes for the aluminium , steel and titanium smelting industry and as the feed stock for the production of synthetic graphite. The green coke must have sufficiently low metal content to be used as anode material. Green coke with this low metal content is called anode-grade coke. When green coke has excessive metal content, it is not calcined and is used as fuel-grade coke in furnaces. A high sulfur content in petcoke reduces its market value, and may preclude its use as fuel due to restrictions on sulfur oxides emissions for environmental reasons. Methods have thus been proposed to reduce or eliminate the sulfur content of petcoke. Most of them involve the desorption of the inorganic sulfur present in the pores or surface of the coke, and the partition and removal of organic sulfur compounds, such as sulfurous aromatic heterocycles. Potential petroleum desulfurization techniques can be classified as follows: [ 8 ] As of 2011 there was no commercial process available to desulfurize petcoke. [ 9 ] Nearly pure carbon, petcoke is a potent source of carbon dioxide if burned. [ 10 ] Petroleum coke may be stored in a pile near an oil refinery pending sale. For example, in 2013 a large stockpile owned by Koch Carbon near the Detroit River was produced by a Marathon Petroleum refinery in Detroit which had begun refining bitumen from the oil sands of Alberta in November 2012. Large stockpiles of petcoke also existed in Canada as of 2013, and China and Mexico were markets for petcoke exported from California to be used as fuel. As of 2013 Oxbow Corporation, owned by William I. Koch , was a major dealer in petcoke, selling 11 million tons annually. 
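The comparison quoted earlier in this article, that petcoke emits only a few percent more CO2 than coal per unit of energy but 30–80% more per unit of weight, can be sanity-checked with simple stoichiometry: burning carbon yields 44/12 kg of CO2 per kg of carbon. The sketch below uses assumed, illustrative carbon fractions and heating values for petcoke and a generic coal (8000 kcal/kg is roughly 33.5 MJ/kg); the real difference depends on moisture and volatile matter, as the article notes, and these numbers are not taken from it.

```python
# Back-of-the-envelope check of the per-weight vs per-energy CO2 comparison
# discussed above. The carbon fractions and heating values below are assumed
# illustrative numbers, not figures from this article; real coals and cokes
# vary widely in moisture, ash and volatile matter.

CO2_PER_C = 44.0 / 12.0  # kg CO2 produced per kg of carbon burned

def co2_per_kg(carbon_fraction: float) -> float:
    return carbon_fraction * CO2_PER_C

def co2_per_mj(carbon_fraction: float, heating_value_mj_per_kg: float) -> float:
    return co2_per_kg(carbon_fraction) / heating_value_mj_per_kg

if __name__ == "__main__":
    # Assumed illustrative properties: (carbon fraction, HHV in MJ/kg).
    petcoke = (0.90, 33.5)   # ~8000 kcal/kg is roughly 33.5 MJ/kg
    coal = (0.65, 25.0)      # a generic bituminous coal, assumed

    weight_ratio = co2_per_kg(petcoke[0]) / co2_per_kg(coal[0])
    energy_ratio = co2_per_mj(*petcoke) / co2_per_mj(*coal)
    print(f"per unit weight : petcoke emits ~{(weight_ratio - 1) * 100:.0f}% more CO2")
    print(f"per unit energy : petcoke emits ~{(energy_ratio - 1) * 100:.0f}% more CO2")
```

With these assumed inputs the per-weight gap is around 40% while the per-energy gap shrinks to a few percent, illustrating why the two bases quoted above give such different-looking numbers.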
[ 11 ] In 2017, a quarter of US exports of the fuel went to India, an Associated Press investigation found. In 2016, this amounted to more than eight million metric tons, more than 20 times as much as in 2010. [ 12 ] India's Environmental Pollution Control Authority tested imported petcoke in use near New Delhi , and found sulfur levels 17 times the legal limit. [ 12 ] The International Convention for Prevention of Pollution from Ships ( MARPOL 73/78 ), adopted by the International Maritime Organization (IMO), has mandated that marine vessels shall not consume residual fuel oils ( bunker fuel , etc) with a sulfur content greater than 0.5% from the year 2020. [ 13 ] Nearly 38% of residual fuel oils are consumed in the shipping sector. In the process of converting excess residual oils into lighter oils by coking processes, pet coke is generated as a byproduct. Pet coke availability is expected to increase in the future due to falling demand for residual oil. Pet coke is also used in methanation plants to produce synthetic natural gas , etc. in order to avoid a pet coke disposal problem. [ 14 ] Petroleum coke is sometimes a source of fine dust , which can penetrate the filtering process of the human airway, lodge in the lungs and cause serious health problems. Studies have shown that petroleum coke itself has a low level of toxicity and there is no evidence of carcinogenicity . [ 15 ] [ 16 ] Petroleum coke can contain vanadium , a toxic metal. Vanadium was found in the dust collected in occupied dwellings near the petroleum coke stored next to the Detroit River. Vanadium is toxic in tiny quantities, 0.8 micrograms per cubic meter of air, according to the EPA . [ 17 ] According to multiple EPA studies and analyses, petroleum coke has a low health hazard potential in humans. It does not have any observable carcinogenic, developmental, or reproductive effects. During animal case studies repeated-dose chronic inhalation did show respiratory inflammation due to dust particles, but not specific to petroleum coke. [ 18 ] Environmental concerns stem from the storage and combustion of petcoke. By-waste accumulates as petcoke is processed, making waste management an issue. Petcoke's high silt content of 21.2% increases the risk of fugitive dust drifting away from petcoke mounds under heavy wind. An estimated 100 tons of petcoke fugitive dust including PM10 and PM2.5 are released into the atmosphere per year in the United States. [ 19 ] Waste management and release of fugitive dust is especially an issue in the cities of Chicago , Detroit and Green Bay . [ 18 ] Externalities stem from petcoke that cause potential environmental impacts. Petcoke is composed of 90% elemental carbon by weight which is converted to CO 2 during combustion. Use of petcoke also produces emissions of sulfur, and the potential for water pollution through nickel and vanadium runoff from refining and storage. [ 17 ] Media related to Petroleum coke at Wikimedia Commons
https://en.wikipedia.org/wiki/Petroleum_coke
Petroleum engineering is a field of engineering concerned with the activities related to the production of hydrocarbons , which can be either crude oil or natural gas . [ 1 ] Exploration and production are deemed to fall within the upstream sector of the oil and gas industry. Exploration , by earth scientists , and petroleum engineering are the oil and gas industry's two main subsurface disciplines, which focus on maximizing economic recovery of hydrocarbons from subsurface reservoirs. Petroleum geology and geophysics focus on provision of a static description of the hydrocarbon reservoir rock, while petroleum engineering focuses on estimation of the recoverable volume of this resource using a detailed understanding of the physical behavior of oil, water and gas within porous rock at very high pressure. The combined efforts of geologists and petroleum engineers throughout the life of a hydrocarbon accumulation determine the way in which a reservoir is developed and depleted, and usually they have the highest impact on field economics. Petroleum engineering requires a good knowledge of many other related disciplines, such as geophysics, petroleum geology, formation evaluation ( well logging ), drilling , economics , reservoir simulation , reservoir engineering , well engineering, artificial lift systems, completions and petroleum production engineering . Recruitment to the industry has historically been from the disciplines of physics , mechanical engineering , chemical engineering and mining engineering . Subsequent development training has usually been done within oil companies. The profession got its start in 1914 within the American Institute of Mining, Metallurgical and Petroleum Engineers (AIME). The first Petroleum Engineering degree was conferred in 1915 by the University of Pittsburgh . [ 2 ] Since then, the profession has evolved to solve increasingly difficult situations. Improvements in computer modeling, materials and the application of statistics, probability analysis, and new technologies like horizontal drilling and enhanced oil recovery , have drastically improved the toolbox of the petroleum engineer in recent decades. Automation, [ 3 ] sensors, [ 4 ] and robots [ 5 ] [ 6 ] are being used to propel the industry to more efficiency and safety. Deep-water, arctic and desert conditions are usually contended with. High temperature and high pressure (HTHP) environments have become increasingly commonplace in operations and require the petroleum engineer to be savvy in topics as wide-ranging as thermo-hydraulics, geomechanics, and intelligent systems. The Society of Petroleum Engineers (SPE) is the largest professional society for petroleum engineers and publishes much technical information and other resources to support the oil and gas industry. It provides free online education (webinars), mentoring, and access to SPE Connect, an exclusive platform for members to discuss technical issues, best practices, and other topics. SPE members also are able to access the SPE Competency Management Tool to find knowledge and skill strengths and opportunities for growth. [ 7 ] SPE publishes peer-reviewed journals, books, and magazines. [ 8 ] SPE members receive a complimentary subscription to the Journal of Petroleum Technology and discounts on SPE's other publications. [ 9 ] SPE members also receive discounts on registration fees for SPE organized events and training courses. [ 9 ] SPE provides scholarships and fellowships to undergraduate and graduate students. 
According to the United States Department of Labor's Bureau of Labor Statistics, petroleum engineers are required to have a bachelor's degree in engineering; a degree focused on petroleum engineering is generally preferred, but degrees in mechanical, chemical, and civil engineering are satisfactory as well. [ 10 ] Petroleum engineering education is available at many universities in the United States and throughout the world, primarily in oil-producing regions. U.S. News & World Report maintains a list of the Best Undergraduate Petroleum Engineering Programs. [ 11 ] SPE and some private companies offer training courses. [ 12 ] [ 13 ] [ 14 ] Some oil companies have considerable in-house petroleum engineering training classes. [ 15 ] [ 16 ] Petroleum engineering has historically been one of the highest-paid engineering disciplines, although there is a tendency for mass layoffs when oil prices decline and waves of hiring as prices rise. In 2020, the United States Department of Labor's Bureau of Labor Statistics reported the median pay for petroleum engineers was US$137,330, or roughly $66.02 per hour. [ 17 ] The same summary projects there will be 3% job growth in this field from 2019 to 2029. [ 17 ] SPE annually conducts a salary survey . In 2017, SPE reported that the average SPE professional member reported earning US$194,649 (including salary and bonus). [ 18 ] The average base pay reported in 2016 was $143,006. [ 18 ] Base pay and other compensation were on average highest in the United States, where the average base pay was US$174,283. Drilling and production engineers tended to make the best base pay, US$160,026 for drilling engineers and US$158,964 for production engineers. Average base pay ranged from US$96,382 to US$174,283. [ 19 ] Significant gender pay gaps remain, within plus or minus 5% of the US average pay gap, which was an 18% difference in 2017. [ 20 ] [ 19 ] Also in 2016, U.S. News & World Report named petroleum engineering the top college major in terms of highest median annual wages of college-educated workers (age 25–59). [ 21 ] The 2010 National Association of Colleges and Employers survey showed petroleum engineers as the highest paid 2010 graduates, at an average annual salary of $125,220. [ 22 ] For individuals with experience, salaries can range from $170,000 to $260,000. They make an average of $112,000 a year and about $53.75 per hour. In a 2007 article, Forbes.com reported that petroleum engineering was the 24th best paying job in the United States. [ 23 ] Petroleum engineers divide themselves into several types, principally reservoir engineers, drilling engineers, and production engineers. [ 1 ] Petroleum engineering, like most forms of engineering, requires a strong foundation in physics , chemistry , and mathematics . [ 24 ] Other fields pertinent to petroleum engineering include geology , formation evaluation, fluid flow in porous media, well drilling technology, economics , geostatistics , etc. [ 24 ] [ 25 ] Geostatistics as applied to petroleum engineering uses statistical analysis to characterize reservoirs and create flow simulations that quantify uncertainties of the location of oil and gas. [ 26 ] Petroleum geology is an interdisciplinary field composed of geophysics , geochemistry , and paleontology . [ 27 ] The main focus of petroleum geology is the exploration and appraisal of reservoirs containing hydrocarbons via technical forms of analysis. [ 27 ] Well drilling technology is primarily the focus for drilling engineers. The two forms of well drilling are percussion and rotary drilling, rotary being the more common of the two.
An important aspect of drilling is the drill bit, which creates a borehole of approximately three and a half to thirty inches in diameter. The three classes of drill bits (roller cone, fixed cutter, and hybrid) each use teeth to break up the rock.[28] To optimize drilling efficiency and cost, drilling engineers make use of drilling simulators that allow them to identify drilling conditions.[29] Drilling technologies including horizontal drilling and directional drilling have been developed to obtain hydrocarbons profitably from impermeable formations and coal-bed methane accumulations.
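To give a sense of the borehole sizes quoted above, the following minimal sketch applies a common oil-field rule of thumb for hole capacity (the volume of a cylindrical hole in barrels per foot is approximately the diameter in inches squared divided by 1029.4); the specific diameters chosen simply span the stated range and are illustrative.

    # Illustrative only: hole capacity in barrels per foot for a cylindrical
    # borehole, using the common oil-field approximation diameter_in**2 / 1029.4.
    def hole_capacity_bbl_per_ft(diameter_in):
        return diameter_in ** 2 / 1029.4

    for d in (3.5, 8.5, 17.5, 30.0):   # example bit diameters in inches
        print(f"{d:4.1f} in bit -> {hole_capacity_bbl_per_ft(d):.3f} bbl/ft")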
https://en.wikipedia.org/wiki/Petroleum_engineering
Petroleum ether is the petroleum fraction consisting of aliphatic hydrocarbons and boiling in the range 35–60 °C; it is commonly used as a laboratory solvent.[4] Despite the name, petroleum ether is not an ether; the term is used only figuratively, signifying extreme lightness and volatility. The very lightest, most volatile liquid hydrocarbon solvents that can be bought from laboratory chemical suppliers may also be offered under the name petroleum ether. Petroleum ether consists mainly of aliphatic hydrocarbons and is usually low in aromatics. It is commonly hydrodesulfurized and may be hydrogenated to reduce the amount of aromatic and other unsaturated hydrocarbons.

Petroleum ether normally bears a descriptive suffix giving the boiling range. Thus, from the leading international laboratory chemical suppliers it is possible to buy various petroleum ethers with boiling ranges such as 30–50 °C, 40–60 °C, 50–70 °C, 60–80 °C, etc. In the United States, laboratory-grade aliphatic hydrocarbon solvents with boiling ranges as high as 100–140 °C may be called petroleum ether rather than petroleum spirit.[5] It is not advisable to employ a fraction with a boiling range wider than 20 °C, because the more volatile portion may be lost during its use in recrystallisation and similar operations, leaving a higher-boiling residue with different solubilising properties.[6]

Most of the unsaturated hydrocarbons may be removed by shaking the solvent two or three times with concentrated sulfuric acid amounting to 10% of its volume; vigorous shaking is then continued with successive portions of a concentrated solution of potassium permanganate in 10% sulfuric acid until the color of the permanganate remains unchanged. The solvent is then thoroughly washed with sodium carbonate solution and then with water, dried over anhydrous calcium chloride, and distilled. If it must be perfectly dry, it can be allowed to stand over sodium wire or calcium hydride.[6]

Ligroin is assigned the CAS Registry Number 8032-32-4, which is also applied to many other products, particularly those with low boiling points, called petroleum spirit, petroleum ether, and petroleum benzine. "Naphtha" has the CAS Registry Number 8030-30-6, which also covers petroleum benzine and petroleum ether, that is, the lower-boiling non-aromatic hydrocarbon solvents.[5] DIN 51630 provides for petroleum spirit (also called spezialbenzine or petrolether), described as "a special boiling-point spirit (SBPS) commonly used in laboratory applications, having high volatility and low aromatics content"; its initial boiling point is above 25 °C and its final boiling point up to 80 °C.[5]

Petroleum ethers are extremely volatile, have very low flash points, and present a significant fire hazard.[5] Fires should be fought with foam, carbon dioxide, dry chemical or carbon tetrachloride.[2] The naphtha mixtures that are distilled at a lower boiling temperature have a higher volatility and, generally speaking, a higher degree of toxicity than the higher-boiling fractions.[7] Exposure to petroleum ether occurs most commonly by inhalation or through skin contact. Petroleum ether is metabolized by the liver with a biological half-life of 46–48 h.[3] Inhalation overexposure causes primarily central nervous system (CNS) effects (headaches, dizziness, nausea, fatigue, and incoordination). In general, the toxicity is more pronounced with petroleum ethers containing higher concentrations of aromatic compounds.
n-Hexane is known to cause axonal damage in peripheral nerves.[3] Skin contact can cause allergic contact dermatitis.[3] Oral ingestion of hydrocarbons is often associated with symptoms of mucous membrane irritation, vomiting, and central nervous system depression. Cyanosis, tachycardia, and tachypnea may appear as a result of aspiration, with subsequent development of chemical pneumonitis. Other clinical findings include albuminuria, hematuria, hepatic enzyme derangement, and cardiac arrhythmias. Doses as low as 10 ml taken orally have been reported to be potentially fatal, whereas some patients have survived the ingestion of 60 ml of petroleum distillates. A history of coughing or choking in association with vomiting strongly suggests aspiration and hydrocarbon pneumonia. Hydrocarbon pneumonia is an acute hemorrhagic necrotizing disease that can develop within 24 h after the ingestion and may require several weeks for complete resolution.[8] Intravenous administration produces fever and local tissue damage.[9] Petroleum-derived distillates have not been shown to be carcinogenic in humans.[7] Petroleum ether degrades rapidly in soil and water.[3]
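As a simple worked example of what a 46–48 hour biological half-life implies, the following sketch computes the fraction of an absorbed dose remaining at a few arbitrary time points, assuming first-order (exponential) elimination; the time points are illustrative only and not clinical guidance.

    # Hedged sketch: exponential elimination for a 46-48 h biological half-life.
    def fraction_remaining(hours, half_life_h):
        return 0.5 ** (hours / half_life_h)

    for t in (24, 48, 96, 168):
        low = fraction_remaining(t, 46)    # faster-elimination bound
        high = fraction_remaining(t, 48)   # slower-elimination bound
        print(f"after {t:3d} h: about {low:.0%}-{high:.0%} of the absorbed dose remains")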
https://en.wikipedia.org/wiki/Petroleum_ether