A null corrector is an optical device used in the testing of large aspheric mirrors. A spherical mirror of any size can be tested relatively easily using standard optical components such as lasers, mirrors, beamsplitters, and converging lenses. One method of doing this uses a Shack cube, and many other setups are possible. An interferometer test such as this one generates a contour map of the deviation of the surface from a perfect sphere, with the contours in units of half the wavelength used. This is called a null test because when the mirror is perfect, the result is null (no contours at all). If the result is not null, then the mirror is not perfect, and the pattern shows where the optician should polish the mirror to improve it. However, the mirrors used in modern telescopes are not spherical – they are rotations of parabolas or hyperbolas, since these more complex shapes reduce optical aberrations and give a larger field of view. (See, for example, the Ritchey–Chrétien telescope, or three-mirror anastigmats such as LSST.) Non-spherical mirrors such as these will not give a null result when tested as above, and tests that give null results are strongly preferred (they require little interpretation, and the results translate directly to polishing requirements). One solution is to introduce a null corrector: one or more lenses and/or mirrors inserted into the optical path that make the desired mirror look like a perfectly spherical mirror. With this device in place, the measured contour map shows the difference from the desired shape instead of the difference from a sphere, and measurement and polishing can proceed just as in the spherical case. This method is used in the manufacture of almost all large mirrors for modern telescopes. [1] Since the mirror will be ground to what the null corrector reports as the right prescription, it is critical that the null corrector itself be correct. An error in building the null corrector led to the mirror of the Hubble Space Telescope being ground to the wrong shape. [2] Less famously, this has happened in other cases as well, such as the New Technology Telescope. [3] Originally, there was no easy way to test a null corrector, so mirror fabricators needed to take extra care that the lenses were correct and spaced correctly (this second part, spacing, was the source of the Hubble null corrector failure). [2] With the advent of computer-generated holograms, it is now possible to create a hologram with the phase response of an arbitrary mirror. Such a hologram can be made to analytically duplicate the phase response of the desired mirror, then be tested with the null corrector just as the real mirror would be tested. If the combination looks like a spherical mirror to the interferometer, then both the null corrector and the hologram are correct with high probability, since the null corrector and the hologram are constructed independently by different procedures. [4] This procedure was used to test (and find an error in) the null corrector used for the MMT Observatory single-mirror retrofit. [5] [6]
https://en.wikipedia.org/wiki/Null_corrector
In atmospheric chemistry, a null cycle is a catalytic cycle that simply interconverts chemical species without leading to net production or removal of any component. [1] In the stratosphere, null cycles, and the circumstances under which they are broken, are very important to the ozone layer. One of the most important null cycles takes place in the stratosphere, with the photolysis of ozone by ultraviolet photons with wavelengths less than 330 nanometers. This photolysis produces monatomic oxygen, which then reacts with diatomic oxygen to regenerate ozone. [2] There is no net molecular or atomic change, however. Overall, the cycle converts UV photon energy into heat, thereby warming the stratosphere: [3]

O3 + hν (λ < 330 nm) → O2 + O(1D)
O(1D) + M → O(3P) + M
O(3P) + O2 → O3
Net: hν → heat

The null cycle can be broken in the presence of certain molecules, leading to a net increase or decrease in ozone in the stratosphere. One important example is NOx emissions into the stratosphere. The NOx reacts with both atomic oxygen and ozone, leading to a net decrease in ozone. [2] This is particularly important at night, when NO2 cannot photolyze.

NO + O3 → NO2 + O2
NO2 + O(3P) → NO + O2
Net: O3 + O(3P) → 2 O2 (net loss of ozone)

Null cycles can also occur in the troposphere. One example is the null cycle that occurs during the day between NOx and ozone:

O3 + NO → O2 + NO2
NO2 + hν → NO + O(3P)
O(3P) + O2 + M → O3 + M
Net: hν → heat

This cycle links ozone to NOx in the troposphere during daytime. In equilibrium, described by the Leighton relationship, solar radiation and the NO2:NO ratio determine ozone abundance, which maximizes around noon.
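To make the Leighton relationship concrete, the following Python sketch solves the photostationary state j(NO2)·[NO2] = k·[NO]·[O3] for ozone; the photolysis frequency and rate constant used here are representative assumed values for illustration, not measurements.

```python
# Photostationary-state (Leighton) estimate of daytime ozone from the
# tropospheric NOx null cycle: j_NO2*[NO2] = k*[NO]*[O3], so
# [O3] = j_NO2*[NO2] / (k*[NO]).
J_NO2 = 8.0e-3      # NO2 photolysis frequency near noon, s^-1 (assumed)
K_NO_O3 = 1.8e-14   # NO + O3 rate constant near 298 K, cm^3 molecule^-1 s^-1 (assumed)

def ozone_photostationary(no2, no):
    """Return [O3] in molecules/cm^3 given [NO2] and [NO] in molecules/cm^3."""
    return J_NO2 * no2 / (K_NO_O3 * no)

# With an NO2:NO ratio of 2, ozone is fixed by radiation and that ratio:
print(f"{ozone_photostationary(2.0e10, 1.0e10):.2e} molecules/cm^3")
# -> ~8.9e11 molecules/cm^3, roughly 36 ppb at surface conditions
```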
https://en.wikipedia.org/wiki/Null_cycle
In mathematical physics, a null dust solution (sometimes called a null fluid) is a Lorentzian manifold in which the Einstein tensor is null. Such a spacetime can be interpreted as an exact solution of Einstein's field equation, in which the only mass–energy present in the spacetime is due to some kind of massless radiation. By definition, the Einstein tensor of a null dust solution has the form $G^{ab} = 8\pi \Phi \, k^a k^b$, where $\vec{k}$ is a null vector field. This definition makes sense purely geometrically, but if we place a stress–energy tensor on our spacetime of the form $T^{ab} = \Phi \, k^a k^b$, then Einstein's field equation is satisfied, and such a stress–energy tensor has a clear physical interpretation in terms of massless radiation. The vector field $\vec{k}$ specifies the direction in which the radiation is moving; the scalar multiplier $\Phi$ specifies its intensity. Physically speaking, a null dust describes either gravitational radiation, or some kind of nongravitational radiation which is described by a relativistic classical field theory (such as electromagnetic radiation), or a combination of these two. Null dusts include vacuum solutions as a special case. Phenomena which can be modeled by null dust solutions include beams of incoherent radiation. In particular, a plane wave of incoherent electromagnetic radiation is a linear superposition of plane waves, all moving in the same direction but having randomly chosen phases and frequencies. (Even though the Einstein field equation is nonlinear, a linear superposition of comoving plane waves is possible.) Here, each electromagnetic plane wave has a well-defined frequency and phase, but the superposition does not. Individual electromagnetic plane waves are modeled by null electrovacuum solutions, while an incoherent mixture can be modeled by a null dust. The components of a tensor computed with respect to a frame field rather than the coordinate basis are often called physical components, because these are the components which can (in principle) be measured by an observer. In the case of a null dust solution, an adapted frame (a timelike unit vector field $\vec{e}_0$ and three spacelike unit vector fields $\vec{e}_1, \vec{e}_2, \vec{e}_3$) can always be found in which the Einstein tensor has a particularly simple appearance, with its components proportional to the energy density $\epsilon$ and aligned along the null direction $\vec{e}_0 + \vec{e}_3$. Here, $\vec{e}_0$ is everywhere tangent to the world lines of our adapted observers, and these observers measure the energy density of the incoherent radiation to be $\epsilon$. From the form of the general coordinate basis expression given above, it is apparent that the stress–energy tensor has precisely the same isotropy group as the null vector field $\vec{k}$. It is generated by two parabolic Lorentz transformations (pointing in the $\vec{e}_3$ direction) and one rotation (about the $\vec{e}_3$ axis), and it is isomorphic to the three-dimensional Lie group $E(2)$, the isometry group of the Euclidean plane. Null dust solutions include two large and important families of exact solutions: the pp-waves and the Robinson–Trautman null dusts. The pp-waves include the gravitational plane waves and the monochromatic electromagnetic plane wave. The Robinson–Trautman null dusts include the Kinnersley–Walker photon rocket solutions, which include the Vaidya null dust, which in turn includes the Schwarzschild vacuum.
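As a purely algebraic illustration in flat spacetime (a sketch with arbitrary example values for the null vector and intensity, not a solution of the field equations), one can verify numerically that a stress–energy tensor of the form $T^{ab} = \Phi k^a k^b$ built on a null vector is traceless, as expected for massless radiation:

```python
import numpy as np

# Minkowski metric eta_ab with signature (-,+,+,+).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Example null vector k^a: radiation propagating in the +z direction.
k = np.array([1.0, 0.0, 0.0, 1.0])
assert np.isclose(k @ eta @ k, 0.0)      # eta_ab k^a k^b = 0, so k is null

Phi = 2.5                                 # arbitrary intensity value
T = Phi * np.outer(k, k)                  # T^ab = Phi k^a k^b

# The trace eta_ab T^ab equals Phi * (k . k) = 0 for any null k.
print(np.einsum('ab,ab->', eta, T))       # -> 0.0
```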
https://en.wikipedia.org/wiki/Null_dust_solution
In theoretical physics, null infinity is a region at the boundary of asymptotically flat spacetimes. In general relativity, straight paths in spacetime, called geodesics, may be space-like, time-like, or light-like (also called null). The distinction between these paths stems from whether the spacetime interval of the path is positive (corresponding to space-like), negative (corresponding to time-like), or zero (corresponding to null). Light-like paths physically correspond to phenomena which propagate through space at the speed of light, such as electromagnetic radiation and gravitational radiation. The boundary of a flat spacetime is known as conformal infinity, and can be thought of as the end points of all geodesics as they go off to infinity. [1] The region of null infinity corresponds to the terminus of all null geodesics in a flat Minkowski space. The different regions of conformal infinity are most often visualized on a Penrose diagram, where they make up the boundary of the diagram. There are two distinct regions of null infinity, called past and future null infinity, which can be denoted using a script 'I' as $\mathcal{I}^+$ and $\mathcal{I}^-$. These two regions are often referred to as 'scri-plus' and 'scri-minus' respectively. [2] Geometrically, each of these regions has the structure of a topologically cylindrical three-dimensional region. The study of null infinity originated from the need to describe the global properties of spacetime. While early methods in general relativity focused on the local structure built around local frames of reference, work beginning in the 1960s began analyzing global descriptions of general relativity, analyzing the structure of spacetime as a whole. [3] The concept of null infinity originated with Roger Penrose's work analyzing black hole spacetimes. [4] Null infinity is a useful mathematical tool for analyzing behavior in asymptotically flat spaces when limits of null paths need to be taken. For instance, black hole spacetimes are asymptotically flat, and null infinity can be used to characterize radiation in the limit that it travels outward away from the black hole. [5] Null infinity can also be considered in the context of spacetimes which are not necessarily asymptotically flat, such as in the FLRW cosmology. [2] The metric for a flat Minkowski spacetime in spherical coordinates is $ds^2 = -dt^2 + dr^2 + r^2 d\Omega^2$. Conformal compactification induces a transformation which preserves angles, but changes the local structure of the metric and adds the boundary of the manifold, thus making it compact. [6] For a given metric $g_{ij}$, a conformal compactification scales the entire metric by some conformal factor such that $\overline{g_{ij}} = \Omega^2 g_{ij}$, with all of the points at infinity scaled down to a finite value. [3] Typically, the radial and time coordinates are transformed into null coordinates $u = t + r$ and $v = t - r$. These are then transformed as $p = \tan^{-1} u$ and $q = \tan^{-1} v$ in order to use the properties of the inverse tangent function to map infinity to a finite value. [2] The typical time and space coordinates may be introduced as $T = p + q$ and $R = p - q$. After these coordinate transformations, a conformal factor is introduced, leading to a new unphysical metric for Minkowski space: [7] $ds^2 = -dT^2 + dR^2 + (\sin^2 R) \, d\Omega^2$. This is the metric represented on a Penrose diagram. Unlike the original metric, this metric describes a manifold with a boundary, given by the restrictions on $R$ and $T$. There are two null surfaces on this boundary, corresponding to past and future null infinity. Specifically, future null infinity consists of all points where $T = \pi - R$ and $0 < R < \pi$, and past null infinity consists of all points where $T = R - \pi$ and $0 < R < \pi$. [2] From the coordinate restrictions, null infinity is a three-dimensional null surface, with a cylindrical topology $\mathbb{R} \times S^2$. [1] [8] The construction given here is specific to the flat metric of Minkowski space. However, such a construction generalizes to other asymptotically flat spaces as well. In such scenarios, null infinity still exists as a three-dimensional null surface at the boundary of the spacetime manifold, but the manifold's overall structure might be different. For instance, in Minkowski space, all null geodesics begin at past null infinity and end at future null infinity. However, in the Schwarzschild black hole spacetime, the black hole event horizon leads to two possibilities: geodesics may end at null infinity, but may also end at the black hole's future singularity. The presence of null infinity (along with the other regions of conformal infinity) guarantees geodesic completion on the spacetime manifold, where all geodesics terminate either at a true singularity or intersect the boundary of infinity. [7] The symmetries of null infinity are characteristically different from those of the typical regions of spacetime. While the symmetries of a flat Minkowski spacetime are given by the Poincaré group, the symmetries of null infinity are instead given by the Bondi–Metzner–Sachs (BMS) group. [9] [10] The work by Bondi, Metzner, and Sachs characterized gravitational radiation using analyses related to null infinity, whereas previous work such as the ADM framework dealt with characterizations of spacelike infinity. [8] In recent years, interest has grown in studying gravitons on the boundary null infinity. [8] [11] Using the BMS group, quanta on null infinity can be characterized as massless spin-2 particles, consistent with the quanta of general relativity being gravitons. [8]
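The chain of transformations above is easy to trace numerically. The following Python sketch (using the coordinate conventions given in the text) maps Minkowski points to the compactified coordinates and follows an outgoing light ray as it approaches the surface T = π − R:

```python
import numpy as np

def compactify(t, r):
    """Map Minkowski (t, r) to Penrose-diagram coordinates (T, R),
    using the transformations given in the text."""
    u, v = t + r, t - r                  # null coordinates
    p, q = np.arctan(u), np.arctan(v)    # map infinity to +/- pi/2
    return p + q, p - q                  # T = p + q, R = p - q

# Follow the outgoing light ray t = r + 1 toward future null infinity:
for r in [1.0, 10.0, 1e3, 1e6]:
    T, R = compactify(r + 1.0, r)
    print(f"r = {r:>9.0f}:  T + R = {T + R:.6f}")
# T + R -> pi, i.e. the ray terminates on future null infinity, T = pi - R.
```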
https://en.wikipedia.org/wiki/Null_infinity
In game theory, a null move or pass is a decision by a player to not make a move when it is that player's turn to move. Even though null moves are against the rules of many games, they are often useful to consider when analyzing these games. Examples of this include the analysis of zugzwang (a situation in chess or other games in which a null move, if it were allowed, would be better than any other move), [1] and the null-move heuristic in game tree analysis (a method of pruning game trees involving making a null move and then searching to a lower depth). [2] The reason a reduced-depth null move is effective in game tree alpha–beta search reduction is that tactical threats tend to show up very quickly, in just one or two moves. If the opponent has no tactical threats revealed by the null move search, the position may be good enough to exceed the best result obtainable in another branch of the tree (i.e. "beta"), so that no further search need be done from the current node, and the result from the null move can be returned as the search value. Even if the null move search value does not exceed beta, the returned value may set a higher floor on the valuation of the position than the present alpha, so more cutoffs will occur at descendant sibling nodes of the position. The underlying assumption is that at least some legal move available to the player on move at the node is better than no move at all. In the case of the player on move being in zugzwang, that assumption is false, and the null move result is invalid (in that case, it actually sets a ceiling on the value of the position). Therefore it is necessary to have logic to exclude null moves at nodes in the tree where zugzwang is possible. In chess, zugzwang positions can occur in king and pawn endgames, and sometimes in endgames that include other pieces as well.
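A minimal sketch of the null-move heuristic inside a negamax alpha–beta search is given below. The position interface (make_null_move, legal_moves, zugzwang_likely, and so on) is hypothetical and the depth reduction R = 2 is merely a common choice; real engines add many refinements beyond this outline:

```python
R = 2  # depth reduction for the null-move search (a common choice)

def search(position, depth, alpha, beta):
    """Negamax alpha-beta search with null-move pruning.
    `position` is a hypothetical interface, sketched for illustration."""
    if depth == 0:
        return position.evaluate()

    # Null-move heuristic: give the opponent a free move and search to a
    # reduced depth with a null window around beta. If the score still
    # reaches beta, the real position is almost certainly good enough, so
    # cut off. Skipped where zugzwang is plausible (e.g. king-and-pawn
    # endgames, or when the side to move is in check).
    if depth > R and not position.zugzwang_likely():
        position.make_null_move()
        score = -search(position, depth - 1 - R, -beta, -beta + 1)
        position.undo_null_move()
        if score >= beta:
            return beta                  # fail-hard beta cutoff

    for move in position.legal_moves():
        position.make(move)
        score = -search(position, depth - 1, -beta, -alpha)
        position.undo(move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)       # null-move result may raise alpha
    return alpha
```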
https://en.wikipedia.org/wiki/Null_move
In science, a null result is a result without the expected content: that is, the proposed result is absent. [1] It is an experimental outcome which does not show an otherwise expected effect. This does not imply a result of zero or nothing, simply a result that does not support the hypothesis. In statistical hypothesis testing, a null result occurs when an experimental result is not significantly different from what is to be expected under the null hypothesis; its probability (under the null hypothesis) is not below the significance level, i.e., the threshold set prior to testing for rejection of the null hypothesis. The significance level varies, but common choices include 0.10, 0.05, and 0.01. [2] However, a non-significant result does not necessarily mean that an effect is absent. [3] [4] [5] [6] As an example in physics, the results of the Michelson–Morley experiment were of this type, as it did not detect the expected velocity relative to the postulated luminiferous aether. This experiment's famous failed detection, commonly referred to as the null result, contributed to the development of special relativity. The experiment did appear to measure a non-zero "drift", but the value was far too small to account for the theoretically expected results; it is generally thought to be inside the noise level of the experiment. [7] Despite similar quality of execution and design, [8] papers with statistically significant results are three times more likely to be published than those with null results. [9] This unduly motivates researchers to manipulate their practices to ensure statistically significant results, such as by data dredging. [10] Many factors contribute to publication bias. [11] [12] For instance, once a scientific finding is well established, it may become newsworthy to publish reliable papers that fail to reject the null hypothesis. [13] Most commonly, investigators simply decline to submit results, leading to non-response bias. Investigators may also assume they made a mistake, find that the null result fails to support a known finding, lose interest in the topic, or anticipate that others will be uninterested in the null results. [8] There are several scientific journals dedicated to the publication of negative or null results. While it is not exclusively dedicated to publishing negative results, BMC Research Notes also publishes negative results in the form of research or data notes.
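As a statistical illustration, the simulation below (a sketch using SciPy on synthetic data) produces a typical null result: two samples drawn from the same distribution are compared, and the p-value fails to fall below the significance level:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
# Two samples from the *same* distribution: any apparent effect is noise.
a = rng.normal(loc=0.0, scale=1.0, size=50)
b = rng.normal(loc=0.0, scale=1.0, size=50)

t_stat, p_value = stats.ttest_ind(a, b)
alpha = 0.05                    # significance level fixed before testing
print(p_value, p_value >= alpha)
# A p-value at or above alpha is a null result: the null hypothesis is not
# rejected -- which is not the same as showing that no effect exists.
```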
https://en.wikipedia.org/wiki/Null_result
A null session is an anonymous connection to an inter-process communication network service on Windows-based computers. [1] The service is designed to allow named pipe connections [2] but may be used by attackers to remotely gather information about the system. [3] From a null session, attackers can call APIs and use remote procedure calls to enumerate information. These techniques can provide information on passwords, groups, services, users and even active processes. Null session access can also be used to escalate privileges and perform denial-of-service (DoS) attacks.
https://en.wikipedia.org/wiki/Null_session
The null sign (∅) is a symbol often used in mathematics for denoting the empty set. The same letter in linguistics represents zero, the lack of an element. It is commonly used in phonology, morphology, and syntax. The symbol ∅ is available at Unicode code point U+2205. [1] It can be coded in HTML as &empty; and as &#8709; or as &#x2205;. It can be coded in LaTeX as \varnothing. In mathematics, the null sign (∅) denotes the empty set. [3] [4] Note that a null set is not necessarily an empty set. Common notations for the empty set include "{}", "∅", and "$\emptyset$". The latter two symbols were introduced by the Bourbaki group (specifically André Weil) in 1939, inspired by the letter Ø in the Danish and Norwegian alphabets [3] (and not related in any way to the Greek letter Φ). [5] Empty sets are used in set operations. For example, if $A = \{2, 3, 5, 7, 11\}$ and $B = \{4, 6, 8, 9\}$, then A and B have no common elements, so the intersection is denoted $A \cap B = \varnothing$ or $A \cap B = \{\}$. In linguistics, the null sign is used to indicate the absence of an element, such as a phoneme or morpheme. [2] English was historically a fusional language, meaning that it made use of inflectional changes to convey multiple grammatical meanings. Although the inflectional complexity of English has been largely reduced in the course of its development, inflectional endings can be seen in earlier forms of English, such as Early Modern English (abbreviated as EModE), whose verb endings were summarised by Roger Lass. [6] In photography, the null sign, Ø, found on camera lenses denotes the filter thread diameter, measured in millimeters. This marking indicates the size of screw-in filters that can be attached to the front of the lens. This diameter is separate from any optical specification of the lens, as it is a standardized measurement for photographers to select a compatible filter. Common filter thread sizes include 52mm, 72mm, 77mm, and 82mm. If a photographer owns multiple lenses with different filter thread diameters, stepping rings can be utilized to adapt larger filters to smaller lens threads, eliminating the need to purchase duplicate filters in various sizes.
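The encodings listed above can be checked directly; for instance, in Python the HTML entities and the Unicode code point all resolve to the same character:

```python
import html
import unicodedata

print("\u2205")                                  # the null sign, U+2205
print(unicodedata.name("\u2205"))                # EMPTY SET
# The HTML entities from the text resolve to the same character:
print(html.unescape("&empty;") == "\u2205")      # True
print(html.unescape("&#8709;") == html.unescape("&#x2205;"))  # True
# The set-intersection example: {2,3,5,7,11} and {4,6,8,9} are disjoint.
print({2, 3, 5, 7, 11} & {4, 6, 8, 9} == set())  # True
```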
https://en.wikipedia.org/wiki/Null_sign
In mathematics, given a vector space X with an associated quadratic form q, written (X, q), a null vector or isotropic vector is a non-zero element x of X for which q(x) = 0. In the theory of real bilinear forms, definite quadratic forms and isotropic quadratic forms are distinct. They are distinguished in that only for the latter does there exist a nonzero null vector. A quadratic space (X, q) which has a null vector is called a pseudo-Euclidean space. The term isotropic vector v, when q(v) = 0, has been used in quadratic spaces, [1] and anisotropic space for a quadratic space without null vectors. A pseudo-Euclidean vector space may be decomposed (non-uniquely) into orthogonal subspaces A and B, X = A + B, where q is positive-definite on A and negative-definite on B. The null cone, or isotropic cone, of X consists of the union of balanced spheres: $\bigcup_{r \geq 0} \{ x = a + b : q(a) = -q(b) = r,\ a \in A,\ b \in B \}$. The null cone is also the union of the isotropic lines through the origin. A composition algebra with a null vector is a split algebra. [2] In a composition algebra (A, +, ×, *), the quadratic form is q(x) = x x*. When x is a null vector, there is no multiplicative inverse for x, and since x ≠ 0, A is not a division algebra. In the Cayley–Dickson construction, the split algebras arise in the series of bicomplex numbers, biquaternions, and bioctonions, which uses the complex number field $\mathbb{C}$ as the foundation of this doubling construction due to L. E. Dickson (1919). In particular, these algebras have two imaginary units, which commute, so that their product, when squared, yields +1. The real subalgebras, split complex numbers, split quaternions, and split-octonions, with their null cones representing the light tracking into and out of 0 ∈ A, suggest spacetime topology. The light-like vectors of Minkowski space are null vectors. The four linearly independent biquaternions l = 1 + hi, n = 1 + hj, m = 1 + hk, and m* = 1 − hk are null vectors, and {l, n, m, m*} can serve as a basis for the subspace used to represent spacetime. Null vectors are also used in the Newman–Penrose formalism approach to spacetime manifolds. [3] In the Verma module of a Lie algebra there are null vectors.
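For a concrete check, the sketch below evaluates a quadratic form of signature (1, 3) on R^4 (an arbitrary illustrative choice, Minkowski-like) and exhibits a null vector:

```python
import numpy as np

# Quadratic form q(x) = x^T Q x of signature (1, 3) on R^4,
# chosen purely for illustration.
Q = np.diag([1.0, -1.0, -1.0, -1.0])

def q(x):
    return x @ Q @ x

x = np.array([1.0, 1.0, 0.0, 0.0])
print(q(x))    # 0.0: a non-zero vector with q(x) = 0, i.e. a null vector

# A positive-definite form, by contrast, admits no null vectors:
Q_def = np.eye(4)
print(x @ Q_def @ x)   # 2.0, and q(x) > 0 for every non-zero x
```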
https://en.wikipedia.org/wiki/Null_vector
In mathematical analysis, nullclines, sometimes called zero-growth isoclines, are encountered in a system of ordinary differential equations $x_1' = f_1(x_1, \ldots, x_n), \; \ldots, \; x_n' = f_n(x_1, \ldots, x_n)$, where $x'$ here represents a derivative of $x$ with respect to another parameter, such as time $t$. The $j$-th nullcline is the geometric shape for which $x_j' = 0$. The equilibrium points of the system are located where all of the nullclines intersect. In a two-dimensional linear system, the nullclines can be represented by two lines on a two-dimensional plot; in a general two-dimensional system they are arbitrary curves. The definition, though with the name 'directivity curve', was used in a 1967 article by Endre Simonyi. [1] This article also defined a 'directivity vector' as $\mathbf{w} = \mathrm{sign}(P)\mathbf{i} + \mathrm{sign}(Q)\mathbf{j}$, where P and Q are the dx/dt and dy/dt differential equations, and i and j are the x and y direction unit vectors. Simonyi developed a new stability test method from these definitions, and with it he studied differential equations. This method, beyond the usual stability examinations, provided semi-quantitative results.
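A small symbolic example in Python (the system itself is a hypothetical choice for illustration) finds the nullclines of a two-dimensional system and their intersections, the equilibrium points:

```python
import sympy as sp

x, y = sp.symbols('x y')
# An illustrative two-dimensional system (arbitrarily chosen):
#   x' = x*(3 - x - 2*y),   y' = y*(2 - x - y)
xdot = x * (3 - x - 2 * y)
ydot = y * (2 - x - y)

# The x-nullcline {x' = 0} is the pair of lines x = 0 and x + 2y = 3;
# the y-nullcline {y' = 0} is the pair y = 0 and x + y = 2.
print(sp.factor(xdot), '|', sp.factor(ydot))

# Equilibria lie where all nullclines intersect, i.e. x' = y' = 0:
print(sp.solve([xdot, ydot], [x, y], dict=True))
# four equilibria: (0, 0), (0, 2), (1, 1), (3, 0)
```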
https://en.wikipedia.org/wiki/Nullcline
In mathematics, a nullform of a vector space acted on linearly by a group is a vector on which all invariants of the group vanish. Nullforms were introduced by Hilbert (1893) (Dieudonné & Carrell 1970, 1971, p. 57).
https://en.wikipedia.org/wiki/Nullform
The nullity of a graph in the mathematical subject of graph theory can mean either of two unrelated numbers. If the graph has n vertices and m edges, then the nullity may refer to either the nullity of the adjacency matrix of the graph, which equals n minus the rank of the adjacency matrix, or the nullity of the oriented incidence matrix of the graph, which equals m − n + c, where c is the number of connected components; the latter number is also known as the cycle rank or circuit rank of the graph.
https://en.wikipedia.org/wiki/Nullity_(graph_theory)
Nullomers are short sequences of DNA that do not occur in the genome of a species (for example, humans), even though they are theoretically possible. [1] [2] Nullomers must be under selective pressure; for example, they may be toxic to the cell. [2] Some nullomers have been shown to be useful to treat leukemia, breast, and prostate cancer. They are less harmful to healthy cells because normal cells adapt and become resistant to them. [2] Nullomers are also being developed for use as DNA tags to prevent cross-contamination when analyzing crime scene material. [3] Nullomers are naturally occurring but potentially unused sequences of DNA. Determining these "forbidden" sequences can improve the understanding of the basic rules that govern sequence evolution. [4] Sequencing entire genomes has shown that there is a high level of non-uniformity in genomic sequences. When a codon is artificially substituted with a synonymous codon, it often results in a lethal change and cell death. This is believed to be due to ribosomal stalling and early termination of protein synthesis. For example, both AGA and CGA code for arginine in bacteria; however, bacteria almost never use AGA, and when substituted it proves lethal. [5] Such codon biases have been observed in all species, [6] and are examples of constraints on sequence evolution. Other sequences may have selective pressure; for example, GG-rich sequences are used as sacrificial sinks for oxidative damage, because oxidizing agents are attracted to regions with GG-rich sequences and then induce strand breakage. [7] Moreover, it has been shown that statistically significant nullomers (i.e. absent short sequences which are highly expected to exist) in virus genomes are restriction recognition sites, indicating that viruses have probably eliminated these motifs to facilitate invasion of bacterial hosts. [8] The Nullomers Database provides a comprehensive collection of minimal absent sequences from hundreds of species and viruses as well as the human and mouse proteomes. Nullomers have been used as an approach to drug discovery and development. Nullomer peptides were screened for anti-cancer action. Absent sequences have short polyarginine tails added to increase solubility and uptake into the cell, producing peptides called PolyArgNulloPs. One successful sequence, RRRRRNWMWC, was demonstrated to have lethal effects in breast and prostate cancer. It damaged mitochondria by increasing ROS production, which reduced ATP production, leading to cell growth inhibition and cell death. Normal cells show a decreased sensitivity to PolyArgNulloPs over time. [2] Accidental transfer of biological material containing DNA can produce misleading results. This is a particularly important consideration in forensic and crime labs, where mistakes can cause an innocent person to be convicted of a crime. Previously there was no way to detect whether a reference sample was mislabeled as evidence or whether a forensic sample was contaminated, but a nullomer barcode can be added to reference samples to distinguish them from evidence on analysis. Tagging can be carried out during sample collection without affecting genotype or quantification results. Filter paper impregnated with various nullomers can be used to soak up and store DNA samples from a crime scene, making the technology simple and effective. [3] Tagging with nullomers can be detected reliably: even when diluted a million-fold and spilled on evidence, these tags are still clearly detected. [3] Tagging in this way supports the National Research Council's recommendations on quality control to reduce fraud and mistakes. [3]
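Finding nullomers is, at heart, an exhaustive absence check. The following Python sketch (toy-scale; real genome searches need far more efficient indexing) lists all k-mers missing from a given sequence:

```python
from itertools import product

def nullomers(genome, k):
    """Return every length-k DNA string absent from `genome`.
    Brute force over all 4**k candidates -- fine for small k only."""
    present = {genome[i:i + k] for i in range(len(genome) - k + 1)}
    return [kmer for kmer in map(''.join, product('ACGT', repeat=k))
            if kmer not in present]

# Toy example; an actual genome is millions to billions of bases long:
seq = "ACGTACGGTTAACCGGATCCA"
print(nullomers(seq, 2))   # the 2-mers that never occur in this sequence
```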
https://en.wikipedia.org/wiki/Nullomers
In compressed sensing, the nullspace property gives necessary and sufficient conditions on the reconstruction of sparse signals using the techniques of $\ell_1$-relaxation. The term "nullspace property" originates from Cohen, Dahmen, and DeVore. [1] The nullspace property is often difficult to check in practice, and the restricted isometry property is a more modern condition in the field of compressed sensing. The non-convex $\ell_0$-minimization problem, $\min_x \|x\|_0$ subject to $Ax = b$, is a standard problem in compressed sensing. However, $\ell_0$-minimization is known to be NP-hard in general. [2] As such, the technique of $\ell_1$-relaxation is sometimes employed to circumvent the difficulties of signal reconstruction using the $\ell_0$-norm. In $\ell_1$-relaxation, the $\ell_1$ problem, $\min_x \|x\|_1$ subject to $Ax = b$, is solved in place of the $\ell_0$ problem. Note that this relaxation is convex and hence amenable to the standard techniques of linear programming, a computationally desirable feature. Naturally we wish to know when $\ell_1$-relaxation will give the same answer as the $\ell_0$ problem. The nullspace property is one way to guarantee agreement. An $m \times n$ complex matrix $A$ has the nullspace property of order $s$ if, for all index sets $S$ with $|S| \leq s$, we have $\|\eta_S\|_1 < \|\eta_{S^C}\|_1$ for all $\eta \in \ker A \setminus \{0\}$. The following theorem gives a necessary and sufficient condition on the recoverability of a given $s$-sparse vector in $\mathbb{C}^n$. The proof of the theorem is a standard one, and the proof supplied here is summarized from Holger Rauhut. [3]

Theorem: Let $A$ be an $m \times n$ complex matrix. Then every $s$-sparse signal $x \in \mathbb{C}^n$ is the unique solution to the $\ell_1$-relaxation problem with $b = Ax$ if and only if $A$ satisfies the nullspace property of order $s$.

Proof: For the forwards direction, notice that $\eta_S$ and $-\eta_{S^C}$ are distinct vectors with $A(-\eta_{S^C}) = A(\eta_S)$ by the linearity of $A$, and hence by uniqueness we must have $\|\eta_S\|_1 < \|\eta_{S^C}\|_1$ as desired. For the backwards direction, let $x$ be $s$-sparse and $z$ another (not necessarily $s$-sparse) vector such that $z \neq x$ and $Az = Ax$. Define the (non-zero) vector $\eta = x - z$ and notice that it lies in the nullspace of $A$. Call $S$ the support of $x$; then the result follows from an elementary application of the triangle inequality: $\|x\|_1 \leq \|x - z_S\|_1 + \|z_S\|_1 = \|\eta_S\|_1 + \|z_S\|_1 < \|\eta_{S^C}\|_1 + \|z_S\|_1 = \|-z_{S^C}\|_1 + \|z_S\|_1 = \|z\|_1$, establishing the minimality of $x$. $\square$
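The $\ell_1$-relaxation itself is straightforward to pose as a linear program. The sketch below (using SciPy; the random Gaussian matrix is an illustrative choice, since such matrices are known to satisfy the nullspace property for small $s$ with high probability, though that is not verified here) recovers a sparse vector by basis pursuit:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to Ax = b as a linear program,
    via the standard split z = [x, t] with -t <= x <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(t)
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])            # x - t <= 0, -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])         # enforce Ax = b
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 30))         # illustrative random matrix
x0 = np.zeros(30)
x0[[3, 17]] = [1.5, -2.0]                 # a 2-sparse signal
x_hat = basis_pursuit(A, A @ x0)
print(np.allclose(x_hat, x0, atol=1e-6))  # recovery succeeds w.h.p.
```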
https://en.wikipedia.org/wiki/Nullspace_property
Number Assignment Module (NAM) is an electronic memory in a cellular phone that stores the telephone number, international mobile subscriber identity and an Electronic Serial Number. Phones with dual- or multi-NAM features offer users the option of registering the phone with a local number in more than one market.
https://en.wikipedia.org/wiki/Number_Assignment_Module
Number Forms is a Unicode block containing Unicode compatibility characters that have specific meaning as numbers, but are constructed from other characters. They consist primarily of vulgar fractions and Roman numerals. In addition to the characters in the Number Forms block, three fractions (¼, ½, and ¾) were inherited from ISO-8859-1, which was incorporated whole as the Latin-1 Supplement block.
https://en.wikipedia.org/wiki/Number_Forms
The Number Theory Foundation (NTF) is a non-profit organization based in the United States which supports research and conferences in the field of number theory, with a particular focus on computational aspects and explicit methods. [1] The NTF funds the Selfridge prize awarded at each Algorithmic Number Theory Symposium (ANTS) [2] [3] and is a regular supporter of several conferences and organizations in number theory, including the Canadian Number Theory Association (CNTA), [4] [5] Women in Numbers (WIN), and the West Coast Number Theory (WCNT) conference. [1] The NTF was created in 1999 via a grant from John Selfridge, with an initial board of directors including Paul Bateman, John Brillhart, Richard Blecksmith, Brian Conrey, Ronald Graham, Richard Guy, Carl Pomerance, John Selfridge, Sam Wagstaff, and Hugh Williams. [6] [7] Carl Pomerance served as president of the foundation for its first two decades and was succeeded by Andrew Sutherland in 2019. [1] [8]
https://en.wikipedia.org/wiki/Number_Theory_Foundation
In mathematics education at primary school level, a number bond (sometimes alternatively called an addition fact) is a simple addition sum which has become so familiar that a child can recognise it and complete it almost instantly, with recall as automatic as that of an entry from a multiplication table. For example, 3 + 2 = 5 is a number bond. A child who "knows" this number bond should be able to immediately fill in any one of these three numbers if it were missing, given the other two, without having to "work it out". Number bonds are often learned in sets for which the sum is a common round number such as 10 or 20. Having acquired some familiar number bonds, children should also soon learn how to use them to develop strategies to complete more complicated sums, for example by navigating from a new sum to an adjacent number bond they know (e.g. 5 + 2 and 4 + 3 are both number bonds that make 7), or by strategies like "making ten", for example recognising that 7 + 6 = (7 + 3) + 3 = 13. The term "number bond" is also used to refer to a pictorial representation of part-part-whole relationships, often found in the Singapore mathematics curriculum. Number bonds consist of a minimum of three circles that are connected by lines. The "whole" is written in the first circle and its "parts" are written in the adjoining circles. Number bonds are used to build deeper understanding of math facts. The term "number bond" is sometimes derided as a piece of unnecessary new mathematical jargon, adding an element of pointless abstraction or incomprehensibility for those not familiar with it (such as children's parents) to a subject even as simple as primary school addition. [1] The term has been used at least since the 1920s [2] [3] and formally entered the primary curriculum in Singapore in the early 1970s. [4] In the U.K. the phrase came into widespread classroom use from the late 1990s when the National Numeracy Strategy brought in an emphasis on in-classroom discussion of strategies for developing mental arithmetic in its "numeracy hour".
https://en.wikipedia.org/wiki/Number_bond
The number density (symbol: n or ρ_N) is an intensive quantity used to describe the degree of concentration of countable objects (particles, molecules, phonons, cells, galaxies, etc.) in physical space: three-dimensional volumetric number density, two-dimensional areal number density, or one-dimensional linear number density. Population density is an example of areal number density. The term number concentration (symbol: lowercase n, or C, to avoid confusion with amount of substance indicated by uppercase N) is sometimes used in chemistry for the same quantity, particularly when comparing with other concentrations. Volume number density is the number of specified objects per unit volume: [1] $n = N/V$, where N is the total number of objects in a volume V. Here it is assumed [2] that N is large enough that rounding of the count to the nearest integer does not introduce much of an error; however, V is chosen to be small enough that the resulting n does not depend much on the size or shape of the volume because of large-scale features. Area number density is the number of specified objects per unit area A: $n' = N/A$. Similarly, linear number density is the number of specified objects per unit length L: $n'' = N/L$. Column number density is a kind of areal density, the number or count of a substance per unit area, obtained by integrating the volumetric number density along a vertical path: $n_c' = \int n \, \mathrm{d}s$. It is related to column mass density, with the volumetric number density replaced by the volume mass density. In SI units, number density is measured in m⁻³, although cm⁻³ is often used. However, these units are not quite practical when dealing with atoms or molecules of gases, liquids or solids at room temperature and atmospheric pressure, because the resulting numbers are extremely large (on the order of 10²⁰). Using the number density of an ideal gas at 0 °C and 1 atm as a yardstick, $n_0 = 1\ \text{amg} = 2.686\,7774 \times 10^{25}\ \mathrm{m^{-3}}$ is often introduced as a unit of number density, for any substance at any conditions (not necessarily limited to an ideal gas at 0 °C and 1 atm). [3] Using the number density as a function of spatial coordinates, the total number of objects N in the entire volume V can be calculated as $N = \iiint_V n(x, y, z) \, \mathrm{d}V$, where dV = dx dy dz is a volume element. If each object possesses the same mass m₀, the total mass m of all the objects in the volume V can be expressed as $m = \iiint_V m_0\, n(x, y, z) \, \mathrm{d}V$. Similar expressions are valid for electric charge or any other extensive quantity associated with countable objects. For example, replacing m with q (total charge) and m₀ with q₀ (charge of each object) in the above equation will lead to a correct expression for charge. The number density of solute molecules in a solvent is sometimes called concentration, although usually concentration is expressed as a number of moles per unit volume (and thus called molar concentration). For any substance, the number density can be expressed in terms of its amount concentration c (in mol/m³) as $n = N_A c$, where N_A is the Avogadro constant. This is still true if the spatial dimension unit, metre, in both n and c is consistently replaced by any other spatial dimension unit, e.g. if n is in cm⁻³ and c is in mol/cm³, or if n is in L⁻¹ and c is in mol/L, etc. For atoms or molecules of a well-defined molar mass M (in kg/mol), the number density can sometimes be expressed in terms of their mass density ρ_m (in kg/m³) as $n = (N_A / M)\, \rho_m$. Note that the ratio M/N_A is the mass of a single atom or molecule in kg.
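These relations are simple enough to compute directly. A short Python sketch (the water values used are approximate, chosen only for illustration) checks that the two expressions for n agree:

```python
N_A = 6.02214076e23            # Avogadro constant, mol^-1

# n = N_A * c, with c the amount concentration in mol/m^3.
c_water = 55.5e3               # ~55.5 mol/L for liquid water, in mol/m^3
print(f"{N_A * c_water:.3e} m^-3")               # ~3.34e28 m^-3

# Equivalently, n = (N_A / M) * rho_m from molar mass and mass density.
M_water = 0.018015             # kg/mol (approximate)
rho_water = 998.0              # kg/m^3 near 20 degC (approximate)
print(f"{N_A / M_water * rho_water:.3e} m^-3")   # ~3.34e28 m^-3
```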
https://en.wikipedia.org/wiki/Number_density
A number line is a graphical representation of a straight line that serves as spatial representation of numbers, usually graduated like a ruler with a particular origin point representing the number zero and evenly spaced marks in either direction representing integers, imagined to extend infinitely. The association between numbers and points on the line links arithmetical operations on numbers to geometric relations between points, and provides a conceptual framework for learning mathematics. In elementary mathematics, the number line is initially used to teach addition and subtraction of integers, especially involving negative numbers. As students progress, more kinds of numbers can be placed on the line, including fractions, decimal fractions, square roots, and transcendental numbers such as the circle constant π. Every point of the number line corresponds to a unique real number, and every real number to a unique point. [1] Using a number line, numerical concepts can be interpreted geometrically and geometric concepts interpreted numerically. An inequality between numbers corresponds to a left-or-right order relation between points. Numerical intervals are associated to geometrical segments of the line. Operations and functions on numbers correspond to geometric transformations of the line. Wrapping the line into a circle relates modular arithmetic to the geometric composition of angles. Marking the line with logarithmically spaced graduations associates multiplication and division with geometric translations, the principle underlying the slide rule. In analytic geometry, coordinate axes are number lines which associate points in a geometric space with tuples of numbers, so geometric shapes can be described using numerical equations and numerical functions can be graphed. In advanced mathematics, the number line is usually called the real line or real number line, and is a geometric line isomorphic to the set of real numbers, with which it is often conflated; both the real numbers and the real line are commonly denoted R or $\mathbb{R}$. The real line is a one-dimensional real coordinate space, so is sometimes denoted R¹ when comparing it to higher-dimensional spaces. The real line is a one-dimensional Euclidean space using the difference between numbers to define the distance between points on the line. It can also be thought of as a vector space, a metric space, a topological space, a measure space, or a linear continuum. The real line can be embedded in the complex plane, used as a two-dimensional geometric representation of the complex numbers. The first mention of the number line used for operation purposes is found in John Wallis's Treatise of Algebra (1685). [2] In his treatise, Wallis describes addition and subtraction on a number line in terms of moving forward and backward, under the metaphor of a person walking. An earlier depiction without mention of operations, though, is found in John Napier's A Description of the Admirable Table of Logarithmes (1616), which shows values 1 through 12 lined up from left to right. [3] Contrary to popular belief, René Descartes's original La Géométrie does not feature a number line, defined as we use it today, though it does use a coordinate system. In particular, Descartes's work does not contain specific numbers mapped onto lines, only abstract quantities.
[4] A number line is usually represented as being horizontal, but in a Cartesian coordinate plane the vertical axis (y-axis) is also a number line. [5] The arrow on the line indicates the positive direction in which numbers increase. [5] Some textbooks attach an arrow to both sides, suggesting that the arrow indicates continuation. This is unnecessary, since according to the rules of geometry a line without endpoints continues indefinitely in the positive and negative directions. A line with one endpoint is known as a ray, and a line with two endpoints as a line segment. If a particular number is farther to the right on the number line than is another number, then the first number is greater than the second (equivalently, the second is less than the first). The distance between them is the magnitude of their difference; that is, it measures the first number minus the second one, or equivalently the absolute value of the second number minus the first one. Taking this difference is the process of subtraction. Thus, for example, the length of a line segment between 0 and some other number represents the magnitude of the latter number. Two numbers can be added by "picking up" the length from 0 to one of the numbers, and putting it down again with the end that was 0 placed on top of the other number. Two numbers can be multiplied as in this example: to multiply 5 × 3, note that this is the same as 5 + 5 + 5, so pick up the length from 0 to 5 and place it to the right of 5, and then pick up that length again and place it to the right of the previous result. This gives a result that is 3 combined lengths of 5 each; since the process ends at 15, we find that 5 × 3 = 15. Division can be performed as in the following example: to divide 6 by 2, that is, to find out how many times 2 goes into 6, note that the length from 0 to 2 lies at the beginning of the length from 0 to 6; pick up the former length and put it down again to the right of its original position, with the end formerly at 0 now placed at 2, and then move the length to the right of its latest position again. This puts the right end of the length 2 at the right end of the length from 0 to 6. Since three lengths of 2 filled the length 6, 2 goes into 6 three times (that is, 6 ÷ 2 = 3). The section of the number line between two numbers is called an interval. If the section includes both numbers it is said to be a closed interval, while if it excludes both numbers it is called an open interval. If it includes one of the numbers but not the other one, it is called a half-open interval. All the points extending forever in one direction from a particular point are together known as a ray. If the ray includes the particular point, it is a closed ray; otherwise it is an open ray. On the number line, the distance between two points is the unit length if and only if the difference of the represented numbers equals 1. Other choices are possible. One of the most common choices is the logarithmic scale, which is a representation of the positive numbers on a line, such that the distance of two points is the unit length if the ratio of the represented numbers has a fixed value, typically 10. In such a logarithmic scale, the origin represents 1; one inch to the right, one has 10; one inch to the right of 10 one has 10×10 = 100, then 10×100 = 1000 = 10³, then 10×1000 = 10,000 = 10⁴, etc. Similarly, one inch to the left of 1, one has 1/10 = 10⁻¹, then 1/100 = 10⁻², etc.
This approach is useful when one wants to represent, on the same figure, values with very different orders of magnitude. For example, one requires a logarithmic scale for representing simultaneously the size of the different bodies that exist in the Universe: typically, a photon, an electron, an atom, a molecule, a human, the Earth, the Solar System, a galaxy, and the visible Universe. Logarithmic scales are used in slide rules for multiplying or dividing numbers by adding or subtracting lengths on logarithmic scales. A line drawn through the origin at right angles to the real number line can be used to represent the imaginary numbers. This line, called the imaginary line, extends the number line to a complex number plane, with points representing complex numbers. Alternatively, one real number line can be drawn horizontally to denote possible values of one real number, commonly called x, and another real number line can be drawn vertically to denote possible values of another real number, commonly called y. Together these lines form what is known as a Cartesian coordinate system, and any point in the plane represents the value of a pair of real numbers. Further, the Cartesian coordinate system can itself be extended by visualizing a third number line "coming out of the screen (or page)", measuring a third variable called z. Positive numbers are closer to the viewer's eyes than the screen is, while negative numbers are "behind the screen"; larger numbers are farther from the screen. Then any point in the three-dimensional space that we live in represents the values of a trio of real numbers. The real line is a linear continuum under the standard < ordering. Specifically, the real line is linearly ordered by <, and this ordering is dense and has the least-upper-bound property. In addition to the above properties, the real line has no maximum or minimum element. It also has a countable dense subset, namely the set of rational numbers. It is a theorem that any linear continuum with a countable dense subset and no maximum or minimum element is order-isomorphic to the real line. The real line also satisfies the countable chain condition: every collection of mutually disjoint, nonempty open intervals in R is countable. In order theory, the famous Suslin problem asks whether every linear continuum satisfying the countable chain condition that has no maximum or minimum element is necessarily order-isomorphic to R. This statement has been shown to be independent of the standard axiomatic system of set theory known as ZFC. The real line forms a metric space, with the distance function given by absolute difference: $d(x, y) = |x - y|$. The metric tensor is clearly the one-dimensional Euclidean metric. Since the n-dimensional Euclidean metric can be represented in matrix form as the n-by-n identity matrix, the metric on the real line is simply the 1-by-1 identity matrix, i.e. 1. If p ∈ R and ε > 0, then the ε-ball in R centered at p is simply the open interval (p − ε, p + ε). The real line has several important properties as a metric space. The real line carries a standard topology, which can be introduced in two different, equivalent ways. First, since the real numbers are totally ordered, they carry an order topology. Second, the real numbers inherit a metric topology from the metric defined above. The order topology and metric topology on R are the same. As a topological space, the real line is homeomorphic to the open interval (0, 1).
The real line is trivially a topological manifold of dimension 1. Up to homeomorphism, it is one of only two different connected 1-manifolds without boundary, the other being the circle. It also has a standard differentiable structure on it, making it a differentiable manifold. (Up to diffeomorphism, there is only one differentiable structure that the topological space supports.) The real line is a locally compact space and a paracompact space, as well as second-countable and normal. It is also path-connected, and is therefore connected as well, though it can be disconnected by removing any one point. The real line is also contractible, and as such all of its homotopy groups and reduced homology groups are zero. As a locally compact space, the real line can be compactified in several different ways. The one-point compactification of R is a circle (namely, the real projective line), and the extra point can be thought of as an unsigned infinity. Alternatively, the real line has two ends, and the resulting end compactification is the extended real number line [−∞, +∞]. There is also the Stone–Čech compactification of the real line, which involves adding an infinite number of additional points. In some contexts, it is helpful to place other topologies on the set of real numbers, such as the lower limit topology or the Zariski topology. For the real numbers, the latter is the same as the finite complement topology. The real line is a vector space over the field R of real numbers (that is, over itself) of dimension 1. It has the usual multiplication as an inner product, making it a Euclidean vector space. The norm defined by this inner product is simply the absolute value. The real line carries a canonical measure, namely the Lebesgue measure. This measure can be defined as the completion of a Borel measure defined on R, where the measure of any interval is the length of the interval. Lebesgue measure on the real line is one of the simplest examples of a Haar measure on a locally compact group. When A is a unital real algebra, the products of real numbers with 1 form a real line within the algebra. For example, in the complex plane z = x + iy, the subspace {z : y = 0} is a real line. Similarly, the algebra of quaternions has a real line in the subspace {q : x = y = z = 0}. When the real algebra is a direct sum $A = R \oplus V$, then a conjugation on A is introduced by the mapping $v \to -v$ of the subspace V. In this way the real line consists of the fixed points of the conjugation. For a dimension n, the square matrices form a ring that has a real line in the form of real products with the identity matrix in the ring.
https://en.wikipedia.org/wiki/Number_line
In medicine, the number needed to harm (NNH) is an epidemiological measure that indicates how many persons on average need to be exposed to a risk factor over a specific period to cause harm in an average of one person who would not otherwise have been harmed. It is defined as the inverse of the absolute risk increase, and computed as $1/(I_e - I_u)$, where $I_e$ is the incidence in the treated (exposed) group, and $I_u$ is the incidence in the control (unexposed) group. [1] Intuitively, the lower the number needed to harm, the worse the risk factor, with 1 meaning that every exposed person is harmed. NNH is similar to the number needed to treat (NNT), where NNT usually refers to a positive therapeutic result and NNH to a detrimental effect or risk factor. Marginal versions of these metrics are also used. [2] The NNH is an important measure in evidence-based medicine and helps physicians decide whether it is prudent to proceed with a particular treatment which may expose the patient to harms while providing therapeutic benefits. If a clinical endpoint is devastating enough without the drug (e.g. death, heart attack), drugs with a low NNH may still be indicated in particular situations if the NNT is smaller than the NNH. However, there are several important problems with the NNH, involving bias and lack of reliable confidence intervals, as well as difficulties in excluding the possibility of no difference between two treatments or groups. [3]
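Computationally, the definition is a one-liner; the incidences below are hypothetical figures chosen only to illustrate the arithmetic:

```python
def nnh(incidence_exposed, incidence_unexposed):
    """Number needed to harm = 1 / absolute risk increase."""
    return 1.0 / (incidence_exposed - incidence_unexposed)

# Hypothetical example: 4% of the exposed group is harmed vs 1% of the
# unexposed group, so the absolute risk increase is 0.03 and about 34
# people must be exposed for one additional person to be harmed.
print(nnh(0.04, 0.01))   # 33.33...
```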
https://en.wikipedia.org/wiki/Number_needed_to_harm
The number needed to treat (NNT) or number needed to treat for an additional beneficial outcome (NNTB) is an epidemiological measure used in communicating the effectiveness of a health-care intervention, typically a treatment with medication. The NNT is the average number of patients who need to be treated to prevent one additional bad outcome. It is defined as the inverse of the absolute risk reduction, and computed as 1/(I_u − I_e), where I_u is the incidence in the control (unexposed) group and I_e is the incidence in the treated (exposed) group. [ 1 ] [ 2 ] This calculation implicitly assumes monotonicity, that is, that no individual can be harmed by treatment. The modern approach, based on counterfactual conditionals, relaxes this assumption and yields bounds on the NNT. A type of effect size, the NNT was described in 1988 by McMaster University's Laupacis, Sackett and Roberts. [ 3 ] While theoretically the ideal NNT is 1, where everyone improves with treatment and no one improves with control, in practice the NNT is always rounded up to the nearest whole number, [ 4 ] and so even an NNT of 1.1 becomes an NNT of 2. [ 5 ] A higher NNT indicates that treatment is less effective. [ 6 ] NNT is similar to the number needed to harm (NNH), where NNT usually refers to a therapeutic intervention and NNH to a detrimental effect or risk factor. A combined measure, the number needed to treat for an additional beneficial or harmful outcome (NNTB/H), is also used. The NNT is an important measure in pharmacoeconomics. If a clinical endpoint is devastating enough (e.g. death, heart attack), drugs with a high NNT may still be indicated in particular situations. If the endpoint is minor, health insurers may decline to reimburse drugs with a high NNT. NNT is significant to consider when comparing possible side effects of a medication against its benefits. For medications with a high NNT, even a small incidence of adverse effects may outweigh the benefits. Even though NNT is an important measure in a clinical trial, it is infrequently included in medical journal articles reporting the results of clinical trials. [ 7 ] There are several important problems with the NNT, involving bias and lack of reliable confidence intervals, as well as difficulties in excluding the possibility of no difference between two treatments or groups. [ 8 ] NNT may vary substantially over time, [ 9 ] [ 10 ] and hence convey different information as a function of the specific time-point of its calculation. Snapinn and Jiang [ 11 ] showed examples where the information conveyed by the NNT may be incomplete or even contradictory compared to the traditional statistics of interest in survival analysis. Comprehensive research on adjustment of the NNT for explanatory variables and accommodation to time-dependent outcomes was conducted by Bender and Blettner, [ 12 ] Austin, [ 13 ] and Vancak et al. [ 14 ] There are a number of factors that can affect the meaning of the NNT depending on the situation. The treatment may be a drug in the form of a pill or injection, a surgical procedure, or many other possibilities. The following examples demonstrate how the NNT is determined and what it means. In this example, it is important to understand that every participant has the condition being treated, so there are only "diseased" patients who received the treatment or did not.
This is typically a type of study that would occur only if both the control and the tested treatment carried significant risks of serious harm, or if the treatment was unethical for a healthy participant (for example, chemotherapy drugs or a new method of appendectomy – surgical removal of the appendix). Most drug trials test both the control and the treatment on both healthy and "diseased" participants. Or, if the treatment's purpose is to prevent a condition that is fairly common (an anticoagulant to prevent heart attack, for example), a prospective study may be used. A study which starts with all healthy participants is termed a prospective study, in contrast to a retrospective study, in which some participants already have the condition in question. Prospective studies produce much higher quality evidence, but are much more difficult and time-consuming to perform. The ASCOT-LLA manufacturer-sponsored study addressed the benefit of atorvastatin 10 mg (a cholesterol-lowering drug) in patients with hypertension (high blood pressure) but no previous cardiovascular disease (primary prevention). The trial ran for 3.3 years, and during this period the relative risk of a "primary event" (heart attack) was reduced by 36% (relative risk reduction, RRR). The absolute risk reduction (ARR), however, was much smaller, because the study group did not have a very high rate of cardiovascular events over the study period: 2.67% in the control group, compared to 1.65% in the treatment group. [ 15 ] Taking atorvastatin for 3.3 years, therefore, would lead to an ARR of only 1.02% (2.67% minus 1.65%). The number needed to treat to prevent one cardiovascular event would then be 98.04 for 3.3 years. [ 16 ] The above calculations for the NNT are valid under monotonicity, where treatment cannot have a negative effect on any individual. However, in the case where the treatment may benefit some individuals and harm others, the NNT as defined above cannot be estimated from a randomized controlled trial (RCT) alone. The inverse of the absolute risk reduction only provides an upper bound, i.e., NNT ≤ 1/(I_u − I_e). The modern approach defines the NNT literally, as the number of patients one needs to treat (on average) before saving one. However, since "saving" is a counterfactual notion (a patient must recover if treated and not recover if not treated), the logic of counterfactuals must be invoked to estimate this quantity from experimental or observational studies. [ 17 ] The probability of "saving" is captured by the probability of necessity and sufficiency (PNS), where PNS = P(recovery if and only if treated). [ 18 ] Once the PNS is estimated, the NNT is given by NNT = 1/PNS. However, due to the counterfactual nature of the PNS, only bounds can be computed from an RCT, rather than a precise estimate. Tian and Pearl have derived tight bounds on the PNS based on multiple data sources, and Pearl showed that a combination of observational and experimental data may sometimes make the bounds collapse to a point estimate. [ 19 ] [ 20 ] Mueller and Pearl provide a conceptual interpretation of this phenomenon and illustrate its impact on decisions by both individuals and policy-makers. [ 21 ]
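The ASCOT-LLA arithmetic above, together with the counterfactual bounds, can be sketched as follows. The PNS bounds shown are the standard Tian–Pearl-style bounds computable from an RCT alone, stated here as an illustration rather than as the cited authors' exact formulation:

```python
import math

def nnt_from_incidence(incidence_control: float, incidence_treated: float) -> float:
    """Classical NNT = 1 / absolute risk reduction (assumes monotonicity)."""
    arr = incidence_control - incidence_treated
    return 1.0 / arr

# ASCOT-LLA figures quoted above: 2.67% events under control, 1.65% under treatment.
nnt = nnt_from_incidence(0.0267, 0.0165)
print(round(nnt, 2), math.ceil(nnt))  # 98.04, rounded up to 99 in practice

# Counterfactual view: bounds on PNS from an RCT alone.
# p_t / p_c are the probabilities of the good outcome (no event)
# under treatment / control; the bound form is a sketch.
def pns_bounds(p_t: float, p_c: float) -> tuple[float, float]:
    lower = max(0.0, p_t - p_c)          # cannot exceed the risk difference from below
    upper = min(p_t, 1.0 - p_c)          # cannot exceed either marginal constraint
    return lower, upper

lo_pns, hi_pns = pns_bounds(p_t=1 - 0.0165, p_c=1 - 0.0267)
# NNT = 1/PNS, so the PNS interval inverts into an NNT interval:
print(f"{1 / hi_pns:.2f} <= NNT <= {1 / lo_pns:.2f}")  # ~37.45 <= NNT <= ~98.04
```

Note that the upper end, 98.04, reproduces the classical 1/ARR figure, consistent with the statement above that the inverse of the absolute risk reduction is only an upper bound on the NNT.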
https://en.wikipedia.org/wiki/Number_needed_to_treat
Number needed to vaccinate (NNV) is a metric used in the evaluation of vaccines, [ 1 ] [ 2 ] [ 3 ] and in the determination of vaccination policy. It is defined as the average number of patients that must be vaccinated to prevent one case of disease. It is a specific application of the number needed to treat metric (NNT). The NNV is the inverse of the absolute risk reduction of the vaccine: if the incidence in the vaccinated population is I_e, and the incidence in the unvaccinated population is I_u, then the NNV is 1/(I_u − I_e). For example, one study reported a number needed to vaccinate of 5206 for invasive pneumococcal disease. [ 4 ] In order to determine an NNV, it is necessary to identify a specific population and a defined endpoint, because these can vary. Despite these limitations, the NNV can serve as a useful resource. For example, it can be used to report the results of computer simulations of varying vaccination strategies. [ 5 ]
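As the last sentence suggests, the NNV is often reported from simulations. Below is a minimal Monte Carlo sketch; the risk and efficacy figures are hypothetical and merely produce an NNV of the same order as the pneumococcal example above:

```python
import random

def simulated_nnv(risk_unvaccinated: float, vaccine_efficacy: float,
                  population: int = 1_000_000, seed: int = 0) -> float:
    """Estimate NNV by simulating incidence with and without vaccination.

    A sketch with hypothetical parameters: each person independently contracts
    the disease with probability `risk_unvaccinated`, reduced multiplicatively
    by `vaccine_efficacy` when vaccinated.
    """
    rng = random.Random(seed)
    risk_vaccinated = risk_unvaccinated * (1 - vaccine_efficacy)
    cases_u = sum(rng.random() < risk_unvaccinated for _ in range(population))
    cases_v = sum(rng.random() < risk_vaccinated for _ in range(population))
    i_u, i_e = cases_u / population, cases_v / population
    return 1.0 / (i_u - i_e)  # NNV = 1 / absolute risk reduction

# Hypothetical: 0.05% annual risk, 40% efficacy.
# The Monte Carlo estimate is noisy; the closed-form check is
# 1 / (0.0005 * 0.40) = 5000.
print(round(simulated_nnv(0.0005, 0.40)))
```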
https://en.wikipedia.org/wiki/Number_needed_to_vaccinate
A species description is a formal scientific description of a newly encountered species , typically articulated through a scientific publication . Its purpose is to provide a clear description of a new species of organism and explain how it differs from species that have been previously described or related species. For a species to be considered valid, a species description must follow established guidelines and naming conventions dictated by relevant nomenclature codes . These include the International Code of Zoological Nomenclature (ICZN) for animals, the International Code of Nomenclature for algae, fungi, and plants (ICN) for plants, and the International Committee on Taxonomy of Viruses (ICTV) for viruses. A species description often includes photographs or other illustrations of type material and information regarding where this material is deposited. The publication in which the species is described gives the new species a formal scientific name . Some 1.9 million species have been identified and described, out of some 8.7 million that may actually exist. [ 1 ] Additionally, over five billion species have gone extinct over the history of life on Earth . [ 2 ] A name of a new species becomes valid ( available in zoological terminology) with the date of publication of its formal scientific description. Once the scientist has performed the necessary research to determine that the discovered organism represents a new species, the scientific results are summarized in a scientific manuscript, either as part of a book or as a paper to be submitted to a scientific journal . A scientific species description must fulfill several formal criteria specified by the nomenclature codes , e.g. selection of at least one type specimen . These criteria are intended to ensure that the species name is clear and unambiguous, for example, the International Code of Zoological Nomenclature states that "Authors should exercise reasonable care and consideration in forming new names to ensure that they are chosen with their subsequent users in mind and that, as far as possible, they are appropriate, compact, euphonious , memorable, and do not cause offence." [ 3 ] Species names are written in the 26 letters of the Latin alphabet, but many species names are based on words from other languages, and are Latinized. Once the manuscript has been accepted for publication, [ 4 ] the new species name is officially created. Once a species name has been assigned and approved, it can generally not be changed except in the case of error. For example, a species of beetle ( Anophthalmus hitleri ) was named by a German collector after Adolf Hitler in 1933 when he had recently become chancellor of Germany. [ 5 ] It is not clear whether such a dedication would be considered acceptable or appropriate today, but the name remains in use. [ 5 ] Species names have been chosen on many different bases. The most common is a naming for the species' external appearance, its origin, or the species name is a dedication to a certain person. Examples would include a bat species named for the two stripes on its back ( Saccopteryx bilineata ), a frog named for its Bolivian origin ( Phyllomedusa boliviana ), and an ant species dedicated to the actor Harrison Ford ( Pheidole harrisonfordi ). A scientific name in honor of a person or persons is known as a taxonomic eponym or eponymic; patronym and matronym are the gendered terms for this. [ 6 ] [ 7 ] A number of humorous species names also exist. 
Literary examples include the genus name Borogovia (an extinct dinosaur), which is named after the borogove, a mythical character from Lewis Carroll 's poem " Jabberwocky ". A second example, Macrocarpaea apparata (a tall plant) was named after the magical spell "to apparate" from the Harry Potter novels by J. K. Rowling , as it seemed to appear out of nowhere. [ 8 ] In 1975, the British naturalist Peter Scott proposed the binomial name Nessiteras rhombopteryx ("Ness monster with diamond-shaped fin") for the Loch Ness Monster; it was soon spotted that it was an anagram of "Monster hoax by Sir Peter S". Species have frequently been named by scientists in recognition of supporters and benefactors. For example, the genus Victoria (a flowering waterplant) was named in honour of Queen Victoria of Great Britain. More recently, a species of lemur ( Avahi cleesei ) was named after the actor John Cleese in recognition of his work to publicize the plight of lemurs in Madagascar. Non-profit ecological organizations may also allow benefactors to name new species in exchange for financial support for taxonomic research and nature conservation. A German non-profit organisation, BIOPAT – Patrons for Biodiversity , has raised more than $450,000 for research and conservation through sponsorship of over 100 species using this model. [ 9 ] An individual example of this system is the Callicebus aureipalatii (or "monkey of the Golden Palace"), which was named after the Golden Palace casino in recognition of a $650,000 contribution to the Madidi National Park in Bolivia in 2005. [ 10 ] The International Code of Nomenclature for algae, fungi, and plants discourages this practice somewhat: "Recommendation 20A. Authors forming generic names should comply with the following ... (h) Not dedicate genera to persons quite unconcerned with botany, mycology, phycology, or natural science in general." [ 11 ] Early biologists often published entire volumes or multiple-volume works of descriptions in an attempt to catalog all known species. These catalogs typically featured extensive descriptions of each species and were often illustrated upon reprinting. The first of these large catalogs was Aristotle 's History of Animals , published around 343 BC. Aristotle included descriptions of creatures, mostly fish and invertebrates, in his homeland, and several mythological creatures rumored to live in far-away lands, such as the manticore . In 77 AD Pliny the Elder dedicated several volumes of his Natural History to the description of all life forms he knew to exist. He appears to have read Aristotle's work since he writes about many of the same far-away mythological creatures. Toward the end of the 12th century, Konungs skuggsjá , an Old Norse philosophical didactic work, featured several descriptions of the whales, seals, and monsters of the Icelandic seas. These descriptions were brief and often erroneous, and they included a description of the mermaid and a rare island-like sea monster called hafgufu . The author was hesitant to mention the beast (known today to be fictitious) for fear of its size, but felt it was important enough to be included in his descriptions. [ 12 ] However, the earliest recognized species authority is Carl Linnaeus , who standardized the modern taxonomy system beginning with his Systema Naturae in 1735. [ 13 ] As the catalog of known species was increasing rapidly, it became impractical to maintain a single work documenting every species. 
Publishing a paper documenting a single species was much faster and could be done by scientists with narrower scopes of study. For example, a scientist who discovered a new species of insect would not need to understand plants, or frogs, or even insects which did not resemble the species, but would only need to understand closely related insects. Formal species descriptions today follow strict guidelines set forth by the codes of nomenclature. Very detailed formal descriptions are made by scientists, who usually study the organism closely for a considerable time. A diagnosis may be used instead of, [ 14 ] or as well as, [ 15 ] the description. A diagnosis specifies the distinction between the new species and other species, and it does not necessarily have to be based on morphology. [ 16 ] In recent times, new species descriptions have been made without voucher specimens, and this has been controversial. [ 17 ] According to the RetroSOS report, [ 18 ] the following numbers of species have been described each year in the 2000s.
https://en.wikipedia.org/wiki/Number_of_species_described_each_year_in_the_2000s
Number sense in animals is the ability of creatures to represent and discriminate quantities of relative sizes by number sense. It has been observed in various species, from fish to primates. Animals are believed to have an approximate number system, the same system for number representation demonstrated by humans, which is more precise for smaller quantities and less so for larger values. An exact representation of numbers higher than three has not been attested in wild animals, [ 1 ] but can be demonstrated after a period of training in captive animals. In order to distinguish number sense in animals from the symbolic and verbal number system in humans, researchers use the term numerosity, [ 2 ] rather than number, to refer to the concept that supports approximate estimation but does not support an exact representation of numerical quantity. Number sense in animals includes the recognition and comparison of number quantities. Some numerical operations, such as addition, have been demonstrated in many species, including rats and great apes. Representing fractions and fraction addition has been observed in chimpanzees. The wide range of species with an approximate number system suggests either an early evolutionary origin of this mechanism or multiple convergent evolution events. Like humans, chicks have a left-to-right mental number line (they associate the left space with smaller numbers and the right space with larger numbers). [ 3 ] At the beginning of the 20th century, Wilhelm von Osten famously, but prematurely, claimed human-like counting abilities in animals, using the example of his horse named Hans. His claim is widely rejected today, as it is attributed to a methodological fallacy, which received the name Clever Hans phenomenon after this case. Von Osten claimed that his horse could perform arithmetic operations presented to it in writing or verbally, upon which the horse would tap the ground with its hoof the number of times that corresponded to the answer. This apparent ability was demonstrated numerous times in the presence of the horse's owner and a wider audience, and was also observed when the owner was absent. However, upon a rigorous investigation by Oskar Pfungst in the first decade of the 20th century, Hans' ability was shown not to be arithmetic in nature, but to be the ability to interpret minimal unconscious changes in the body language of people when the correct answer was approaching. Today, the arithmetic abilities of Clever Hans are commonly rejected, and the case serves as a reminder to the scientific community about the necessity of rigorous control for experimenter expectation in experiments. [ 2 ] There were, however, other early and more reliable studies on number sense in animals. A prominent example is the work of Otto Koehler, who conducted a number of studies on number sense in animals between the 1920s and 1970s. [ 4 ] In one of his studies [ 5 ] he showed that a raven named Jacob could reliably distinguish the number 5 across different tasks. This study was remarkable in that Koehler provided a systematic control condition in his experiment, which allowed him to test the number ability of the raven separately from the raven's ability to encode other features, such as the size and location of the objects. However, Koehler's work was largely overlooked in the English-speaking world, due to the limited availability of his publications, which were in German and only partially published during World War II.
The experimental setup for the study of numerical cognition in animals was further enriched by the work of Francis [ 6 ] and Platt and Johnson. [ 7 ] In their experiments, the researchers deprived rats of food and then taught them to press a lever a specific number of times to obtain food. The rats learned to press the lever approximately the number of times specified by the researchers. Additionally, the researchers showed that the rats' behavior depended on the number of required presses, and not, for example, on the timing of the presses, as they varied the experiment to include faster and slower behavior on the rats' part by controlling how hungry the animals were. Examining the representation of numerosity in animals is a challenging task, since it is not possible to use language as a medium. Because of this, carefully designed experimental setups are required to differentiate between numerical abilities and other phenomena, such as the Clever Hans phenomenon, memorization of single objects, or perception of object size and time. Moreover, such abilities have only begun to be investigated systematically in recent decades. One way that numerical ability is thought to be demonstrated is the transfer of the concept of numerosity across modalities. This was, for example, the case in the experiment of Church and Meck, [ 8 ] in which rats learned to "add" the number of light flashes to the number of tones to determine the number of expected lever presses, showing a concept of numerosity independent of the visual and auditory modalities. Modern studies of number sense in animals try to control for other possible explanations of animal behavior by establishing control conditions in which the other explanations are tested. For example, when number sense is investigated using apple pieces, an alternative explanation is tested that assumes that the animal represents the volume of apple rather than the number of apple pieces. To test this alternative, an additional condition is introduced in which the volume of the apple varies and is occasionally smaller in the condition with the greater number of pieces. If the animal prefers the bigger number of pieces in this condition as well, the alternative explanation is rejected, and the claim of numerical ability supported. [ 1 ] Numerosity is believed [ 9 ] to be represented by two separate systems in animals, similarly to humans. The first system is the approximate number system, an imprecise system used for estimations of quantities. This system is distinguished by distance and magnitude effects, which means that a comparison between numbers is easier and more precise when the distance between them is larger and when the values of the numbers are smaller. The second system for representing numerosities is the parallel individuation system, which supports the exact representation of numbers from one to four. In addition, humans can represent numbers through symbolic systems, such as language. The distinction between the approximate number system and the parallel individuation system is, however, still disputed, and some experiments [ 10 ] record behavior that can be fully explained by the approximate number system, without the need to assume another separate system for smaller numbers. For example, New Zealand robins repeatedly selected larger quantities of cached food with a precision that correlated with the total number of cache pieces.
However, there was no significant discontinuity in their performance between small (1 to 4) and larger (above 4) sets, which would be predicted by the parallel individuation system. On the other hand, other experiments only report knowledge of numbers up to 4, supporting the existence of the parallel individuation system and not the approximate number system. [ 1 ] Studies have shown that primates share similar cognitive algorithms for not only comparing numerical values, but also encoding those values as analogs. [ 11 ] [ 12 ] In fact, many experiments have supported the view that primates' capacity for numbers is comparable to that of human children. [ 11 ] Through these experiments, it is clear that there are several neurological processing mechanisms at work: the approximate number system (ANS), number ordinality, the parallel individuation system (PIS), and subitization. [ 9 ] The approximate number system (ANS) is fairly imprecise and relies heavily on cognitive estimation and comparison. This system does not give numbers individual values, but compares quantities based on their relative size. The efficiency of the ANS depends on Weber's law, which states that the ability to distinguish between quantities is dictated by the ratio between two numbers, not the absolute difference between them. [ 13 ] In other words, the accuracy of the ANS depends on the size difference between two quantities being compared. And since larger quantities are more difficult to comprehend than smaller quantities, the accuracy of the ANS also decreases as numerosity increases. [ 9 ] It has been found that rhesus macaques (Macaca mulatta), when given certain images of objects with multiple properties (e.g., colors, shapes, and numbers), are quick to match the image with another of the same number of items regardless of the other properties. [ 14 ] This result supports the use of the ANS because the monkeys are not defining numbers individually, but are rather matching sets of items of the same number using a comparison of quantities. The tendency of macaques to categorize and equate groups of items by number is extremely suggestive of a functioning ANS in primates. Examples of the ANS in primates occur during natural confrontation within and between groups. In the case of chimpanzees (Pan troglodytes), an intruder on a group's territory will only be attacked if the intruder is alone and the attacking party is composed of at least three males – a ratio of one-to-three. Here, they are using the ANS by way of comparative analysis of the invading group and their own group to determine whether or not to attack. [ 9 ] This social numerical superiority concept exists across many primate species and displays an understanding of power in numbers, at least in a comparative way. [ 15 ] Further evidence of the ANS has been found in chimpanzees successfully identifying different quantities of food in a container. The chimpanzees listened to items of food, which they were unable to see, being dropped individually into separate containers. Then, they chose which container to eat from (based on which contained the higher amount of food). They were fairly successful with the task, indicating that the chimps had the capability not only to compare quantities, but also to keep track of those quantities within their minds. [ 16 ] The experiment did, however, break down at certain similar numbers of individual food items, in line with Weber's law. [ 13 ]
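The ratio dependence described by Weber's law can be illustrated with a small simulation of ANS-style discrimination. The Gaussian-noise ("scalar variability") model and the Weber fraction of 0.15 are common modeling assumptions chosen for illustration, not parameters from any study cited here:

```python
import random

def ans_compare(n1: int, n2: int, weber_fraction: float = 0.15,
                trials: int = 10_000, seed: int = 1) -> float:
    """Fraction of trials in which a noisy 'approximate number system'
    correctly judges which of two quantities is larger.

    Each quantity is perceived with Gaussian noise whose spread grows
    linearly with magnitude, so accuracy depends on the n1:n2 ratio
    rather than on the absolute difference n2 - n1.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        p1 = rng.gauss(n1, weber_fraction * n1)
        p2 = rng.gauss(n2, weber_fraction * n2)
        correct += (p2 > p1) == (n2 > n1)
    return correct / trials

# Same absolute difference (2), different ratios: 2 vs 4 is easy, 14 vs 16 hard.
print(ans_compare(2, 4))    # ~0.99
print(ans_compare(14, 16))  # ~0.73 -- discrimination degrades as the ratio nears 1
```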
The number skill most thoroughly supported in primates is ordinality – the ability to recognize sequential symbols or quantities. [ 17 ] Rather than merely determining whether a value is greater or less than another, as with the ANS, ordinality requires a more nuanced recognition of the specific order of numbers or items in a set. [ 14 ] Here, Weber's law is no longer applicable, since the values increase only incrementally, often by only one. [ 16 ] Primates have displayed ordinality both with arrays of items and with Arabic numerals. When presented with arrays of 1–4 items, rhesus macaques were capable of consistently touching the arrays in ascending order. After this test, they were presented with arrays containing higher numbers of items and were able to extrapolate the task by touching the new arrays in ascending sequential order as well. Moreover, the rate at which the monkeys performed the task was comparable to that of human adults. [ 18 ] [ 19 ] Primates can also recognize sequences when given only Arabic numerals. One experiment, known colloquially as the "chimp challenge", involved teaching chimpanzees to memorize the correct order of the Arabic numerals 1–9 and then to press them in that order after the digits, scattered across a screen, had been hidden. Not only could the chimps recognize the correct sequence of the scattered numbers, they could also recall the correct sequence after the numbers had disappeared from the screen. [ 20 ] Furthermore, they were able to do this faster and more accurately than human adults. [ 20 ] Without being provided with a visual representation of the quantity that each number represented, this task signified a more advanced cognitive ability – differentiating symbols based on how they relate to each other in a series. [ 11 ] The parallel individuation system (PIS) is the most difficult number processing system to find evidence for in primates. This is because it requires the understanding that each number is a symbolic representation of a unique quantity that can be manipulated mathematically in a distinct way. [ 11 ] The PIS, unlike the ANS, is therefore independent of the need for comparison, allowing each number to exist on its own with a value defined by arithmetic. In order to use the PIS, one must have some understanding of numerals – specific symbolic representations of quantities that relate to other symbolic representations of quantities in definite ways. [ 15 ] For example, the "chimp challenge" only displayed primates' understanding that three comes before four and after two, not that three can act on its own and independently hold a consistent value. [ 9 ] Often, the experimental setup required to support the existence of the PIS is lengthy. Once a primate has been trained on a task long enough to display the PIS, the results are usually attributed to mere associative learning rather than exact number comprehension. In order to provide unequivocal evidence of the existence of the PIS in primates, researchers must find a situation where a primate performs some sort of arithmetic calculation in the wild. [ 12 ] However, the closest researchers have come to successfully supporting the PIS in primates is in rhesus macaques. In this study, the macaques were shown to associate auditory stimuli of a certain number of individual vocalizations with the correct number of individuals.
While this did not require them to learn Arabic numerals, it did require the ability to match an exact quantity to the number of voices they heard, rather than merely comparing quantities by sight or within a sequence. [ 21 ] Another important phenomenon to consider regarding primates' understanding of numbers is subitization. Subitization is a phenomenon whereby the brain automatically groups small numbers of objects together visually, without going through any explicit mental counting of the objects. In humans, subitization allows for the recognition of numbers on pairs of dice from the dot groupings, rather than by explicitly counting each dot. Essentially, it can provide a sense of number at low quantities without requiring an understanding of the numerical system. [ 12 ] Subitization in primates is evident in a wide range of experiments. Rhesus monkeys have been shown to differentiate between numbers of apples in a container even when the sizes of the apple slices were manipulated (some larger but fewer slices). While this could be attributed to the PIS, the act of comparing groupings of small numbers suggests subitization is likely at play, especially because the experiment broke down once the numbers exceeded about four. [ 15 ] An approximate number system has been found in a number of fish species, such as guppies, green swordtails and mosquitofish. For example, the preference for a bigger social group in mosquitofish was exploited to test the ability of the fish to discriminate numerosity. [ 22 ] The fish successfully discriminated between different amounts up to three, beyond which they could discriminate groups only if the difference between them also increased so that the ratio of the two groups was one-to-two. Similarly, guppies discriminated between values up to four, after which they only detected differences when the ratio between the two quantities was one-to-two. [ 23 ] Rats have demonstrated behavior consistent with an approximate number system [ 2 ] in experiments where they had to learn to press a lever a specified number of times to obtain food. While they did learn to press the lever the number of times specified by the researchers, between four and sixteen, their behavior was approximate, proportional to the number of lever presses expected of them. This means that for the target number of four, the rats' responses varied from three to seven, and for the target number of 16 the responses varied from 12 to 24, showing a much greater interval. [ 7 ] This is compatible with the approximate number system and its magnitude and distance effects. Birds were one of the first animal species tested for number sense. A raven named Jacob was able to distinguish the number 5 across different tasks in the experiments by Otto Koehler. [ 5 ] Later experiments supported the claim of a number sense in birds, with Alex, a grey parrot, able to label and comprehend labels for sets with up to six elements. [ 24 ] Other studies suggest that pigeons can also represent numbers up to 6 after extensive training. [ 25 ] A sense of number has also been found in dogs. For example, dogs were able [ 26 ] to perform simple additions of two objects, as revealed by their surprise when the result was incorrect. It is, however, argued that wolves perform better on quantity discrimination tasks than dogs, and that this could be a result of less demanding natural selection for number sense in dogs. [ 27 ] Ants were shown to be able to count up to 20 and to add and subtract numbers within 5. [ 28 ] [ 29 ]
In highly social species such as red wood ants, scouting individuals can transfer to foragers information about the number of branches of a special "counting maze" they had to go to in order to obtain syrup. The findings concerning number sense in ants are based on comparisons of the duration of information contacts between scouts and foragers, which preceded successful trips by the foraging teams. Similar to some archaic human languages, the length of the code for a given number in ant communication is proportional to its value. In experiments in which the bait appeared on different branches with different frequencies, the ants used simple additions and subtractions to optimize their messages. Striped field mice (Apodemus agrarius) have demonstrated a sense of number consistent with precise relative-quantity judgment: some of these mice exhibit high accuracy in discriminating between quantities that differ only by one. These include both small (such as 2 versus 3) and relatively large (such as 5 versus 6, and 8 versus 9) quantities of elements. [ 30 ] Bees can count up to four objects encountered sequentially during flight. In a 2017 study, [ 31 ] bees appeared to navigate to food sources by maintaining a running count of prominent landmarks passed en route, effective up to four landmarks.
https://en.wikipedia.org/wiki/Number_sense_in_animals
In optics, the numerical aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. By incorporating the index of refraction in its definition, the NA has the property that it is constant for a beam as it goes from one material to another, provided there is no refractive power at the interface (e.g., a flat interface). The exact definition of the term varies slightly between different areas of optics. Numerical aperture is commonly used in microscopy to describe the acceptance cone of an objective (and hence its light-gathering ability and resolution), and in fiber optics, in which it describes the range of angles within which light that is incident on the fiber will be transmitted along it. In most areas of optics, and especially in microscopy, the numerical aperture of an optical system such as an objective lens is defined by NA = n sin θ, where n is the index of refraction of the medium in which the lens is working (1.00 for air, 1.33 for pure water, and typically 1.52 for immersion oil; [ 1 ] see also list of refractive indices), and θ is the half-angle of the maximum cone of light that can enter or exit the lens. In general, this is the angle of the real marginal ray in the system. Because the index of refraction is included, the NA of a pencil of rays is an invariant as the pencil passes from one material to another through a flat surface. This is easily shown by rearranging Snell's law to find that n sin θ is constant across an interface: NA = n₁ sin θ₁ = n₂ sin θ₂. In air, the angular aperture of the lens, a = 2θ, is approximately twice this value (within the paraxial approximation). The NA is generally measured with respect to a particular object or image point and will vary as that point is moved. In microscopy, NA generally refers to the object-space numerical aperture unless otherwise noted. In microscopy, NA is important because it indicates the resolving power of a lens. The size of the finest detail that can be resolved (the resolution) is proportional to λ/(2 NA), where λ is the wavelength of the light. A lens with a larger numerical aperture will be able to visualize finer details than a lens with a smaller numerical aperture. Assuming quality (diffraction-limited) optics, lenses with larger numerical apertures collect more light and will generally provide a brighter image, but will provide a shallower depth of field. Numerical aperture is used to define the "pit size" in optical disc formats. [ 2 ] Increasing the magnification and the numerical aperture of the objective reduces the working distance, i.e. the distance between the front lens and the specimen.
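A brief numeric sketch of these definitions; the objective parameters are hypothetical, and the last lines simply verify the Snell's-law invariance mentioned above:

```python
import math

def numerical_aperture(n: float, half_angle_deg: float) -> float:
    """NA = n * sin(theta), with theta the half-angle of the accepted cone."""
    return n * math.sin(math.radians(half_angle_deg))

def resolution_limit(wavelength_nm: float, na: float) -> float:
    """Finest resolvable detail, proportional to lambda / (2 NA)."""
    return wavelength_nm / (2 * na)

# Hypothetical oil-immersion objective: n = 1.52, 64-degree half-angle.
na = numerical_aperture(1.52, 64.0)
print(f"NA = {na:.2f}")                                     # ~1.37
print(f"resolution ~ {resolution_limit(550, na):.0f} nm")   # for green light

# Invariance across a flat interface (Snell's law): n1*sin(t1) == n2*sin(t2).
n1, t1 = 1.00, math.radians(30)
t2 = math.asin(n1 * math.sin(t1) / 1.33)                    # refraction into water
print(math.isclose(n1 * math.sin(t1), 1.33 * math.sin(t2)))  # True
```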
Numerical aperture is not typically used in photography. Instead, the angular aperture a = 2θ of a lens (or an imaging mirror) is expressed by the f-number, written f/N, where N is the f-number given by the ratio of the focal length f to the diameter of the entrance pupil D: N = f/D. This ratio is related to the image-space numerical aperture when the lens is focused at infinity. [ 3 ] Based on the diagram at the right, the image-space numerical aperture of the lens is NA_i = n sin θ = n sin[arctan(D/(2f))] ≈ n D/(2f), thus N ≈ 1/(2 NA_i), assuming normal use in air (n = 1). The approximation holds when the numerical aperture is small, but it turns out that for well-corrected optical systems such as camera lenses, a more detailed analysis shows that N is almost exactly equal to 1/(2 NA_i) even at large numerical apertures. As Rudolf Kingslake explains, "It is a common error to suppose that the ratio [D/2f] is actually equal to tan θ, and not sin θ ... The tangent would, of course, be correct if the principal planes were really plane. However, the complete theory of the Abbe sine condition shows that if a lens is corrected for coma and spherical aberration, as all good photographic objectives must be, the second principal plane becomes a portion of a sphere of radius f centered about the focal point". [ 4 ] In this sense, the traditional thin-lens definition and illustration of the f-number is misleading, and defining it in terms of numerical aperture may be more meaningful. The f-number describes the light-gathering ability of the lens in the case where the marginal rays on the object side are parallel to the axis of the lens. This case is commonly encountered in photography, where objects being photographed are often far from the camera. When the object is not distant from the lens, however, the image is no longer formed in the lens's focal plane, and the f-number no longer accurately describes the light-gathering ability of the lens or the image-side numerical aperture. In this case, the numerical aperture is related to what is sometimes called the "working f-number" or "effective f-number". The working f-number is defined by modifying the relation above, taking into account the magnification from object to image: 1/(2 NA_i) = N_w = (1 − m/P) N, where N_w is the working f-number, m is the lens's magnification for an object a particular distance away, P is the pupil magnification, and the NA is defined in terms of the angle of the marginal ray as before. [ 3 ] [ 5 ] The magnification here is typically negative, and the pupil magnification is most often assumed to be 1 – as Allen R. Greenleaf explains, "Illuminance varies inversely as the square of the distance between the exit pupil of the lens and the position of the plate or film. Because the position of the exit pupil usually is unknown to the user of a lens, the rear conjugate focal distance is used instead; the resultant theoretical error so introduced is insignificant with most types of photographic lenses." [ 6 ] In photography, the factor is sometimes written as 1 + m, where m represents the absolute value of the magnification; in either case, the correction factor is 1 or greater. The two equalities in the equation above are each taken by various authors as the definition of the working f-number, as the cited sources illustrate. They are not necessarily both exact, but are often treated as if they are. Conversely, the object-side numerical aperture is related to the f-number by way of the magnification (tending to zero for a distant object): 1/(2 NA_o) = ((m − P)/(mP)) N.
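A minimal sketch of the working f-number relation, under the common assumptions noted above (pupil magnification P = 1, negative magnification); the f/2.8 macro example is illustrative:

```python
def working_f_number(N: float, m: float, P: float = 1.0) -> float:
    """N_w = (1 - m/P) * N; with P = 1 and m < 0 this equals (1 + |m|) * N."""
    return (1.0 - m / P) * N

def image_space_na(N_w: float) -> float:
    """Invert N_w = 1 / (2 NA_i)."""
    return 1.0 / (2.0 * N_w)

# An f/2.8 lens used at 1:1 macro (m = -1): N_w = 5.6, i.e. two stops slower.
N_w = working_f_number(2.8, m=-1.0)
print(N_w, round(image_space_na(N_w), 3))  # 5.6, ~0.089
```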
In laser physics, numerical aperture is defined slightly differently. Laser beams spread out as they propagate, but slowly. Far away from the narrowest part of the beam, the spread is roughly linear with distance – the laser beam forms a cone of light in the "far field". The relation used to define the NA of the laser beam is the same as that used for an optical system, NA = n sin θ, but θ is defined differently. Laser beams typically do not have sharp edges like the cone of light that passes through the aperture of a lens does. Instead, the irradiance falls off gradually away from the center of the beam. It is very common for the beam to have a Gaussian profile. Laser physicists typically choose to make θ the divergence of the beam: the far-field angle between the beam axis and the distance from the axis at which the irradiance drops to e⁻² times the on-axis irradiance. The NA of a Gaussian laser beam is then related to its minimum spot size ("beam waist") by NA ≃ λ₀/(π w₀), where λ₀ is the vacuum wavelength of the light, and 2w₀ is the diameter of the beam at its narrowest spot, measured between the e⁻² irradiance points ("full width at e⁻² maximum of the intensity"). This means that a laser beam that is focused to a small spot will spread out quickly as it moves away from the focus, while a large-diameter laser beam can stay roughly the same size over a very long distance. See also: Gaussian beam width. A multi-mode optical fiber will only propagate light that enters the fiber within a certain range of angles, known as the acceptance cone of the fiber. The half-angle of this cone is called the acceptance angle, θ_max. For step-index multimode fiber in a given medium, the acceptance angle is determined only by the indices of refraction of the core, the cladding, and the medium: n sin θ_max = √(n_core² − n_clad²), where n is the refractive index of the medium around the fiber, n_core is the refractive index of the fiber core, and n_clad is the refractive index of the cladding. While the core will accept light at higher angles, those rays will not totally reflect off the core–cladding interface, and so will not be transmitted to the other end of the fiber. The derivation of this formula is given below. When a light ray is incident from a medium of refractive index n on the core of index n_core at the maximum acceptance angle, Snell's law at the medium–core interface gives n sin θ_max = n_core sin θ_r. From the geometry of the above figure we have sin θ_r = sin(90° − θ_c) = cos θ_c, where θ_c = arcsin(n_clad/n_core) is the critical angle for total internal reflection. Substituting cos θ_c for sin θ_r in Snell's law, we get (n/n_core) sin θ_max = cos θ_c. Squaring both sides gives (n²/n_core²) sin² θ_max = cos² θ_c = 1 − sin² θ_c = 1 − n_clad²/n_core². Solving, we find the formula stated above: n sin θ_max = √(n_core² − n_clad²). This has the same form as the numerical aperture in other optical systems, so it has become common to define the NA of any type of fiber as NA = √(n_core² − n_clad²), where n_core is the refractive index along the central axis of the fiber. Note that when this definition is used, the connection between the numerical aperture and the acceptance angle of the fiber becomes only an approximation. In particular, "NA" defined this way is not relevant for single-mode fiber. [ 7 ] [ 8 ] One cannot define an acceptance angle for single-mode fiber based on the indices of refraction alone. The number of bound modes, the mode volume, is related to the normalized frequency and thus to the numerical aperture. In multimode fibers, the term equilibrium numerical aperture is sometimes used. This refers to the numerical aperture with respect to the extreme exit angle of a ray emerging from a fiber in which the equilibrium mode distribution has been established.
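To make the fiber and Gaussian-beam formulas concrete, here is a short sketch; the core/cladding indices and the beam parameters are typical illustrative values, not figures from the article:

```python
import math

def fiber_na(n_core: float, n_clad: float) -> float:
    """NA = sqrt(n_core^2 - n_clad^2) for a step-index fiber."""
    return math.sqrt(n_core**2 - n_clad**2)

def acceptance_angle_deg(n_core: float, n_clad: float, n_medium: float = 1.0) -> float:
    """Half-angle of the acceptance cone: n_medium * sin(theta_max) = NA."""
    return math.degrees(math.asin(fiber_na(n_core, n_clad) / n_medium))

# Illustrative multimode silica fiber: core index 1.48, cladding index 1.46.
print(round(fiber_na(1.48, 1.46), 3))              # ~0.242
print(round(acceptance_angle_deg(1.48, 1.46), 1))  # ~14.0 degrees in air

def gaussian_beam_na(wavelength_m: float, waist_radius_m: float) -> float:
    """NA ~= lambda0 / (pi * w0) for a Gaussian beam (n = 1)."""
    return wavelength_m / (math.pi * waist_radius_m)

# A 633 nm beam focused to a 10-micrometre waist radius:
print(f"{gaussian_beam_na(633e-9, 10e-6):.4f}")    # ~0.0201
```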
https://en.wikipedia.org/wiki/Numerical_aperture
Numerical cognition is a subdiscipline of cognitive science that studies the cognitive, developmental and neural bases of numbers and mathematics. As with many cognitive science endeavors, this is a highly interdisciplinary topic, and includes researchers in cognitive psychology, developmental psychology, neuroscience and cognitive linguistics. This discipline, although it may interact with questions in the philosophy of mathematics, is primarily concerned with empirical questions. A range of topics falls within the domain of numerical cognition. A variety of research has demonstrated that non-human animals, including rats, lions and various species of primates, have an approximate sense of number (referred to as "numerosity"). [ 1 ] For example, when a rat is trained to press a bar 8 or 16 times to receive a food reward, the number of bar presses will approximate a Gaussian or normal distribution with a peak around 8 or 16 bar presses. When rats are more hungry, their bar-pressing behavior is more rapid, so by showing that the peak number of bar presses is the same for either well-fed or hungry rats, it is possible to disentangle time and number of bar presses. In addition, in a few species the parallel individuation system has been shown, for example in the case of guppies, which successfully discriminated between 1 and 4 other individuals. [ 2 ] Similarly, researchers have set up hidden speakers in the African savannah to test natural (untrained) behavior in lions. [ 3 ] These speakers can play a number of lion calls, from 1 to 5. If a single lioness hears, for example, three calls from unknown lions, she will leave, while if she is with four of her sisters, they will go and explore. This suggests that not only can lions tell when they are "outnumbered", but that they can do this on the basis of signals from different sensory modalities, suggesting that numerosity is a multisensory concept. Developmental psychology studies have shown that human infants, like non-human animals, have an approximate sense of number. For example, in one study, infants were repeatedly presented with arrays of (in one block) 16 dots. Careful controls were in place to eliminate information from "non-numerical" parameters such as total surface area, luminance, circumference, and so on. After the infants had been presented with many displays containing 16 items, they habituated, or stopped looking as long at the display. Infants were then presented with a display containing 8 items, and they looked longer at the novel display. Because of the numerous controls that were in place to rule out non-numerical factors, the experimenters infer that six-month-old infants are sensitive to differences between 8 and 16. Subsequent experiments using similar methodologies showed that 6-month-old infants can discriminate numbers differing by a 2:1 ratio (8 vs. 16 or 16 vs. 32) but not by a 3:2 ratio (8 vs. 12 or 16 vs. 24). However, 10-month-old infants succeed at both the 2:1 and the 3:2 ratio, suggesting an increased sensitivity to numerosity differences with age. [ 4 ] In another series of studies, Karen Wynn showed that infants as young as five months are able to do very simple additions (e.g., 1 + 1 = 2) and subtractions (3 − 1 = 2). To demonstrate this, Wynn used a "violation of expectation" paradigm, in which infants were shown (for example) one Mickey Mouse doll going behind a screen, followed by another.
If, when the screen was lowered, infants were presented with only one Mickey (the "impossible event"), they looked longer than if they were shown two Mickeys (the "possible" event). Further studies by Karen Wynn and Koleen McCrink found that although infants' ability to compute exact outcomes only holds for small numbers, infants can compute approximate outcomes of larger addition and subtraction events (e.g., "5+5" and "10−5" events). There is debate about how much these infant systems actually contain in terms of number concepts, harking back to the classic nature versus nurture debate. Gelman & Gallistel (1978) suggested that a child innately has the concept of natural number, and only has to map this onto the words used in her language. Carey (2004, 2009) disagreed, saying that these systems can only encode large numbers in an approximate way, whereas language-based natural numbers can be exact. One promising approach is to see whether cultures that lack number words can deal with natural numbers. The results so far are mixed (e.g., Pica et al. (2004); Butterworth & Reeve (2008); Butterworth, Reeve, Reynolds & Lloyd (2008)). Human neuroimaging studies have demonstrated that regions of the parietal lobe, including the intraparietal sulcus (IPS) and the inferior parietal lobule (IPL), are activated when subjects are asked to perform calculation tasks. Based on both human neuroimaging and neuropsychology, Stanislas Dehaene and colleagues have suggested that these two parietal structures play complementary roles. The IPS is thought to house the circuitry that is fundamentally involved in numerical estimation, [ 5 ] number comparison, [ 6 ] [ 7 ] and on-line calculation, or quantity processing (often tested with subtraction), while the IPL is thought to be involved in rote memorization, such as multiplication. [ 8 ] Thus, a patient with a lesion to the IPL may be able to subtract, but not multiply, and vice versa for a patient with a lesion to the IPS. In addition to these parietal regions, regions of the frontal lobe are also active in calculation tasks. These activations overlap with regions involved in language processing, such as Broca's area, and regions involved in working memory and attention. Additionally, the inferotemporal cortex is implicated in processing the numerical shapes and symbols necessary for calculations with Arabic digits. [ 9 ] More current research has highlighted the networks involved in multiplication and subtraction tasks. Multiplication is often learned through rote memorization and verbal repetition, and neuroimaging studies have shown that multiplication uses a left-lateralized network of the inferior frontal cortex and the superior-middle temporal gyri, in addition to the IPL and IPS. [ 10 ] Subtraction is taught more through quantity manipulation and strategy use, and relies more upon the right IPS and the posterior parietal lobule. [ 11 ] Single-unit neurophysiology in monkeys has also found neurons in the frontal cortex and in the intraparietal sulcus that respond to numbers. Andreas Nieder trained monkeys to perform a "delayed match-to-sample" task. [ 12 ] [ 13 ] [ 14 ] For example, a monkey might be presented with a field of four dots, and is required to keep that in memory after the display is taken away. Then, after a delay period of several seconds, a second display is presented.
If the number on the second display matches that on the first, the monkey has to release a lever. If it is different, the monkey has to hold the lever. Neural activity recorded during the delay period showed that neurons in the intraparietal sulcus and the frontal cortex had a "preferred numerosity", exactly as predicted by behavioral studies. That is, a certain neuron might fire strongly for four, but less strongly for three or five, and even less for two or six. Thus, we say that these neurons were "tuned" for specific quantities. Note that these neuronal responses followed Weber's law, as has been demonstrated for other sensory dimensions, and consistent with the ratio dependence observed for non-human animals' and infants' numerical behavior. [ 15 ] It is important to note that while primates have remarkably similar brains to humans, there are differences in function, ability, and sophistication. They make good preliminary test subjects, but cannot reveal the small differences that result from the two species' different evolutionary tracks and environments. However, in the realm of number, they share many similarities. As identified in monkeys, neurons selectively tuned to number were identified in the bilateral intraparietal sulci and the prefrontal cortex in humans. Piazza and colleagues [ 5 ] investigated this using fMRI, presenting participants with sets of dots where they either had to make same-different judgments or larger-smaller judgments. The sets of dots consisted of base numerosities of 16 and 32 dots, with ratios of 1.25, 1.5, and 2. Deviant numbers were included in some trials in amounts larger or smaller than the base numbers. Participants displayed activation patterns similar to those Nieder found in the monkeys. [ 15 ] The intraparietal sulcus and the prefrontal cortex, also implicated in number, communicate in approximating number, and it was found in both species that the parietal neurons of the IPS had short firing latencies, whereas the frontal neurons had longer firing latencies. This supports the notion that number is first processed in the IPS and, if needed, is then transferred to the associated frontal neurons in the prefrontal cortex for further numerations and applications. Humans displayed Gaussian curves in the tuning curves of approximate magnitude. This aligned with the monkeys, displaying a similarly structured mechanism in both species, with classic Gaussian curves relative to the increasingly deviant numbers with 16 and 32, as well as habituation. The results followed Weber's law, with accuracy decreasing as the ratio between numbers became smaller. This supports the findings made by Nieder in macaque monkeys [ 14 ] and shows definitive evidence for an approximate-number logarithmic scale in humans. [ 16 ] [ 17 ] With an established mechanism for approximating non-symbolic number in both humans and primates, further investigation is needed to determine whether this mechanism is innate and present in children, which would suggest an inborn ability to process numerical stimuli, much as humans are born ready to process language. Cantlon, Brannon, Carter & Pelphrey (2006) set out to investigate this in healthy, normally developing 4-year-old children in parallel with adults. A task similar to Piazza's [ 5 ] was used in this experiment, without the judgment tasks. Dot arrays of varying size and number were used, with 16 and 32 as the base numerosities. In each block, 232 stimuli were presented, with 20 deviant numerosities of a 2.0 ratio, both larger and smaller.
For example, out of the 232 trials, 16 dots were presented in varying size and distance, but 10 of those trials had 8 dots and 10 of those trials had 32 dots, making up the 20 deviant stimuli. The same applied to the blocks with 32 as the base numerosity. To ensure the adults and children were attending to the stimuli, the experimenters placed 3 fixation points throughout the trial where the participant had to move a joystick to move forward. Their findings indicated that the adults in the experiment had significant activation of the IPS when viewing the deviant number stimuli, aligning with the findings described above. In the 4-year-olds, they found significant activation of the IPS to the deviant number stimuli, resembling the activation found in adults. There were some differences in the activations, with adults displaying more robust bilateral activation, whereas the 4-year-olds primarily showed activation in their right IPS and activated 112 fewer voxels than the adults. This suggests that at age 4, children have an established mechanism of neurons in the IPS tuned for processing non-symbolic numerosities. Other studies have gone deeper into this mechanism in children and discovered that children also represent approximate numbers on a logarithmic scale , aligning with the claims made by Piazza for adults. Izard, Sann, Spelke & Streri (2009) investigated abstract number representations in infants using a different paradigm than the previous researchers because of the nature and developmental stage of the infants. For infants, they examined abstract number with both auditory and visual stimuli in a looking-time paradigm. The sets used were 4 vs. 12, 8 vs. 16, and 4 vs. 8. The auditory stimuli consisted of tones at different frequencies with a set number of tones, with some deviant trials where the tones were shorter but more numerous, or longer and less numerous, to account for duration and its potential confounds. After the auditory stimuli were presented with 2 minutes of familiarization, the visual stimuli were presented as a congruent or incongruent array of colorful dots with facial features, which remained on the screen until the infant looked away. They found that infants looked longer at the stimuli that matched the auditory tones, suggesting that the system for approximating non-symbolic number, even across modalities, is present in infancy. What is important to note across these three particular human studies of nonsymbolic numerosities is that the capacity is present in infancy and develops over the lifetime. The honing of approximation and number-sense abilities, as indicated by improving Weber fractions across time, and the recruitment of the left IPS for the processing of computations and enumerations, lend support to the claim of a nonsymbolic number processing mechanism in human brains. There is evidence that numerical cognition is intimately related to other aspects of thought – particularly spatial cognition. [ 18 ] One line of evidence comes from studies performed on number-form synaesthetes . [ 19 ] Such individuals report that numbers are mentally represented with a particular spatial layout; others experience numbers as perceivable objects that can be visually manipulated to facilitate calculation. Behavioral studies further reinforce the connection between numerical and spatial cognition.
For instance, participants respond quicker to larger numbers if they are responding on the right side of space, and quicker to smaller numbers when on the left—the so-called "Spatial-Numerical Association of Response Codes" or SNARC effect . [ 20 ] This effect varies across culture and context, [ 21 ] however, and some research has even begun to question whether the SNARC reflects an inherent number-space association, [ 22 ] instead invoking strategic problem solving or a more general cognitive mechanism like conceptual metaphor . [ 23 ] [ 24 ] Moreover, neuroimaging studies reveal that the association between number and space also shows up in brain activity. Regions of the parietal cortex, for instance, show shared activation for both spatial and numerical processing. [ 25 ] These various lines of research suggest a strong, but flexible, connection between numerical and spatial cognition. Modification of the usual decimal representation was advocated by John Colson . The sense of complementation , missing in the usual decimal system, is expressed by signed-digit representation . Several consumer psychologists have also studied the heuristics that people use in numerical cognition. For example, Thomas & Morwitz (2009) reviewed several studies showing that the three heuristics that manifest in many everyday judgments and decisions – anchoring, representativeness, and availability – also influence numerical cognition. They identify the manifestations of these heuristics in numerical cognition as: the left-digit anchoring effect, the precision effect, and the ease of computation effect respectively. The left-digit effect refers to the observation that people tend to incorrectly judge the difference between $4.00 and $2.99 to be larger than that between $4.01 and $3.00 because of anchoring on left-most digits. The precision effect reflects the influence of the representativeness of digit patterns on magnitude judgments. Larger magnitudes are usually rounded and therefore have many zeros, whereas smaller magnitudes are usually expressed as precise numbers; so relying on the representativeness of digit patterns can make people incorrectly judge a price of $391,534 to be more attractive than a price of $390,000. The ease of computation effect shows that magnitude judgments are based not only on the output of a mental computation, but also on its experienced ease or difficulty. Usually it is easier to compare two dissimilar magnitudes than two similar magnitudes; overuse of this heuristic can make people incorrectly judge the difference to be larger for pairs with easier computations, e.g. $5.00 minus $4.00, than for pairs with difficult computations, e.g. $4.97 minus $3.96. [ 26 ] The numeracy of indigenous peoples is studied to identify universal aspects of numerical cognition in humans. Notable examples include the Pirahã people who have no words for specific numbers and the Munduruku people who only have number words up to five. Pirahã adults are unable to mark an exact number of tallies for a pile of nuts containing fewer than ten items. Anthropologist Napoleon Chagnon spent several decades studying the Yanomami in the field. He concluded that they have no need for counting in their everyday lives. Their hunters keep track of individual arrows with the same mental faculties that they use to recognize their family members. There are no known hunter-gatherer cultures that have a counting system in their language. 
The mental and lingual capabilities for numeracy are tied to the development of agriculture and, with it, large numbers of indistinguishable items. [ 27 ] The Journal of Numerical Cognition is an open-access, free-to-publish, online-only journal specifically for research in the domain of numerical cognition.
https://en.wikipedia.org/wiki/Numerical_cognition
Numerical continuation is a method of computing approximate solutions of a system of parameterized nonlinear equations, F ( u , λ ) = 0. The parameter λ {\displaystyle \lambda } is usually a real scalar and the solution u {\displaystyle \mathbf {u} } is an n -vector . For a fixed parameter value λ {\displaystyle \lambda } , F ( ⋅ , λ ) {\textstyle F(\cdot ,\lambda )} maps Euclidean n-space into itself. Often the original mapping F {\displaystyle F} is from a Banach space into itself, and the Euclidean n-space is a finite-dimensional approximation of the Banach space. A steady state , or fixed point , of a parameterized family of flows or maps is of this form, and by discretizing trajectories of a flow or iterating a map, periodic orbits and heteroclinic orbits can also be posed as solutions of F = 0 {\displaystyle F=0} . In some nonlinear systems, parameters are explicit. In others they are implicit, and the system of nonlinear equations is written F ( u ) = 0, where u {\displaystyle \mathbf {u} } is an n -vector and its image F ( u ) {\displaystyle F(\mathbf {u} )} is an ( n − 1)-vector. This formulation, without an explicit parameter space, is not usually suitable for the formulations in the following sections, because they refer to parameterized autonomous nonlinear dynamical systems of the form u ′ = F ( u , λ ). However, in an algebraic system there is no distinction between the unknowns u {\displaystyle \mathbf {u} } and the parameters. A periodic motion is a closed curve in phase space. That is, for some period T {\displaystyle T} , u ( t + T ) = u ( t ). The textbook example of a periodic motion is the undamped pendulum . If the phase space is periodic in one or more coordinates, say u ( t ) = u ( t + Ω ) {\displaystyle \mathbf {u} (t)=\mathbf {u} (t+\Omega )} , with Ω {\displaystyle \Omega } a vector of periods, then there is a second kind of periodic motion defined by u ( t + T ) = u ( t ) + N Ω for every integer N {\displaystyle N} . The first step in writing an implicit system for a periodic motion is to move the period T {\displaystyle T} from the boundary conditions to the ODE by rescaling time: u ′ = T F ( u , λ ), u (0) = u (1). The second step is to add an additional equation, a phase constraint , that can be thought of as determining the period. This is necessary because any solution of the above boundary value problem can be shifted in time by an arbitrary amount (time does not appear in the defining equations; the dynamical system is called autonomous). There are several choices for the phase constraint. If u 0 ( t ) {\displaystyle \mathbf {u} _{0}(t)} is a known periodic orbit at a parameter value λ 0 {\displaystyle \lambda _{0}} near λ {\displaystyle \lambda } , then Poincaré used the condition ⟨ u (0) − u 0 (0), u 0 ′(0) ⟩ = 0, which states that u {\displaystyle \mathbf {u} } lies in a plane which is orthogonal to the tangent vector of the closed curve. This plane is called a Poincaré section .
For a general problem a better phase constraint is an integral constraint introduced by Eusebius Doedel, which chooses the phase so that the distance between the known and unknown orbits is minimized: ∫ 0 1 ⟨ u ( t ) − u 0 ( t ), u 0 ′ ( t ) ⟩ d t = 0. A solution component Γ ( u 0 , λ 0 ) {\displaystyle \Gamma (\mathbf {u} _{0},\lambda _{0})} of the nonlinear system F {\displaystyle F} is a set of points ( u , λ ) {\displaystyle (\mathbf {u} ,\lambda )} which satisfy F ( u , λ ) = 0 {\displaystyle F(\mathbf {u} ,\lambda )=0} and are connected to the initial solution ( u 0 , λ 0 ) {\displaystyle (\mathbf {u} _{0},\lambda _{0})} by a path of solutions ( u ( s ) , λ ( s ) ) {\displaystyle (\mathbf {u} (s),\lambda (s))} for which ( u ( 0 ) , λ ( 0 ) ) = ( u 0 , λ 0 ) , ( u ( 1 ) , λ ( 1 ) ) = ( u , λ ) {\displaystyle (\mathbf {u} (0),\lambda (0))=(\mathbf {u} _{0},\lambda _{0}),\,(\mathbf {u} (1),\lambda (1))=(\mathbf {u} ,\lambda )} and F ( u ( s ) , λ ( s ) ) = 0 {\displaystyle F(\mathbf {u} (s),\lambda (s))=0} . A numerical continuation is an algorithm which takes as input a system of parametrized nonlinear equations and an initial solution ( u 0 , λ 0 ) {\displaystyle (\mathbf {u} _{0},\lambda _{0})} , F ( u 0 , λ 0 ) = 0 {\displaystyle F(\mathbf {u} _{0},\lambda _{0})=0} , and produces a set of points on the solution component Γ ( u 0 , λ 0 ) {\displaystyle \Gamma (\mathbf {u} _{0},\lambda _{0})} . A regular point of F {\displaystyle F} is a point ( u , λ ) {\displaystyle (\mathbf {u} ,\lambda )} at which the Jacobian of F {\displaystyle F} is full rank ( n ) {\displaystyle (n)} . Near a regular point the solution component is an isolated curve passing through the regular point (the implicit function theorem ). In the figure above the point ( u 0 , λ 0 ) {\displaystyle (\mathbf {u} _{0},\lambda _{0})} is a regular point. A singular point of F {\displaystyle F} is a point ( u , λ ) {\displaystyle (\mathbf {u} ,\lambda )} at which the Jacobian of F is not full rank. Near a singular point the solution component may not be an isolated curve passing through the point. The local structure is determined by higher derivatives of F {\displaystyle F} . In the figure above the point where the two blue curves cross is a singular point. In general solution components Γ {\displaystyle \Gamma } are branched curves . The branch points are singular points. Finding the solution curves leaving a singular point is called branch switching, and uses techniques from bifurcation theory ( singularity theory , catastrophe theory ). For finite-dimensional systems (as defined above) the Lyapunov-Schmidt decomposition may be used to produce two systems to which the Implicit Function Theorem applies. The Lyapunov-Schmidt decomposition uses the restriction of the system to the complement of the null space of the Jacobian and the range of the Jacobian. If the columns of the matrix Φ {\displaystyle \Phi } are an orthonormal basis for the null space of the Jacobian J {\displaystyle J} and the columns of the matrix Ψ {\displaystyle \Psi } are an orthonormal basis for the left null space of J {\displaystyle J} , then the system F ( x , λ ) = 0 {\displaystyle F(x,\lambda )=0} can be rewritten as the pair of equations ( I − Ψ Ψ T ) F ( x + Φ ξ + η , λ ) = 0 and Ψ T F ( x + Φ ξ + η , λ ) = 0, where η {\displaystyle \eta } is in the complement of the null space of J {\displaystyle J} ( Φ T η = 0 ) {\displaystyle (\Phi ^{T}\,\eta =0)} . In the first equation, which is parametrized by the null space of the Jacobian ( ξ {\displaystyle \xi } ), the Jacobian with respect to η {\displaystyle \eta } is non-singular.
So the implicit function theorem states that there is a mapping η ( ξ ) {\displaystyle \eta (\xi )} such that η ( 0 ) = 0 {\displaystyle \eta (0)=0} and ( I − Ψ Ψ T ) F ( x + Φ ξ + η ( ξ ) ) = 0 {\displaystyle (I-\Psi \Psi ^{T})F(x+\Phi \xi +\eta (\xi ))=0} . The second equation (with η ( ξ ) {\displaystyle \eta (\xi )} substituted) is called the bifurcation equation (though it may be a system of equations). The bifurcation equation has a Taylor expansion which lacks the constant and linear terms. By scaling the equations and the null space of the Jacobian of the original system a system can be found with non-singular Jacobian. The constant term in the Taylor series of the scaled bifurcation equation is called the algebraic bifurcation equation, and the implicit function theorem applied to the bifurcation equation states that for each isolated solution of the algebraic bifurcation equation there is a branch of solutions of the original problem which passes through the singular point. Another type of singular point is a turning point bifurcation , or saddle-node bifurcation , where the direction of the parameter λ {\displaystyle \lambda } reverses as the curve is followed. The red curve in the figure above illustrates a turning point. Most methods of solution of nonlinear systems of equations are iterative methods. For a particular parameter value λ 0 {\displaystyle \lambda _{0}} a mapping is repeatedly applied to an initial guess u 0 {\displaystyle \mathbf {u} _{0}} . If the method converges, and is consistent, then in the limit the iteration approaches a solution of F ( u , λ 0 ) = 0 {\displaystyle F(\mathbf {u} ,\lambda _{0})=0} . Natural parameter continuation is a very simple adaptation of the iterative solver to a parametrized problem. The solution at one value of λ {\displaystyle \lambda } is used as the initial guess for the solution at λ + Δ λ {\displaystyle \lambda +\Delta \lambda } . With Δ λ {\displaystyle \Delta \lambda } sufficiently small, the iteration applied to the initial guess should converge (a minimal code sketch is given below). One advantage of natural parameter continuation is that it uses the solution method for the problem as a black box. All that is required is that an initial solution be given (some solvers used to always start at a fixed initial guess). There has been a lot of work in the area of large scale continuation on applying more sophisticated algorithms to black box solvers (see e.g. LOCA ). However, natural parameter continuation fails at turning points, where the branch of solutions turns round. So for problems with turning points, a more sophisticated method such as pseudo-arclength continuation must be used (see below). Simplicial continuation, or piecewise linear continuation (Allgower and Georg), is based on three basic results about triangulations and the unique linear interpolant on simplices; please see the article on piecewise linear continuation for details. With these operations this continuation algorithm is easy to state (although of course an efficient implementation requires a more sophisticated approach; see [B1]). An initial simplex is assumed to be given, from a reference simplicial decomposition of R n {\displaystyle \mathbb {R} ^{n}} . The initial simplex must have at least one face which contains a zero of the unique linear interpolant on that face. The other faces of the simplex are then tested, and typically there will be one additional face with an interior zero. The initial simplex is then replaced by the simplex which lies across either face containing zero, and the process is repeated. Allgower and Georg [B1] provide a crisp, clear description of the algorithm.
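Returning to natural parameter continuation described above, here is a minimal Python sketch. The scalar toy problem F(u, λ) = u² + λ (which has a fold at λ = 0) and the simple Newton corrector are assumptions made for this illustration, not part of the original text.

def newton(F, dFdu, u, lam, tol=1e-12, maxit=20):
    """Newton corrector in u for fixed lambda (scalar problem for clarity)."""
    for _ in range(maxit):
        r = F(u, lam)
        if abs(r) < tol:
            return u
        u -= r / dFdu(u, lam)
    raise RuntimeError("corrector failed to converge")

# Toy problem with a fold (turning point) at lambda = 0
F = lambda u, lam: u * u + lam
dFdu = lambda u, lam: 2 * u

u, lam, dlam = 1.0, -1.0, 0.05
branch = [(lam, u)]
while lam < 0.5:
    lam += dlam
    try:
        u = newton(F, dFdu, u, lam)   # previous solution as initial guess
    except RuntimeError:
        print(f"continuation stalled near the turning point, lambda = {lam:.2f}")
        break
    branch.append((lam, u))

The stall near λ = 0 is precisely the turning-point failure noted above, and it motivates pseudo-arclength continuation, described next.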
Pseudo-arclength continuation is based on the observation that the "ideal" parameterization of a curve is arclength. Pseudo-arclength is an approximation of the arclength in the tangent space of the curve. The resulting modified natural continuation method makes a step in pseudo-arclength (rather than λ {\displaystyle \lambda } ). The iterative solver is required to find a point at the given pseudo-arclength, which requires appending an additional constraint (the pseudo-arclength constraint) to the n by n+1 Jacobian. This produces a square Jacobian, and if the stepsize is sufficiently small the modified Jacobian is full rank. Pseudo-arclength continuation was independently developed by Edward Riks and Gerald Wempner for finite element applications in the late 1960s, and published in journals in the early 1970s by H.B. Keller. A detailed account of these early developments is provided in the textbook by M. A. Crisfield: Nonlinear Finite Element Analysis of Solids and Structures, Vol 1: Basic Concepts, Wiley, 1991. Crisfield was one of the most active developers of this class of methods, which are by now standard procedures of commercial nonlinear finite element programs. The algorithm is a predictor-corrector method. The prediction step finds the point (in R n + 1 {\displaystyle \mathbb {R} ^{n+1}} ) which is a step Δ s {\displaystyle \Delta s} along the tangent vector at the current point. The corrector is usually Newton's method, or some variant, to solve the nonlinear system F ( u , λ ) = 0, u ˙ 0 T ( u − u 0 ) + λ ˙ 0 ( λ − λ 0 ) = Δ s, where ( u ˙ 0 , λ ˙ 0 ) {\displaystyle ({\dot {u}}_{0},{\dot {\lambda }}_{0})} is the tangent vector at ( u 0 , λ 0 ) {\displaystyle (u_{0},\lambda _{0})} . The Jacobian of this system is the bordered matrix formed by appending the row ( u ˙ 0 T , λ ˙ 0 ) below the n by n+1 Jacobian ( F u , F λ ). At regular points, where the unmodified Jacobian is full rank, the tangent vector spans the null space of the top row of this new Jacobian. Appending the tangent vector as the last row can be seen as determining the coefficient of the null vector in the general solution of the Newton system (particular solution plus an arbitrary multiple of the null vector). A variant of pseudo-arclength continuation uses, instead of the tangent at the initial point, the tangent at the current solution in the arclength constraint. This is equivalent to using the pseudo-inverse of the Jacobian in Newton's method, and allows longer steps to be made. [B17] The parameter λ {\displaystyle \lambda } in the algorithms described above is a real scalar. Most physical and design problems generally have many more than one parameter. Higher-dimensional continuation refers to the case when λ {\displaystyle \lambda } is a k-vector. The same terminology applies. A regular solution is a solution at which the Jacobian is full rank ( n ) {\displaystyle (n)} . A singular solution is a solution at which the Jacobian is less than full rank. A regular solution lies on a k-dimensional surface, which can be parameterized by a point in the tangent space (the null space of the Jacobian). This is again a straightforward application of the Implicit Function Theorem.
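For the same scalar toy problem used earlier, a minimal sketch of the pseudo-arclength predictor-corrector step described above might look as follows (NumPy is assumed; the orientation of the tangent is preserved between steps so that the method rounds the fold instead of stalling):

import numpy as np

def F(u, lam):            # toy fold problem again: F(u, lam) = u**2 + lam
    return u * u + lam

def jac(u, lam):          # row [dF/du, dF/dlam] of the n by n+1 Jacobian
    return np.array([2 * u, 1.0])

def tangent(u, lam):
    """Unit null vector of the 1x2 Jacobian [F_u, F_lam]."""
    J = jac(u, lam)
    t = np.array([-J[1], J[0]])      # orthogonal to J
    return t / np.linalg.norm(t)

def pseudo_arclength_step(u, lam, t, ds, maxit=20, tol=1e-12):
    x0 = np.array([u, lam])
    x = x0 + ds * t                   # predictor: step along the tangent
    for _ in range(maxit):
        # residual: F = 0 together with the pseudo-arclength constraint
        r = np.array([F(x[0], x[1]), t @ (x - x0) - ds])
        if np.linalg.norm(r) < tol:
            return x
        A = np.vstack([jac(x[0], x[1]), t])   # bordered (square) Jacobian
        x = x - np.linalg.solve(A, r)
    raise RuntimeError("corrector failed")

u, lam = 1.0, -1.0
t = tangent(u, lam)
for _ in range(60):
    u, lam = pseudo_arclength_step(u, lam, t, ds=0.1)
    t_new = tangent(u, lam)
    t = t_new if t_new @ t > 0 else -t_new    # keep orientation through the fold
print(u, lam)   # the branch has passed the turning point at lambda = 0

Unlike the natural parameter sketch, this loop passes smoothly through the fold, because the step is taken in (approximate) arclength rather than in λ itself.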
Numerical continuation techniques have found a great degree of acceptance in the study of chaotic dynamical systems and of various other systems which belong to the realm of catastrophe theory . The reason for such usage stems from the fact that various non-linear dynamical systems behave in a deterministic and predictable manner within a range of the parameters which appear in the equations of the system. However, beyond a certain parameter value the system starts behaving chaotically, and hence it becomes necessary to follow the parameter in order to decipher when the system starts being non-predictable, and what exactly (theoretically) makes the system become unstable. Analysis of parameter continuation can lead to more insights about stable/critical point bifurcations. Study of saddle-node, transcritical, pitchfork, period doubling, Hopf, and secondary Hopf (Neimark) bifurcations of stable solutions allows for a theoretical discussion of the circumstances and occurrences which arise at the critical points. Parameter continuation also gives a more dependable way to analyze a dynamical system, as it is more stable than interactive, time-stepped numerical solutions, especially in cases where the dynamical system is prone to blow-up at certain parameter values (or combinations of values for multiple parameters). [ 2 ] It is extremely insightful as to the presence of stable solutions (attracting or repelling) in the study of nonlinear differential equations where time stepping, for example in the form of the Crank-Nicolson algorithm, is extremely time consuming and can be unstable in cases of nonlinear growth of the dependent variables in the system. The study of turbulence is another field where numerical continuation techniques have been used to study the advent of turbulence in a system starting at low Reynolds numbers. Also, research using these techniques has provided the possibility of finding stable manifolds and bifurcations to invariant tori in the case of the restricted three-body problem in Newtonian gravity, and has also given interesting and deep insights into the behaviour of systems such as the Lorenz equations . See also the SIAM Activity Group on Dynamical Systems' software list: http://www.dynamicalsystems.org/sw/sw/ This problem, of finding the points which F maps into the origin, also appears in computer graphics as the problem of drawing contour maps (n=2) or isosurfaces (n=3). The contour with value h is the set of all solution components of F − h = 0. [B1] " Introduction to Numerical Continuation Methods ", Eugene L. Allgower and Kurt Georg, SIAM Classics in Applied Mathematics 45, 2003. [B2] " Numerical Methods for Bifurcations of Dynamical Equilibria ", Willy J. F. Govaerts, SIAM 2000. [B3] " Lyapunov-Schmidt Methods in Nonlinear Analysis and Applications ", Nikolay Sidorov, Boris Loginov, Aleksandr Sinitsyn, and Michail Falaleev, Kluwer Academic Publishers, 2002. [B4] " Methods of Bifurcation Theory ", Shui-Nee Chow and Jack K. Hale, Springer-Verlag 1982. [B5] " Elements of Applied Bifurcation Theory ", Yuri A. Kuznetsov, Springer-Verlag Applied Mathematical Sciences 112, 1995. [B6] "Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields", John Guckenheimer and Philip Holmes , Springer-Verlag Applied Mathematical Sciences 42, 1983. [B7] " Elementary Stability and Bifurcation Theory ", Gerard Iooss and Daniel D. Joseph, Springer-Verlag Undergraduate Texts in Mathematics , 1980. [B8] " Singularity Theory and an Introduction to Catastrophe Theory ", Yung-Chen Lu, Springer-Verlag, 1976. [B9] " Global Bifurcations and Chaos, Analytic Methods ", S. Wiggins, Springer-Verlag Applied Mathematical Sciences 73, 1988. [B10] " Singularities and Groups in Bifurcation Theory, volume I ", Martin Golubitsky and David G. Schaeffer, Springer-Verlag Applied Mathematical Sciences 51, 1985.
[B11] " Singularities and Groups in Bifurcation Theory, volume II ", Martin Golubitsky , Ian Stewart and David G. Schaeffer, Springer-Verlag Applied Mathematical Sciences 69, 1988. [B12] " Solving Polynomial Systems Using Continuation for Engineering and Scientific Problems ", Alexander Morgan, Prentice-Hall, Englewood Cliffs, N.J. 1987. [B13] " Pathways to Solutions, Fixed Points and Equilibria ", C. B. Garcia and W. I. Zangwill, Prentice-Hall, 1981. [B14] " The Implicit Function Theorem: History, Theory and Applications ", Steven G. Krantz and Harold R. Parks , Birkhauser, 2002. [B15] " Nonlinear Functional Analysis ", J. T. Schwartz, Gordon and Breach Science Publishers, Notes on Mathematics and its Applications, 1969. [B16] " Topics in Nonlinear Functional Analysis ", Louis Nirenberg (notes by Ralph A. Artino), AMS Courant Lecture Notes in Mathematics 6, 1974. [B17] " Newton Methods for Nonlinear Problems -- Affine Invariance and Adaptive Algorithms ", P. Deuflhard, Series Computational Mathematics 35, Springer, 2006. [A1] " An Algorithm for Piecewise Linear Approximation of Implicitly Defined Two-Dimensional Surfaces ", Eugene L. Allgower and Stefan Gnutzmann, SIAM Journal on Numerical Analysis, Volume 24, Number 2, 452—469, 1987. [A2] " Simplicial and Continuation Methods for Approximations, Fixed Points and Solutions to Systems of Equations ", E. L. Allgower and K. Georg, SIAM Review, Volume 22, 28—85, 1980. [A3] " An Algorithm for Piecewise-Linear Approximation of an Implicitly Defined Manifold ", Eugene L. Allgower and Phillip H. Schmidt, SIAM Journal on Numerical Analysis, Volume 22, Number 2, 322—346, April 1985. [A4] " Contour Tracing by Piecewise Linear Approximations ", David P. Dobkin , Silvio V. F. Levy, William P. Thurston and Allan R. Wilks, ACM Transactions on Graphics, 9(4) 389-423, 1990. [A5] " Numerical Solution of Bifurcation and Nonlinear Eigenvalue Problems ", H. B. Keller, in "Applications of Bifurcation Theory", P. Rabinowitz ed., Academic Press, 1977. [A6] " A Locally Parameterized Continuation Process ", W.C. Rheinboldt and J.V. Burkardt, ACM Transactions on Mathematical Software, Volume 9, 236—246, 1983. [A7] " Nonlinear Numerics ", E. Doedel, International Journal of Bifurcation and Chaos , 7(9):2127-2143, 1997. [A8] " Nonlinear Computation ", R. Seydel, International Journal of Bifurcation and Chaos , 7(9):2105-2126, 1997. [A9] " On a Moving Frame Algorithm and the Triangulation of Equilibrium Manifolds ", W.C. Rheinboldt, In T. Kuper, R. Seydel, and H. Troger eds. "ISNM79: Bifurcation: Analysis, Algorithms, Applications", pages 256-267. Birkhauser, 1987. [A10] " On the Computation of Multi-Dimensional Solution Manifolds of Parameterized Equations ", W.C. Rheinboldt, Numerische Mathematik, 53, 1988, pages 165-181. [A11] " On the Simplicial Approximation of Implicitly Defined Two-Dimensional Manifolds ", M. L. Brodzik and W.C. Rheinboldt, Computers and Mathematics with Applications, 28(9): 9-21, 1994. [A12] " The Computation of Simplicial Approximations of Implicitly Defined p-Manifolds ", M. L. Brodzik, Computers and Mathematics with Applications, 36(6):93-113, 1998. [A13] " New Algorithm for Two-Dimensional Numerical Continuation ", R. Melville and D. S. Mackey, Computers and Mathematics with Applications, 30(1):31-46, 1995. [A14] " Multiple Parameter Continuation: Computing Implicitly Defined k-manifolds ", M. E. Henderson, IJBC 12[3]:451-76, 2003. [A15] " MANPACK: a set of algorithms for computations on implicitly defined manifolds ", W. C.
Rheinboldt, Comput. Math. Applic. 27 pages 15–9, 1996. [A16] " CANDYS/QA - A Software System For Qualitative Analysis Of Nonlinear Dynamical Systems ", Feudel, U. and W. Jansen, Int. J. Bifurcation and Chaos, vol. 2 no. 4, pp. 773–794, World Scientific, 1992.
https://en.wikipedia.org/wiki/Numerical_continuation
In numerical analysis , numerical differentiation algorithms estimate the derivative of a mathematical function or subroutine using values of the function and perhaps other knowledge about the function. The simplest method is to use finite difference approximations. A simple two-point estimation is to compute the slope of a nearby secant line through the points ( x , f ( x )) and ( x + h , f ( x + h )) . [ 1 ] Here h is a small number representing a small change in x ; it can be either positive or negative. The slope of this line is f ( x + h ) − f ( x ) h . {\displaystyle {\frac {f(x+h)-f(x)}{h}}.} This expression is Newton 's difference quotient (also known as a first-order divided difference ). The slope of this secant line differs from the slope of the tangent line by an amount that is approximately proportional to h . As h approaches zero, the slope of the secant line approaches the slope of the tangent line. Therefore, the true derivative of f at x is the limit of the value of the difference quotient as the secant lines get closer and closer to being a tangent line: f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h . {\displaystyle f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}.} Since immediately substituting 0 for h results in the indeterminate form 0 0 {\displaystyle {\frac {0}{0}}} , calculating the derivative directly can be unintuitive. Equivalently, the slope could be estimated by employing the positions x − h and x . Another two-point formula is to compute the slope of a nearby secant line through the points ( x − h , f ( x − h )) and ( x + h , f ( x + h )) . The slope of this line is f ( x + h ) − f ( x − h ) 2 h . {\displaystyle {\frac {f(x+h)-f(x-h)}{2h}}.} This formula is known as the symmetric difference quotient . In this case the first-order errors cancel, so the slope of these secant lines differs from the slope of the tangent line by an amount that is approximately proportional to h 2 {\displaystyle h^{2}} . Hence for small values of h this is a more accurate approximation to the tangent line than the one-sided estimation. However, although the slope is being computed at x , the value of the function at x is not involved. The estimation error is given by R = − f ( 3 ) ( c ) 6 h 2 , {\displaystyle R={\frac {-f^{(3)}(c)}{6}}h^{2},} where c {\displaystyle c} is some point between x − h {\displaystyle x-h} and x + h {\displaystyle x+h} . This error does not include the rounding error due to numbers being represented and calculations being performed in limited precision. The symmetric difference quotient is employed as the method of approximating the derivative in a number of calculators, including TI-82 , TI-83 , TI-84 , TI-85 , all of which use this method with h = 0.001 . [ 2 ] [ 3 ] An important consideration in practice when the function is calculated using floating-point arithmetic of finite precision is the choice of step size, h . If chosen too small, the subtraction will yield a large rounding error . In fact, all the finite-difference formulae are ill-conditioned [ 4 ] and due to cancellation will produce a value of zero if h is small enough. [ 5 ] If chosen too large, the slope of the secant line will be computed accurately in floating point, but the secant itself may be a poor approximation of the tangent. [ 6 ] For basic central differences, the optimal step is the cube-root of machine epsilon . [ 7 ]
For the numerical derivative formula evaluated at x and x + h , a choice for h that is small without producing a large rounding error is ε x {\displaystyle {\sqrt {\varepsilon }}x} (though not when x = 0), where the machine epsilon ε is typically of the order of 2.2 × 10 −16 for double precision . [ 8 ] A formula for h that balances the rounding error against the secant error for optimum accuracy is [ 9 ] h = 2 ε | f ( x ) f ″ ( x ) | {\displaystyle h=2{\sqrt {\varepsilon \left|{\frac {f(x)}{f''(x)}}\right|}}} (though not when f ″ ( x ) = 0 {\displaystyle f''(x)=0} ), and to employ it will require knowledge of the function. For computer calculations the problems are exacerbated because, although x necessarily holds a representable floating-point number in some precision (32 or 64-bit, etc .), x + h almost certainly will not be exactly representable in that precision. This means that x + h will be changed (by rounding or truncation) to a nearby machine-representable number, with the consequence that ( x + h ) − x will not equal h ; the two function evaluations will not be exactly h apart. In this regard, since most decimal fractions are recurring sequences in binary (just as 1/3 is in decimal) a seemingly round step such as h = 0.1 will not be a round number in binary; it is 0.000110011001100... 2 A possible approach is as follows: compute a provisional step h , form the sum xph = x + h , and take the effective step to be dx = xph − x , so that the difference quotient ( f ( xph ) − f ( x ))/ dx uses the step that was actually taken. However, with computers, compiler optimization facilities may fail to attend to the details of actual computer arithmetic and instead apply the axioms of mathematics to deduce that dx and h are the same. With C and similar languages, a directive that xph is a volatile variable will prevent this. To obtain more general derivative approximation formulas for some function f ( x ) {\displaystyle f(x)} , let h > 0 {\displaystyle h>0} be a positive number close to zero. The Taylor expansion of f ( x ) {\displaystyle f(x)} about the base point x {\displaystyle x} is f ( x + h ) = f ( x ) + h f ′ ( x ) + (h 2 /2!) f ″ ( x ) + (h 3 /3!) f ‴ ( x ) + ... ( 1 ) Replacing h {\displaystyle h} by 2 h {\displaystyle 2h} gives f ( x + 2 h ) = f ( x ) + 2 h f ′ ( x ) + (4 h 2 /2!) f ″ ( x ) + (8 h 3 /3!) f ‴ ( x ) + ... ( 2 ) Multiplying identity ( 1 ) by 4 gives 4 f ( x + h ) = 4 f ( x ) + 4 h f ′ ( x ) + (4 h 2 /2!) f ″ ( x ) + (4 h 3 /3!) f ‴ ( x ) + ... ( 1' ) Subtracting identity ( 1' ) from ( 2 ) eliminates the h 2 {\displaystyle h^{2}} term: f ( x + 2 h ) − 4 f ( x + h ) = − 3 f ( x ) − 2 h f ′ ( x ) + 4 h 3 3 ! f ‴ ( x ) + . . . {\displaystyle f(x+2h)-4f(x+h)=-3f(x)-2hf'(x)+{\frac {4h^{3}}{3!}}f'''(x)+...} which can be written as f ( x + 2 h ) − 4 f ( x + h ) = − 3 f ( x ) − 2 h f ′ ( x ) + O ( h 3 ) . {\displaystyle f(x+2h)-4f(x+h)=-3f(x)-2hf'(x)+O(h^{3}).} Rearranging terms gives f ′ ( x ) = − 3 f ( x ) + 4 f ( x + h ) − f ( x + 2 h ) 2 h + O ( h 2 ) , {\displaystyle f'(x)={\frac {-3f(x)+4f(x+h)-f(x+2h)}{2h}}+O(h^{2}),} which is called the three-point forward difference formula for the derivative. Using a similar approach, one can show f ′ ( x ) = f ( x + h ) − f ( x − h ) 2 h + O ( h 2 ) {\displaystyle f'(x)={\frac {f(x+h)-f(x-h)}{2h}}+O(h^{2})} which is called the three-point central difference formula , and f ′ ( x ) = f ( x − 2 h ) − 4 f ( x − h ) + 3 f ( x ) 2 h + O ( h 2 ) {\displaystyle f'(x)={\frac {f(x-2h)-4f(x-h)+3f(x)}{2h}}+O(h^{2})} which is called the three-point backward difference formula . By a similar approach, the five point midpoint approximation formula can be derived as: [ 10 ] f ′ ( x ) = − f ( x + 2 h ) + 8 f ( x + h ) − 8 f ( x − h ) + f ( x − 2 h ) 12 h + O ( h 4 ) . {\displaystyle f'(x)={\frac {-f(x+2h)+8f(x+h)-8f(x-h)+f(x-2h)}{12h}}+O(h^{4}).} Consider approximating the derivative of f ( x ) = x sin ⁡ x {\displaystyle f(x)=x\sin {x}} at the point x 0 = π 4 {\displaystyle x_{0}={\frac {\pi }{4}}} .
Since f ′ ( x ) = sin ⁡ x + x cos ⁡ x {\displaystyle f'(x)=\sin {x}+x\cos {x}} , the exact value is f ′ ( π 4 ) = sin ⁡ π 4 + π 4 cos ⁡ π 4 = 1 2 + π 4 2 ≈ 1.2624671484563432. {\displaystyle f'({\frac {\pi }{4}})=\sin {\frac {\pi }{4}}+{\frac {\pi }{4}}\cos {\frac {\pi }{4}}={\frac {1}{\sqrt {2}}}+{\frac {\pi }{4{\sqrt {2}}}}\approx 1.2624671484563432.} The following is an example of a Python implementation for finding derivatives numerically for f ( x ) = 2 x 1 + x {\displaystyle f(x)={\frac {2x}{1+{\sqrt {x}}}}} using the various three-point difference formulas at x 0 = 4 {\displaystyle x_{0}=4} . The function func has derivative func_prime .
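(The code block itself did not survive extraction; the sketch below is a reconstruction consistent with the surrounding text, keeping the names func and func_prime that the text mentions.)

import math

def func(x):
    """f(x) = 2x / (1 + sqrt(x))."""
    return 2 * x / (1 + math.sqrt(x))

def func_prime(x):
    """Exact derivative: f'(x) = (2 + sqrt(x)) / (1 + sqrt(x))**2."""
    s = math.sqrt(x)
    return (2 + s) / (1 + s) ** 2

def forward(f, x, h):
    # three-point forward difference, O(h^2)
    return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)

def central(f, x, h):
    # three-point central difference, O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def backward(f, x, h):
    # three-point backward difference, O(h^2)
    return (f(x - 2 * h) - 4 * f(x - h) + 3 * f(x)) / (2 * h)

x0, h = 4.0, 0.25
exact = func_prime(x0)
for name, rule in (("forward", forward), ("central", central), ("backward", backward)):
    approx = rule(func, x0, h)
    print(f"{name:8s} {approx:.10f}  error {abs(approx - exact):.2e}")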
Using Newton's difference quotient, f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h {\displaystyle f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}} the following can be shown [ 11 ] (for n > 0 ): f ( n ) ( x ) = lim h → 0 1 h n ∑ k = 0 n ( − 1 ) k + n ( n k ) f ( x + k h ) {\displaystyle f^{(n)}(x)=\lim _{h\to 0}{\frac {1}{h^{n}}}\sum _{k=0}^{n}(-1)^{k+n}{\binom {n}{k}}f(x+kh)} The classical finite-difference approximations for numerical differentiation are ill-conditioned. However, if f {\displaystyle f} is a holomorphic function , real-valued on the real line, which can be evaluated at points in the complex plane near x {\displaystyle x} , then there are stable methods. For example, [ 5 ] the first derivative can be calculated by the complex-step derivative formula: [ 12 ] [ 13 ] [ 14 ] f ′ ( x ) = ℑ ( f ( x + i h ) ) h + O ( h 2 ) , i 2 := − 1. {\displaystyle f'(x)={\frac {\Im (f(x+\mathrm {i} h))}{h}}+O(h^{2}),\quad \mathrm {i^{2}} :=-1.} The recommended step size to obtain accurate derivatives for a range of conditions is h = 10 − 200 {\displaystyle h=10^{-200}} . [ 6 ] This formula can be obtained by Taylor series expansion: f ( x + i h ) = f ( x ) + i h f ′ ( x ) − 1 2 ! h 2 f ″ ( x ) − i 3 ! h 3 f ( 3 ) ( x ) + ⋯ . {\displaystyle f(x+\mathrm {i} h)=f(x)+\mathrm {i} hf'(x)-{\tfrac {1}{2!}}h^{2}f''(x)-{\tfrac {\mathrm {i} }{3!}}h^{3}f^{(3)}(x)+\cdots .} The complex-step derivative formula is only valid for calculating first-order derivatives. A generalization of the above for calculating derivatives of any order employs multicomplex numbers , resulting in multicomplex derivatives. [ 15 ] [ 16 ] [ 17 ] f ( n ) ( x ) ≈ C n 2 − 1 ( n ) ( f ( x + i ( 1 ) h + ⋯ + i ( n ) h ) ) h n {\displaystyle f^{(n)}(x)\approx {\frac {{\mathcal {C}}_{n^{2}-1}^{(n)}(f(x+\mathrm {i} ^{(1)}h+\cdots +\mathrm {i} ^{(n)}h))}{h^{n}}}} where the i ( k ) {\displaystyle \mathrm {i} ^{(k)}} denote the multicomplex imaginary units; i ( 1 ) ≡ i {\displaystyle \mathrm {i} ^{(1)}\equiv \mathrm {i} } . The C k ( n ) {\displaystyle {\mathcal {C}}_{k}^{(n)}} operator extracts the k {\displaystyle k} th component of a multicomplex number of level n {\displaystyle n} , e.g., C 0 ( n ) {\displaystyle {\mathcal {C}}_{0}^{(n)}} extracts the real component and C n 2 − 1 ( n ) {\displaystyle {\mathcal {C}}_{n^{2}-1}^{(n)}} extracts the last, “most imaginary” component. The method can be applied to mixed derivatives, e.g. for a second-order derivative ∂ 2 f ( x , y ) ∂ x ∂ y ≈ C 3 ( 2 ) ( f ( x + i ( 1 ) h , y + i ( 2 ) h ) ) h 2 {\displaystyle {\frac {\partial ^{2}f(x,y)}{\partial x\,\partial y}}\approx {\frac {{\mathcal {C}}_{3}^{(2)}(f(x+\mathrm {i} ^{(1)}h,y+\mathrm {i} ^{(2)}h))}{h^{2}}}} A C++ implementation of multicomplex arithmetics is available. [ 18 ] In general, derivatives of any order can be calculated using Cauchy's integral formula : [ 19 ] {\displaystyle f^{(n)}(a)={\frac {n!}{2\pi i}}\oint _{\gamma }{\frac {f(z)}{(z-a)^{n+1}}}\,\mathrm {d} z,} where the integration is done numerically . Using complex variables for numerical differentiation was started by Lyness and Moler in 1967. [ 20 ] Their algorithm is applicable to higher-order derivatives. A method based on numerical inversion of a complex Laplace transform was developed by Abate and Dubner. [ 21 ] An algorithm that can be used without requiring knowledge about the method or the character of the function was developed by Fornberg. [ 4 ] Differential quadrature is the approximation of derivatives by using weighted sums of function values. [ 22 ] [ 23 ] Differential quadrature is of practical interest because it allows one to compute derivatives from noisy data. The name is in analogy with quadrature , meaning numerical integration , where weighted sums are used in methods such as Simpson's rule or the trapezoidal rule . There are various methods for determining the weight coefficients, for example, the Savitzky–Golay filter . Differential quadrature is used to solve partial differential equations . There are further methods for computing derivatives from noisy data. [ 24 ]
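As a closing illustration of the complex-step formula discussed above, a minimal sketch (cmath.sin merely stands in for a user-supplied function that accepts complex arguments):

import cmath

def complex_step_derivative(f, x, h=1e-200):
    # f must be evaluable at complex points near x and real-valued on the real line
    return f(complex(x, h)).imag / h

print(complex_step_derivative(cmath.sin, 1.0))  # ~cos(1) = 0.5403023058681398

Because there is no subtraction of nearly equal quantities, the tiny step h = 10^-200 recommended in the text causes no cancellation, unlike the finite-difference formulas.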
https://en.wikipedia.org/wiki/Numerical_differentiation
A numerical digit (often shortened to just digit ) or numeral is a single symbol used alone (such as "1"), or in combinations (such as "15"), to represent numbers in positional notation , such as the common base 10 . The name "digit" originates from the Latin digiti meaning fingers. [ 1 ] For any numeral system with an integer base , the number of different digits required is the absolute value of the base. For example, decimal (base 10) requires ten digits (0 to 9), and binary (base 2) requires only two digits (0 and 1). Bases greater than 10 require more than 10 digits, for instance hexadecimal (base 16) requires 16 digits (usually 0 to 9 and A to F). In a basic digital system, a numeral is a sequence of digits, which may be of arbitrary length. Each position in the sequence has a place value , and each digit has a value. The value of the numeral is computed by multiplying each digit in the sequence by its place value, and summing the results. Each digit in a number system represents an integer. For example, in decimal the digit "1" represents the integer one , and in the hexadecimal system, the letter "A" represents the number ten . A positional number system has one unique digit for each integer from zero up to, but not including, the radix of the number system. Thus in the positional decimal system, the numbers 0 to 9 can be expressed using their respective numerals "0" to "9" in the rightmost "units" position. The number 12 is expressed with the numeral "2" in the units position, and with the numeral "1" in the "tens" position, to the left of the "2", while the number 312 is expressed with three numerals: "3" in the "hundreds" position, "1" in the "tens" position, and "2" in the "units" position. The decimal numeral system uses a decimal separator , commonly a period in English, or a comma in other European languages, [ 2 ] to denote the "ones place" or "units place", [ 3 ] [ 4 ] [ 5 ] which has a place value of one. Each successive place to the left of this has a place value equal to the place value of the previous digit times the base . Similarly, each successive place to the right of the separator has a place value equal to the place value of the previous digit divided by the base. For example, in the numeral 10.34 (written in base 10 ), the "1" is in the tens place, the "0" in the ones place, the "3" in the tenths place, and the "4" in the hundredths place. The total value of the number is 1 ten, 0 ones, 3 tenths, and 4 hundredths. The zero, which contributes no value to the number, indicates that the 1 is in the tens place rather than the ones place. The place value of any given digit in a numeral can be given by a simple calculation, which in itself is a complement to the logic behind numeral systems. The calculation involves the multiplication of the given digit by the base raised by the exponent n − 1 , where n represents the position of the digit from the separator; the value of n is positive (+), but this is only if the digit is to the left of the separator. And to the right, the digit is multiplied by the base raised by a negative (−) n . For example, in the number 10.34 (written in base 10), the "1" is the second digit to the left of the separator ( n = 2), so it contributes 1 × 10 2−1 = 10, while the "3" is the first digit to the right of the separator, so it contributes 3 × 10 −1 = 0.3.
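As a small illustration of how a numeral's value is computed from its digits and place values, here is a Python sketch (the helper name numeral_value is an assumption for this example):

def numeral_value(digits, base):
    """Value of a numeral given as a list of digit values, most significant first."""
    value = 0
    for d in digits:
        value = value * base + d   # shift left one place, then add the new digit
    return value

print(numeral_value([3, 1, 2], 10))   # 312 = 3*100 + 1*10 + 2*1
print(numeral_value([10, 2], 16))     # hexadecimal "A2" = 10*16 + 2 = 162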
The first true written positional numeral system is considered to be the Hindu–Arabic numeral system . This system was established by the 7th century in India, [ 11 ] but was not yet in its modern form because the use of the digit zero had not yet been widely accepted. Instead of a zero, sometimes the digits were marked with dots to indicate their significance, or a space was used as a placeholder. The first widely acknowledged use of zero was in 876. [ 12 ] The original numerals were very similar to the modern ones, even down to the glyphs used to represent digits. [ 11 ] By the 13th century, Western Arabic numerals were accepted in European mathematical circles ( Fibonacci used them in his Liber Abaci ). They began to enter common use in the 15th century. [ 13 ] By the end of the 20th century virtually all non-computerized calculations in the world were done with Arabic numerals, which have replaced native numeral systems in most cultures. The exact age of the Maya numerals is unclear, but it is possible that it is older than the Hindu–Arabic system. The system was vigesimal (base 20), so it has twenty digits. The Mayas used a shell symbol to represent zero. Numerals were written vertically, with the ones place at the bottom. The Mayas had no equivalent of the modern decimal separator , so their system could not represent fractions. The Thai numeral system is identical to the Hindu–Arabic numeral system except for the symbols used to represent digits. The use of these digits is less common in Thailand than it once was, but they are still used alongside Arabic numerals. The rod numerals, the written forms of counting rods once used by Chinese and Japanese mathematicians, are a decimal positional system able to represent not only zero but also negative numbers. Counting rods themselves predate the Hindu–Arabic numeral system. The Suzhou numerals are variants of rod numerals. The binary (base 2), octal (base 8), and hexadecimal (base 16) systems, extensively used in computer science , all follow the conventions of the Hindu–Arabic numeral system . [ 14 ] The binary system uses only the digits "0" and "1", while the octal system uses the digits from "0" through "7". The hexadecimal system uses all the digits from the decimal system, plus the letters "A" through "F", which represent the numbers 10 to 15 respectively. [ 15 ] When the binary system is used, the term "bit(s)" is typically used as an alternative for "digit(s)", being a portmanteau of the term "binary digit". The ternary and balanced ternary systems have sometimes been used. They are both base 3 systems. [ 16 ] Balanced ternary is unusual in having the digit values 1, 0 and −1. Balanced ternary turns out to have some useful properties and the system has been used in the experimental Russian Setun computers. [ 17 ] Several authors in the last 300 years have noted a facility of positional notation that amounts to a modified decimal representation . Some advantages are cited for use of numerical digits that represent negative values. In 1840 Augustin-Louis Cauchy advocated use of signed-digit representation of numbers, and in 1928 Florian Cajori presented his collection of references for negative numerals . The concept of signed-digit representation has also been taken up in computer design . Despite the essential role of digits in describing numbers, they are relatively unimportant to modern mathematics . [ 18 ] Nevertheless, there are a few important mathematical concepts that make use of the representation of a number as a sequence of digits. The digital root is the single-digit number obtained by summing the digits of a given number, then summing the digits of the result, and so on until a single-digit number is obtained. [ 19 ] Casting out nines is a procedure for checking arithmetic done by hand. To describe it, let f ( x ) {\displaystyle f(x)} represent the digital root of x {\displaystyle x} , as described above.
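A minimal sketch of this digital-root function f in Python (the name digital_root is an assumption; the check at the end illustrates the casting-out-nines identity discussed next):

def digital_root(x):
    """Repeatedly sum the decimal digits until a single digit remains."""
    while x >= 10:
        x = sum(int(d) for d in str(x))
    return x

a, b = 12345, 67890
assert digital_root(a + b) == digital_root(digital_root(a) + digital_root(b))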
Casting out nines makes use of the fact that if A + B = C {\displaystyle A+B=C} , then f ( f ( A ) + f ( B ) ) = f ( C ) {\displaystyle f(f(A)+f(B))=f(C)} . In the process of casting out nines, both sides of the latter equation are computed, and if they are not equal, the original addition must have been faulty. [ 20 ] Repunits are integers that are represented with only the digit 1. For example, 1111 (one thousand, one hundred and eleven) is a repunit. Repdigits are a generalization of repunits; they are integers represented by repeated instances of the same digit. For example, 333 is a repdigit. The primality of repunits is of interest to mathematicians. [ 21 ] Palindromic numbers are numbers that read the same when their digits are reversed. [ 22 ] A Lychrel number is a positive integer that never yields a palindromic number when subjected to the iterative process of being added to itself with digits reversed. [ 23 ] The question of whether there are any Lychrel numbers in base 10 is an open problem in recreational mathematics ; the smallest candidate is 196 . [ 24 ] Counting aids, especially the use of body parts (counting on fingers), were certainly used in prehistoric times as today. There are many variations. Besides counting ten fingers, some cultures have counted knuckles, the space between fingers, and toes as well as fingers. The Oksapmin culture of New Guinea uses a system of 27 upper body locations to represent numbers. [ 25 ] To preserve numerical information, tallies carved in wood, bone, and stone have been used since prehistoric times. [ 26 ] Stone age cultures, including ancient indigenous American groups, used tallies for gambling, personal services, and trade-goods. A method of preserving numeric information in clay was invented by the Sumerians between 8000 and 3500 BC. [ 27 ] This was done with small clay tokens of various shapes that were strung like beads on a string. Beginning about 3500 BC, clay tokens were gradually replaced by number signs impressed with a round stylus at different angles in clay tablets (originally containers for tokens) which were then baked. About 3100  BC, written numbers were dissociated from the things being counted and became abstract numerals. Between 2700 and 2000 BC, in Sumer, the round stylus was gradually replaced by a reed stylus that was used to press wedge-shaped cuneiform signs in clay. These cuneiform number signs resembled the round number signs they replaced and retained the additive sign-value notation of the round number signs. These systems gradually converged on a common sexagesimal number system; this was a place-value system consisting of only two impressed marks, the vertical wedge and the chevron, which could also represent fractions. [ 28 ] This sexagesimal number system was fully developed at the beginning of the Old Babylonia period (about 1950 BC) and became standard in Babylonia. [ 29 ] Sexagesimal numerals were a mixed radix system that retained the alternating base 10 and base 6 in a sequence of cuneiform vertical wedges and chevrons. By 1950 BC, this was a positional notation system. Sexagesimal numerals came to be widely used in commerce, but were also used in astronomical and other calculations. This system was exported from Babylonia and used throughout Mesopotamia, and by every Mediterranean nation that used standard Babylonian units of measure and counting, including the Greeks, Romans and Egyptians. 
Babylonian-style sexagesimal numeration is still used in modern societies to measure time (minutes per hour) and angles (degrees). [ 30 ] In China , armies and provisions were counted using modular tallies of prime numbers . Unique numbers of troops and measures of rice appear as unique combinations of these tallies. A great convenience of modular arithmetic is that it is easy to multiply. [ 31 ] This makes the use of modular arithmetic for provisions especially attractive. Conventional tallies are quite difficult to multiply and divide. In modern times modular arithmetic is sometimes used in digital signal processing . [ 32 ] The oldest Greek system was that of the Attic numerals , [ 33 ] but in the 4th century BC they began to use a quasidecimal alphabetic system (see Greek numerals ). [ 34 ] Jews began using a similar system ( Hebrew numerals ), with the oldest examples known being coins from around 100 BC. [ 35 ] The Roman Empire used tallies written on wax, papyrus and stone, and roughly followed the Greek custom of assigning letters to various numbers. The Roman numerals system remained in common use in Europe until positional notation came into common use in the 16th century. [ 36 ] The Maya of Central America used a mixed base 18 and base 20 system, possibly inherited from the Olmec , including advanced features such as positional notation and a zero . [ 37 ] They used this system to make advanced astronomical calculations, including highly accurate calculations of the length of the solar year and the orbit of Venus . [ 38 ] The Incan Empire ran a large command economy using quipu , tallies made by knotting colored fibers. [ 39 ] Knowledge of the encodings of the knots and colors was suppressed by the Spanish conquistadors in the 16th century, and has not survived, although simple quipu-like recording devices are still used in the Andean region. Some authorities believe that positional arithmetic began with the wide use of counting rods in China. [ 40 ] The earliest written positional records seem to be rod calculus results in China around 400. Zero was first used in India in the 7th century CE by Brahmagupta . [ 41 ] The modern positional Arabic numeral system was developed by mathematicians in India , and passed on to Muslim mathematicians , along with astronomical tables brought to Baghdad by an Indian ambassador around 773. [ 42 ] From India , the thriving trade between Islamic sultans and Africa carried the concept to Cairo . Arabic mathematicians extended the system to include decimal fractions , and Muḥammad ibn Mūsā al-Ḵwārizmī wrote an important work about it in the 9th century. [ 43 ] The modern Arabic numerals were introduced to Europe with the translation of this work in the 12th century in Spain and Leonardo of Pisa 's Liber Abaci of 1202. [ 44 ] In Europe, the complete Indian system with the zero was derived from the Arabs in the 12th century. [ 45 ] The binary system (base 2) was propagated in the 17th century by Gottfried Leibniz . [ 46 ] Leibniz had developed the concept early in his career, and had revisited it when he reviewed a copy of the I Ching from China. [ 47 ] Binary numbers came into common use in the 20th century because of computer applications. [ 46 ]
https://en.wikipedia.org/wiki/Numerical_digit
In software engineering and mathematics , numerical error is the error that arises in numerical computations . It can be the combined effect of two kinds of error in a calculation. The first is round-off error , caused by the finite precision of computations involving floating-point numbers. The second, usually called truncation error , is the difference between the exact mathematical solution and the approximate solution obtained when simplifications are made to the mathematical equations to make them more amenable to calculation. Floating-point numerical error is often measured in ULP ( unit in the last place ).
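A short Python illustration of the two kinds of error (the forward-difference example is a generic illustration, not from the article; math.ulp requires Python 3.9 or later):

import math

# Round-off: 0.1 and 0.2 are not exactly representable in binary floating point.
print(0.1 + 0.2 == 0.3)        # False
print(math.ulp(1.0))           # spacing of floats near 1.0, about 2.22e-16

# Truncation: the forward difference replaces a limit by a finite step h.
f, x = math.sin, 1.0
for h in (1e-1, 1e-5, 1e-13):
    approx = (f(x + h) - f(x)) / h
    print(h, abs(approx - math.cos(x)))   # error first shrinks (truncation dominates),
                                          # then grows again (round-off dominates)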
https://en.wikipedia.org/wiki/Numerical_error
A numerical model of the Solar System is a set of mathematical equations which, when solved, give the approximate positions of the planets as a function of time. Attempts to create such a model established the more general field of celestial mechanics . The results of such a simulation can be compared with past measurements to check for accuracy and then be used to predict future positions. Its main use therefore is in the preparation of almanacs. The simulations can be done in either Cartesian or spherical coordinates. The former are easier, but extremely calculation intensive, and only practical on an electronic computer. As such only the latter was used in former times. Strictly speaking, the latter was not much less calculation intensive, but it was possible to start with some simple approximations and then to add perturbations , as many as needed to reach the desired accuracy. In essence this mathematical simulation of the Solar System is a form of the N-body problem . The symbol N represents the number of bodies, which can grow quite large if one includes the Sun, 8 planets, dozens of moons, and countless planetoids, comets and so forth. However, the influence of the Sun on any other body is so large, and the influence of all the other bodies on each other so small, that the problem can be reduced to the analytically solvable 2-body problem. The result for each planet is an orbit, a simple description of its position as a function of time. Once this is solved, the influences that moons and planets have on each other are added as small corrections. These are small compared to a full planetary orbit. Some corrections might still be several degrees large, while measurements can be made to an accuracy of better than 1″. Although this method is no longer used for simulations, it is still useful to find an approximate ephemeris , as one can take the relatively simple main solution, perhaps add a few of the largest perturbations, and arrive without too much effort at the desired planetary position. The disadvantage is that perturbation theory is very advanced mathematics. The modern method consists of numerical integration in 3-dimensional space. One starts with a high accuracy value for the position ( x , y , z ) and the velocity ( v x , v y , v z ) for each of the bodies involved. When the mass of each body is also known, the acceleration ( a x , a y , a z ) can be calculated from Newton's law of gravitation . Each body attracts each other body, the total acceleration being the sum of all these attractions. Next one chooses a small time-step Δ t and applies Newton's second law of motion . The acceleration multiplied with Δ t gives a correction to the velocity. The velocity multiplied with Δ t gives a correction to the position. This procedure is repeated for all other bodies. The result is a new value for position and velocity for all bodies. Then, using these new values, one starts over the whole calculation for the next time-step Δ t . Repeating this procedure often enough, one ends up with a description of the positions of all bodies over time. The advantage of this method is that for a computer it is a very easy job to do, and it yields highly accurate results for all bodies at the same time, doing away with the complex and difficult procedures for determining perturbations. A sketch of one such time step is given below.
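A minimal Python sketch of the time step just described (toy units with G = 1; the function name euler_step and the state layout are assumptions; note that the velocity is updated first and the new velocity then moves the position, the so-called semi-implicit Euler variant):

import math

def euler_step(r, v, m, dt):
    """One time step for N gravitating bodies; r, v are lists of [x, y, z], m a list of masses."""
    n = len(m)
    a = [[0.0, 0.0, 0.0] for _ in range(n)]
    for j in range(n):                  # acceleration of body j ...
        for i in range(n):              # ... summed over every other body i
            if i == j:
                continue
            d = [r[i][k] - r[j][k] for k in range(3)]
            dist3 = math.dist(r[i], r[j]) ** 3
            for k in range(3):
                a[j][k] += m[i] * d[k] / dist3
    for j in range(n):                  # velocity correction, then position correction
        for k in range(3):
            v[j][k] += a[j][k] * dt
            r[j][k] += v[j][k] * dt
    return r, v

# Two equal masses in mutual circular orbit (G = 1 units)
r = [[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]]
v = [[0.0, -0.5, 0.0], [0.0, 0.5, 0.0]]
m = [0.5, 0.5]
for _ in range(1000):
    r, v = euler_step(r, v, m, 0.01)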
The disadvantage is that one must start with highly accurate figures in the first place, or the results will drift away from reality over time; that one obtains x, y, z positions which often must first be transformed into more practical ecliptical or equatorial coordinates before they can be used; and that it is an all-or-nothing approach. If one wants to know the position of one planet at one particular time, then all the other planets and all intermediate time-steps have to be calculated too.

In the previous section it was assumed that acceleration remains constant over a small timestep Δt, so that the calculation reduces to simply adding V × Δt to R, and so forth. In reality this is not the case, except when one takes Δt so small that the number of steps to be taken would be prohibitively high. Because while at any time the position is changed by the acceleration, the value of the acceleration is determined by the instantaneous position, evidently a full integration is needed. Several methods are available. First, note the equations needed:

{\displaystyle {\vec {a}}_{j}=\sum _{i\neq j}^{n}G{\frac {M_{i}}{|{\vec {r}}_{i}-{\vec {r}}_{j}|^{3}}}({\vec {r}}_{i}-{\vec {r}}_{j})}

This equation describes the acceleration that all bodies i, running from 1 to N, exert on a particular body j. It is a vector equation, so it is to be split into 3 equations, one for each of the X, Y, Z components, yielding for the X component:

{\displaystyle (a_{j})_{x}=\sum _{i\neq j}^{n}G{\frac {M_{i}}{((x_{i}-x_{j})^{2}+(y_{i}-y_{j})^{2}+(z_{i}-z_{j})^{2})^{3/2}}}(x_{i}-x_{j})}

with the additional relationships {\displaystyle a_{x}={\frac {dv_{x}}{dt}}}, {\displaystyle v_{x}={\frac {dx}{dt}}}, and likewise for Y and Z. The former equation (gravitation) may look forbidding, but its calculation is no problem. The latter equations (the laws of motion) seem simpler, yet they cannot be evaluated directly: computers cannot integrate, they cannot work with infinitesimal values, so instead of dt we use Δt, bringing the resulting variable to the left:

{\displaystyle \Delta v_{x}=a_{x}\Delta t}, and {\displaystyle \Delta x=v_{x}\Delta t}

Remember that a is still a function of time. The simplest way to solve these is the Euler algorithm, which in essence is the linear addition described above (a one-dimensional sketch in a general-purpose language is given at the end of this section). Since the acceleration used for the whole duration of the timestep is, in essence, the one it had at the beginning of the timestep, this simple method has no high accuracy. Much better results are achieved by taking a mean acceleration, the average between the beginning value and the expected (unperturbed) end value. Still better results can be expected by taking intermediate values; this is what happens in the Runge-Kutta method, of which the variants of order 4 or 5 are the most useful. The most commonly used method is the leapfrog method, owing to its good long-term energy conservation.

A completely different method is the use of Taylor series. In that case we write:

{\displaystyle r=r_{0}+r'_{0}t+r''_{0}{\frac {t^{2}}{2!}}+...}

but rather than developing up to some higher derivative in r only, one can develop in r and v (that is, r′) by writing {\displaystyle r=fr_{0}+gr'_{0}} and then expanding the factors f and g in a series.

To calculate the accelerations, the gravitational attraction of each body on each other body must be taken into account. As a consequence the amount of calculation in the simulation goes up with the square of the number of bodies: doubling the number of bodies increases the work by a factor of four. To increase the accuracy of the simulation, not only must more decimals be carried, but also smaller timesteps taken, again quickly increasing the amount of work. Evidently tricks must be applied to reduce the amount of work. Some of these tricks are given here.

By far the most important trick is the use of a proper integration method, as already outlined above.

The choice of units is important. Rather than working in SI units, which would make some values extremely small and some extremely large, all units should be scaled so that they are in the neighbourhood of 1. For example, for distances in the Solar System the astronomical unit is most straightforward. If this is not done, one is almost certain to see the simulation abandoned in the middle of a calculation on a floating-point overflow or underflow, and even if it is not that bad, accuracy is still likely to be lost due to truncation errors.

If N is large (not so much in Solar System simulations, but more in galaxy simulations) it is customary to create dynamic groups of bodies. All bodies in a particular direction and at a large distance from the reference body, which is being calculated at that moment, are taken together and their gravitational attraction is averaged over the whole group.

The total amounts of energy and angular momentum of a closed system are conserved quantities. By calculating these quantities after every time step, the simulation can be programmed to increase the stepsize Δt if they do not change significantly, and to reduce it if they start to do so. Combining the bodies into groups as in the previous trick, and applying larger and thus fewer timesteps for the faraway bodies than for the closer ones, is also possible.

To avoid an excessively rapid change of the acceleration when a particular body is close to the reference body, it is customary to introduce a small softening parameter e so that

{\displaystyle a={\frac {GM}{r^{2}+e}}}

If the highest possible accuracy is needed, the calculations become much more complex. In the case of comets, nongravitational forces, such as radiation pressure and gas drag, must be taken into account. In the case of Mercury, and of the other planets in long-term calculations, relativistic effects cannot be ignored. Then the total energy is also no longer a constant (because the energy-momentum four-vector is). The finite speed of light also makes it important to allow for light-time effects, both classical and relativistic. Planets can no longer be considered as particles, but their shape and density must also be considered. For example, the flattening of the Earth causes precession, which causes the axial tilt to change, which affects the long-term movements of all planets. Long-term models, going beyond a few tens of millions of years, are not possible due to the lack of stability of the Solar System.
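The integration loop described above can be sketched in a few lines of Python (an illustration under assumed toy values, not any production ephemeris code); it implements the simple Euler step and the leapfrog variant mentioned in the text, with units scaled as recommended (AU, years, solar masses, so that G = 4π²).

```python
import numpy as np

# Gravitational constant in AU^3 / (solar mass * year^2), following the
# unit-scaling advice above so that all quantities stay near 1.
G = 4 * np.pi ** 2

def accelerations(r, masses):
    """Sum the Newtonian attraction of every body on every other body.
    r has shape (N, 3); the result has the same shape."""
    a = np.zeros_like(r)
    for j in range(len(r)):
        for i in range(len(r)):
            if i != j:
                d = r[i] - r[j]
                a[j] += G * masses[i] * d / np.linalg.norm(d) ** 3
    return a

def euler_step(r, v, masses, dt):
    """One Euler step: the acceleration at the start of the timestep
    is used for the whole timestep, so accuracy is low."""
    a = accelerations(r, masses)
    return r + v * dt, v + a * dt

def leapfrog_step(r, v, masses, dt):
    """One kick-drift-kick leapfrog step, favoured in practice for its
    good long-term energy conservation."""
    v_half = v + 0.5 * dt * accelerations(r, masses)
    r_new = r + dt * v_half
    v_new = v_half + 0.5 * dt * accelerations(r_new, masses)
    return r_new, v_new

# Sun plus an Earth-like planet on a circular orbit (illustrative values).
masses = np.array([1.0, 3e-6])
r = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
v = np.array([[0.0, 0.0, 0.0], [0.0, 2 * np.pi, 0.0]])
dt = 0.001                       # years
for _ in range(1000):            # one year of simulated time
    r, v = leapfrog_step(r, v, masses, dt)
print(r[1])                      # planet returns near its starting point
```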
https://en.wikipedia.org/wiki/Numerical_model_of_the_Solar_System
In the mathematical field of linear algebra and convex analysis, the numerical range or field of values of a complex n × n matrix A is the set

{\displaystyle W(A)=\left\{\mathbf {x} ^{*}A\mathbf {x} :\mathbf {x} \in \mathbb {C} ^{n},\ \mathbf {x} ^{*}\mathbf {x} =1\right\}}

where {\displaystyle \mathbf {x} ^{*}} denotes the conjugate transpose of the vector {\displaystyle \mathbf {x} }. The numerical range includes, in particular, the diagonal entries of the matrix (obtained by choosing x equal to the unit vectors along the coordinate axes) and the eigenvalues of the matrix (obtained by choosing x equal to the eigenvectors). In engineering, numerical ranges are used as a rough estimate of the eigenvalues of A. Recently, generalizations of the numerical range have been used to study quantum computing. A related concept is the numerical radius, which is the largest absolute value of the numbers in the numerical range, i.e.

{\displaystyle r(A)=\sup \left\{|\lambda |:\lambda \in W(A)\right\}=\sup _{\|x\|=1}\left|\langle x,Ax\rangle \right|}

In the properties and proofs below, a sum of sets denotes the sumset; the properties concern the general numerical range, normal matrices, and the numerical radius. Most of the claims are obvious. Some are not, and proofs of these follow.

If A is Hermitian, then it is normal, so W(A) is the convex hull of its eigenvalues, which are all real. Conversely, assume W(A) is on the real line. Decompose A = B + C, where B is a Hermitian matrix and C an anti-Hermitian matrix. Since W(C) is on the imaginary line, if C ≠ 0 then W(A) would stray from the real line. Thus C = 0, and A is Hermitian.

For a 2 × 2 matrix A, the elements of W(A) are of the form tr(AP), where P is a projection from {\textstyle \mathbb {C} ^{2}} to a one-dimensional subspace. The space of all one-dimensional subspaces of {\textstyle \mathbb {C} ^{2}} is {\textstyle \mathbb {P} \mathbb {C} ^{1}}, which is a 2-sphere. The image of a 2-sphere under a linear projection is a filled ellipse. In more detail, such P are of the form

{\displaystyle {\frac {1}{2}}I+{\frac {1}{2}}{\begin{bmatrix}\cos 2\theta &e^{i\phi }\sin 2\theta \\e^{-i\phi }\sin 2\theta &-\cos 2\theta \end{bmatrix}}={\frac {1}{2}}{\begin{bmatrix}1+z&x+iy\\x-iy&1-z\end{bmatrix}}}

where (x, y, z), satisfying x² + y² + z² = 1, is a point on the unit 2-sphere. Therefore the elements of W(A), regarded as elements of {\textstyle \mathbb {R} ^{2}}, are the composition of the two real linear maps {\textstyle (x,y,z)\mapsto {\frac {1}{2}}{\begin{bmatrix}1+z&x+iy\\x-iy&1-z\end{bmatrix}}} and {\textstyle M\mapsto \operatorname {tr} (AM)}, which maps the 2-sphere to a filled ellipse.

W(A) is the image of the continuous map {\textstyle x\mapsto \langle x,Ax\rangle } from the closed unit sphere, so it is compact. For any x, y of unit norm, project A to the span of x, y as {\textstyle P^{*}AP}. Then {\textstyle W(P^{*}AP)} is a filled ellipse by the previous result, and so, for any θ ∈ [0, 1], letting z = θx + (1 − θ)y, we have

{\displaystyle \langle z,Az\rangle =\langle z,P^{*}APz\rangle \in W(P^{*}AP)\subset W(A)}

which shows that W(A) is convex.

Let W satisfy the characterizing properties, and let W0 be the original numerical range. Fix some matrix A. We show that the supporting planes of W(A) and W0(A) are identical; this then implies that W(A) = W0(A), since they are both convex and compact. By property (4), W(A) is nonempty. Let z be a point on the boundary of W(A); then we can translate and rotate the complex plane so that the point is moved to the origin and the region W(A) falls entirely within {\textstyle \mathbb {C} ^{+}}. That is, for some φ ∈ R, the set {\textstyle e^{i\phi }(W(A)-z)} lies entirely within {\textstyle \mathbb {C} ^{+}}, while for any t > 0 the set {\textstyle e^{i\phi }(W(A)-z)-tI} does not lie entirely in {\textstyle \mathbb {C} ^{+}}. The two properties of W then imply that

{\displaystyle e^{i\phi }(A-z)+e^{-i\phi }(A-z)^{*}\succeq 0}

and that the inequality is sharp, meaning that {\textstyle e^{i\phi }(A-z)+e^{-i\phi }(A-z)^{*}} has a zero eigenvalue. This is a complete characterization of the supporting planes of W(A). The same argument applies to W0(A), so they have the same supporting planes.

For (2), if A is normal, then it has a full eigenbasis, so the claim reduces to (1). Since A is normal, by the spectral theorem there exists a unitary matrix U such that A = UDU*, where D is a diagonal matrix containing the eigenvalues λ1, λ2, ..., λn of A. Let x = c1v1 + c2v2 + ⋯ + ckvk. Using the linearity of the inner product, the relation Avj = λjvj, and the orthonormality of {vi}, we have

{\displaystyle \langle x,Ax\rangle =\sum _{i,j=1}^{k}c_{i}^{*}c_{j}\left\langle v_{i},\lambda _{j}v_{j}\right\rangle =\sum _{i=1}^{k}\left|c_{i}\right|^{2}\lambda _{i}\in \operatorname {hull} \left(\lambda _{1},\ldots ,\lambda _{k}\right)}

By affineness of W, we can translate and rotate the complex plane, so that we reduce to the case where ∂W(A) has a sharp point at 0, and the two supporting planes at that point make angles φ1, φ2 with the imaginary axis, such that φ1 < φ2 and {\textstyle e^{i\phi _{1}}\neq e^{i\phi _{2}}} since the point is sharp. Since 0 ∈ W(A), there exists a unit vector x0 such that {\textstyle x_{0}^{*}Ax_{0}=0}. By general property (4), the numerical range lies in the sectors defined by

{\displaystyle \operatorname {Re} \left(e^{i\theta }\langle x,Ax\rangle \right)\geq 0\quad {\text{for all }}\theta \in [\phi _{1},\phi _{2}]{\text{ and nonzero }}x\in \mathbb {C} ^{n}.}

At x = x0, the directional derivative in any direction y must vanish to maintain non-negativity. Specifically,

{\displaystyle \left.{\frac {d}{dt}}\operatorname {Re} \left(e^{i\theta }\langle x_{0}+ty,A(x_{0}+ty)\rangle \right)\right|_{t=0}=0\quad \forall y\in \mathbb {C} ^{n},\ \theta \in [\phi _{1},\phi _{2}].}

Expanding this derivative,

{\displaystyle \operatorname {Re} \left(e^{i\theta }\left(\langle y,Ax_{0}\rangle +\langle x_{0},Ay\rangle \right)\right)=0\quad \forall y\in \mathbb {C} ^{n},\ \theta \in [\phi _{1},\phi _{2}].}

Since the above holds for all θ ∈ [φ1, φ2], we must have

{\displaystyle \langle y,Ax_{0}\rangle +\langle x_{0},Ay\rangle =0\quad \forall y\in \mathbb {C} ^{n}.}

For any y ∈ C^n and α ∈ C, substitute αy into the equation:

{\displaystyle \alpha \langle y,Ax_{0}\rangle +\alpha ^{*}\langle x_{0},Ay\rangle =0.}

Choosing α = 1 and α = i and simplifying, we obtain {\displaystyle \langle y,Ax_{0}\rangle =0} for all y, and thus Ax0 = 0.

For the numerical radius, let {\textstyle v=\arg \max _{\|x\|_{2}=1}|\langle x,Ax\rangle |}, so that r(A) = |⟨v, Av⟩|. By Cauchy–Schwarz,

{\displaystyle |\langle v,Av\rangle |\leq \|v\|_{2}\|Av\|_{2}=\|Av\|_{2}\leq \|A\|_{op}}

For the bound in the other direction, let A = B + iC, where B and C are Hermitian, so that

{\displaystyle \|A\|_{op}\leq \|B\|_{op}+\|C\|_{op}}

Since W(B) is on the real line and W(iC) is on the imaginary line, the extremal points of W(B) and W(iC) appear, shifted, in W(A); thus both {\textstyle \|B\|_{op}=r(B)\leq r(A)} and {\textstyle \|C\|_{op}=r(iC)\leq r(A)}.
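The characterization of the supporting planes above translates directly into a standard numerical method for tracing W(A): for each rotation angle, the top eigenvector of the Hermitian part of the rotated matrix yields a boundary point. A minimal Python sketch (illustrative names, not from the article):

```python
import numpy as np

def numerical_range_boundary(A, n_angles=360):
    """Boundary points of W(A) via the support-function method: for
    each angle t, the eigenvector x for the largest eigenvalue of the
    Hermitian part of e^{it} A gives the boundary point x* A x."""
    pts = []
    for t in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        H = (np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2
        w, V = np.linalg.eigh(H)   # eigenvalues in ascending order
        x = V[:, -1]               # eigenvector of the largest one
        pts.append(x.conj() @ A @ x)
    return np.array(pts)

A = np.array([[1.0, 1.0], [0.0, 1j]])
pts = numerical_range_boundary(A)
r = np.abs(pts).max()              # numerical radius estimate
# The spectral radius never exceeds the numerical radius.
print(r, np.abs(np.linalg.eigvals(A)).max() <= r + 1e-9)
```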
https://en.wikipedia.org/wiki/Numerical_range
Numerical relativity is one of the branches of general relativity that uses numerical methods and algorithms to solve and analyze problems. To this end, supercomputers are often employed to study black holes , gravitational waves , neutron stars and many other phenomena described by Albert Einstein's theory of general relativity . A currently active field of research in numerical relativity is the simulation of relativistic binaries and their associated gravitational waves. A primary goal of numerical relativity is to study spacetimes whose exact form is not known. The spacetimes so found computationally can either be fully dynamical , stationary or static and may contain matter fields or vacuum. In the case of stationary and static solutions, numerical methods may also be used to study the stability of the equilibrium spacetimes. In the case of dynamical spacetimes, the problem may be divided into the initial value problem and the evolution, each requiring different methods. Numerical relativity is applied to many areas, such as cosmological models , critical phenomena , perturbed black holes and neutron stars , and the coalescence of black holes and neutron stars, for example. In any of these cases, Einstein's equations can be formulated in several ways that allow us to evolve the dynamics. While Cauchy methods have received a majority of the attention, characteristic and Regge calculus based methods have also been used. All of these methods begin with a snapshot of the gravitational fields on some hypersurface , the initial data, and evolve these data to neighboring hypersurfaces. [ 1 ] Like all problems in numerical analysis, careful attention is paid to the stability and convergence of the numerical solutions. In this line, much attention is paid to the gauge conditions , coordinates, and various formulations of the Einstein equations and the effect they have on the ability to produce accurate numerical solutions. Numerical relativity research is distinct from work on classical field theories as many techniques implemented in these areas are inapplicable in relativity. Many facets are however shared with large scale problems in other computational sciences like computational fluid dynamics , electromagnetics, and solid mechanics. Numerical relativists often work with applied mathematicians and draw insight from numerical analysis , scientific computation , partial differential equations , and geometry among other mathematical areas of specialization. Albert Einstein published his theory of general relativity in 1915. [ 2 ] It, like his earlier theory of special relativity , described space and time as a unified spacetime subject to what are now known as the Einstein field equations . These form a set of coupled nonlinear partial differential equations (PDEs). After more than 100 years since the first publication of the theory, relatively few closed-form solutions are known for the field equations, and, of those, most are cosmological solutions that assume special symmetry to reduce the complexity of the equations. The field of numerical relativity emerged from the desire to construct and study more general solutions to the field equations by approximately solving the Einstein equations numerically. A necessary precursor to such attempts was a decomposition of spacetime back into separated space and time. This was first published by Richard Arnowitt , Stanley Deser , and Charles W. Misner in the late 1950s in what has become known as the ADM formalism . 
[3] Although for technical reasons the precise equations formulated in the original ADM paper are rarely used in numerical simulations, most practical approaches to numerical relativity use a "3+1 decomposition" of spacetime into three-dimensional space and one-dimensional time that is closely related to the ADM formulation, because the ADM procedure reformulates the Einstein field equations into a constrained initial value problem that can be addressed using computational methodologies. At the time that ADM published their original paper, computer technology would not have supported numerical solution to their equations on any problem of any substantial size. The first documented attempt to solve the Einstein field equations numerically appears to be by S. G. Hahn and R. W. Lindquist in 1964, [4] followed soon thereafter by Larry Smarr [5][6] and by K. R. Eppley. [7] These early attempts were focused on evolving Misner data in axisymmetry (also known as "2+1 dimensions"). At around the same time Tsvi Piran wrote the first code that evolved a system with gravitational radiation using a cylindrical symmetry. [8] In this calculation Piran set the foundation for many of the concepts used today in evolving the ADM equations, such as "free evolution" versus "constrained evolution", which deal with the fundamental problem of treating the constraint equations that arise in the ADM formalism. Applying symmetry reduced the computational and memory requirements associated with the problem, allowing the researchers to obtain results on the supercomputers available at the time. The first realistic calculations of rotating collapse were carried out in the early eighties by Richard Stark and Tsvi Piran, [9] in which the gravitational wave forms resulting from the formation of a rotating black hole were calculated for the first time. For nearly 20 years following the initial results, there were fairly few other published results in numerical relativity, probably due to the lack of sufficiently powerful computers to address the problem. In the late 1990s, the Binary Black Hole Grand Challenge Alliance successfully simulated a head-on binary black hole collision. As a post-processing step the group computed the event horizon for the spacetime. This result still required imposing and exploiting axisymmetry in the calculations. [10] Some of the first documented attempts to solve the Einstein equations in three dimensions were focused on a single Schwarzschild black hole, which is described by a static and spherically symmetric solution to the Einstein field equations. This provides an excellent test case in numerical relativity because it does have a closed-form solution, so that numerical results can be compared to an exact solution, because it is static, and because it contains one of the most numerically challenging features of relativity theory, a physical singularity. One of the earliest groups to attempt to simulate this solution was Peter Anninos et al. in 1995, [11] whose paper pointed out the difficulties of such simulations. In the years that followed, not only did computers become more powerful, but also various research groups developed alternate techniques to improve the efficiency of the calculations. With respect to black hole simulations specifically, two techniques were devised to avoid problems associated with the existence of physical singularities in the solutions to the equations: (1) excision, and (2) the "puncture" method.
In addition, the Lazarus group developed techniques for using early results from a short-lived simulation solving the nonlinear ADM equations to provide initial data for a more stable code based on linearized equations derived from perturbation theory. More generally, adaptive mesh refinement techniques, already used in computational fluid dynamics, were introduced to the field of numerical relativity. In the excision technique, which was first proposed in the late 1990s, [12] a portion of the spacetime inside of the event horizon surrounding the singularity of a black hole is simply not evolved. In theory this should not affect the solution to the equations outside of the event horizon, because of the principle of causality and the properties of the event horizon (i.e. nothing physical inside the black hole can influence any of the physics outside the horizon). Thus if one simply does not solve the equations inside the horizon, one should still be able to obtain valid solutions outside. One "excises" the interior by imposing ingoing boundary conditions on a boundary surrounding the singularity but inside the horizon. While the implementation of excision has been very successful, the technique has two minor problems. The first is that one has to be careful about the coordinate conditions. While physical effects cannot propagate from inside to outside, coordinate effects could. For example, if the coordinate conditions were elliptic, coordinate changes inside could instantly propagate out through the horizon. This means that one needs hyperbolic-type coordinate conditions, with characteristic velocities less than that of light, for the propagation of coordinate effects (e.g., harmonic coordinate conditions). The second problem is that as the black holes move, one must continually adjust the location of the excision region to move with the black hole. The excision technique was developed over several years, including the development of new gauge conditions that increased stability and work that demonstrated the ability of the excision regions to move through the computational grid. [13][14][15][16][17][18] The first stable, long-term evolution of the orbit and merger of two black holes using this technique was published in 2005. [19] In the puncture method the solution is factored into an analytical part, [20] which contains the singularity of the black hole, and a numerically constructed part, which is then singularity-free. This is a generalization of the Brill-Lindquist [21] prescription for initial data of black holes at rest, and can be generalized to the Bowen-York [22] prescription for spinning and moving black hole initial data. Until 2005, all published usage of the puncture method required that the coordinate position of all punctures remain fixed during the course of the simulation. Of course black holes in proximity to each other will tend to move under the force of gravity, so the fact that the coordinate position of the puncture remained fixed meant that the coordinate systems themselves became "stretched" or "twisted", and this typically led to numerical instabilities at some stage of the simulation. In 2005, a group of researchers demonstrated for the first time the ability to allow punctures to move through the coordinate system, thus eliminating some of the earlier problems with the method. This allowed accurate long-term evolutions of black holes. [19][23][24]
By choosing appropriate coordinate conditions and making crude analytic assumptions about the fields near the singularity (since no physical effects can propagate out of the black hole, the crudeness of the approximations does not matter), numerical solutions could be obtained to the problem of two black holes orbiting each other, as well as accurate computation of the gravitational radiation (ripples in spacetime) emitted by them. 2005 has been called the "annus mirabilis" of numerical relativity, 100 years after the annus mirabilis papers of special relativity (1905). The Lazarus project (1998–2005) was developed as a post-Grand Challenge technique to extract astrophysical results from short-lived full numerical simulations of binary black holes. It combined approximation techniques before (post-Newtonian trajectories) and after (perturbations of single black holes) with full numerical simulations attempting to solve Einstein's field equations. [25] All previous attempts to numerically integrate on supercomputers the Hilbert-Einstein equations describing the gravitational field around binary black holes had led to software failure before a single orbit was completed. The Lazarus approach, in the meantime, gave the best insight into the binary black hole problem then available, and produced numerous and relatively accurate results, such as the radiated energy and angular momentum emitted in the late stage of merging, [26][27] the linear momentum radiated by unequal-mass holes, [28] and the final mass and spin of the remnant black hole. [29] The method also computed detailed gravitational waves emitted by the merger process, and predicted that the collision of black holes is the most energetic single event in the Universe, releasing more energy in a fraction of a second in the form of gravitational radiation than an entire galaxy in its lifetime. Adaptive mesh refinement (AMR) as a numerical method has roots that go well beyond its first application in the field of numerical relativity. Mesh refinement first appears in the numerical relativity literature in the 1980s, through the work of Choptuik in his studies of critical collapse of scalar fields. [30][31] The original work was in one dimension, and it was subsequently extended to two dimensions. [32] In two dimensions, AMR has also been applied to the study of inhomogeneous cosmologies [33][34] and to the study of Schwarzschild black holes. [35] The technique has now become a standard tool in numerical relativity, and has been used to study the merger of black holes and other compact objects, in addition to the propagation of gravitational radiation generated by such astronomical events. [36][37] In the 21st century, hundreds of research papers have been published, leading to a wide spectrum of mathematical relativity, gravitational wave, and astrophysical results for the orbiting black hole problem. The technique was extended to astrophysical binary systems involving neutron stars and black holes [38] and to multiple black holes. [39] One of the most surprising predictions is that the merger of two black holes can give the remnant hole a speed of up to 4000 km/s, allowing it to escape from any known galaxy. [40][41] The simulations also predict an enormous release of gravitational energy in this merger process, amounting to as much as 8% of the system's total rest mass. [42]
https://en.wikipedia.org/wiki/Numerical_relativity
The numerical response in ecology is the change in predator density as a function of change in prey density. The term numerical response was coined by M. E. Solomon in 1949. [1] It is associated with the functional response, which is the change in the predator's rate of prey consumption with change in prey density. As Holling notes, total predation can be expressed as a combination of functional and numerical response. [2] The numerical response has two mechanisms: the demographic response and the aggregational response. The numerical response is not necessarily proportional to the change in prey density, usually resulting in a time lag between prey and predator populations. [3] For example, there is often a scarcity of predators when the prey population is increasing.

The demographic response consists of changes in the rates of predator reproduction or survival due to a change in prey density. An increase in prey availability translates into higher energy intake and reduced energy output. This is different from an increase in energy intake due to increased foraging efficiency, which is considered a functional response. This concept can be articulated in the Lotka-Volterra predator-prey model:

{\displaystyle dP/dt=acVP-mP}

where a is the conversion efficiency (the fraction of prey energy assimilated by the predator and turned into new predators), P is the predator density, V is the prey density, m is the predator mortality, and c is the capture rate. The demographic response consists of a change in dP/dt due to a change in V and/or m. For example, if V increases, then the predator growth rate (dP/dt) will increase. Likewise, if energy intake increases (due to greater food availability) and energy output (from foraging) decreases, then predator mortality (m) will decrease and the predator growth rate (dP/dt) will increase. In contrast, the functional response consists of a change in conversion efficiency (a) or capture rate (c). The relationship between available energy and reproductive effort can be explained with life history theory, in the trade-off between fecundity and growth/survival: if an organism has more net energy, it will sacrifice less energy dedicated to survival per reproductive effort and will therefore increase its reproduction rate.

In parasitism, the functional response is measured by the rate of infection or of laying of eggs in the host, rather than by the rate of prey consumption as in predation. Numerical response in parasitism is still measured by the change in the number of adult parasites relative to the change in host density. Parasites can demonstrate a more pronounced numerical response to changes in host density, since there is often a more direct connection (less time lag) between food and reproduction, both needs being immediately satisfied by the interaction with the host. [4]

The aggregational response, as defined by Readshaw in 1973, is a change in predator population due to immigration into an area with increased prey population. [5] In an experiment conducted by Turnbull in 1964, he observed the consistent migration of spiders from boxes without prey to boxes with prey, showing that hunger impacts predator movement. [6] Riechert and Jaeger studied how predator competition interferes with the direct correlation between prey density and predator immigration.
[7][8] One way this can occur is through exploitation competition, the differential efficiency in the use of available resources: for example, an increase in spiders' web size (a functional response). The other possibility is interference competition, where site owners actively prevent other foragers from coming into the vicinity. The concept of numerical response becomes practically important when trying to create a strategy for pest control. The study of spiders as a biological mechanism for pest control has driven much of the research on aggregational response. Antisocial predator populations that display territoriality, such as spiders defending their web area, may not display the expected aggregational response to increased prey density. [9] A credible, simple alternative to the Lotka-Volterra predator-prey model and its common prey-dependent generalizations is the ratio-dependent or Arditi-Ginzburg model. [10] The two are the extremes of the spectrum of predator-interference models. According to the authors of the alternative view, the data show that true interactions in nature are so far from the Lotka-Volterra extreme on the interference spectrum that the model can simply be discounted as wrong. They are much closer to the ratio-dependent extreme, so if a simple model is needed one can use the Arditi-Ginzburg model as the first approximation. [11]
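To make the demographic response concrete, here is a small Python sketch (illustrative only; the prey growth rate b and all numerical values are assumptions, not from the article) that integrates the Lotka-Volterra system with the predator equation quoted above and exhibits the time lag between the prey and predator peaks.

```python
import numpy as np

# Illustrative parameter values (assumptions, not from the article).
b = 1.0    # prey per-capita growth rate
c = 0.1    # capture rate
a = 0.5    # conversion efficiency
m = 0.2    # predator mortality

V, P = 10.0, 2.0           # initial prey and predator densities
dt, steps = 0.001, 15_000  # ~15 time units, covering the first cycle
Vs, Ps = [], []
for _ in range(steps):
    dV = b * V - c * V * P        # prey growth minus predation losses
    dP = a * c * V * P - m * P    # the numerical-response equation above
    V, P = V + dV * dt, P + dP * dt
    Vs.append(V)
    Ps.append(P)

# The predator peak trails the prey peak: the time lag noted in the text.
print("prey peak at t = %.1f" % (np.argmax(Vs) * dt))
print("predator peak at t = %.1f" % (np.argmax(Ps) * dt))
```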
https://en.wikipedia.org/wiki/Numerical_response
In mathematics, a numerical semigroup is a special kind of semigroup. Its underlying set is the set of all nonnegative integers except a finite number of them, and the binary operation is the addition of integers. Also, the integer 0 must be an element of the semigroup. For example, while the set {0, 2, 3, 4, 5, 6, ...} is a numerical semigroup, the set {0, 1, 3, 5, 6, ...} is not, because 1 is in the set and 1 + 1 = 2 is not in the set. Numerical semigroups are commutative monoids and are also known as numerical monoids. [1][2] The definition of numerical semigroup is intimately related to the problem of determining the nonnegative integers that can be expressed in the form x1n1 + x2n2 + ... + xrnr for a given set {n1, n2, ..., nr} of positive integers and for arbitrary nonnegative integers x1, x2, ..., xr. This problem had been considered by several mathematicians, such as Frobenius (1849–1917) and Sylvester (1814–1897), at the end of the 19th century. [3] During the second half of the twentieth century, interest in the study of numerical semigroups resurfaced because of their applications in algebraic geometry. [4]

Let N be the set of nonnegative integers. A subset S of N is called a numerical semigroup if the following conditions are satisfied: 0 is an element of S; S is closed under the addition of integers; and the complement N \ S is finite. There is a simple method to construct numerical semigroups. Let A = {n1, n2, ..., nr} be a nonempty set of positive integers. The set of all integers of the form x1n1 + x2n2 + ... + xrnr is the subset of N generated by A and is denoted by ⟨A⟩. The following theorem fully characterizes numerical semigroups: let S be the subsemigroup of N generated by A; then S is a numerical semigroup if and only if gcd(A) = 1, and moreover every numerical semigroup arises in this way. [5] For example, ⟨2, 3⟩ = {0, 2, 3, 4, 5, 6, ...} and ⟨3, 4, 5⟩ = {0, 3, 4, 5, 6, ...} are numerical semigroups.

The set A is a set of generators of the numerical semigroup ⟨A⟩. A set of generators of a numerical semigroup is a minimal system of generators if none of its proper subsets generates the numerical semigroup. It is known that every numerical semigroup S has a unique minimal system of generators, and also that this minimal system of generators is finite. The cardinality of the minimal set of generators is called the embedding dimension of the numerical semigroup S and is denoted by e(S). The smallest member of the minimal system of generators is called the multiplicity of the numerical semigroup S and is denoted by m(S). There are several other notable numbers associated with a numerical semigroup S: the elements of N \ S are the gaps of S, the number of gaps is the genus of S, denoted by g(S), and the largest gap is the Frobenius number of S, denoted by F(S). For example, let S = ⟨5, 7, 9⟩. Then we have: the gaps are {1, 2, 3, 4, 6, 8, 11, 13}, so F(S) = 13 and g(S) = 8, while m(S) = 5 and e(S) = 3.

The following general results were known to Sylvester. [7] Let a and b be positive integers such that gcd(a, b) = 1. Then F(⟨a, b⟩) = ab − a − b, and g(⟨a, b⟩) = (a − 1)(b − 1)/2. There is no known general formula to compute the Frobenius number of numerical semigroups having embedding dimension three or more. No polynomial formula can be found to compute the Frobenius number or genus of a numerical semigroup with embedding dimension three. [8] Every positive integer is the Frobenius number of some numerical semigroup with embedding dimension three. [9] The following algorithm, known as Rödseth's algorithm, [10][11] can be used to compute the Frobenius number of a numerical semigroup S generated by {a1, a2, a3} where a1 < a2 < a3 and gcd(a1, a2, a3) = 1. Its worst-case complexity is not as good as that of Greenberg's algorithm, [12] but it is much simpler to describe. A brute-force computational sketch of these invariants is given below.
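The invariants just described are easy to compute by brute force; the following Python sketch (an illustration, not Rödseth's algorithm) finds the gaps, Frobenius number, genus, and multiplicity of ⟨5, 7, 9⟩.

```python
from math import gcd
from functools import reduce

def semigroup_invariants(generators):
    """Gaps, Frobenius number, genus and multiplicity of the numerical
    semigroup generated by `generators` (their gcd must be 1)."""
    A = sorted(generators)
    assert reduce(gcd, A) == 1, "generators must have gcd 1"
    reachable = {0}
    gaps = []
    n, run = 1, 0
    # Once m(S) = A[0] consecutive integers are reachable, every larger
    # integer is too (keep adding the smallest generator).
    while run < A[0]:
        if any(n >= g and (n - g) in reachable for g in A):
            reachable.add(n)
            run += 1
        else:
            gaps.append(n)
            run = 0
        n += 1
    return gaps, max(gaps), len(gaps), A[0]

gaps, frobenius, genus, multiplicity = semigroup_invariants({5, 7, 9})
print(gaps)                            # [1, 2, 3, 4, 6, 8, 11, 13]
print(frobenius, genus, multiplicity)  # 13 8 5
```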
An irreducible numerical semigroup is a numerical semigroup that cannot be written as the intersection of two numerical semigroups properly containing it. A numerical semigroup S is irreducible if and only if S is maximal, with respect to set inclusion, in the collection of all numerical semigroups with Frobenius number F(S). A numerical semigroup S is symmetric if it is irreducible and its Frobenius number F(S) is odd; it is pseudo-symmetric provided that it is irreducible and F(S) is even. Such numerical semigroups have simple characterizations in terms of the Frobenius number and genus: S is symmetric if and only if g(S) = (F(S) + 1)/2, and pseudo-symmetric if and only if g(S) = (F(S) + 2)/2.
https://en.wikipedia.org/wiki/Numerical_semigroup
In applied mathematics, the numerical sign problem is the problem of numerically evaluating the integral of a highly oscillatory function of a large number of variables. Numerical methods fail because of the near-cancellation of the positive and negative contributions to the integral. Each has to be integrated to very high precision in order for their difference to be obtained with useful accuracy. The sign problem is one of the major unsolved problems in the physics of many-particle systems. It often arises in calculations of the properties of a quantum mechanical system with a large number of strongly interacting fermions, or in field theories involving a non-zero density of strongly interacting fermions.

In physics the sign problem is typically (but not exclusively) encountered in calculations of the properties of a quantum mechanical system with a large number of strongly interacting fermions, or in field theories involving a non-zero density of strongly interacting fermions. Because the particles are strongly interacting, perturbation theory is inapplicable, and one is forced to use brute-force numerical methods. Because the particles are fermions, their wavefunction changes sign when any two fermions are interchanged (due to the anti-symmetry of the wave function, see Pauli principle). So unless there are cancellations arising from some symmetry of the system, the quantum-mechanical sum over all multi-particle states involves an integral over a function that is highly oscillatory, and hence hard to evaluate numerically, particularly in high dimension. Since the dimension of the integral is given by the number of particles, the sign problem becomes severe in the thermodynamic limit. The field-theoretic manifestation of the sign problem is discussed below. The sign problem is one of the major unsolved problems in the physics of many-particle systems, impeding progress in many areas. [a]

In a field-theory approach to multi-particle systems, the fermion density is controlled by the value of the fermion chemical potential μ. One evaluates the partition function Z by summing over all classical field configurations, weighted by exp(−S), where S is the action of the configuration. The sum over fermion fields can be performed analytically, and one is left with a sum over the bosonic fields σ (which may have been originally part of the theory, or have been produced by a Hubbard–Stratonovich transformation to make the fermion action quadratic),

{\displaystyle Z=\int D\sigma \,\rho [\sigma ]}

where Dσ represents the measure for the sum over all configurations σ(x) of the bosonic fields, weighted by

{\displaystyle \rho [\sigma ]=\det(M(\mu ,\sigma ))\exp(-S[\sigma ])}

where S is now the action of the bosonic fields, and M(μ, σ) is a matrix that encodes how the fermions were coupled to the bosons. The expectation value of an observable A[σ] is therefore an average over all configurations weighted by ρ[σ]:

{\displaystyle \langle A\rangle _{\rho }={\frac {\int D\sigma \,A[\sigma ]\,\rho [\sigma ]}{\int D\sigma \,\rho [\sigma ]}}}

If ρ[σ] is positive, then it can be interpreted as a probability measure, and ⟨A⟩ρ can be calculated by performing the sum over field configurations numerically, using standard techniques such as Monte Carlo importance sampling. The sign problem arises when ρ[σ] is non-positive.
This typically occurs in theories of fermions when the fermion chemical potential μ is nonzero, i.e. when there is a nonzero background density of fermions. If μ ≠ 0, there is no particle–antiparticle symmetry, and det(M(μ, σ)), and hence the weight ρ(σ), is in general a complex number, so Monte Carlo importance sampling cannot be used to evaluate the integral.

A field theory with a non-positive weight can be transformed to one with a positive weight by incorporating the non-positive part (sign or complex phase) of the weight into the observable. For example, one could decompose the weighting function into its modulus and phase,

{\displaystyle \rho [\sigma ]=p[\sigma ]\exp(i\theta [\sigma ])}

where p[σ] is real and positive, so

{\displaystyle \langle A\rangle _{\rho }={\frac {\langle A[\sigma ]\exp(i\theta [\sigma ])\rangle _{p}}{\langle \exp(i\theta [\sigma ])\rangle _{p}}}}

Note that the desired expectation value is now a ratio where the numerator and denominator are expectation values that both use the positive weighting function p[σ]. However, the phase exp(iθ[σ]) is a highly oscillatory function in the configuration space, so if one uses Monte Carlo methods to evaluate the numerator and denominator, each of them will evaluate to a very small number, whose exact value is swamped by the noise inherent in the Monte Carlo sampling process. The "badness" of the sign problem is measured by the smallness of the denominator ⟨exp(iθ[σ])⟩p: if it is much less than 1, then the sign problem is severe. It can be shown [5] that

{\displaystyle \langle \exp(i\theta [\sigma ])\rangle _{p}\sim \exp(-fV/T)}

where V is the volume of the system, T is the temperature, and f is an energy density. The number of Monte Carlo sampling points needed to obtain an accurate result therefore rises exponentially as the volume of the system becomes large, and as the temperature goes to zero.

The decomposition of the weighting function into modulus and phase is just one example (although it has been advocated as the optimal choice, since it minimizes the variance of the denominator [6]). In general one could write

{\displaystyle \langle A\rangle _{\rho }={\frac {\langle A[\sigma ]\,\rho [\sigma ]/p[\sigma ]\rangle _{p}}{\langle \rho [\sigma ]/p[\sigma ]\rangle _{p}}}}

where p[σ] can be any positive weighting function (for example, the weighting function of the μ = 0 theory). [7] The badness of the sign problem is then measured by the denominator

{\displaystyle \langle \rho [\sigma ]/p[\sigma ]\rangle _{p}}

which again goes to zero exponentially in the large-volume limit.

The sign problem is NP-hard, implying that a full and generic solution of the sign problem would also solve all problems in the complexity class NP in polynomial time. [8] If (as is generally suspected) there are no polynomial-time solutions to NP problems (see P versus NP problem), then there is no generic solution to the sign problem. This leaves open the possibility that there may be solutions that work in specific cases, where the oscillations of the integrand have a structure that can be exploited to reduce the numerical errors. In systems with a moderate sign problem, such as field theories at a sufficiently high temperature or in a sufficiently small volume, the sign problem is not too severe and useful results can be obtained by various methods, such as more carefully tuned reweighting, analytic continuation from imaginary μ to real μ, or Taylor expansion in powers of μ. [3][9] For systems with a severe sign problem, various approaches have been proposed, among them stochastic quantization methods such as complex Langevin dynamics, deformation of the integration contour onto Lefschetz thimbles, reformulations in terms of dual variables with positive weights, and meron-cluster algorithms.
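A toy model (an assumption for illustration, not the field-theory setup above) shows the mechanism: sampling exp(iθ) under a positive weight gives an average sign that shrinks exponentially with the number of integration variables, while the Monte Carlo noise stays fixed, so the estimate is eventually swamped.

```python
import numpy as np

rng = np.random.default_rng(0)

def average_sign(volume, n_samples=100_000, lam=1.0):
    """Monte Carlo estimate of <exp(i*theta)>_p for a toy model where
    theta = lam * sum(x_i) and p is a standard Gaussian in each of
    `volume` variables. Exact value: exp(-lam**2 * volume / 2)."""
    x = rng.standard_normal((n_samples, volume))
    theta = lam * x.sum(axis=1)
    return np.exp(1j * theta).mean()

for V in (1, 4, 16, 64):
    est = average_sign(V)
    exact = np.exp(-V / 2)
    # As V grows, the exact average sign decays exponentially while the
    # sampling noise stays ~1/sqrt(n_samples), drowning the signal.
    print(V, abs(est), exact)
```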
https://en.wikipedia.org/wiki/Numerical_sign_problem
A Savitzky–Golay filter is a digital filter that can be applied to a set of digital data points for the purpose of smoothing the data, that is, to increase the precision of the data without distorting the signal tendency. This is achieved, in a process known as convolution, by fitting successive sub-sets of adjacent data points with a low-degree polynomial by the method of linear least squares. When the data points are equally spaced, an analytical solution to the least-squares equations can be found, in the form of a single set of "convolution coefficients" that can be applied to all data sub-sets, to give estimates of the smoothed signal (or derivatives of the smoothed signal) at the central point of each sub-set. The method, based on established mathematical procedures, [1][2] was popularized by Abraham Savitzky and Marcel J. E. Golay, who published tables of convolution coefficients for various polynomials and sub-set sizes in 1964. [3][4] Some errors in the tables have been corrected. [5] The method has been extended for the treatment of 2- and 3-dimensional data. Savitzky and Golay's paper is one of the most widely cited papers in the journal Analytical Chemistry, [6] and is classed by that journal as one of its "10 seminal papers", saying "it can be argued that the dawn of the computer-controlled analytical instrument can be traced to this article". [7]

The data consist of a set of points {xj, yj}, j = 1, ..., n, where xj is an independent variable and yj is an observed value. They are treated with a set of m convolution coefficients Ci according to the expression

{\displaystyle Y_{j}=\sum _{i=-(m-1)/2}^{(m-1)/2}C_{i}\,y_{j+i}}

Selected convolution coefficients are shown in the tables, below. For example, for smoothing by a 5-point quadratic polynomial, m = 5, i = −2, −1, 0, 1, 2, and the jth smoothed data point, Yj, is given by

{\displaystyle Y_{j}=\sum _{i=-2}^{2}C_{i}\,y_{j+i}}

where C−2 = −3/35, C−1 = 12/35, etc. There are numerous applications of smoothing, such as avoiding the propagation of noise through an algorithm chain, or sometimes simply making the data appear to be less noisy than they really are. Numerical differentiation of data also has numerous applications. [8] Note: when calculating the nth derivative, an additional scaling factor of {\displaystyle {\frac {n!}{h^{n}}}} may be applied to all calculated data points to obtain absolute values (see the expressions for {\displaystyle {\frac {d^{n}Y}{dx^{n}}}}, below, for details).

The "moving average filter" is a trivial example of a Savitzky–Golay filter that is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles. Each subset of the data set is fitted with a straight horizontal line, as opposed to a higher-order polynomial. An unweighted moving average filter is the simplest convolution filter. The moving average is often used for quick technical analysis of financial data, like stock prices, returns or trading volumes. It is also used in economics to examine gross domestic product, employment or other macroeconomic time series. It was not included in some tables of Savitzky–Golay convolution coefficients, as all the coefficient values are identical, with the value {\displaystyle {\frac {1}{m}}}.
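For readers who want to experiment, SciPy implements this filter; the short sketch below (not part of the original article) reproduces the 5-point quadratic coefficients quoted above and applies the filter to noisy data.

```python
import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

# The 5-point quadratic smoothing coefficients quoted above,
# (-3, 12, 17, 12, -3)/35:
print(savgol_coeffs(5, 2))  # [-0.0857  0.3429  0.4857  0.3429 -0.0857]

# Smooth a noisy sine and estimate its first derivative.
x = np.linspace(0, 2 * np.pi, 200)
dx = x[1] - x[0]
y = np.sin(x) + 0.1 * np.random.default_rng(1).standard_normal(x.size)
y_smooth = savgol_filter(y, window_length=9, polyorder=3)
# deriv=1 with delta=dx applies the 1/h scaling factor mentioned above.
dy = savgol_filter(y, window_length=9, polyorder=3, deriv=1, delta=dx)
# dy approximates cos(x), though noise is amplified in derivatives.
print(np.mean(np.abs(y_smooth - np.sin(x))))
```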
When the data points are equally spaced, an analytical solution to the least-squares equations can be found. [2] This solution forms the basis of the convolution method of numerical smoothing and differentiation. Suppose that the data consist of a set of n points (xj, yj) (j = 1, ..., n), where xj is an independent variable and yj is a datum value. A polynomial will be fitted by linear least squares to a set of m (an odd number) adjacent data points, each separated by an interval h. First, a change of variable is made,

{\displaystyle z={\frac {x-{\bar {x}}}{h}}}

where {\displaystyle {\bar {x}}} is the value of the central point. z takes the values {\displaystyle {\tfrac {1-m}{2}},\cdots ,0,\cdots ,{\tfrac {m-1}{2}}} (e.g. m = 5 → z = −2, −1, 0, 1, 2). [note 1] The polynomial of degree k is defined as

{\displaystyle Y=a_{0}+a_{1}z+a_{2}z^{2}+\cdots +a_{k}z^{k}}

The coefficients a0, a1, etc. are obtained by solving the normal equations (bold a represents a vector, bold J represents a matrix):

{\displaystyle \mathbf {a} =\left(\mathbf {J} ^{\mathsf {T}}\mathbf {J} \right)^{-1}\mathbf {J} ^{\mathsf {T}}\mathbf {y} }

where {\displaystyle \mathbf {J} } is a Vandermonde matrix, that is, the i-th row of {\displaystyle \mathbf {J} } has the values {\displaystyle 1,z_{i},z_{i}^{2},\dots }. For example, for a cubic polynomial fitted to 5 points, z = −2, −1, 0, 1, 2, the normal equations can be factored into two separate sets of equations, by rearranging rows and columns so that the even coefficients (a0, a2) and the odd coefficients (a1, a3) are solved for separately. Expressions for the inverse of each of these smaller matrices can be obtained using Cramer's rule. Multiplying out and removing common factors yields the fitted coefficients as linear combinations of the y values. The coefficients of y in these expressions are known as convolution coefficients; they are the elements of the matrix

{\displaystyle \mathbf {C} =\left(\mathbf {J} ^{\mathsf {T}}\mathbf {J} \right)^{-1}\mathbf {J} ^{\mathsf {T}}}

Tables of convolution coefficients, calculated in the same way for m up to 25, were published for the Savitzky–Golay smoothing filter in 1964. [3][5] The value of the central point, z = 0, is obtained from a single set of coefficients: a0 for smoothing, a1 for the 1st derivative, etc. The numerical derivatives are obtained by differentiating Y; this means that the derivatives are calculated for the smoothed data curve. For a cubic polynomial,

{\displaystyle {\frac {dY}{dx}}={\frac {1}{h}}\left(a_{1}+2a_{2}z+3a_{3}z^{2}\right);\quad {\frac {d^{2}Y}{dx^{2}}}={\frac {1}{h^{2}}}\left(2a_{2}+6a_{3}z\right);\quad {\frac {d^{3}Y}{dx^{3}}}={\frac {6a_{3}}{h^{3}}}}

so that at the central point, z = 0, the first derivative is a1/h, the second derivative is 2a2/h², and so forth. In general, polynomials of degree (0 and 1), [note 3] (2 and 3), (4 and 5), etc. give the same coefficients for smoothing and even derivatives. Polynomials of degree (1 and 2), (3 and 4), etc. give the same coefficients for odd derivatives.

It is not always necessary to use the Savitzky–Golay tables. The summations in the matrix JᵀJ can be evaluated in closed form, so that algebraic formulae can be derived for the convolution coefficients. [13] [note 4] Some of these formulae are suitable for use with a curve that has an inflection point, while simpler expressions can be used with curves that do not have one. Higher derivatives can also be obtained; for example, a fourth derivative can be obtained by performing two passes of a second derivative function. [14]

An alternative to fitting m data points by a simple polynomial in the subsidiary variable z is to use orthogonal polynomials,

{\displaystyle Y=b_{0}P_{0}(z)+b_{1}P_{1}(z)+\cdots +b_{k}P_{k}(z)}

where P0, ..., Pk is a set of mutually orthogonal polynomials of degree 0, ..., k. Full details on how to obtain expressions for the orthogonal polynomials and the relationship between the coefficients b and a are given by Guest. [2]
Expressions for the convolution coefficients are easily obtained in this approach because the normal equations matrix, JᵀJ, is a diagonal matrix: the sum over the data points of the product of any two different orthogonal polynomials is zero by virtue of their mutual orthogonality. Each non-zero element of its inverse is therefore simply the reciprocal of the corresponding element in the normal equations matrix. The calculation is further simplified by using recursion to build orthogonal Gram polynomials. The whole calculation can be coded in a few lines of PASCAL, a computer language well adapted for calculations involving recursion. [15]

Savitzky–Golay filters are most commonly used to obtain the smoothed or derivative value at the central point, z = 0, using a single set of convolution coefficients. (m − 1)/2 points at the start and end of the series cannot be calculated using this process. Various strategies can be employed to avoid this inconvenience.

It is implicit in the above treatment that the data points are all given equal weight. Technically, the objective function being minimized in the least-squares process has unit weights, wi = 1. When the weights are not all the same, the normal equations become

{\displaystyle \mathbf {a} =\left(\mathbf {J} ^{\mathsf {T}}\mathbf {W} \mathbf {J} \right)^{-1}\mathbf {J} ^{\mathsf {T}}\mathbf {W} \mathbf {y} }

If the same set of diagonal weights is used for all data subsets, W = diag(w1, w2, ..., wm), an analytical solution to the normal equations can be written down. For example, with a quadratic polynomial, the normal equations matrix JᵀWJ has elements of the form Σ wi zi^p, p = 0, ..., 4. An explicit expression for the inverse of this matrix can be obtained using Cramer's rule. A set of convolution coefficients may then be derived as

{\displaystyle \mathbf {C} =\left(\mathbf {J} ^{\mathsf {T}}\mathbf {W} \mathbf {J} \right)^{-1}\mathbf {J} ^{\mathsf {T}}\mathbf {W} }

Alternatively the coefficients, C, could be calculated in a spreadsheet, employing a built-in matrix inversion routine to obtain the inverse of the normal equations matrix. This set of coefficients, once calculated and stored, can be used with all calculations in which the same weighting scheme applies. A different set of coefficients is needed for each different weighting scheme. It has been shown that the Savitzky–Golay filter can be improved by introducing weights that decrease at the ends of the fitting interval. [16]

Two-dimensional smoothing and differentiation can also be applied to tables of data values, such as intensity values in a photographic image which is composed of a rectangular grid of pixels. [17][18] Such a grid is referred to as a kernel, and the data points that constitute the kernel are referred to as nodes. The trick is to transform the rectangular kernel into a single row by a simple ordering of the indices of the nodes. Whereas the one-dimensional filter coefficients are found by fitting a polynomial in the subsidiary variable z to a set of m data points, the two-dimensional coefficients are found by fitting a polynomial in the subsidiary variables v and w to a set of values at the m × n kernel nodes. The following example, for a bivariate polynomial of total degree 3, m = 7, and n = 5, illustrates the process, which parallels the process for the one-dimensional case, above. [19] The rectangular kernel of 35 data values, d1 − d35, becomes a vector when the rows are placed one after another. The Jacobian has 10 columns, one for each of the parameters a00 − a03, and 35 rows, one for each pair of v and w values.
Each row contains the monomials of total degree up to 3 in v and w: 1, v, v², v³, w, vw, v²w, w², vw², and w³. The convolution coefficients are calculated as

{\displaystyle \mathbf {C} =\left(\mathbf {J} ^{\mathsf {T}}\mathbf {J} \right)^{-1}\mathbf {J} ^{\mathsf {T}}}

The first row of C contains 35 convolution coefficients, which can be multiplied with the 35 data values, respectively, to obtain the polynomial coefficient {\displaystyle a_{00}}, which is the smoothed value at the central node of the kernel (i.e. at the 18th node of the above table). Similarly, other rows of C can be multiplied with the 35 values to obtain other polynomial coefficients, which, in turn, can be used to obtain smoothed values and different smoothed partial derivatives at different nodes. Nikitas and Pappa-Louisi showed that, depending on the format of the polynomial used, the quality of smoothing may vary significantly. [20] They recommend using a polynomial of the form

{\displaystyle Y=\sum _{i=0}^{p}\sum _{j=0}^{q}a_{ij}v^{i}w^{j}}

because such polynomials can achieve good smoothing both in the central and in the near-boundary regions of a kernel, and therefore they can be confidently used in smoothing both at the internal and at the near-boundary data points of a sampled domain. In order to avoid ill-conditioning when solving the least-squares problem, p < m and q < n. For software that calculates the two-dimensional coefficients and for a database of such C's, see the section on multi-dimensional convolution coefficients, below.

The idea of two-dimensional convolution coefficients can be extended to higher spatial dimensions as well, in a straightforward manner, [17][21] by arranging the multidimensional distribution of kernel nodes in a single row. Following the aforementioned finding of Nikitas and Pappa-Louisi [20] in two-dimensional cases, usage of the following form of polynomial is recommended in multidimensional cases:

{\displaystyle Y=\sum _{i_{1}=0}^{p_{1}}\cdots \sum _{i_{D}=0}^{p_{D}}a_{i_{1}\cdots i_{D}}\,u_{1}^{i_{1}}\cdots u_{D}^{i_{D}}}

where D is the dimension of the space, the a's are the polynomial coefficients, and the u's are the coordinates in the different spatial directions. Algebraic expressions for partial derivatives of any order, whether mixed or otherwise, can be easily derived from the above expression. [21] Note that C depends on the manner in which the kernel nodes are arranged in a row and on the manner in which the different terms of the expanded form of the above polynomial are arranged when preparing the Jacobian.

Accurate computation of C in multidimensional cases becomes challenging, as the precision of the standard floating-point numbers available in computer programming languages is no longer sufficient. The insufficient precision causes the floating-point truncation errors to become comparable to the magnitudes of some elements of C, which, in turn, severely degrades its accuracy and renders it useless. Chandra Shekhar has brought forth two open-source software packages, Advanced Convolution Coefficient Calculator (ACCC) and Precise Convolution Coefficient Calculator (PCCC), which handle these accuracy issues adequately. ACCC performs the computation iteratively, using floating-point numbers. [22] The precision of the floating-point numbers is gradually increased in each iteration, using GNU MPFR. Once the C's obtained in two consecutive iterations agree to a pre-specified number of significant digits, convergence is assumed to have been reached. If that number is sufficiently large, the computation yields a highly accurate C. PCCC employs rational-number calculations, using the GNU Multiple Precision Arithmetic Library, and yields a fully accurate C in rational-number format. [23]
Nikitas and Pappa-Louisi showed that, depending on the form of the polynomial used, the quality of smoothing may vary significantly. [ 20 ] They recommend using a polynomial of the form {\displaystyle p(v,w)=\sum _{i=0}^{p}\sum _{j=0}^{q}a_{ij}v^{i}w^{j}} because such polynomials can achieve good smoothing both in the central and in the near-boundary regions of a kernel, and therefore they can be confidently used in smoothing both at the internal and at the near-boundary data points of a sampled domain. In order to avoid ill-conditioning when solving the least-squares problem, p < m and q < n. For software that calculates the two-dimensional coefficients and for a database of such C 's, see the section on multi-dimensional convolution coefficients, below. The idea of two-dimensional convolution coefficients can be extended to higher spatial dimensions as well, in a straightforward manner, [ 17 ] [ 21 ] by arranging the multidimensional distribution of kernel nodes in a single row. Following the aforementioned finding by Nikitas and Pappa-Louisi [ 20 ] in two-dimensional cases, use of the following form of the polynomial is recommended in multidimensional cases: {\displaystyle p(u_{1},\dots ,u_{D})=\sum _{i_{1}=0}^{n_{1}}\cdots \sum _{i_{D}=0}^{n_{D}}a_{i_{1}\cdots i_{D}}\,u_{1}^{i_{1}}\cdots u_{D}^{i_{D}}} where D is the dimension of the space, the a 's are the polynomial coefficients, and the u 's are the coordinates in the different spatial directions. Algebraic expressions for partial derivatives of any order, be they mixed or otherwise, can be easily derived from the above expression. [ 21 ] Note that C depends on the manner in which the kernel nodes are arranged in a row and on the manner in which the different terms of the expanded form of the above polynomial are arranged, when preparing the Jacobian. Accurate computation of C in multidimensional cases becomes challenging, as the precision of the standard floating-point numbers available in computer programming languages is no longer sufficient. Insufficient precision causes the floating-point truncation errors to become comparable to the magnitudes of some elements of C, which, in turn, severely degrades its accuracy and renders it useless. Chandra Shekhar has released two open-source programs, Advanced Convolution Coefficient Calculator (ACCC) and Precise Convolution Coefficient Calculator (PCCC) , which handle these accuracy issues adequately. ACCC performs the computation iteratively using floating-point numbers. [ 22 ] The precision of the floating-point numbers is gradually increased in each iteration, by using GNU MPFR . Once the C 's obtained in two consecutive iterations agree to a pre-specified number of significant digits, convergence is assumed to have been reached. If that number is sufficiently large, the computation yields a highly accurate C . PCCC employs rational-number calculations, using the GNU Multiple Precision Arithmetic Library , and yields a fully accurate C in rational-number format. [ 23 ] In the end, these rational numbers are converted into floating-point numbers, rounded to a pre-specified number of significant digits. A database of C 's calculated using ACCC, for symmetric kernels and both symmetric and asymmetric polynomials, on unity-spaced kernel nodes, in 1-, 2-, 3-, and 4-dimensional spaces, is made available. [ 24 ] Chandra Shekhar has also laid out a mathematical framework that describes the use of C calculated on unity-spaced kernel nodes to perform filtering and partial differentiation (of various orders) on non-uniformly spaced kernel nodes, [ 21 ] allowing use of the C 's provided in the aforementioned database. Although this method yields approximate results only, they are acceptable in most engineering applications, provided that the non-uniformity of the kernel nodes is weak. It is inevitable that the signal will be distorted in the convolution process. From property 3 above, when data which has a peak is smoothed the peak height will be reduced and the half-width will be increased. Both the extent of the distortion and the S/N ( signal-to-noise ratio ) improvement decrease as the degree of the polynomial increases and increase as the width, m, of the convolution function increases. For example, if the noise in all data points is uncorrelated and has a constant standard deviation , σ, the standard deviation of the noise will be decreased by convolution with an m-point smoothing function to {\displaystyle \sigma {\sqrt {\tfrac {1}{m}}}} for a moving average (polynomial degree 0 or 1) and to {\displaystyle \sigma {\sqrt {\tfrac {3(3m^{2}-7)}{4m(m^{2}-4)}}}} for a quadratic or cubic smoothing function. [ 26 ] [ note 5 ] These functions are shown in the plot at the right. For example, with a 9-point linear function (moving average) two thirds of the noise is removed, and with a 9-point quadratic/cubic smoothing function only about half the noise is removed. Most of the noise remaining is low-frequency noise (see Frequency characteristics of convolution filters , below). Although the moving-average function gives better noise reduction, it is unsuitable for smoothing data which has curvature over m points. A quadratic filter function is unsuitable for getting a derivative of a data curve with an inflection point because a quadratic polynomial does not have one. The optimal choice of polynomial order and number of convolution coefficients will be a compromise between noise reduction and distortion. [ 28 ] One way to mitigate distortion and improve noise removal is to use a filter of smaller width and perform more than one convolution with it. For two passes of the same filter this is equivalent to one pass of a filter obtained by convolution of the original filter with itself. [ 29 ] For example, two passes of the filter with coefficients (1/3, 1/3, 1/3) are equivalent to one pass of the filter with coefficients (1/9, 2/9, 3/9, 2/9, 1/9). The disadvantage of multipassing is that the equivalent filter width for n passes of an m-point function is n(m − 1) + 1, so multipassing is subject to greater end-effects. Nevertheless, multipassing has been used to great advantage. For instance, some 40–80 passes on data with a signal-to-noise ratio of only 5 gave useful results. [ 30 ] The noise reduction formulae given above do not apply because correlation between calculated data points increases with each pass.
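The rule behind these numbers is that convolution with coefficients C reduces the standard deviation of uncorrelated noise by the factor √(Σ Cᵢ²). A small Python check follows, using the standard tabulated 9-point coefficients, together with the two-pass equivalence mentioned above:

```python
import numpy as np

def noise_factor(c):
    """Factor by which the std of uncorrelated noise is reduced: sqrt(sum c_i^2)."""
    return np.sqrt(np.sum(np.asarray(c) ** 2))

ma9 = np.full(9, 1 / 9)                                          # 9-point moving average
qc9 = np.array([-21, 14, 39, 54, 59, 54, 39, 14, -21]) / 231.0   # 9-point quadratic/cubic
print(noise_factor(ma9))   # ~0.33: about two thirds of the noise removed
print(noise_factor(qc9))   # ~0.50: only about half the noise removed

# two passes of a filter equal one pass of the filter convolved with itself
ma3 = np.full(3, 1 / 3)
print(np.convolve(ma3, ma3))   # [1/9, 2/9, 3/9, 2/9, 1/9]
```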
Convolution maps to multiplication in the Fourier co-domain . The discrete Fourier transform of a (symmetric) convolution filter is a real-valued function which can be represented as {\displaystyle \mathrm {FT} (\theta )=C_{0}+2\sum _{j=1}^{(m-1)/2}C_{j}\cos(j\theta )} where θ runs from 0 to 180 degrees , after which the function merely repeats itself. The plot for a 9-point quadratic/cubic smoothing function is typical. At very low angles, the plot is almost flat, meaning that low-frequency components of the data will be virtually unchanged by the smoothing operation. As the angle increases the value decreases, so that higher-frequency components are more and more attenuated. This shows that the convolution filter can be described as a low-pass filter : the noise that is removed is primarily high-frequency noise, while low-frequency noise passes through the filter. [ 31 ] Some high-frequency noise components are attenuated more than others, as shown by undulations in the Fourier transform at large angles. This can give rise to small oscillations in the smoothed data, [ 32 ] and to phase reversal, i.e., high-frequency oscillations in the data get inverted by Savitzky–Golay filtering. [ 33 ] Convolution affects the correlation between errors in the data. The effect of convolution can be expressed as a linear transformation. By the law of error propagation , the variance-covariance matrix of the data, A , will be transformed into B according to {\displaystyle \mathbf {B} =\mathbf {C} \mathbf {A} \mathbf {C} ^{\mathsf {T}}} To see how this applies in practice, consider the effect of a 3-point moving average on the first three calculated points, Y2 to Y4, assuming that the data points have equal variance and that there is no correlation between them. A will be an identity matrix multiplied by a constant, σ², the variance at each point. In this case the correlation coefficient between calculated points i and j, separated by k = |i − j| points, is ρ = (m − k)/m for k < m, giving 2/3 for adjacent points and 1/3 for points two apart. In general, the calculated values are correlated even when the observed values are not correlated. The correlation extends over m − 1 calculated points at a time. [ 34 ] To illustrate the effect of multipassing on the noise and correlation of a set of data, consider the effects of a second pass of a 3-point moving average filter. The second pass [ note 6 ] is equivalent to one pass of the 5-point filter (1/9, 2/9, 3/9, 2/9, 1/9) obtained above. After two passes, the standard deviation of the central point has decreased to {\displaystyle {\sqrt {\tfrac {19}{81}}}\sigma =0.48\sigma } , compared to 0.58σ for one pass. The noise reduction is a little less than would be obtained with one pass of a 5-point moving average which, under the same conditions, would result in the smoothed points having the smaller standard deviation of 0.45σ. Correlation now extends over a span of 4 sequential points, with correlation coefficients ρ = 16/19, 10/19, 4/19 and 1/19 for separations of 1, 2, 3 and 4 points, respectively. The advantage obtained by performing two passes with the narrower smoothing function is that it introduces less distortion into the calculated data.
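The error-propagation relation B = CACᵀ is easy to verify numerically. The sketch below builds the 3-point moving average as an explicit linear transformation and recovers the correlation coefficients 2/3 and 1/3 quoted above; the setup (unit variance, uncorrelated data) matches the assumptions in the text.

```python
import numpy as np

m, npts = 3, 7
# convolution as a linear transformation: each output is a 3-point average
C = np.zeros((npts - m + 1, npts))
for i in range(npts - m + 1):
    C[i, i:i + m] = 1 / m

A = np.eye(npts)              # uncorrelated data, unit variance
B = C @ A @ C.T               # variance-covariance matrix of the smoothed points
rho = B / np.sqrt(np.outer(np.diag(B), np.diag(B)))
print(np.diag(B)[0])          # 1/3: variance reduced, std = 0.58 sigma
print(rho[0, 1], rho[0, 2])   # 2/3 and 1/3: correlation over m - 1 = 2 points
```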
Compared with other smoothing filters, e.g. convolution with a Gaussian or multi-pass moving-average filtering, Savitzky–Golay filters have an initially flatter response and a sharper cutoff in the frequency domain, especially for high orders of the fit polynomial (see frequency characteristics ). For data with limited signal bandwidth , this means that Savitzky–Golay filtering can provide a better signal-to-noise ratio than many other filters; e.g., peak heights of spectra are better preserved than for other filters with similar noise suppression. Disadvantages of the Savitzky–Golay filters are comparably poor suppression of some high frequencies (poor stopband suppression) and artifacts when using polynomial fits for the first and last points. [ 16 ] Alternative smoothing methods that share the advantages of Savitzky–Golay filters and mitigate at least some of their disadvantages are Savitzky–Golay filters with properly chosen alternative fitting weights , Whittaker–Henderson smoothing and the Hodrick–Prescott filter (equivalent methods closely related to smoothing splines ), and convolution with a windowed sinc function . [ 16 ] Consider a set of data points {\displaystyle (x_{j},y_{j})_{1\leq j\leq n}} . The Savitzky–Golay tables refer to the case where the step {\displaystyle x_{j}-x_{j-1}} is constant, h . Examples of the use of the so-called convolution coefficients, with a cubic polynomial and a window size, m , of 5 points are as follows. Smoothing: {\displaystyle Y_{j}={\tfrac {1}{35}}(-3y_{j-2}+12y_{j-1}+17y_{j}+12y_{j+1}-3y_{j+2})} ; first derivative: {\displaystyle Y_{j}'={\tfrac {1}{12h}}(y_{j-2}-8y_{j-1}+8y_{j+1}-y_{j+2})} . Selected values of the convolution coefficients for polynomials of degree 1, 2, 3, 4 and 5 are given in the following tables. [ note 7 ] The values were calculated using the PASCAL code provided in Gorry. [ 15 ]
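As a usage illustration, the 5-point cubic coefficients above can be applied with an ordinary convolution; note the kernel reversal, since convolution flips the kernel while the Savitzky–Golay sum is a correlation. The test signal and step size here are arbitrary.

```python
import numpy as np

y = np.sin(np.linspace(0, 3, 21)) + 0.05 * np.random.default_rng(0).normal(size=21)
h = 3 / 20                                          # constant step

smooth = np.array([-3, 12, 17, 12, -3]) / 35.0      # 5-point cubic smoothing
deriv = np.array([1, -8, 0, 8, -1]) / 12.0          # 5-point cubic first derivative

# reverse the kernels so that np.convolve computes the correlation sum above
Y = np.convolve(y, smooth[::-1], mode="valid")      # smoothed interior points
dY = np.convolve(y, deriv[::-1], mode="valid") / h  # derivative at interior points
print(Y.shape)   # (17,): (m - 1)/2 = 2 points lost at each end
```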
https://en.wikipedia.org/wiki/Numerical_smoothing_and_differentiation
The convection–diffusion equation describes the flow of heat, particles, or other physical quantities in situations where there is both diffusion and convection or advection . For information about the equation, its derivation, and its conceptual importance and consequences, see the main article convection–diffusion equation . This article describes how to use a computer to calculate an approximate numerical solution of the discretized equation, in a time-dependent situation. In order to be concrete, this article focuses on heat flow , an important example where the convection–diffusion equation applies. However, the same mathematical analysis applies equally well to other situations, such as particle flow. A general discontinuous finite element formulation is needed. [ 1 ] The unsteady convection–diffusion problem is considered; first, the known temperature T is expanded in a Taylor series with respect to time, taking into account its three components. Next, using the convection–diffusion equation, an equation is obtained by differentiating it. The following convection–diffusion equation is considered here [ 2 ] {\displaystyle c\rho \left[{\frac {\partial T(x,t)}{\partial t}}+\epsilon u{\frac {\partial T(x,t)}{\partial x}}\right]=\lambda {\frac {\partial ^{2}T(x,t)}{\partial x^{2}}}+Q(x,t)} In the above equation, the four terms represent transience , convection , diffusion and a source term respectively; here T is the temperature, t the time, x the spatial coordinate, u the velocity, λ the thermal conductivity, c the specific heat, ρ the density, ε a dimensionless coefficient on the convective term, and Q(x, t) the source term. The equation above can be written in the form {\displaystyle {\frac {\partial T}{\partial t}}=a{\frac {\partial ^{2}T}{\partial x^{2}}}-\epsilon u{\frac {\partial T}{\partial x}}+{\frac {Q}{c\rho }}} where a = λ/(cρ) is the diffusion coefficient. A solution of the transient convection–diffusion equation can be approximated through a finite difference approach, known as the finite difference method (FDM). An explicit scheme of FDM has been considered and stability criteria are formulated. In this scheme, temperature is totally dependent on the old temperature (the initial conditions) and θ , a weighting parameter between 0 and 1. Substitution of θ = 0 gives the explicit discretization of the unsteady conductive heat transfer equation.
{\displaystyle {\frac {T_{i}^{f}-T_{i}^{f-1}}{\Delta t}}=a{\frac {T_{i-1}^{f-1}-2T_{i}^{f-1}+T_{i+1}^{f-1}}{h^{2}}}-\epsilon u{\frac {T_{i+1}^{f-1}-T_{i-1}^{f-1}}{2h}}+{\frac {Q_{i}^{f-1}}{c\rho }}} Solving for the temperature at the new time level gives {\displaystyle T_{i}^{f}=\left(1-{\frac {2a\Delta t}{h^{2}}}\right)T_{i}^{f-1}+\left({\frac {a\Delta t}{h^{2}}}+{\frac {\epsilon u\Delta t}{2h}}\right)T_{i-1}^{f-1}+\left({\frac {a\Delta t}{h^{2}}}-{\frac {\epsilon u\Delta t}{2h}}\right)T_{i+1}^{f-1}+{\frac {Q_{i}^{f-1}}{c\rho }}\Delta t} The inequalities {\displaystyle {\begin{aligned}h&<{\frac {2a}{|\epsilon u|}},&\Delta t&<{\frac {a}{\epsilon ^{2}u^{2}/4+a^{2}/h^{2}}}<{\frac {h^{2}}{2a}}<{\frac {h}{|\epsilon u|}}\end{aligned}}} follow from setting {\displaystyle Q_{i}^{f}\to 0} and requiring the ansatz {\displaystyle T_{i}^{f}\to T_{0}^{f}\exp({\sqrt {-1}}\,i\theta )} not to gain amplitude as f increases, for any {\displaystyle \theta } . They set a stringent maximum limit on the time step, which represents a serious limitation for the explicit scheme. This method is not recommended for general transient problems because the maximum possible time step has to be reduced as the square of h . In the implicit scheme, the temperature depends on values at the new time level t + Δ t . With the implicit scheme all coefficients are positive, which makes it unconditionally stable for any size of time step. This scheme is preferred for general-purpose transient calculations because of its robustness and unconditional stability. [ 3 ] The disadvantage of this method is that more procedures are involved and, because of the larger Δ t , the truncation error is also larger. In the Crank–Nicolson method , the temperature is equally dependent on t and t + Δ t . It is a second-order method in time and is generally used in diffusion problems. {\displaystyle \Delta t<{\frac {h^{2}}{a}}} This time-step limitation is less restrictive than that of the explicit method . The Crank–Nicolson method is based on central differencing and hence is second-order accurate in time. [ 4 ] Unlike the pure conduction equation (for which a finite element solution is routine), a numerical solution for the convection–diffusion equation has to deal with the convection part of the governing equation in addition to diffusion. When the Péclet number (Pe) exceeds a critical value, spurious oscillations appear in space; this problem is not unique to finite elements, as all other discretization techniques have the same difficulties. In a finite difference formulation, the spatial oscillations are reduced by a family of discretization schemes such as the upwind scheme . [ 5 ] In this method, the basic shape function is modified to obtain the upwinding effect. This method is an extension of the Runge–Kutta discontinuous Galerkin approach to a convection–diffusion equation. For time-dependent equations, a different kind of approach is followed. The finite difference scheme has an equivalent in the finite element method ( Galerkin method ). Another similar method is the characteristic Galerkin method (which uses an implicit algorithm). For scalar variables, the above two methods are identical.
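A minimal Python sketch of the explicit scheme above follows, with the stated stability limits enforced. All parameter values are illustrative, and Q in the code already stands for Q/(cρ).

```python
import numpy as np

# illustrative parameters: a = lambda/(c*rho); eps_u is the product epsilon*u
a, eps_u, h = 1.0e-4, 0.01, 0.01
assert h < 2 * a / abs(eps_u)                     # grid condition from above
dt = 0.5 * a / (eps_u**2 / 4 + (a / h) ** 2)      # half the time-step bound

x = np.arange(0.0, 1.0 + h, h)
T = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)     # initial temperature pulse
Q = np.zeros_like(x)                              # source term, already divided by c*rho

for _ in range(500):
    Tn = T.copy()
    # explicit update: central differences for diffusion and convection
    T[1:-1] = (Tn[1:-1]
               + a * dt / h**2 * (Tn[:-2] - 2.0 * Tn[1:-1] + Tn[2:])
               - eps_u * dt / (2.0 * h) * (Tn[2:] - Tn[:-2])
               + dt * Q[1:-1])
    T[0] = T[-1] = 0.0                            # fixed-temperature boundaries
```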
https://en.wikipedia.org/wiki/Numerical_solution_of_the_convection–diffusion_equation
Numerical weather prediction ( NWP ) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes , weather satellites and other observing systems as inputs. Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change . The improvements made to regional models have allowed significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires . Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions. A more fundamental problem lies in the chaotic nature of the partial differential equations that describe the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). Present understanding is that this chaotic behavior limits accurate forecasts to about 14 days even with accurate input data and a flawless model. In addition, the partial differential equations used in the model need to be supplemented with parameterizations for solar radiation , moist processes (clouds and precipitation ), heat exchange , soil, vegetation, surface water, and the effects of terrain. In an effort to quantify the large amount of inherent uncertainty remaining in numerical predictions, ensemble forecasts have been used since the 1990s to help gauge the confidence in the forecast, and to obtain useful results farther into the future than otherwise possible. This approach analyzes multiple forecasts created with an individual forecast model or multiple models. The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson , who used procedures originally developed by Vilhelm Bjerknes [ 1 ] to produce by hand a six-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. [ 2 ] [ 1 ] [ 3 ] It was not until the advent of the computer and computer simulations that computation time was reduced to less than the forecast period itself. The ENIAC was used to create the first weather forecasts via computer in 1950, based on a highly simplified approximation to the atmospheric governing equations. 
[ 4 ] [ 5 ] In 1954, Carl-Gustaf Rossby 's group at the Swedish Meteorological and Hydrological Institute used the same model to produce the first operational forecast (i.e., a routine prediction for practical use). [ 6 ] Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit (JNWPU), a joint project by the U.S. Air Force , Navy and Weather Bureau . [ 7 ] In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model . [ 8 ] [ 9 ] Following Phillips' work, several groups began working to create general circulation models . [ 10 ] The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory . [ 11 ] As computers have become more powerful, the size of the initial data sets has increased and newer atmospheric models have been developed to take advantage of the added available computing power. These newer models include more physical processes in the simplifications of the equations of motion in numerical simulations of the atmosphere. [ 6 ] In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models , followed by the United Kingdom in 1972 and Australia in 1977. [ 1 ] [ 12 ] The development of limited-area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s. [ 13 ] [ 14 ] By the early 1980s models began to include the interactions of soil and vegetation with the atmosphere, which led to more realistic forecasts. [ 15 ] The output of forecast models based on atmospheric dynamics is unable to resolve some details of the weather near the Earth's surface. As such, a statistical relationship between the output of a numerical weather model and the ensuing conditions at the ground was developed in the 1970s and 1980s, known as model output statistics (MOS). [ 16 ] [ 17 ] Starting in the 1990s, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. [ 18 ] [ 19 ] [ 20 ] The atmosphere is a fluid . As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization . On land, terrain maps available at resolutions down to 1 kilometer (0.6 mi) globally are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation. [ 21 ] One main source of input is observations from devices (called radiosondes ) in weather balloons which rise through the troposphere and well into the stratosphere and which measure various atmospheric parameters and transmit them to a fixed receiver. [ 22 ] Another main input is data from weather satellites . The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide.
Stations either report hourly in METAR reports, [ 23 ] or every six hours in SYNOP reports. [ 24 ] These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms. [ 25 ] The data are then used in the model as the starting point for a forecast. [ 26 ] Commercial aircraft provide pilot reports along travel routes [ 27 ] and ship reports along shipping routes. [ 28 ] Commercial aircraft also submit automatic reports via the WMO's Aircraft Meteorological Data Relay (AMDAR) system, using VHF radio to ground stations or satellites. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones . [ 29 ] [ 30 ] Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent. [ 31 ] Sea ice began to be initialized in forecast models in 1971. [ 32 ] Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific. [ 33 ] An atmospheric model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations , used to predict the future state of the atmosphere. [ 34 ] These equations, along with the ideal gas law , are used to evolve the density , pressure , and potential temperature scalar fields and the air velocity (wind) vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some primitive-equation high-resolution models as well. [ 35 ] The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, [ 36 ] with the exception of a few idealized cases. [ 37 ] Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models and almost all regional models use finite difference methods for all three spatial dimensions, while other global models and a few regional models use spectral methods for the horizontal dimensions and finite-difference methods in the vertical. [ 36 ] These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future; the time increment for this prediction is called a time step . This future atmospheric state is then used as the starting point for another application of the predictive equations to find new rates of change, and these new rates of change predict the atmosphere at a yet further time step into the future. This time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability . [ 38 ] Time steps for global models are on the order of tens of minutes, [ 39 ] while time steps for regional models are between one and four minutes. [ 40 ]
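The time-stepping loop just described can be sketched schematically. Here `tendencies` is a stand-in for the model's full predictive equations, and real models use far more elaborate integration schemes than this simple forward step, with the time step tied to the grid spacing for stability.

```python
def integrate(state, tendencies, dt, t_forecast):
    """Schematic NWP time stepping: rates of change computed from the
    current state advance the state one time step; repeat until the
    desired forecast time is reached."""
    t = 0.0
    while t < t_forecast:
        rates = tendencies(state)                        # rates of change now
        state = {k: v + dt * rates[k] for k, v in state.items()}
        t += dt                                          # advance one time step
    return state
```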
The global models are run at varying times into the future. The UK Met Office 's Unified Model is run six days into the future, [ 41 ] while the European Centre for Medium-Range Weather Forecasts ' Integrated Forecast System and Environment Canada 's Global Environmental Multiscale Model both run out to ten days into the future, [ 42 ] and the Global Forecast System model run by the Environmental Modeling Center is run sixteen days into the future. [ 43 ] The visual output produced by a model solution is known as a prognostic chart , or prog . [ 44 ] Some meteorological processes are too small-scale or too complex to be explicitly included in numerical weather prediction models. Parameterization is a procedure for representing these processes by relating them to variables on the scales that the model resolves. For example, the gridboxes in weather and climate models have sides that are between 5 kilometers (3 mi) and 300 kilometers (200 mi) in length. A typical cumulus cloud has a scale of less than 1 kilometer (0.6 mi), and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized, by schemes of varying sophistication. In the earliest models, if a column of air within a model gridbox was conditionally unstable (essentially, the bottom was warmer and moister than the top) and the water vapor content at any point within the column became saturated, then it would be overturned (the warm, moist air would begin rising), and the air in that vertical column mixed. More sophisticated schemes recognize that only some portions of the box might convect and that entrainment and other processes occur. Weather models that have gridboxes with sizes between 5 and 25 kilometers (3 and 16 mi) can explicitly represent convective clouds, although they need to parameterize cloud microphysics , which occurs at a smaller scale. [ 45 ] The formation of large-scale ( stratus -type) clouds is more physically based; they form when the relative humidity reaches some prescribed value. The cloud fraction can be related to this critical value of relative humidity (a toy example is sketched below). [ 46 ] The amount of solar radiation reaching the ground, as well as the formation of cloud droplets, occurs on the molecular scale, and so these processes must be parameterized before they can be included in the model. Atmospheric drag produced by mountains must also be parameterized, as the limitations in the resolution of elevation contours produce significant underestimates of the drag. [ 47 ] This method of parameterization is also done for the surface flux of energy between the ocean and the atmosphere, in order to determine realistic sea surface temperatures and the type of sea ice found near the ocean's surface. [ 48 ] Sun angle as well as the impact of multiple cloud layers is taken into account. [ 49 ] Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere, and thus it is important to parameterize their contribution to these processes. [ 50 ] Within air quality models, parameterizations take into account atmospheric emissions from multiple relatively tiny sources (e.g. roads, fields, factories) within specific grid boxes. [ 51 ]
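The relative-humidity-based cloud-cover scheme mentioned above can be illustrated with a toy diagnostic. The quadratic ramp used here is one common (Sundqvist-style) choice, not the parameterization of any particular operational model, and the critical value of 0.8 is an illustrative assumption.

```python
def cloud_fraction(rh, rh_crit=0.8):
    """Toy large-scale cloud parameterization: no cloud below a critical
    relative humidity, full cover at saturation, quadratic ramp between."""
    if rh <= rh_crit:
        return 0.0
    return min(1.0, ((rh - rh_crit) / (1.0 - rh_crit)) ** 2)

print(cloud_fraction(0.9))   # 0.25 for rh_crit = 0.8
```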
The horizontal domain of a model is either global , covering the entire Earth, or regional , covering only part of the Earth. Regional models (also known as limited-area models, or LAMs) allow for the use of finer grid spacing than global models because the available computational resources are focused on a specific area instead of being spread over the globe. This allows regional models to explicitly resolve smaller-scale meteorological phenomena that cannot be represented on the coarser grid of a global model. Regional models use a global model to specify conditions at the edge of their domain ( boundary conditions ) in order to allow systems from outside the regional model domain to move into its area. Uncertainty and errors within regional models are introduced by the global model used for the boundary conditions of the edge of the regional model, as well as by errors attributable to the regional model itself. [ 52 ] The vertical coordinate is handled in various ways. Lewis Fry Richardson's 1922 model used geometric height ( z ) as the vertical coordinate. Later models replaced the geometric z coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables , greatly simplifying the primitive equations. [ 53 ] This correspondence between coordinate systems can be made since pressure decreases with height through the Earth's atmosphere . [ 54 ] The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar (about 5,500 m (18,000 ft)) level, [ 4 ] and thus was essentially two-dimensional. High-resolution models, also called mesoscale models , such as the Weather Research and Forecasting model tend to use normalized pressure coordinates referred to as sigma coordinates (in the simplest case, σ = p/p_surface). [ 55 ] This coordinate system receives its name from the independent variable σ used to scale atmospheric pressures with respect to the pressure at the surface, and in some cases also with the pressure at the top of the domain. [ 56 ] Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions, statistical methods have been developed to attempt to correct the forecasts. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), [ 57 ] and were developed by the National Weather Service for their suite of weather forecasting models in the late 1960s. [ 16 ] [ 58 ] Model output statistics differ from the perfect prog technique, which assumes that the output of numerical weather prediction guidance is perfect. [ 59 ] MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as for model biases. Because MOS is run after its respective global or regional model, its production is known as post-processing. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several-hour period, precipitation amount expected, chance that the precipitation will be frozen in nature, chance for thunderstorms, cloudiness, and surface winds. [ 60 ]
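In its simplest form, a MOS-style correction is just a regression from past model output to the observed quantity. The sketch below fits a linear correction for 2 m temperature on synthetic data; real MOS uses many predictors and long training records, so this is only an illustration of the idea.

```python
import numpy as np

# synthetic training set: raw model 2 m temperature forecasts vs. observations
rng = np.random.default_rng(0)
model_t = rng.uniform(0, 30, 200)                    # model-forecast temperatures
obs_t = 0.9 * model_t + 1.5 + rng.normal(0, 1, 200)  # observations with model bias

# fit obs = a * model + b, the simplest possible MOS-style correction
A = np.vstack([model_t, np.ones_like(model_t)]).T
(a, b), *_ = np.linalg.lstsq(A, obs_t, rcond=None)

print(f"corrected forecast for raw model output 20.0: {a * 20.0 + b:.1f}")
```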
In 1963, Edward Lorenz discovered the chaotic nature of the fluid dynamics equations involved in weather forecasting. [ 61 ] Extremely small errors in temperature, winds, or other initial inputs given to numerical models will amplify and double every five days, [ 61 ] making it impossible for long-range forecasts, those made more than two weeks in advance, to predict the state of the atmosphere with any degree of forecast skill . Furthermore, existing observation networks have poor coverage in some regions (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations , exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real-time, even with the use of supercomputers. [ 62 ] These uncertainties limit forecast model accuracy to about five or six days into the future. [ 63 ] [ 64 ] Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed using an ensemble of stochastic Monte Carlo simulations to produce means and variances for the state of the atmosphere. [ 65 ] Although this early example of an ensemble showed skill, in 1974 Cecil Leith showed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere. [ 66 ] Since the 1990s, ensemble forecasts have been used operationally (as routine forecasts) to account for the stochastic nature of weather processes – that is, to resolve their inherent uncertainty. This method involves analyzing multiple forecasts created with an individual forecast model by using different physical parametrizations or varying initial conditions. [ 62 ] Starting in 1992 with ensemble forecasts prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction , model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. [ 18 ] [ 19 ] [ 20 ] The ECMWF model, the Ensemble Prediction System, [ 19 ] uses singular vectors to simulate the initial probability density , while the NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding . [ 18 ] [ 20 ] The UK Met Office runs global and regional ensemble forecasts where perturbations to initial conditions are used by 24 ensemble members in the Met Office Global and Regional Ensemble Prediction System (MOGREPS) to produce 24 different forecasts. [ 67 ] In a single model-based approach, the ensemble forecast is usually evaluated in terms of an average of the individual forecasts concerning one forecast variable, as well as the degree of agreement between various forecasts within the ensemble system, as represented by their overall spread. Ensemble spread is diagnosed through tools such as spaghetti diagrams , which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram , which shows the dispersion in the forecast of one quantity for one specific location.
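For one forecast variable at one location, the ensemble mean and spread used in such diagnostics are straightforward to compute. The 24-member ensemble below is synthetic, with the member count chosen only to echo the MOGREPS example above.

```python
import numpy as np

# synthetic 24-member ensemble of 2 m temperature forecasts (deg C) at one site
rng = np.random.default_rng(1)
members = 15.0 + rng.normal(scale=1.5, size=24)

mean = members.mean()            # ensemble-mean forecast
spread = members.std(ddof=1)     # spread: disagreement among the members
print(f"forecast {mean:.1f} C, spread {spread:.1f} C")
```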
It is common for the ensemble spread to be too small to include the weather that actually occurs, which can lead to forecasters misdiagnosing model uncertainty; [ 68 ] this problem becomes particularly severe for forecasts of the weather about ten days in advance. [ 69 ] When ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters perceive more confidence in the ensemble mean, and the forecast in general. [ 68 ] Despite this perception, a spread-skill relationship is often weak or not found, as spread-error correlations are normally less than 0.6, and only under special circumstances range between 0.6 and 0.7. [ 70 ] The relationship between ensemble spread and forecast skill varies substantially depending on such factors as the forecast model and the region for which the forecast is made. [ 70 ] In the same way that many forecasts from a single model can be used to form an ensemble, multiple models may also be combined to produce an ensemble forecast. This approach is called multi-model ensemble forecasting , and it has been shown to improve forecasts when compared to a single model-based approach. [ 71 ] Models within a multi-model ensemble can be adjusted for their various biases, a process known as superensemble forecasting . This type of forecast significantly reduces errors in model output. [ 72 ] Air quality forecasting attempts to predict when the concentrations of pollutants will attain levels that are hazardous to public health. The concentration of pollutants in the atmosphere is determined by their transport , or mean velocity of movement through the atmosphere, their diffusion , chemical transformation , and ground deposition . [ 73 ] In addition to pollutant source and terrain information, these models require data about the state of the fluid flow in the atmosphere to determine its transport and diffusion. [ 74 ] Meteorological conditions such as thermal inversions can prevent surface air from rising, trapping pollutants near the surface, [ 75 ] which makes accurate forecasts of such events crucial for air quality modeling. Urban air quality models require a very fine computational mesh, requiring the use of high-resolution mesoscale weather models; in spite of this, the quality of numerical weather guidance is the main uncertainty in air quality forecasts. [ 74 ] A General Circulation Model (GCM) is a mathematical model that can be used in computer simulations of the global circulation of a planetary atmosphere or ocean. An atmospheric general circulation model (AGCM) is essentially the same as a global numerical weather prediction model, and some (such as the one used in the UK Unified Model) can be configured for both short-term weather forecasts and longer-term climate predictions. Along with sea ice and land-surface components, AGCMs and oceanic GCMs (OGCMs) are key components of global climate models, and are widely applied for understanding the climate and projecting climate change . For aspects of climate change, a range of man-made chemical emission scenarios can be fed into the climate models to see how an enhanced greenhouse effect would modify the Earth's climate. [ 76 ] Versions designed for climate applications with time scales of decades to centuries were originally created in 1969 by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey .
[ 77 ] When run for multiple decades, computational limitations mean that the models must use a coarse grid that leaves smaller-scale interactions unresolved. [ 78 ] The transfer of energy between the wind blowing over the surface of an ocean and the ocean's upper layer is an important element in wave dynamics. [ 79 ] The spectral wave transport equation is used to describe the change in wave spectrum over changing topography. It simulates wave generation, wave movement (propagation within a fluid), wave shoaling , refraction , energy transfer between waves, and wave dissipation. [ 80 ] Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface. [ 81 ] Tropical cyclone forecasting also relies on data provided by numerical weather models. Three main classes of tropical cyclone guidance models exist. Statistical models are based on an analysis of storm behavior using climatology, and correlate a storm's position and date to produce a forecast that is not based on the physics of the atmosphere at the time. Dynamical models are numerical models that solve the governing equations of fluid flow in the atmosphere; they are based on the same principles as other limited-area numerical weather prediction models but may include special computational techniques such as refined spatial domains that move along with the cyclone. Models that use elements of both approaches are called statistical-dynamical models. [ 82 ] In 1978, the first hurricane-tracking model based on atmospheric dynamics , the movable fine-mesh (MFM) model, began operating. [ 13 ] Within the field of tropical cyclone track forecasting , despite the ever-improving dynamical model guidance which occurred with increased computational power, it was not until the 1980s that numerical weather prediction showed skill , and until the 1990s that it consistently outperformed statistical or simple dynamical models. [ 83 ] Predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill than dynamical guidance. [ 84 ] Because weather drifts across the world, producing forecasts a week or more in advance typically involves running a numerical prediction model for the entire planet. Agencies use various software packages to do this. The global models can be used to supply boundary conditions to higher-resolution models that provide more accurate forecasts for an area of interest, such as the country served by a government agency, or an area where military action or rescue efforts are planned. The output of higher-resolution models may be further modified by model output statistics to take into account quirky local phenomena that general models do not handle well, such as mountain waves . On a molecular scale, there are two main competing reaction processes involved in the degradation of cellulose , or wood fuels, in wildfires . When there is a low amount of moisture in a cellulose fiber, volatilization of the fuel occurs; this process will generate intermediate gaseous products that will ultimately be the source of combustion .
When moisture is present, or when enough heat is being carried away from the fiber, charring occurs. The chemical kinetics of both reactions indicate that there is a point at which the level of moisture is low enough, and/or heating rates high enough, for combustion processes to become self-sufficient. Consequently, changes in wind speed, direction, moisture, temperature, or lapse rate at different levels of the atmosphere can have a significant impact on the behavior and growth of a wildfire. Since the wildfire acts as a heat source to the atmospheric flow, the wildfire can modify local advection patterns, introducing a feedback loop between the fire and the atmosphere. [ 91 ] A simplified two-dimensional model for the spread of wildfires that used convection to represent the effects of wind and terrain, as well as radiative heat transfer as the dominant method of heat transport, led to reaction–diffusion systems of partial differential equations . [ 92 ] [ 93 ] More complex models join numerical weather models or computational fluid dynamics models with a wildfire component, which allows the feedback effects between the fire and the atmosphere to be estimated. [ 91 ] The additional complexity in the latter class of models translates to a corresponding increase in their computer power requirements. In fact, a full three-dimensional treatment of combustion via direct numerical simulation at scales relevant for atmospheric modeling is not currently practical because of the excessive computational cost such a simulation would require. Numerical weather models have limited forecast skill at spatial resolutions under 1 kilometer (0.6 mi), forcing complex wildfire models to parameterize the fire in order to calculate how the winds will be modified locally by the wildfire, and to use those modified winds to determine the rate at which the fire will spread locally. [ 94 ] [ 95 ] [ 96 ]
https://en.wikipedia.org/wiki/Numerical_weather_prediction
The numero sign or numero symbol , № (also represented as Nº , No̱ , No. , or no. ), [ 1 ] [ 2 ] is a typographic abbreviation of the word number ( s ) indicating ordinal numeration , especially in names and titles. For example, using the numero sign, the written long form of the address "Number 29 Acacia Road" is shortened to "№ 29 Acacia Rd", yet both forms are spoken long. Typographically, the numero sign combines as a single ligature the uppercase Latin letter ⟨N⟩ with a usually superscript lowercase letter ⟨o⟩ , sometimes underlined, resembling the masculine ordinal indicator ⟨º⟩ . The ligature has a code point in Unicode as a precomposed character , U+2116 № NUMERO SIGN . [ 3 ] The Oxford English Dictionary derives the numero sign from Latin numero , the ablative form of numerus ("number", with the ablative denotations of "by the number, with the number"). In Romance languages , the numero sign is understood as an abbreviation of the word for "number", e.g. Italian numero , French numéro , and Portuguese and Spanish número . [ 4 ] This article describes other typographical abbreviations for "number" in different languages, in addition to the numero sign proper. The numero sign's non-ligature substitution by the two separate letters ⟨N⟩ and ⟨o⟩ is common. A capital or lower-case "n" may be used, followed by "o.", a superscript "o", the ordinal indicator, or the degree sign; this will be understood in most languages. In Bulgarian the numero sign is often used; it is present in three widely used keyboard layouts, accessible with Shift-0 in the BDS and prBDS layouts and with Shift-3 in the Phonetic layout. In many forms of English, the non-ligature form No. is typical and is often used to abbreviate the word "number". [ 2 ] In North America, the number sign , # , is more prevalent. The ligature form does not appear on British or American QWERTY keyboards . The numero symbol is not in common use in France and does not appear on a standard AZERTY keyboard. Instead, the French Imprimerie nationale recommends the use of the form nᵒ (an "n" followed by a superscript lowercase "o"). The plural form nᵒˢ can also be used. [ 5 ] In practice, the "o" is often replaced by the degree symbol (°), which is visually similar to the superscript "o" and is easily accessible on an AZERTY keyboard. The word is "nomor" in Indonesian and "nombor" in Malaysian; "No." is therefore commonly used as an abbreviation, with standard spelling and a full stop. In Italian, the sign is usually replaced with the abbreviations "n." or "nº", the latter using a masculine ordinal indicator , rather than a superscript "o". [ 6 ] Because of more than three centuries of Spanish colonisation , the word número is found in almost all Philippine languages. "No." is its common notation in local languages as well as English. In Portugal, the similar-looking notation n.º is often used. [ 7 ] In Brazil, where Portuguese is the official language, nº is often used on official documents. [ 8 ] In both cases, the symbol used ( º ) is the masculine ordinal indicator . However, the Brazilian National Standards Organization (ABNT) specifies that the word "número" should be abbreviated "n." only. Although the letter ⟨ N ⟩ is not in the Cyrillic alphabet , the numero sign № is typeset in Russian publishing, and is available on Russian computer and typewriter keyboards . The numero sign is very widely used in Russia and other post-Soviet states in many official and casual contexts.
Examples include usage for numbering laws and other official documents, names of institutions (hospitals, kindergartens, schools, libraries, organization departments and so on), numbering of periodical publications (such as newspapers and magazines), numbering of public transport routes, etc. "№ п/п" ( номер по порядку , "sequential number") is universally used as a table header to denote a column containing the table row number. The № sign is sometimes used in Russian medical prescriptions (which according to the law must be written in Latin [ 9 ] ) as an abbreviation for the Latin word numero to indicate the number of prescribed dosages (for example, tablets or capsules), and on price tags in drugstores and pharmacy websites to indicate the number of unit doses in drug packages, although the standard abbreviation for use in prescriptions is the Latin N. The numero sign is not typically used in Iberian Spanish, and it is not present on standard keyboard layouts. According to the Real Academia Española [ 10 ] and the Fundéu BBVA , [ 11 ] the word número (number) is abbreviated per the Spanish typographic convention of letras voladas ("flying letters"). The first letter(s) of the word to be abbreviated are followed by a period; then, the final letter(s) of the word are written as lowercase superscripts. This gives the abbreviations n.ᵒ (singular) and n.ᵒˢ (plural). The abbreviation "no." is not used (it might be mistaken for the Spanish negative word no ). The abbreviations nro. and núm. are also acceptable. The numero sign, either as the one-character symbol № or composed of the letter N plus a superscript "o" (sometimes underlined or substituted by the ordinal indicator , º ), is common in Latin America, where the interpolated period is sometimes not used in abbreviations. In some languages, Nr. , nr. , nr or NR is used instead, reflecting the abbreviation of the language's word for "number". In German, which capitalises all nouns and abbreviations of nouns, the word Nummer is abbreviated as Nr. Lithuanian uses this spelling as well, and it is usually capitalised in bureaucratic contexts, especially with the meaning "reference number" (such as sutarties Nr. , "contract No.") but in other contexts it follows the usual sentence capitalisation (such as tel. nr. , abbreviation for telefono numeris , "telephone number"). It is commonly lowercase in other languages, such as Dutch, Danish, Norwegian, Polish, Romanian, Estonian and Swedish. Some languages, such as Polish, omit the dot in abbreviations if the abbreviation ends with the last letter of the original word. The sign is encoded in Unicode as U+2116 № NUMERO SIGN , and many platforms and languages have methods to enter it. See Unicode input and the relevant keyboard articles for further details.
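Given the code point U+2116 noted above, most programming languages can emit the precomposed character directly; in Python, for example:

```python
print("\u2116")            # №  (U+2116 NUMERO SIGN), by code point
print("\N{NUMERO SIGN}")   # the same character, by its Unicode name
```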
https://en.wikipedia.org/wiki/Numero_sign
The Nun Study of Aging and Alzheimer's Disease is a continuing longitudinal study , begun in 1986, to examine the onset of Alzheimer's disease . [ 1 ] [ 2 ] David Snowdon , an epidemiologist and the founding Nun Study investigator, started the Nun Study at the University of Minnesota , later transferring the study to the University of Kentucky in 1990. [ 3 ] In 2008, with Snowdon's retirement, the study returned to the University of Minnesota. [ 4 ] The Nun Study was very briefly moved from the University of Minnesota to Northwestern University in 2021 under the directorship of Dr. Margaret Flanagan. [ 5 ] The Nun Study is currently housed at the University of Texas Health San Antonio in the Biggs Institute for Alzheimer's and Neurodegenerative Diseases under the continued directorship of neuropathologist Dr. Margaret Flanagan. [ 6 ] [ 7 ] The Sisters' autobiographies, written just before they took their vows (ages 19–21), revealed that positivity was closely related to longevity, as was idea density, a measure related to conversation and writing. [ 1 ] [ 8 ] [ 9 ] [ 10 ] This research found that higher idea density scores correlated with a higher chance of retaining sufficient mental capacity in late life, despite neurological evidence showing the onset of Alzheimer's disease. [ 2 ] [ 9 ] [ 10 ] In 1992, researchers at the Rush University Medical Center Rush Alzheimer's Disease Center (RADC), building on the success of the Nun Study, proposed the Rush Religious Orders Study . The Religious Orders Study was funded by the National Institute on Aging in 1993, and was ongoing as of 2012. [ 11 ] The Nun Study began in 1986 with funding by the National Institute on Aging . This study was focused on a group of 678 American Roman Catholic Sisters who were members of the School Sisters of Notre Dame . The purpose of the study was to determine whether activities, academics, past experiences, and disposition are correlated with continued cognitive, neurological, and physical ability as individuals age, as well as with overall longevity. [ 12 ] [ 13 ] The Nun Study participants were gathered on a volunteer basis following a presentation on the importance of donating one's brain for research purposes after death. Prior to the study's beginning, researchers required the participants to be cognitively intact and at least 75 years of age, and to participate in the study until time of death. Participation in the study included the following: all participants gave permission for researchers to have access to their autobiographies and personal documented information, and agreed to participate in regular physical and mental examinations. [ 13 ] These examinations were designed to test the subject's proficiency with object identification, memory, orientation, and language. These categories were tested through a series of mental state examinations, with the data being recorded with each passing test. Nun Study participants were asked to give permission for their brains to be donated at time of death so that their brains could be neuropathologically evaluated for changes related to Alzheimer's disease and other dementias. [ 13 ] Neuropathology evaluations for the Nun Study were performed by creating microscope slides from brain autopsy samples, which were carefully evaluated for changes of Alzheimer's disease by specialized physicians called neuropathologists.
[ 14 ] [ 15 ] All Nun Study participants willingly signed a form agreeing to the terms of the study. [ 1 ] As of 2017, there were three participants still living. Studying a relatively homogeneous group (no drug use, little or no alcohol, similar housing and reproductive histories) minimized the extraneous variables that may confound other similar research studies. [ 16 ] During the examination process Snowdon was able to compare the collected cognitive test scores with the neuropathology data obtained from examining the brains of the subjects and quantifying microscopic changes. [ 2 ] These results helped add new layers of understanding to the nature of Alzheimer's disease and other dementias. He concluded that Alzheimer's disease is likely rooted in early childhood experiences or trauma rather than in something from adulthood. [ 2 ] Researchers accessed the convent archive to review documents amassed throughout the lives of the nuns in the study. They also collected data via annual cognitive and physical function examinations conducted throughout the remainder of the participants' lives. After the death of a participant, the researchers would evaluate the brain of the deceased to assess any brain pathology. [ 17 ] [ 18 ] The original neuropathologist for the Nun Study was Dr. William Markesbery. [ 19 ] [ 20 ] One of the major findings from the Nun Study was how the participants' lifestyle and education may deter Alzheimer's symptoms. Participants who had an education level of a bachelor's degree or higher were less likely to develop Alzheimer's later in life. They also lived longer than their colleagues who did not have higher education. [ 2 ] Furthermore, the participants' word choice and vocabulary were also correlated with the development of Alzheimer's. Among the documents reviewed were autobiographical essays that were written by the nuns upon joining the sisterhood. Upon review, it was found that an essay's lack of linguistic density (e.g., complexity, vivacity, fluency) functioned as a significant predictor of its author's risk for developing Alzheimer's disease in old age. However, the study also found that the Sisters who wrote positively in their personal journals were more likely to live longer than their counterparts. [ 21 ] Snowdon and associates found three indicators of longer life when coding the sisters' autobiographies: the number of positive sentences, the number of positive words, and the variety of positive emotions used. [ 8 ] Less positivity in writing correlated with greater mortality. There were many variables this study was unable to glean from the autobiographies of the sisters, such as long-term hopefulness or bleakness in one's personality, optimism, pessimism, ambition, and others. [ 8 ] The average age of nuns who began an autobiography was 22 years. [ 8 ] Some participants who used more advanced words in their autobiography had fewer symptoms of Alzheimer's in older years. Roughly 80% of nuns whose writing was measured as lacking in linguistic density went on to develop Alzheimer's disease in old age; meanwhile, of those whose writing was not lacking, only 10% later developed the disease.
[ 21 ] This was found when researchers examined neuropathology after the nuns died, confirming that most of those who had low idea density had Alzheimer's disease, and most of those with high idea density did not. [ 21 ] Snowdon found that exercise was inversely correlated with the development of Alzheimer's disease, showing that participants who engaged in some sort of daily exercise were more likely to retain cognitive abilities during aging. [ 22 ] Participants who started exercising later in life were more likely to retain cognitive abilities, even if they had not exercised before. [ 23 ] In 1992, a gene called apolipoprotein E was established as a possible factor in Alzheimer's disease, [ 24 ] but its presence did not predict the disease with certainty. [ 25 ] The existence of amyloid beta plaques and tau neurofibrillary tangles in the brain is required for a diagnosis of Alzheimer's Disease Neuropathologic Change to be made. [ 26 ] [ 27 ] Results from the Nun Study indicated that tau neurofibrillary tangles located in regions of the brain outside the neocortex and hippocampus may have less of an effect than amyloid beta plaques located within those same areas. Another important factor was brain weight, as subjects with brains weighing under 1000 grams were seen as at higher risk than those in a higher weight class. [ 28 ] Overall, findings of the Nun Study indicated multiple factors concerning the expression of Alzheimer's traits. The data primarily indicated that age and disease do not always guarantee impaired cognitive ability and "that traits in early, mid, and late life have strong relationships with the risk of Alzheimer's disease, as well as the mental and cognitive disabilities of old age." [ 29 ] The findings influenced other scientific studies and discoveries, one of which indicated that if a person has had a stroke, fewer Alzheimer's brain lesions are necessary for that person to be diagnosed with dementia. [ 30 ] Another is that postmortem MRI scans of the hippocampus can help show that some nondemented individuals fit the criteria for Alzheimer's disease. [ 31 ] Researchers have also used the autopsy data to determine that there is a relationship between the number of teeth an individual has at death and how likely they were to have had dementia. Those with fewer teeth were more likely to have dementia while living. [ 32 ] Another study reaffirmed the findings of the Nun Study that higher idea density is correlated with better cognition during aging, even if the individual had brain lesions resembling those of Alzheimer's disease. [ 9 ] A 2019 study combined the Nun Study and Max Weber's vocational lectures, indicating that the vocation and lifestyle of nuns correlated with a higher potential for developing dementia. [ 33 ] [ 34 ] Using research from the original study, Weinstein et al. found a correlation between longevity and autonomy. Subjects were shown to have a longer lifespan based on the amount of purposeful and reflective behavior shown in their writing. [ 35 ]
https://en.wikipedia.org/wiki/Nun_Study
In biogeography , particularly phytogeography , the nunatak hypothesis about the origin of a biota in formerly glaciated areas is the idea that some or many species have survived the inhospitable period on ice-free land such as nunataks . [ 1 ] Its antithesis is the tabula rasa hypothesis , which posits that all species have immigrated into completely denuded land after the retreat of glaciers. [ 2 ] By the mid-20th century, the nunatak hypothesis was widely accepted among biologists working on the floras of Greenland and Scandinavia . [ 3 ] However, while modern geology has established the presence of ice-free areas during the last glacial maximum in both Greenland and Scandinavia , molecular techniques have revealed limited between-region genetic differentiation in many Arctic taxa , strongly suggesting a general capacity for long-distance dispersal among polar organisms. [ 4 ] This does not directly disprove glacial survival, but it makes it less necessary as an explanation. Moreover, populations that survived on ice-free land have probably, in most cases, been genetically flooded by postglacial immigrants.
https://en.wikipedia.org/wiki/Nunatak_hypothesis
The nuocyte is a cell of the innate immune system that plays an important role in type 2 immune responses, which are induced in response to helminth worm infection or in conditions such as asthma and atopic disease. [ 1 ] Nuocytes are amongst the first cells activated in type 2 immune responses and are thought to play important roles in activating and recruiting other cell types through their production of the type 2 cytokines interleukin 4 , 5 and 13 . [ 1 ] Nuocytes have been observed to proliferate in the presence of interleukin 7 (IL-7) in vitro . [ 2 ] Nuocytes contribute to the expulsion of helminth worms [ 1 ] and to the pathology of colitis [ 3 ] and allergic airways disease. [ 4 ] The nuocyte was identified at the same time as several other immune cells that play similar roles in type 2 immunity. These include Natural Helper Cells (NHCs), [ 5 ] Innate Helper 2 (Ih2) cells [ 6 ] and multi-potent progenitor (MPP) type 2 cells. [ 7 ] The exact relationship between these cell types remains contentious [ 8 ] [ 9 ] but all share a type-2-inducing phenotype. MPP type 2 cells appear to differ from the other populations in that they have a myeloid , rather than lymphoid , origin. [ 7 ] Nuocytes have been shown to have a lymphoid origin and a developmental pathway that is dependent upon the transcription factor RORα and Notch signalling. [ 10 ] Pro-T cell progenitors retain nuocyte developmental potential but, unlike for T cells, the thymus is dispensable for nuocyte development.
https://en.wikipedia.org/wiki/Nuocyte
Nuprl is a proof development system, providing computer-mediated analysis and proofs of formal mathematical statements, and tools for software verification and optimization. Originally developed in the 1980s by Robert Lee Constable and others, the system is now maintained by the PRL Project at Cornell University . The currently supported version, Nuprl 5, is also known as FDL (Formal Digital Library). Nuprl functions as an automated theorem proving system and can also be used to provide proof assistance . Nuprl uses a type system based on Martin-Löf intuitionistic type theory to model mathematical statements in a digital library . Mathematical theories can be constructed and analyzed with a variety of editors, including a graphical user interface , a web-based editor, and an Emacs mode. A variety of evaluators and inference engines can operate on the statements in the library. Translators also allow statements to be manipulated with Java and OCaml programs. [ 1 ] The overall system is controlled with a variant of ML . Nuprl 5's architecture is described as "distributed open architecture " [ 1 ] and intended primarily to be used as a web service rather than as standalone software. Those interested in using the web service, or migrating theories from older versions of Nuprl, can contact the email address given on the Nuprl System web page. [ 2 ] Nuprl was first released in 1984, and was first described in detail in the book Implementing Mathematics with the Nuprl Proof Development System , [ 3 ] published in 1986. Nuprl 2 was the first Unix version. Nuprl 3 provided machine proof for mathematical problems related to Girard's paradox and Higman's lemma . Nuprl 4, the first version developed for the World Wide Web , was used to verify cache coherency protocols and other computer systems. [ 4 ] The current system architecture, implemented in Nuprl 5, was first proposed in a 2000 conference paper . A reference manual for Nuprl 5 was published in 2002. [ 5 ] Nuprl has been the subject of many computer science publications. Both the JonPRL and RedPRL systems are also based on computational type theory. [ 6 ] RedPRL is explicitly "inspired by Nuprl". [ 7 ]
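For flavour, the kind of small constructive statement such a system certifies can be sketched in Lean (shown here because Lean is widely available; this is an illustration of the genre, not of Nuprl's own display notation or tactic names, and the two systems' type theories differ in detail):

```lean
-- A trivial theorem of the kind a proof development system certifies.
-- In Lean 4, `n + 0 = n` holds definitionally, so `rfl` closes the goal.
theorem add_zero_right (n : Nat) : n + 0 = n := rfl

-- A statement needing a lemma or induction, discharged by the simplifier;
-- in Nuprl one would instead apply tactics in its ML-based tactic language.
theorem zero_add_left (n : Nat) : 0 + n = n := by simp
```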
https://en.wikipedia.org/wiki/Nuprl
Nuremberg kitchen is the traditional English name for a specific type of dollhouse , similar to a room box , usually limited to a single room depicting a kitchen. The name references the city of Nuremberg , the center of the nineteenth-century German toy industry. In German the toy is known as a Puppenküche (literally "dolls' kitchen"). [ 1 ] [ 2 ] Most surviving examples show variations on a standard form: a single reduced-scale room with the front wall and ceiling missing, rather like a miniaturized stage set, allowing convenient access to the interior and an unobstructed view of the minuscule items within. Often the side walls flare out from the back at wide angles, creating a trapezoidal floor plan and presenting a more dramatic display of the contents. Some might have a roof above or a pantry to one side, but these are exceptions. [ 3 ] Typically, but not always, the fittings are arranged symmetrically, with a cooking range in the center of the rear wall (a raised masonry hearth with a chimney in early versions, or a metal stove in later ones), with cupboards, shelves, and other storage furniture to either side. They often house an abundant collection of pots, pans, and dishes filling or even overflowing the space. Later nineteenth-century examples are often highly embellished with decorative trimmings. Many of these features pertain more to making these miniatures seem attractive than they do to accurately depicting full-scale kitchens. [ 4 ] Nuremberg kitchens date back at least to 1572, when one was given to Dorothea and Anna , Princesses of Saxony and daughters of Augustus, Elector of Saxony , then aged five and ten. [ 5 ] Since then, many adult collectors as well as children have owned multi-room dollhouses, but these one-room kitchens seem to have almost always been thought of as girls' playthings. They reached the height of their popularity in the 1800s. In the early part of the century they were assembled by artisans working from their homes, who produced a remarkably large volume of toys made by hand. By the later part of the century they were being manufactured in even greater numbers in industrialized factories by such firms as Moritz Gottschalk, Gebrüder Bing , and Märklin . [ 6 ] German mothers would pass their childhood kitchens on to their daughters, a practice that had become widespread by the nineteenth century. By this custom, Nuremberg kitchens that might have been very up-to-date when first made would be noticeably old-fashioned after decades of being handed down as a family heirloom. Similarly, while many nineteenth-century German toy manufacturers offered miniature versions of all the latest kitchen gadgets, their catalogs also showed toy kitchens that went virtually unchanged for decades, as did many of their pots, pans, and dishes. Thus, late nineteenth- and early twentieth-century examples often incorporate components that were distinctly anachronistic by that time. [ 3 ] Nuremberg kitchens were also often associated with the Christmas holidays. In many German families, they were only brought out to be played with at Christmastime, when they served as part of the traditional holiday decorations and as a seasonal toy. It was popular to give little girls items for their toy kitchens as Christmas presents, on their birthdays and similar occasions. [ 3 ] The purpose of Nuremberg kitchens has usually been explained by dolls' house historians as meant to teach girls lessons in housekeeping and cooking.
[ 7 ] However, these model kitchens are probably better understood as meant to encourage girls to adopt traditionally gendered social roles by making housekeeping seem fascinating through the appeal of attractive and impressive playthings. It would have been much easier for mothers to teach their daughters how to cook by taking them to the real kitchens in their homes and having them observe and assist with preparing meals than to provide miniaturized counterparts. Also, given that these toy kitchens had layouts that were more aesthetic and theatrical than accurately representational of real kitchens in full scale houses, that they often evoked nostalgia as family antiques or as deliberately old-fashioned new products, and that they were often associated more with the festivities of Christmas than with the practicalities of everyday life, Nuremberg kitchens were probably not truly meant primarily to provide girls with practical training in the skills of homemaking. Instead, they were intended to generate wonder and amusement, to make kitchens seem magical, and thereby inspire girls to anticipate and desire their traditionally expected future roles as homemakers. [ 4 ]
https://en.wikipedia.org/wiki/Nuremberg_kitchen
Nurse crops are a subtype of nurse plants , facilitating the growth of other species of plants. The term is used primarily in agriculture , but also in forestry . Cover crops are a type of nurse crop. In agriculture, a nurse crop is an annual crop used to assist in establishment of a perennial crop. [ 1 ] The widest use of nurse crops is in the establishment of leguminous plants such as alfalfa , clover , and trefoil . [ 1 ] [ 2 ] Occasionally, nurse crops are used for establishment of perennial grasses. [ citation needed ] Nurse crops reduce the incidence of weeds, prevent erosion, and prevent excessive sunlight from reaching tender seedlings. Often, the nurse crop can be harvested for grain , straw , hay , or pasture . [ 1 ] Oats are the most common nurse crop, though other annual grains are also used. [ 3 ] Nurse cropping of tall or dense-canopied plants can protect more vulnerable species through shading or by providing a wind break. [ 4 ] However, if ill-maintained, nurse crops can block sunlight from reaching seedlings. [ 3 ] Trap crops prevent pests from affecting the desired plant. [ 1 ] In forestry, 'nurse crop' can be applied to trees or shrubs that help the development of other species of trees. Wind breaking , frost protection, thermal insulation , and shade can all be provided by nurse crops in forests. [ 4 ] Aspens especially provide partial shade , allowing understory growth. [ 4 ]
https://en.wikipedia.org/wiki/Nurse_crop
Nurse grafting is a method of plant propagation that is used for hard-to-root plant material. If a desirable selection cannot be grown from seed (because a seed-grown plant will be genetically different from the parent), it must be propagated asexually ( cloned ) in order to be genetically identical to the parent. Nurse grafting allows a scion to develop its own roots instead of remaining grafted to a rootstock . A large-seeded woody species, e.g. the chestnut , retains the cotyledons inside the seed coat below ground while the radicle grows downward and the shoot appears aboveground. To make a nurse seed graft, a germinating seed is needed. A knife is used to cut an opening between the petioles of the cotyledons. The scion, taken from dormant wood of the previous season's growth, is cut to a wedge shape at the end and inserted into the cut between the cotyledons, so that the cambium surfaces of each can join. The grafted plant is then set in a rooting medium with the union about 1.5 inches below the surface. [ 1 ] This graft allows the scion to live on the seed's roots long enough to form adventitious roots of its own. This technique is used for camellias , avocados , and chestnuts. [ 1 ] In nurse root grafting, a scion is grafted to a piece of root to keep it alive long enough for it to form its own roots. The graft union is planted below the surface of the growing medium, as with the nurse seed method. Once the scion has formed roots of its own, the rootstock can be removed, or it will die off, as happens when the scion and rootstock are not closely related. [ 2 ] This method works well with apple cultivars , cherries , plums , nectarines , and pears . It is also useful for propagating rare isolated plants that may be unique and should not be moved from the wild. Such a plant can be propagated by taking a small amount of material whose removal will not harm the parent plant. [ 3 ] Nurse root grafting is the best method for propagating tree peonies . [ 4 ] Unlike herbaceous peonies with fleshy roots, which are propagated by division, tree peonies have woody stems and extensive root systems that are impractical for division. Their woody stems have few stored reserves, and stem cuttings fail before roots are produced. The problem of keeping the scion alive long enough to produce roots (generally a year) is solved by grafting a tree peony shoot onto an herbaceous peony root section; the section of root sustains the scion with its ample stored energy. (In contrast, Itoh peony hybrids, which are crosses between herbaceous and tree peonies, are best propagated by stem cuttings, which root easily.) The basic process for nurse-root grafting of peonies follows the nurse-root method described above: a tree peony scion is grafted onto a section of herbaceous peony root, the union is planted below the surface of the growing medium, and the nurse root sustains the scion for the year or so it needs to form roots of its own.
https://en.wikipedia.org/wiki/Nurse_grafting
A nurse log is a fallen tree which, as it decays, provides ecological facilitation to seedlings. Broader definitions include providing shade or support to other plants. Some of the advantages a nurse log offers to a seedling are: water , moss thickness, leaf litter , mycorrhizae , disease protection, nutrients, and sunlight . Recent research into soil pathogens suggests that in some forest communities, pathogens hostile to a particular tree species appear to gather in the vicinity of that species, and to a degree inhibit seedling growth. [ 1 ] Nurse logs may therefore provide some measure of protection from these pathogens, thus promoting greater seedling survivorship. Various mechanical and biological processes contribute to the breakdown of lignin in fallen trees, resulting in the formation of niches of increasing size, which tend to fill with forest litter such as soil from spring floods, needles, moss , mushrooms and other flora. Mosses also can cover the outside of a log, hastening its decay and supporting other species as rooting media and by retaining water. Small animals such as various squirrels often perch or roost on nurse logs, adding to the litter by food debris and scat . The decay of this detritus contributes to the formation of a rich humus that provides a seedbed and adequate conditions for germination. Nurse logs often provide a seedbed to conifers in a temperate rain forest ecosystem. The oldest nurse log fossils date to the earliest Permian , approximately 300 million years ago. [ 2 ]
https://en.wikipedia.org/wiki/Nurse_log
A nurse plant is one that serves the ecological role of helping seedlings establish themselves, and of protecting young plants from harsh conditions. This effect is particularly well studied among plant communities in xeric environments. Overstory trees and shrubs have a facilitative effect on the establishment of understory plants. This effect is also seen in some interactions between herbaceous plant species. [ 1 ] Nurse plants are important in xeric environments because they provide shaded microhabitats for the survival of several other plant species, buffering temperature extremes and reducing moisture loss. For example, in the Sonoran Desert , nurse plant canopies provide reduced summer daytime temperatures, soil surface temperatures, and direct sunlight, higher soil fertility, protection from the wind and browsing animals, reduced evapotranspiration rates in the nursed species, elevated nighttime temperatures, and post-fire resprouting in some species. [ 2 ] [ 3 ] This means that nurse plants provide a positive interaction between themselves and the organisms they protect, and are often crucial in maintaining biodiversity in water-scarce environments. [ 4 ] [ 5 ] It has been suggested that the assistance provided by nurse plants can enhance the performance of stress-intolerant species on green roofs. [ 6 ] There is also evidence suggesting that phenotypic traits affect how protégé plants that grow on large green roofs react to nurse plants. [ 7 ] Nurse plants also aid with recovery after herbivore grazing, because they provide higher levels of resources to the plant. [ 8 ] The effect of nurse plants on any particular species is dependent upon species richness and the dispersal strategy of the organism. [ 8 ] Nurse plants can help with seedling recruitment and protect plants from granivory. [ 9 ] [ 10 ] A saguaro's root system is restricted to the top 15 cm of soil, while the Palo Verde's ( Cercidium microphyllum ) roots go deeper beneath the surface. Studies suggest that a saguaro's network of roots intercepts moisture before it can reach a Palo Verde's roots. [ citation needed ] In arid environments, herbivores occur at much lower densities, so protection from herbivory contributes less to the nurse-plant effect and is typically excluded from analyses of such environments. [ 11 ] Nurse plants also have better soil under their canopies than what is out in the open. "Soil properties under nurse plants were always better than outside them, which are in concordance with the generalized existence of fertility islands in high mountains" [ 12 ] Palo Verde ( Cercidium spp.), mesquite ( Prosopis spp.), and ironwood ( Olneya tesota ) trees all provide positive interactions among other plant species, such as facilitating seedling survival and germination. [ 13 ] The richness and abundance of many plant species is greater under the canopies of these trees than in surrounding areas. [ 13 ] The density of plant species that depend on nurse plants depends on the number of nurse plants in a community. For example, the density of the senita cactus ( Pachycereus schottii ) was higher when there was a higher density of nurse plants. [ 9 ] But the study by Holland et al. [ 9 ] found that "there was not a significant main factor effect of nurse plants on the germination and seedling recruitment of senita cacti." The positive effects of nurse plants with this plant species depended on rainfall. [ 9 ] Study of ironwood trees ( O.
tesota ) has shown that a nurse plant's importance is not only as a temperature buffer, but also as a water buffer. In terms of water stress, there was a difference in the facilitative effects between mesic and xeric sites. In xeric sites, the richness and abundance of perennial plants was higher, whereas ephemerals showed no difference. In mesic sites, the abundance of perennial and ephemeral plants was no different, but the ephemeral richness was lower. [ citation needed ] The size of the canopies of ironwood trees was no different between xeric and mesic sites. But the canopy size did affect perennial plants more than ephemeral plants. With the perennial plants, there was a positive effect: the richness, abundance, and size of the plants was greater under the canopies. With ephemeral plants the richness was unaffected, and the abundance increased in xeric sites. [ 13 ] Ironwood tree canopies have provided facilitative effects on plant species richness and abundance in xeric sites in the Sonoran Desert. [ 13 ] Two factors, water stress and benefactor size, affected facilitation and should be considered when looking at the richness, abundance, and size of the plants under nurse plant canopies in xeric and mesic habitats. [ citation needed ] The ironwood was often the only tree growing in xeric areas, and its canopy had the largest effect on plant community structure and richness even when water stress was high. [ 13 ] Thus, ironwood trees create diversity that is absent in other desert microhabitats. [ citation needed ] An example of a nurse plant would be the Palo Verde tree ( C. microphyllum ), found in the Sonoran Desert, which may have saguaro cacti underneath its canopy. Other examples of nurses are grasses and cacti. [ 2 ] Trees and shrubs are the more common nurse plants. [ citation needed ] Nurse plants provide the ideal microclimatic environment for species like the saguaro, allowing them to extend their ranges "in otherwise inhospitably cold areas". [ 2 ] Several of the conditions moderated by nurse plants can limit saguaros during establishment, and subfreezing temperature is one variable to which the cactus is particularly susceptible. [ 2 ] Subfreezing temperatures in the northern part of Arizona are why saguaros have not established there. Saguaros are established on the south side of a nurse plant's canopy more than the north side. According to Drezner and Garrity, [ 14 ] the south sides of canopies have higher minimum temperatures and the north sides have colder temperatures. Saguaros establish under denser canopies than plants with a more open canopy because of better microclimatic conditions. [ 14 ] Establishment on the south side might seem unexpected given its higher temperatures, but saguaros can handle higher temperatures while remaining susceptible to subfreezing ones; with the temperatures on the south side of the nurse, the risk of experiencing subfreezing temperatures in the winter is reduced. [ citation needed ] Ambrosia deltoidea and Cercidium microphyllum were the two main nurse plants observed in one such study, which found that maximum temperatures under C.
microphyllum were lower and the minimum temperatures were higher, showing that nurse plants provide a microclimate under their canopy that protects saguaros from extremely cold or hot temperatures. [ citation needed ] The death of a nurse plant generally precedes that of the plant species it protects. [ 15 ] There is evidence of competitive interactions between saguaro cacti and paloverde trees: saguaros ( Carnegiea gigantea ) under a paloverde's canopy negatively impacted the vigor of the tree, [ 15 ] and trees without saguaros did not die as quickly. One factor in this competitive interaction is root competition. [ 15 ] The saguaro's roots exist in shallow soil, whereas a paloverde's roots go deeper. The saguaro's roots are like an umbrella and capture most of the moisture before it can reach the paloverde's roots. [ citation needed ] Larrea tridentata , commonly known as creosotebush , exhibits characteristics of a nurse plant by facilitating the establishment and growth of other species in harsh desert environments. Its dense canopy provides shade and shelter, creating a microclimate that reduces temperature extremes and moisture loss, thus promoting the survival of seedlings and young plants. Creosotebushes also trap seeds beneath their canopies, increasing seedling recruitment and contributing to vegetation recovery. Studies have shown that many plant species in desert ecosystems are positively associated with Larrea tridentata , indicating its role in providing a hospitable environment for diverse plant communities. [ 16 ] Overall, the presence of creosotebushes enhances biodiversity and ecosystem resilience in arid regions, underscoring their significance as nurse plants. [ citation needed ] Salvia officinalis subsp. lavandulifolia (syn. S. lavandulifolia ) serves as a crucial nurse plant in the challenging terrain of Mediterranean mountains, facilitating the growth and survival of newly planted pine seedlings. As a member of the Lamiaceae family, this shrubby species, typically ranging from 20 to 35 cm in height, creates an optimal microenvironment for young pines to thrive. Its shallow root system minimizes competition with developing seedlings, ensuring their access to vital nutrients and water. Additionally, its modest stature enables the pine trees to outgrow it over time, making it an advantageous companion during the initial stages of reforestation efforts. [ 17 ] Prosopis species, particularly Prosopis flexuosa , demonstrate a notable capacity to function as nurse plants in arid and semiarid ecosystems . Some research underscores Prosopis ' role in improving challenging environmental conditions, particularly in areas characterized by low forage quality and water scarcity. A study demonstrated that Opuntia ellisiana , when planted beneath the canopy of Prosopis flexuosa , exhibited enhanced productivity and nutrient content compared to those outside the canopy. [ 18 ] Noteworthy increases in cladode production and higher levels of key nutrients, such as moisture, organic matter, and potassium, underscore the facilitative effects of Prosopis on the growth and nutritional quality of associated plant species. Furthermore, the observed mitigation of frost damage beneath the Prosopis canopy highlights its additional protective function, further solidifying its status as a beneficial nurse plant in arid landscapes. [ citation needed ] Badano et al. tested two hypotheses about the invasibility of nurse-plant patches by alien species.
Under the biotic resistance hypothesis, an arriving species is more likely to encounter strong competitors that impede its success as the number of native species increases, so local diversity acts as a barrier to biological invasions. Under the biotic acceptance hypothesis, the main forces regulating the performance and diversity of native and alien species are the increased availability of resources and the habitat heterogeneity associated with increased surface area. [ 19 ] Neither of these hypotheses considers variation in environmental harshness, [ 19 ] which could reduce both competition in plant communities and the overall performance of plants. [ citation needed ] The invasive plant in this study was the field chickweed, Cerastium arvense L . For natural assemblages, there was a positive relationship between C. arvense and the abundance and diversity of plant assemblages growing within cushion plants, or cushions. [ 19 ] Cushion plants are plants that grow a few inches in height and up to three meters in diameter, and form a compact mat of closely spaced stems. It was found from this study that invasive species grew under nurse plant canopies and that the nurses were providing protection for those species. [ citation needed ] According to Badano et al., "This study indicated that the performance of the invasive plant C. arvense was positively affected by increasing diversity of native species within the habitat patches created by the cushion plant A. madreporica , while these relationships were negative or absent in the surrounding open areas." [ citation needed ] Nurse plants also help shape the composition of ant communities, providing protection and food to the different ant communities in the Sonoran Desert. Of four ant species studied, three ( Camponotus atriceps , Pheidole sciophila , and Pheidole titanis ) "were associated with tree habitats, whereas Pheidole sp. A was associated with open areas". [ 20 ] In the Sonoran Desert, ant species richness is greater than in the Mojave Desert , Chihuahuan Desert , or Chihuahuan desert grassland, and that is due to greater precipitation. [ 20 ] When rainfall increases, so does ant diversity. [ 20 ]
https://en.wikipedia.org/wiki/Nurse_plant
Nurture is usually defined as the process of caring for an organism as it grows, usually a human. [ 1 ] [ 2 ] It is often used in debates as the opposite of "nature", [ a ] whereby nurture means the process of replicating learned cultural information from one mind to another, and nature means the replication of genetic non-learned behavior. [ 3 ] Nurture is important in the nature versus nurture debate because some people see either nature or nurture as the principal origin of most of humanity's behaviours. There are many agents of socialization that are responsible, in some respects, for the outcome of a child's personality, behaviour, thoughts, social and emotional skills, feelings, and mental priorities. [ 1 ] Nurture contributes to our attachment and socioemotional development via bonding and interactions with caregivers, who are responsible for early-year socialisation. These environmental experiences can have long-term consequences across the life course. Bowlby’s attachment theory explores the effects of early caregiver relationships, whereby parental nurture affects bond formation with infants. The resulting attachment from the degree of caregiver responsiveness or deprivation influences psychological development and interactions with others beyond infancy. [ 4 ] This is substantiated in Ainsworth’s Strange Situation study, which assigned attachment styles of secure, avoidant and ambivalent, according to the behaviour observed when infants were separated from and then reunited with their mothers. [ 5 ] These can profoundly influence adult personality and life outcomes. The role of nurture is also reflected in different parental styles, which may correspond with attachment. For example, evidence from Kuppens & Ceulemans (2018) shows that authoritative parenting (offering emotional support) is linked to more favourable behavioural outcomes in children than authoritarian parenting, which is more punishing. [ 6 ] Secure attachment during infancy highlights the importance of early nurturing environments in our middle childhood emotional operations. Securely attached children express stronger emotional stability, as measured by reduced emotional change when switching from distressing to positive discussions. It is thought that this difference arises because secure attachment promotes enhanced appraisal of volatile situations, leading to behavioural responses considered more appropriate. [ 7 ] Additionally, children’s experiences of trauma, for instance neglect or abuse, may have detrimental impacts on their development, representing a lack of nurture. This trauma can increase later vulnerability to post-traumatic stress disorder, which may be mediated by emotional dysregulation , manifesting as challenges in coordinating goal-directed behaviour and putting them at a greater risk of substance abuse disorders or self-injury. [ 8 ] However, the solely environmental perspective has been criticised by some who point to the substantial genetic component governing the development of relationships in early attachment. Children’s variable susceptibility to socialization , including parenting approaches, is evidenced by the complex interplay of gene-environment interaction effects, such as chemical transmission across neurons. [ 9 ] The behaviorist approach, as initially discussed by Skinner, explores the role of operant conditioning , whereby actions are learnt and subsequently strengthened or weakened through reinforcement and punishment.
[ 10 ] Behaviours associated with rewards, such as praise for repeating the correct words when learning how to speak, have a greater likelihood of being repeated than those that generate punishment. Since families and educational settings determine which behaviours are reinforced, this model refutes the view that higher-order cognitive functions are biologically programmed, holding that they are instead contextually conditioned. The social interactionist model of learning, as posited by Vygotsky , affirms the role of nurture in our cognitive development through education systems providing supportive learning environments rather than purely through reinforcement. Children actively learn through engaging with their peers and teachers, considered more knowledgeable others, who scaffold information so that learners can grasp it and complete tasks they previously lacked the capacity to complete. This is supported by the zone of proximal development , referring to the cluster of skills and information which the learner has almost understood and can subsequently achieve independently through social interaction, highlighting the importance of external guidance in nurturing development through learning. [ 11 ] The Vygotskian intelligence hypothesis further explains that intelligence, rather than existing as an individual trait, is influenced by sociocultural contexts. Education facilitates social cognition through providing cooperative and cultural interactions, in which we communicate with others. This results in potent cognitive representations unique to our species, chiefly perspective-taking, as mentioned by Moll and Tomasello (2007). [ 12 ] Language, acquired through domestic and educational environments, acts as a cognitive tool directed by social context. As is consistent with the Whorfian hypothesis of linguistic relativity , the languages we speak influence our interpretations and perceptions of the world, signifying nurture. A study conducted by Winawer et al. (2007) showed that Russian speakers display stronger colour discrimination aptitudes than English speakers due to their vocabulary distinguishing between light and dark shades of blue. This repeated colour differentiation resulted in quicker categorisations in colour perception tasks, showing the influence of nurture on cognitive processes. [ 13 ] Normative peer influence is particularly salient in the adolescent years, in which people are most sensitive to social scrutiny and acceptance, so must gauge whom to take social information from. [ 14 ] The resulting reward-oriented social behaviour demonstrates that locally adaptive traits can shape our trajectories. Cultural neuroscience therefore investigates how cultural environments affect brain function and development, demonstrating the psychological impact of nurture in various societies. These cultural differences can manifest through emotional expression, which can contribute to variation in our experiences of emotion. Evidence from a sample of young adults from China and the United States (Immordino-Yang et al., 2016) revealed a cultural difference in that the Americans typically showed greater magnitudes of emotional expression. This correlated with differential activation of neural mechanisms in the construction of emotions. [ 15 ] Cultural nurture also shapes our thinking styles and display rules, which vary across societies.
Individualist societies, such as the United States, stress independence and self-expression, whereas collectivist cultures, including Japan, highlight the importance of community and obedience. [ 16 ] Research around responses to the COVID-19 pandemic (Xiao, 2021) showed that those with a vertical collectivist orientation, which emphasises group harmony, expressed a greater willingness to comply with health guidance, alluding to the role of nurture from wider society in shaping our psychology. [ 17 ] Neuroplasticity refers to the ability of the brain to reorganize and form novel neuronal connections following environmental changes. A notable study by Maguire et al. (2000) found that London taxi drivers, who are expected to learn detailed maps of London roads, showed an increase in the size of their posterior hippocampi , which are utilised in spatial memory, correlating with time spent in the occupation. [ 18 ] This evidences a capacity for the brain to remould itself based on demand, in this case navigation, showing the importance of nurture. A growing body of research on the interplay of environmental factors and cognitive processes has studied the role of epigenetics in demonstrating how nurture can affect our behaviour and development. Epigenetics refers to the mechanisms by which various life experiences can contribute to heritable alterations in the expression of genes while preserving DNA sequences, contributing to our understanding of psychopathology. [ 19 ] Epigenetics also plays a role in fostering long-term psychological resilience , in which protective environmental factors, such as parental care, and positive factors, like diet and exercise, may all promote better responses to experienced adversities. [ 20 ]
https://en.wikipedia.org/wiki/Nurture
Nusinersen , [ 7 ] marketed as Spinraza , [ 4 ] is a medication used in treating spinal muscular atrophy (SMA), a rare neuromuscular disorder . [ 8 ] [ 4 ] In December 2016, it became the first approved drug used in treating this disorder. Since the condition it treats is so rare, nusinersen has so-called " orphan drug " designation in the United States and the European Union. [ 9 ] The drug is used to treat spinal muscular atrophy associated with a mutation in the SMN1 gene. It is administered directly to the central nervous system (CNS) using intrathecal injection. [ 4 ] In clinical trials, the drug halted disease progression, and in around 60% of infants affected by type 1 spinal muscular atrophy it improved motor function. [ 4 ] People treated with nusinersen had an increased risk of upper and lower respiratory infections and congestion, ear infections, constipation, pulmonary aspiration , teething, and scoliosis . There is a risk that the growth of infants and children might be stunted . In older clinical trial subjects, the most common adverse events were headache, back pain, and other adverse effects from the spinal injection , such as post-dural-puncture headache . [ 4 ] Although not observed in the trial patients, a reduction in platelets as well as a risk of kidney damage are theoretical risks for antisense drugs, and therefore platelets and kidney function should be monitored during treatment. [ 4 ] In 2018, several cases of communicating hydrocephalus in children and adults treated with nusinersen emerged; it remains unclear whether this was drug related. [ 10 ] Spinal muscular atrophy is caused by loss-of-function mutations in the SMN1 gene, which codes for the survival motor neuron (SMN) protein . People survive owing to low amounts of the SMN protein produced from the SMN2 gene. Nusinersen modulates alternative splicing of the SMN2 gene, functionally converting it into the equivalent of the SMN1 gene, thus increasing the level of SMN protein in the CNS. [ 11 ] The drug distributes to the CNS and peripheral tissues. [ 4 ] The half-life is estimated to be 135 to 177 days in cerebrospinal fluid (CSF) and 63 to 87 days in blood plasma . The drug is metabolized via exonuclease (3′- and 5′)-mediated hydrolysis and does not interact with CYP450 enzymes. [ 4 ] The primary route of elimination is likely by urinary excretion for nusinersen and its metabolites. [ 4 ] Nusinersen is an antisense oligonucleotide in which the 2'-hydroxy groups of the ribofuranosyl rings are replaced with 2'- O -2-methoxyethyl groups and the phosphate linkages are replaced with phosphorothioate linkages. [ 4 ] [ 11 ] [ 12 ] Nusinersen was developed in a collaboration between Adrian Krainer at Cold Spring Harbor Laboratory and Ionis Pharmaceuticals (formerly called Isis Pharmaceuticals). [ 13 ] [ 14 ] [ 15 ] [ 16 ] Initial target-discovery work for nusinersen was done by Ravindra N. Singh and co-workers at the University of Massachusetts Medical School , funded by Cure SMA. [ 17 ] [ 18 ] Starting in 2012, Ionis partnered with Biogen on development and, in 2015, Biogen acquired an exclusive license to the drug for a US$75 million license fee, milestone payments up to US$150 million , and tiered royalties thereafter; Biogen also paid the costs of development subsequent to taking the license. [ 19 ] The license to Biogen included licenses to intellectual property that Ionis had acquired from Cold Spring Harbor Laboratory and the University of Massachusetts.
[ 20 ] In November 2016, the new drug application was accepted under the FDA 's priority review process on the strength of the Phase III trial and the unmet need, and was also accepted for review at the European Medicines Agency (EMA) at that time. [ 21 ] [ 22 ] It was approved by the FDA in December 2016 and by the EMA in May 2017 as the first drug to treat SMA. [ 23 ] [ 24 ] Subsequently, nusinersen was approved to treat SMA in Canada (July 2017), [ 25 ] Japan (July 2017), [ 26 ] Brazil (August 2017), [ 27 ] Switzerland (September 2017), [ 28 ] and China (February 2019). [ 29 ] In 2023, additional clinical trials continued to validate the efficacy of nusinersen, particularly emphasizing the benefits of early intervention. The trials demonstrated significant improvements in motor function and survival rates among infants with SMA type 1, underscoring the importance of prompt treatment to achieve optimal clinical outcomes. [ 30 ] Nusinersen's list price in the US is US$125,000 per injection, which puts the treatment cost at US$750,000 in the first year (six injections) and US$375,000 annually after that (three injections). [ 31 ] According to The New York Times , this places nusinersen "among the most expensive drugs in the world". [ 22 ] In October 2017, the authorities in Denmark recommended nusinersen for use only in a small subset of people with SMA type 1 (young babies) and refused to offer it as a standard treatment for all other people with SMA, citing an "unreasonably high price" compared to the benefit. [ 32 ] Norwegian authorities rejected the funding in October 2017 because the price of the medicine was "unethically high". [ 33 ] In February 2018, the funding was approved for people under 18 years old. [ 33 ] In April 2023, funding was expanded to include adults. [ 34 ] In August 2018, the National Institute for Health and Care Excellence (NICE), which weighs the cost-effectiveness of therapies for the NHS in England and Wales, recommended against offering nusinersen to people with SMA. [ 35 ] Children with SMA type 1 were treated in the UK under a Biogen-funded expanded access programme ; after enrolling 80 children, the scheme closed to new patients in November 2018. [ 36 ] In May 2019, however, NICE reversed its stance and announced its decision to recommend nusinersen for use across a wide spectrum of SMA for a 5-year period. [ 37 ] [ 38 ] The Irish Health Service Executive decided in February 2019 that nusinersen was too expensive to fund, saying the cost would be about €600,000 per patient in the first year and around €380,000 a year thereafter, "with an estimated budget impact in excess of €20 million over a five-year period" for the 25 children with SMA living in Ireland. Both the manufacturer and patient groups disputed the numbers and pointed out that actual pricing arrangements for Ireland are in line with the negotiated price for the BeneluxA initiative, of which Ireland has been a member since June 2018. [ 39 ] As of May 2019, nusinersen was available in public healthcare in more than 40 countries. [ 40 ] In December 2021, nusinersen was included in the extended insurance coverage of China, and the price was reduced from ¥697,000 per vial to around ¥33,000 (~US$5,100) per vial. [ 41 ] [ 42 ] [ 43 ]
https://en.wikipedia.org/wiki/Nusinersen
In thermal fluid dynamics , the Nusselt number ( Nu , after Wilhelm Nusselt [ 1 ] : 336 ) is the ratio of total heat transfer to conductive heat transfer at a boundary in a fluid . Total heat transfer combines conduction and convection . Convection includes both advection (fluid motion) and diffusion (conduction). The conductive component is measured under the same conditions as the convective but for a hypothetically motionless fluid. It is a dimensionless number , closely related to the fluid's Rayleigh number . [ 1 ] : 466 A Nusselt number of order one represents heat transfer by pure conduction. [ 1 ] : 336 A value between one and ten is characteristic of slug flow or laminar flow . [ 2 ] A larger Nusselt number corresponds to more active convection, with turbulent flow typically in the 100–1000 range. [ 2 ] A similar non-dimensional property is the Biot number , which concerns thermal conductivity for a solid body rather than a fluid. The mass transfer analogue of the Nusselt number is the Sherwood number . The Nusselt number is the ratio of total heat transfer (convection + conduction) to conductive heat transfer across a boundary. The convection and conduction heat flows are parallel to each other and to the surface normal of the boundary surface, and are all perpendicular to the mean fluid flow in the simple case. It is defined as

\[ \mathrm{Nu} = \frac{hL}{k}, \]

where h is the convective heat transfer coefficient of the flow, L is the characteristic length , and k is the thermal conductivity of the fluid. In contrast to the definition given above, known as the average Nusselt number , the local Nusselt number is defined by taking the length to be the distance from the surface boundary [ 1 ] [ page needed ] to the local point of interest. The mean , or average , number is obtained by integrating the expression over the range of interest, such as: [ 3 ]

\[ \overline{\mathrm{Nu}} = \frac{1}{L} \int_0^L \mathrm{Nu}(x)\,\mathrm{d}x. \]

An understanding of convection boundary layers is necessary to understand convective heat transfer between a surface and a fluid flowing past it. A thermal boundary layer develops if the fluid free stream temperature and the surface temperature differ. A temperature profile exists due to the energy exchange resulting from this temperature difference. The heat transfer rate can be written using Newton's law of cooling as

\[ Q = h A (T_s - T_\infty), \]

where h is the heat transfer coefficient and A is the heat transfer surface area. Because heat transfer at the surface is by conduction, the same quantity can be expressed in terms of the thermal conductivity k :

\[ Q = -k A \left. \frac{\partial T}{\partial y} \right|_{y=0}. \]

These two terms are equal; thus

\[ h A (T_s - T_\infty) = -k A \left. \frac{\partial T}{\partial y} \right|_{y=0}. \]

Rearranging,

\[ \frac{h}{k} = \frac{-\left.\partial T/\partial y\right|_{y=0}}{T_s - T_\infty}. \]

Multiplying by a representative length L gives a dimensionless expression:

\[ \frac{hL}{k} = \frac{-\left.\partial T/\partial y\right|_{y=0}}{(T_s - T_\infty)/L}. \]

The right-hand side is now the ratio of the temperature gradient at the surface to the reference temperature gradient, while the left-hand side is similar to the Biot modulus. This becomes the ratio of conductive thermal resistance to the convective thermal resistance of the fluid, otherwise known as the Nusselt number, Nu. The Nusselt number may also be obtained by a non-dimensional analysis of Fourier's law , since it is equal to the dimensionless temperature gradient at the surface:

\[ \vec{q} = -k A \nabla T. \]

Indeed, if \( \nabla' = L \nabla \) and \( T' = \frac{T - T_h}{T_h - T_c} \), we arrive at

\[ \vec{q} = -\frac{k A (T_h - T_c)}{L} \nabla' T'. \]

Then we define

\[ \vec{q}\,' = \frac{\vec{q}\, L}{k A (T_h - T_c)}, \]

so the equation becomes

\[ \vec{q}\,' = -\nabla' T'. \]

The local Nusselt number is then the dimensionless temperature gradient normal to the surface, and by integrating over the surface of the body:

\[ \overline{\mathrm{Nu}} = -\frac{1}{S'} \int_{S'} \mathrm{Nu}\,\mathrm{d}S', \]

where \( S' = S/L^2 \).
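The defining relation above is easy to exercise numerically. Below is a minimal sketch in Python (all numerical values are illustrative assumptions, not data from any particular experiment) that computes Nu from h, L and k and then inverts the definition to recover the heat transfer coefficient:

```python
# Minimal sketch of the Nusselt number definition Nu = h*L/k.
# All values are illustrative assumptions, not measured data.

def nusselt(h, L, k):
    """Ratio of total (convective) to conductive heat transfer."""
    return h * L / k

def h_from_nusselt(Nu, L, k):
    """Invert the definition to recover h in W/(m^2 K)."""
    return Nu * k / L

k_water = 0.6    # W/(m K), approximate thermal conductivity of water
L = 0.05         # m, assumed characteristic length
h = 1200.0       # W/(m^2 K), assumed convective coefficient

Nu = nusselt(h, L, k_water)
print(f"Nu = {Nu:.0f}")   # 100, in the range typical of turbulent convection
print(f"h  = {h_from_nusselt(Nu, L, k_water):.0f} W/(m^2 K)")  # round trip: 1200
```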
Typically, for free convection, the average Nusselt number is expressed as a function of the Rayleigh number and the Prandtl number , written as

\[ \mathrm{Nu} = f(\mathrm{Ra}, \mathrm{Pr}). \]

Otherwise, for forced convection, the Nusselt number is generally a function of the Reynolds number and the Prandtl number ,

\[ \mathrm{Nu} = f(\mathrm{Re}, \mathrm{Pr}). \]

Empirical correlations for a wide variety of geometries are available that express the Nusselt number in the aforementioned forms. For free convection at a vertical wall, cited [ 4 ] : 493 as coming from Churchill and Chu:

\[ \overline{\mathrm{Nu}}_L = \left( 0.825 + \frac{0.387\,\mathrm{Ra}_L^{1/6}}{\left[ 1 + (0.492/\mathrm{Pr})^{9/16} \right]^{8/27}} \right)^{\!2}. \]

For free convection from horizontal plates, the characteristic length is defined as

\[ L = \frac{A_s}{P}, \]

where A_s is the surface area of the plate and P is its perimeter. Then for the top surface of a hot object in a colder environment or the bottom surface of a cold object in a hotter environment [ 4 ] : 493

\[ \overline{\mathrm{Nu}}_L = 0.54\,\mathrm{Ra}_L^{1/4} \qquad (10^4 \lesssim \mathrm{Ra}_L \lesssim 10^7), \]

and for the bottom surface of a hot object in a colder environment or the top surface of a cold object in a hotter environment [ 4 ] : 493

\[ \overline{\mathrm{Nu}}_L = 0.27\,\mathrm{Ra}_L^{1/4} \qquad (10^5 \lesssim \mathrm{Ra}_L \lesssim 10^{10}). \]

For a horizontal fluid layer heated from below, cited [ 5 ] as coming from Bejan:

\[ \mathrm{Nu} = 0.069\,\mathrm{Ra}^{1/3}\,\mathrm{Pr}^{0.074}. \]

This equation "holds when the horizontal layer is sufficiently wide so that the effect of the short vertical sides is minimal." It was empirically determined by Globe and Dropkin in 1959: [ 6 ] "Tests were made in cylindrical containers having copper tops and bottoms and insulating walls." The containers used were around 5" in diameter and 2" high. The local Nusselt number for laminar flow over a flat plate, at a distance x downstream from the edge of the plate, is given by [ 4 ] : 490

\[ \mathrm{Nu}_x = 0.332\,\mathrm{Re}_x^{1/2}\,\mathrm{Pr}^{1/3}. \]

The average Nusselt number for laminar flow over a flat plate, from the edge of the plate to a downstream distance x , is given by [ 4 ] : 490

\[ \overline{\mathrm{Nu}}_x = 0.664\,\mathrm{Re}_x^{1/2}\,\mathrm{Pr}^{1/3}. \]

In some applications, such as the evaporation of spherical liquid droplets in air, the following (Ranz–Marshall) correlation is used: [ 7 ]

\[ \mathrm{Nu}_D = 2 + 0.6\,\mathrm{Re}_D^{1/2}\,\mathrm{Pr}^{1/3}. \]

Gnielinski's correlation for turbulent flow in tubes: [ 4 ] : 490, 515 [ 8 ]

\[ \mathrm{Nu}_D = \frac{(f/8)\,(\mathrm{Re}_D - 1000)\,\mathrm{Pr}}{1 + 12.7\,(f/8)^{1/2}\,(\mathrm{Pr}^{2/3} - 1)}, \]

where f is the Darcy friction factor that can either be obtained from the Moody chart or, for smooth tubes, from the correlation developed by Petukhov: [ 4 ] : 490

\[ f = (0.790 \ln \mathrm{Re}_D - 1.64)^{-2}. \]

The Gnielinski correlation is valid for: [ 4 ] : 490 0.5 ≤ Pr ≤ 2000 and 3000 ≤ Re_D ≤ 5 × 10⁶. The Dittus–Boelter equation (for turbulent flow) as introduced by W.H. McAdams [ 9 ] is an explicit function for calculating the Nusselt number. It is easy to solve but is less accurate when there is a large temperature difference across the fluid. It is tailored to smooth tubes, so use for rough tubes (most commercial applications) is cautioned. The Dittus–Boelter equation is:

\[ \mathrm{Nu}_D = 0.023\,\mathrm{Re}_D^{4/5}\,\mathrm{Pr}^{n}, \]

where D is the inside diameter of the duct, and n = 0.4 when the fluid is being heated and n = 0.3 when it is being cooled. The Dittus–Boelter equation is valid for [ 4 ] : 514 0.6 ≤ Pr ≤ 160, Re_D ≥ 10 000, and L/D ≥ 10. The Dittus–Boelter equation is a good approximation where temperature differences between bulk fluid and heat transfer surface are minimal, avoiding equation complexity and iterative solving. Taking water with a bulk fluid average temperature of 20 °C (68 °F), viscosity 10.07 × 10⁻⁴ Pa·s, and a heat transfer surface temperature of 40 °C (104 °F) (viscosity 6.96 × 10⁻⁴ Pa·s), a viscosity correction factor ( μ / μ_s ) can be obtained as 1.45. This increases to 3.57 with a heat transfer surface temperature of 100 °C (212 °F) (viscosity 2.82 × 10⁻⁴ Pa·s), making a significant difference to the Nusselt number and the heat transfer coefficient. The Sieder–Tate correlation for turbulent flow is an implicit function , as it analyzes the system as a nonlinear boundary value problem . The Sieder–Tate result can be more accurate as it takes into account the change in viscosity ( μ and μ_s ) due to temperature change between the bulk fluid average temperature and the heat transfer surface temperature, respectively.
The Sieder–Tate correlation is normally solved by an iterative process, as the viscosity factor will change as the Nusselt number changes: [ 10 ]

\[ \mathrm{Nu}_D = 0.027\,\mathrm{Re}_D^{4/5}\,\mathrm{Pr}^{1/3} \left( \frac{\mu}{\mu_s} \right)^{0.14}, \]

where μ is the fluid viscosity at the bulk fluid temperature and μ_s is the fluid viscosity at the heat-transfer boundary surface temperature. The Sieder–Tate correlation is valid for [ 4 ] : 493 0.7 ≤ Pr ≤ 16 700, Re_D ≥ 10 000, and L/D ≥ 10. For fully developed internal laminar flow, the Nusselt numbers tend towards a constant value for long pipes. For internal flow:

\[ \mathrm{Nu} = \frac{h D_h}{k}, \]

where D_h is the hydraulic diameter , k is the thermal conductivity of the fluid, and h is the convective heat transfer coefficient . From Incropera & DeWitt, [ 4 ] : 486–487 for a circular tube with a constant surface temperature,

\[ \mathrm{Nu}_D = 3.66; \]

OEIS sequence A282581 gives this value as Nu_D = 3.6567934577632923619... . For the case of constant surface heat flux, [ 4 ] : 486–487

\[ \mathrm{Nu}_D = \frac{48}{11} \approx 4.36. \]
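To illustrate how the pipe-flow correlations above are applied, the following sketch evaluates the Dittus–Boelter and Gnielinski correlations for the same assumed turbulent water flow. The flow conditions (Re, Pr, tube diameter) are hypothetical values chosen only to sit inside both correlations' validity ranges; in practice the two correlations give broadly similar, but not identical, results:

```python
import math

def dittus_boelter(Re, Pr, heating=True):
    # Nu = 0.023 Re^0.8 Pr^n with n = 0.4 (heating) or 0.3 (cooling).
    # Valid roughly for 0.6 <= Pr <= 160, Re >= 1e4, L/D >= 10.
    n = 0.4 if heating else 0.3
    return 0.023 * Re**0.8 * Pr**n

def petukhov_friction(Re):
    # Petukhov's smooth-tube Darcy friction factor correlation.
    return (0.790 * math.log(Re) - 1.64) ** -2

def gnielinski(Re, Pr):
    # Valid roughly for 0.5 <= Pr <= 2000, 3000 <= Re <= 5e6.
    f = petukhov_friction(Re)
    return ((f / 8) * (Re - 1000) * Pr
            / (1 + 12.7 * math.sqrt(f / 8) * (Pr ** (2 / 3) - 1)))

# Assumed conditions: water near room temperature in a smooth tube.
Re, Pr = 5.0e4, 7.0
D, k = 0.025, 0.6   # tube diameter [m], water conductivity [W/(m K)]

for name, Nu in [("Dittus-Boelter", dittus_boelter(Re, Pr)),
                 ("Gnielinski", gnielinski(Re, Pr))]:
    h = Nu * k / D  # back out the heat transfer coefficient from Nu = h*D/k
    print(f"{name:>14}: Nu = {Nu:6.0f}, h = {h:6.0f} W/(m^2 K)")
```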
https://en.wikipedia.org/wiki/Nusselt_number
A nut shell filter is a device to remove oil from water. In the oil and gas industry , the term walnut shell filter is common since black walnuts are most often used. Typically nut shell filters are designed for loadings under 100 mg/L oil and 100 mg/L suspended solids and operate with 90–95% removal efficiency. [ 1 ] High oil and solids loadings reduce run times between backwashes and result in reduced effluent quality. A bed of nut shell media is contained in a vessel. Vessels are typically vertical, but may also be horizontal.  Particles are captured as flow penetrates through the media bed.  Although it is possible to use other media for this purpose, walnut and pecan shells are most commonly used since they have several desirable properties making them well suited for oil removal.  First, nut shells are hard with a high modulus of elasticity , resulting in a low attrition rate and minimal media replacement, typically <5% per year. [ 1 ] Nut shells also have an equal affinity for water and oil, allowing oil to be adsorbed during normal operation, but also enabling oil to be removed from the bed during agitation, allowing for media reuse. [ 2 ] During normal operation, water typically flows down through the media bed where oil is coalesced and attracted to the nut shells and accumulates in the interstitial spaces between the media. [ 3 ] Typical nut shell media is 12/20 (0.8 to 1.7 mm) and 12/16 mesh (1.2 to 1.7 mm). Although not designed for solids removal, an added benefit is that solids accumulate in the bed. As solids are collected, the differential pressure across the bed increases. Periodic backwashes are initiated to regenerate the media. Typically, a backwash is triggered when the differential pressure across the bed, the elapsed run time, or the effluent quality reaches a preset limit. [ 3 ] Backwash regenerates the media through mechanical agitation of the bed. [ 2 ] If backwash is not sufficient, oil can cause media to agglomerate, a condition known as mudballing. [ 3 ] Typical flux of nut shell filters is 7 to 27 gpm/ft 2 . [ 4 ] Commercial vessels are sized to accommodate the flow rate of water and range up to 14 feet in diameter. [ 4 ] For continuous operation, multiple vessels are frequently used so flow can continue to be treated while backwash occurs in one vessel. For large flows, several vessels may be used. Unlike some oil / water separators, no chemicals are required for oil removal in nut shell filters. Nut shell filters were designed to separate crude oil from oilfield produced water in the 1970s, which remains the principal use. [ 3 ] Nut shell filters can be used onshore and offshore , but are more common onshore where the treatment requirements are typically more stringent and footprint is not limited. Nut shell filters are used for tertiary treatment following primary and secondary treatment which removes the bulk of the oil and suspended solids. [ 4 ] Typically, effluent is reinjected for reuse or disposal or discharged to a surface body of water.
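As a rough illustration of how the quoted flux range translates into vessel size, the sketch below back-calculates the filter area and diameter of a single vertical vessel for an assumed produced-water flow. The flow rate is a hypothetical example value, and the calculation treats the full vessel cross-section as filter area, which is a simplification of real vendor sizing practice:

```python
import math

def vessel_diameter_ft(flow_gpm, flux_gpm_per_ft2):
    """Diameter of a single vertical vessel whose full cross-section
    provides the required filter area (A = flow / flux, A = pi d^2 / 4)."""
    area_ft2 = flow_gpm / flux_gpm_per_ft2
    return 2.0 * math.sqrt(area_ft2 / math.pi)

flow = 1000.0  # gpm of produced water to treat (assumed example value)

# Typical design flux range cited above: 7 to 27 gpm/ft^2.
for flux in (7.0, 27.0):
    d = vessel_diameter_ft(flow, flux)
    print(f"flux {flux:4.1f} gpm/ft^2 -> single-vessel diameter ~{d:4.1f} ft")
# At the conservative end of the flux range a single vessel approaches the
# ~14 ft practical maximum noted above, which is one reason large flows are
# split across several vessels.
```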
https://en.wikipedia.org/wiki/Nut_shell_filter
A nutating disc engine (also sometimes called a disc engine ) is an internal combustion engine consisting fundamentally of one moving part and a direct drive onto the crankshaft . Initially patented in 1993, it differs from earlier internal combustion engines in a number of ways and uses a circular rocking or wobbling nutating motion , drawing heavily from similar steam-powered engines developed in the 19th century, and similar to the motion of the non-rotating portion of a swash plate on a swash plate engine . In its basic configuration the core of the engine is a nutating non-rotating disc, with the center of its hub mounted in the middle of a Z-shaped shaft. The two ends of the shaft rotate , while the disc " nutates " (performs a wobbling motion without rotating around its axis). The motion of the disc circumference describes a portion of a sphere . A portion of the area of the disc is used for intake and compression , a portion is used to seal against a center casing, and the remaining portion is used for expansion and exhaust . The compressed air is admitted to an external accumulator , and then into an external combustion chamber before it is admitted to the power side of the disc. The external combustion chamber enables the engine to use diesel fuel in small engine sizes, giving it unique capabilities for unmanned aerial vehicle propulsion and other applications. One significant benefit of the nutating engine is the overlap of the power strokes. Power is transmitted directly to the output shaft (the crankshaft ), completely eliminating the need for the complicated linkages essential in a conventional piston engine (to convert the piston's linear motion to rotating output motion). Since the disc does not rotate, the seal velocities are lower than in an equivalent IC piston engine. The total seal length is rather long, however, which may negate this advantage. The disc wobbles inside a housing and, in its simplest version, half of the single disc (one lobe) performs the intake/compression function while the other lobe performs the power/exhaust function. The disc lobes can be configured to have equal compression and expansion volumes, or to have the compression volume greater than or less than the expansion volume. This means that the engine can be self-supercharged (see supercharger ), or operate as a Miller cycle / Atkinson cycle . U.S. patent number 5,251,594 was granted to Leonard Meyer of Illinois in 1993 for a "nutating internal combustion disc engine". [ 1 ] The Meyer Nutating Engine is a new type of internal combustion engine with higher power density than conventional reciprocating piston engines, and it can operate on a variety of fuels, including gasoline, heavy fuels and hydrogen. The patent made reference to various 20th-century nutating engines in the United States, but no reference at all to the original Dakeyne engine, described below, in its prior art. The similarity to its 166-year-old hydraulic predecessor is strikingly evident, the main change being that the disc is not entirely flat but slightly convex. The details of operation and potential of the Meyer nutating disc engine have been described by Professor T. Alexander (publishes as T. Korakianitis) and co-workers. [ 2 ] [ 3 ] [ 4 ] [ 5 ] A single prototype has been run briefly under its own power, with a power-to-weight ratio equal to those of typical current four-stroke engines.
The authors of the developer/ US Army Research Laboratory / NASA technical evaluation report claim that a production version of the new engine (for UAV applications) might provide a power-to-weight ratio of 1.6 hp/lb or 2.7 kW/kg. [ 6 ] This is slightly better than current automotive production engines [ 7 ] but nowhere near the Graupner G58 [ 8 ] or the Desert Air DA 150. [ 9 ] A company called McMasters, previously headed by successful American entrepreneur Harold McMaster , is also developing a nutating motor burning a mixture of pure hydrogen and pure oxygen that, it claims, will give 200 hp but weigh only one-tenth as much as gasoline/air production automotive engines of the same output. So far the McMasters company claims to have spent $10 million on its development. Plans are also being made to develop a version "the size of a coffee can" that can be built directly into wheel hubs, eliminating the traditional drive train entirely. This concept was first attempted in the British Leyland Mini Moke [ citation needed ] but was, at that time, severely hampered by the lack of reliable synchronization, a problem since made tractable by ubiquitous miniaturized embedded computer chips . A gasoline -powered version is also planned by McMasters, which is claimed to give substantially cleaner operation than traditional engines. [ 10 ] In the 1820s the mill owners Edward and James Dakeyne of Darley Dale , Derbyshire , designed and had constructed a hydraulic engine (a water engine ) known as "The Romping Lion", based on the same principles, to make use of the high-pressure water available near their mill. Little is known of their engine other than from the somewhat unclear description accompanying the patent, which was granted in 1830. Its main castings were made at the Morley Park foundry near Heage , and it weighed 7 tons and generated 35 horsepower at a head of 96 feet of water. Frank Nixon in his book "The Industrial Archaeology of Derbyshire" (1969) commented that "The most striking characteristic of this ingenious machine is perhaps the difficulty experienced by those trying to describe it; the patentees & Stephen Glover only succeeded in producing descriptions of monumental incomprehensibility". [ 11 ] A larger model was constructed to drain lead mines at Alport near Youlgreave , and many steam versions were subsequently built by other people. The first people to develop steam-powered disc engines based on the Dakeynes' design were George Davies and Henry Taylor, who patented their engine in 1836. It was fitted with valves to control the admission of steam and also differed from the Dakeynes' version in that the axis of the engine was horizontal and the casing of the engine rotated around the disc, the opposite of the original. More patents followed over the next eight years, mainly introducing expansive working and improving the engine's sealing. In 1836 Davies and Taylor granted manufacturing rights for the engine to Fardon and Gossage, owners of a salt works. At the same time Davies was working on a canal tug with a disc engine driving a paddle wheel at the stern. By 1838 a 5 hp engine was in use at the salt works pumping brine. In 1839 Davies, Taylor, Fardon and Gossage conveyed manufacturing rights to the engine to the Birmingham Patent Disc Engine company. As Superintendent of the Company, Henry Davies was responsible for all design and manufacture, while Gossage was a director.
In February 1841 the Board reported that 26 engines had been completed, further engines totalling 260 horsepower were in progress, and a total of 500 horsepower were on order. They could make engines ranging from 5 to 30 horsepower and were currently making engines for a railway carriage. An article in a French journal of 1841 reported that a 12 hp engine had been in use for six months as a winding engine at Corbyn's Hall Mine, Dudley , which could lift a load of 1 ton 180 ft in 1 minute. The disc engines cost from £96 for an 8 hp machine to £300 for a 30 hp model. Ransomes of Ipswich (who were later to become the well-known agricultural engineers Ransomes and Sims ) exhibited a portable steam engine at the Royal Liverpool Show in 1841, powered by a 5 hp BPDE disc engine. By 1840 a canal boat, The Experiment , powered by a Davies engine, was being used for propeller testing, and in 1842 Davies installed a disc engine and disc pump in a canal barge which he demonstrated by draining half a mile of the Stourbridge canal. The same year, a 5 hp engine was fitted in one of HMS Geyser's pinnaces . However, trials on the Thames and for the Directors of the Grand Junction Canal failed to convince either the Admiralty or the canal owners. Nevertheless, there was a growing interest in using steam power on the canals, and the small beam of canal boats very much favoured disc engines. Davies saw his opportunity and built an iron-hulled canal tug with a 16 hp BPDE engine in 1843. To minimise wash he fitted four propellers spaced along a shaft the length of the boat and enclosed in a tube below the waterline. There were two of these propulsion units side by side for a total of 8 propellers. It worked well enough to convince the Directors of the Birmingham and Liverpool Junction Canal to order six tugs which could tow as many as sixteen barges a day at a reasonable speed. In use, a train of six to eight barges left Ellesmere Port and Wolverhampton each day, carrying an average of 100 tons. Unfortunately nobody had considered how the barge train was to transit through the canal locks and shallows. Each such obstruction meant that the train had to be uncoupled and the barges individually manhandled or towed by horse through the obstruction before the train was reassembled on the other side. This negated the benefits of the tug and train and in 1845 the canal's Directors removed the tugs from service. In 1844 the BPDE collapsed. [ 12 ] The workshop equipment, various completed engines and quantities of work in progress were offered for sale. During legal proceedings in 1851 following the bankruptcy of two of the BPDE's principal investors, it was said that the disc engine had not made a profit and that to have relied on it as a realisable asset "was absurd". A competitor to Davies and Taylor was former locomotive engineer George Daniell Bishopp , who had Donkin & Co build his first engine in 1840, and a patent was granted in 1845. The partners Barnard William Farey and Bryan Donkin Jr. patented improvements to the basic design; Donkin had worked with Bishopp on his original engine, while Farey was an employee of Donkins. Bishopp's engine met with some scepticism from the trade press when it was launched on the market. But Bishopp had opted to revert to the Dakeynes' original design which had a yoke which took most of the dynamic forces and greatly reduced the load on the bearings and seals. In the event that there was any leakage, the seals were adjustable. 
In addition, Bishopp had his engines produced by companies with recognised engineering capabilities rather than carrying out his own manufacturing; as well as Donkin's, some of his first engines were built by Joseph Whitworth & Co of Manchester. Another engineering company with a very good reputation was G. Rennie and Son of London, who were so convinced of the engine's potential that in 1849 they employed Bishopp as their foreman of works with specific responsibility for the disc engine. By 1849 a number of Bishopp engines had been sold, and one was used with great success to run the printing presses of the Times newspaper, while another produced by G. Rennie and Son was used to power the iron gunboat HMS Minx . The Times engine had been built by Whitworth and had been shown at the Great Exhibition of 1851 , where it ran smoothly and quietly and impressed all who saw it. In 1853 a disc engine 13 inches in diameter was purchased from Rennie to propel a 55-foot Russian gunboat, which it did at a speed of 7 knots (13 km/h; 8.1 mph). [ 13 ] The advantages of the disc engine were set out by The Mechanics' Magazine in 1855. [ 13 ] Disc engines ultimately fell into disuse because of competition from modern high-speed steam engines, which were small and light and could offer features such as compounding. Additionally, conventional engines did not require the same precision manufacture as disc engines, and steam leakage was not a problem. The nutating disc meter, which uses the same geometry and concept as the Dakeynes' original engine, [ 14 ] [ 15 ] is probably the most widely used flowmeter in the world, and it is claimed that more than half the water meters installed in domestic premises in the US and Europe are of this type. Used for 150 years, it is essentially a Dakeyne disc engine and was most probably developed by Farey and Donkin, who mentioned a "fluid measurement meter" in their disc engine patent granted in 1850. By 1859 they were being manufactured by the Buffalo Meter Company of Buffalo, New York .
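The contemporary performance figures quoted above are internally consistent. As a quick arithmetic check (a sketch that assumes the "ton" is a long ton of 2,240 lb and ignores friction and winding losses), the Corbyn's Hall duty of 1 ton lifted 180 ft in 1 minute works out to almost exactly the reported 12 hp:

```python
# Cross-check of the 1841 Corbyn's Hall winding duty: a 12 hp disc
# engine reportedly lifted 1 ton through 180 ft in 1 minute.
# Assumption: a long ton (2,240 lb); friction and losses ignored.
LONG_TON_LB = 2240
FT_LBF_PER_MIN_PER_HP = 33_000   # Watt's definition of one horsepower

work_rate = LONG_TON_LB * 180 / 1.0          # ft-lbf per minute
power_hp = work_rate / FT_LBF_PER_MIN_PER_HP
print(f"{power_hp:.1f} hp")                  # -> 12.2 hp
```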
https://en.wikipedia.org/wiki/Nutating_disc_engine
Nutation refers to the bending movements of stems, roots, leaves and other plant organs caused by differences in growth in different parts of the organ. Circumnutation refers specifically to the circular movements often exhibited by the tips of growing plant stems, caused by repeating cycles of differences in growth around the sides of the elongating stem. [ 1 ] Nutational movements are usually distinguished from 'variational' movements caused by temporary differences in the water pressure inside plant cells ( turgor ). Simple nutation occurs in flat leaves and flower petals, caused by unequal growth of the two sides of the surface. For example, in young leaf buds the outer surface of each leaflet grows faster, causing it to curve over its neighbors and form a compact bud. As the bud expands, growth becomes more rapid on the inner surface of the leaves, causing the bud to open and the leaves to flatten out. Similar inequality of growth, but more sharply localized, leads to the folding and rolling of the leaf in the bud, and to the changing shapes of flower petals. Circumnutational movements are most obvious in growing seedlings, where the combination of circular movement and upward growth causes the tip to move up in a spiral path. The first detailed analysis of circumnutation was Charles Darwin's The Power of Movement in Plants ; [ 2 ] [ 3 ] he concluded that most plant movements were modifications of circumnutation, but many counterexamples are now known. Circumnutation is not a direct response to gravity or the direction of illumination, but these factors and many physiological processes can influence its direction, timing and amplitude. [ 1 ] Although the function of circumnutation in most plants is not known, many twining plants have adapted these movements to help them find and twine around vertical objects such as tree trunks, and to help tendrils find and wind around smaller supports. [ 4 ] [ 5 ] The growing tip of the vine or tendril initially swings in wide circles that maximize its chance of bumping into an obstacle (a potential support). Once an obstacle is encountered, the circles tighten, causing the vine or tendril to wind around the support as it grows. Over the last century, studies on plant nutations gave rise to three main theories about their origin. [ 1 ] [ 5 ] New experiments in space showed that the presence of gravity induces and amplifies oscillations of plant shoots, while confirming the occurrence of reduced nutations. [ 11 ] [ 12 ] These findings support the "two-oscillator" hypothesis, which has been revisited to account for the effect of elastic deflections due to gravity loading, previously disregarded. [ 13 ] By means of a morphoelastic rod model, some studies showed that a Hopf-like bifurcation phenomenon occurs and that elasticity plays an important role in determining the onset of oscillations. [ 14 ] [ 15 ] In particular, the plant shoot might undergo "exogenous" oscillations, which add to the "endogenous" ones, as it reaches a critical length. [ 15 ]
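The spiral path traced by a circumnutating seedling tip follows directly from superimposing a circular oscillation on steady upward growth. A toy kinematic sketch (all parameter values are illustrative, not measurements of any real species):

```python
import math

def tip_position(t_hours: float, period_h: float = 2.0,
                 radius_cm: float = 1.5, growth_cm_per_h: float = 0.3):
    """Toy model of circumnutation: a circular oscillation of the stem
    tip plus steady elongation yields the helical (spiral) path
    described above. Parameter values are illustrative only."""
    phase = 2 * math.pi * t_hours / period_h
    x = radius_cm * math.cos(phase)       # circular component
    y = radius_cm * math.sin(phase)
    z = growth_cm_per_h * t_hours         # upward growth component
    return x, y, z

# Sample the tip's path over one day at 15-minute intervals
path = [tip_position(t / 4) for t in range(24 * 4)]
```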
https://en.wikipedia.org/wiki/Nutation_(botany)
In engineering , a nutating motion is similar to that seen in a swashplate mechanism. In general, a nutating plate is carried on a skewed bearing on the main shaft and does not itself rotate, whereas a swashplate is fixed to the shaft and rotates with it. The motion is similar to that of a coin or a tire wobbling on the ground after being dropped flat side down. Precession is the physical term for this kind of motion. Nutating mixers are used for gentle three-dimensional (gyrating) agitation in chemical or biological procedures, repetitively moving the vessels holding the liquids. [ 1 ] The nutating motion is widely employed in flowmeters and pumps . The volume displaced in one revolution is first determined, and the speed of the device in revolutions per unit time is measured. In the case of flowmeters, the flow rate is then the product of the rotational speed and the displacement per revolution. A nutating disc engine was patented in 1993 and a prototype was reported in 2006. The engine consisted of an internal disk wobbling on a Z-shaped shaft. [ 2 ] Nutation has also been used in drive systems for gearboxes , with proposed uses including helicopter rotors , seat recliners and a European Space Agency probe [ which? ] to Mercury. [ 3 ] [ 4 ]
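The flowmeter calculation just described is a one-line product, shown here as a minimal sketch (the example chamber volume and speed are hypothetical):

```python
def flow_rate(displacement_per_rev: float, revs_per_minute: float) -> float:
    """Flow rate of a nutating-disc flowmeter, exactly as described
    above: volume swept per revolution times rotational speed gives
    volume per unit time."""
    return displacement_per_rev * revs_per_minute

# Hypothetical meter: a chamber sweeping 0.05 L per nutation cycle
# at 300 rpm passes 15 L/min.
print(flow_rate(0.05, 300))  # -> 15.0
```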
https://en.wikipedia.org/wiki/Nutation_(engineering)
A nutrient cycle (or ecological recycling ) is the movement and exchange of inorganic and organic matter back into the production of matter. Energy flow is a unidirectional and noncyclic pathway, whereas the movement of mineral nutrients is cyclic. Mineral cycles include the carbon cycle , sulfur cycle , nitrogen cycle , water cycle , phosphorus cycle , and oxygen cycle , among others that continually recycle, along with other mineral nutrients, into productive ecological nutrition. The nutrient cycle is nature's recycling system. All forms of recycling have feedback loops that use energy in the process of putting material resources back into use. Recycling in ecology is regulated to a large extent during the process of decomposition . [ 1 ] Ecosystems employ biodiversity in the food webs that recycle natural materials, such as mineral nutrients , which include water . Recycling in natural systems is one of the many ecosystem services that sustain and contribute to the well-being of human societies. [ 2 ] [ 3 ] [ 4 ] There is much overlap between the terms biogeochemical cycle and nutrient cycle. Most textbooks integrate the two and seem to treat them as synonymous terms. [ 5 ] However, the terms often appear independently. The nutrient cycle is more often used in direct reference to the idea of an intra-system cycle, where an ecosystem functions as a unit. From a practical point of view, it does not make sense to assess a terrestrial ecosystem by considering the full column of air above it as well as the great depths of Earth below it. While an ecosystem often has no clear boundary, as a working model it is practical to consider the functional community where the bulk of matter and energy transfer occurs. [ 6 ] Nutrient cycling occurs in ecosystems that participate in the "larger biogeochemical cycles of the earth through a system of inputs and outputs." [ 6 ] : 425 All systems recycle. The biosphere is a network of continually recycling materials and information in alternating cycles of convergence and divergence. As materials converge or become more concentrated they gain in quality, increasing their potentials to drive useful work in proportion to their concentrations relative to the environment. As their potentials are used, materials diverge, or become more dispersed in the landscape, only to be concentrated again at another time and place. [ 7 ] : 2 Ecosystems are capable of complete recycling. Complete recycling means that 100% of the waste material can be reconstituted indefinitely. This idea was captured by Howard T. Odum when he penned that "it is thoroughly demonstrated by ecological systems and geological systems that all the chemical elements and many organic substances can be accumulated by living systems from background crustal or oceanic concentrations without limit as to concentration so long as there is available solar or another source of potential energy". [ 8 ] : 29 In 1979 Nicholas Georgescu-Roegen proposed the fourth law of entropy, stating that complete recycling is impossible. Despite Georgescu-Roegen's extensive intellectual contributions to the science of ecological economics , the fourth law has been rejected in line with observations of ecological recycling. [ 9 ] [ 10 ] However, some authors state that complete recycling is impossible for technological waste. [ 11 ] Ecosystems execute closed-loop recycling, where demand for the nutrients that add to the growth of biomass exceeds supply within the system.
There are regional and spatial differences in the rates of growth and exchange of materials, where some ecosystems may be in nutrient debt (sinks) while others will have extra supply (sources). These differences relate to climate, topography, and geological history, which leave behind different sources of parent material. [ 6 ] [ 12 ] In terms of a food web, a cycle or loop is defined as "a directed sequence of one or more links starting from, and ending at, the same species." [ 13 ] : 185 An example of this is the microbial food web in the ocean, where "bacteria are exploited, and controlled, by protozoa, including heterotrophic microflagellates which are in turn exploited by ciliates. This grazing activity is accompanied by excretion of substances which are in turn used by the bacteria so that the system more or less operates in a closed circuit." [ 14 ] : 69–70 An example of ecological recycling occurs in the enzymatic digestion of cellulose . "Cellulose, one of the most abundant organic compounds on Earth, is the major polysaccharide in plants where it is part of the cell walls. Cellulose-degrading enzymes participate in the natural, ecological recycling of plant material." [ 17 ] Different ecosystems can vary in their recycling rates of litter, which creates a complex feedback on factors such as the competitive dominance of certain plant species. Different rates and patterns of ecological recycling leave a legacy of environmental effects with implications for the future evolution of ecosystems. [ 18 ] A large fraction of the elements composing living matter reside at any instant of time in the world's biota. Because the earthly pool of these elements is limited and the rates of exchange among the various components of the biota are extremely fast with respect to geological time, it is quite evident that much of the same material is being incorporated again and again into different biological forms. This observation gives rise to the notion that, on the average, matter (and some amounts of energy) are involved in cycles. [ 19 ] : 219 Ecological recycling is common in organic farming, where nutrient management is fundamentally different compared to agri-business styles of soil management . Organic farms that employ ecosystem recycling to a greater extent support more species (increased levels of biodiversity) and have a different food web structure. [ 20 ] [ 21 ] Organic agricultural ecosystems rely on the services of biodiversity for the recycling of nutrients through soils instead of relying on the supplementation of synthetic fertilizers . [ 22 ] [ 23 ] The model for ecological recycling agriculture adheres to a set of core principles. Where produce from an organic farm leaves the farm gate for the market, the system becomes an open cycle and nutrients may need to be replaced through alternative methods. The persistent legacy of environmental feedback that is left behind by or as an extension of the ecological actions of organisms is known as niche construction or ecosystem engineering. Many species leave an effect even after their death, such as coral skeletons or the extensive habitat modifications to a wetland by a beaver, whose components are recycled and re-used by descendants and other species living under a different selective regime through the feedback and agency of these legacy effects. [ 26 ] [ 27 ] Ecosystem engineers can influence nutrient cycling efficiency rates through their actions.
Earthworms , for example, passively and mechanically alter the nature of soil environments. The bodies of dead worms passively contribute mineral nutrients to the soil. The worms also mechanically modify the physical structure of the soil as they crawl about ( bioturbation ) and feed on the organic matter they pull from the soil litter . These activities transport nutrients into the mineral layers of soil . Worms discard wastes that create worm castings containing undigested materials where bacteria and other decomposers gain access to the nutrients. Earthworms are employed in this process, and the production of the ecosystem depends on their capability to create feedback loops in the recycling process. [ 29 ] [ 30 ] Shellfish are also ecosystem engineers because they: 1) Filter suspended particles from the water column; 2) Remove excess nutrients from coastal bays through denitrification ; 3) Serve as natural coastal buffers, absorbing wave energy and reducing erosion from boat wakes, sea level rise and storms; 4) Provide nursery habitat for fish that are valuable to coastal economies. [ 31 ] Fungi contribute to nutrient cycling [ 32 ] and nutritionally rearrange patches of the ecosystem, creating niches for other organisms. [ 33 ] In that way, fungi growing in dead wood allow xylophages to grow and develop, and xylophages , in turn, affect dead wood, contributing to wood decomposition and nutrient cycling in the forest floor . [ 34 ] Nutrient cycling has a historical foothold in the writings of Charles Darwin in reference to the decomposition actions of earthworms. Darwin wrote about "the continued movement of the particles of earth". [ 28 ] [ 36 ] [ 37 ] Even earlier, in 1749, Carl Linnaeus wrote that by "the economy of nature we understand the all-wise disposition of the creator in relation to natural things, by which they are fitted to produce general ends, and reciprocal uses", in reference to the balance of nature in his book Oeconomia Naturae . [ 38 ] In this book he captured the notion of ecological recycling: "The 'reciprocal uses' are the key to the whole idea, for 'the death, and destruction of one thing should always be subservient to the restitution of another;' thus mould spurs the decay of dead plants to nourish the soil, and the earth then 'offers again to plants from its bosom, what it has received from them.'" [ 39 ] The basic idea of a balance of nature, however, can be traced back to the Greeks: Democritus , Epicurus , and their Roman disciple Lucretius . [ 40 ] Following the Greeks, the idea of a hydrological cycle (water is considered a nutrient) was validated and quantified by Halley in 1687. Dumas and Boussingault (1844) provided a key paper that is recognized by some to be the true beginning of biogeochemistry, in which they discussed the cycle of organic life in great detail. [ 40 ] [ 41 ] From 1836 to 1876, Jean Baptiste Boussingault demonstrated the nutritional necessity of minerals and nitrogen for plant growth and development. Prior to this time, influential chemists discounted the importance of mineral nutrients in soil. [ 42 ] Ferdinand Cohn is another influential figure. "In 1872, Cohn described the 'cycle of life' as the "entire arrangement of nature" in which the dissolution of dead organic bodies provided the materials necessary for new life.
The amount of material that could be molded into living beings was limited, he reasoned, so there must exist an "eternal circulation" (ewigem kreislauf) that constantly converts the same particle of matter from dead bodies into living bodies." [ 43 ] : 115–116 These ideas were synthesized in the Master's research of Sergei Vinogradskii from 1881–1883. [ 43 ] In 1926 Vernadsky coined the term biogeochemistry as a sub-discipline of geochemistry . [ 40 ] However, the term nutrient cycle predates biogeochemistry, appearing in a pamphlet on silviculture in 1899: "These demands by no means pass over the fact that at places where sufficient quantities of humus are available and where, in case of continuous decomposition of litter, a stable, nutrient humus is present, considerable quantities of nutrients are also available from the biogenic nutrient cycle for the standing timber." [ 44 ] : 12 In 1898 there is a reference to the nitrogen cycle in relation to nitrogen fixing microorganisms . [ 45 ] Other uses of and variations on the terminology relating to the process of nutrient cycling appear throughout history. Water is also a nutrient. [ 51 ] In this context, some authors also refer to precipitation recycling, which "is the contribution of evaporation within a region to precipitation in that same region." [ 52 ] These variations on the theme of nutrient cycling continue to be used, and all refer to processes that are part of the global biogeochemical cycles. However, authors tend to refer to natural, organic, ecological, or bio-recycling in reference to the work of nature, such as it is used in organic farming or ecological agricultural systems. [ 24 ] An endless stream of technological waste accumulates in different spatial configurations across the planet and becomes hazardous in our soils, our streams, and our oceans. [ 53 ] [ 54 ] This idea was similarly expressed in 1954 by ecologist Paul Sears : "We do not know whether to cherish the forest as a source of essential raw materials and other benefits or to remove it for the space it occupies. We expect a river to serve as both vein and artery carrying away waste but bringing usable material in the same channel. Nature long ago discarded the nonsense of carrying poisonous wastes and nutrients in the same vessels." [ 55 ] : 960 Ecologists use population ecology to model contaminants as competitors or predators. [ 56 ] Rachel Carson was an ecological pioneer in this area, as her book Silent Spring inspired research into biomagnification and brought to the world's attention the unseen pollutants moving into the food chains of the planet. [ 57 ] In contrast to the planet's natural ecosystems, technology (or technoecosystems ) is not reducing its impact on planetary resources. [ 58 ] [ 59 ] Only 7% of total plastic waste (adding up to millions upon millions of tons) is being recycled by industrial systems; the 93% that never makes it into the industrial recycling stream is presumably absorbed by natural recycling systems. [ 60 ] In contrast, over extensive lengths of time (billions of years), ecosystems have maintained a consistent balance, with production roughly equaling respiratory consumption rates. The balanced recycling efficiency of nature means that the production of decaying waste material has exceeded the rate of its consumption back into food chains only by an amount equal to the global stocks of fossilized fuels that escaped the chain of decomposition.
[ 61 ] Pesticides soon spread through everything in the ecosphere – both human technosphere and nonhuman biosphere – returning from the 'out there' of natural environments back into plant, animal, and human bodies situated at the 'in here' of artificial environments with unintended, unanticipated, and unwanted effects. By using zoological, toxicological, epidemiological, and ecological insights, Carson generated a new sense of how 'the environment' might be seen. [ 62 ] : 62 Microplastics and nanosilver materials flowing and cycling through ecosystems from pollution and discarded technology are among a growing list of emerging ecological concerns. [ 63 ] For example, unique assemblages of marine microbes have been found to digest plastic accumulating in the world's oceans. [ 64 ] Discarded technology is absorbed into soils and creates a new class of soils called technosols . [ 65 ] Human wastes in the Anthropocene are creating new systems of ecological recycling, novel ecosystems that have to contend with the mercury cycle and other synthetic materials that are streaming into the biodegradation chain. [ 66 ] Microorganisms have a significant role in the removal of synthetic organic compounds from the environment, empowered by recycling mechanisms that have complex biodegradation pathways. The effect of synthetic materials, such as nanoparticles and microplastics, on ecological recycling systems is listed as one of the major concerns for ecosystems in this century. [ 63 ] [ 67 ] Recycling in human industrial systems (or technoecosystems ) differs from ecological recycling in scale, complexity, and organization. Industrial recycling systems do not focus on the employment of ecological food webs to recycle waste back into different kinds of marketable goods, but primarily employ people and technodiversity instead. Some researchers have argued that these and other kinds of technological solutions, promoted under the banner of 'eco-efficiency', are limited in their capability, harmful to ecological processes, and dangerous in their hyped capabilities. [ 11 ] [ 68 ] Many technoecosystems are competitive and parasitic toward natural ecosystems. [ 61 ] [ 69 ] Food web or biologically based "recycling includes metabolic recycling (nutrient recovery, storage, etc.) and ecosystem recycling (leaching and in situ organic matter mineralization, either in the water column, in the sediment surface, or within the sediment)." [ 70 ] : 243
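The food-web definition of a cycle quoted earlier (a directed sequence of links starting from, and ending at, the same species) is straightforward to operationalize. The sketch below finds such closed circuits by depth-first search in a toy version of the marine microbial loop described above; the species names and links are illustrative simplifications, not data from the cited studies:

```python
# Toy directed food web: an edge A -> B means matter flows from A to B
# (B grazes on A, or reuses substances that A excretes).
web = {
    "bacteria": ["flagellates"],    # bacteria grazed by microflagellates
    "flagellates": ["ciliates"],    # flagellates exploited by ciliates
    "ciliates": ["bacteria"],       # excreted substances reused by bacteria
    "phytoplankton": ["ciliates"],  # a non-cyclic input into the web
}

def find_cycles(graph: dict[str, list[str]]) -> list[list[str]]:
    """Return every directed loop that starts and ends at the same
    species, matching the definition of a cycle quoted in the text."""
    cycles = []
    def dfs(node: str, path: list[str]) -> None:
        for nxt in graph.get(node, []):
            if nxt == path[0]:
                cycles.append(path + [nxt])      # closed loop found
            elif nxt not in path:
                dfs(nxt, path + [nxt])
    for species in graph:
        dfs(species, [species])
    return cycles

# Reports the bacteria -> flagellates -> ciliates -> bacteria circuit
# (once per starting species); the closed loop is what lets matter
# recirculate within the system.
print(find_cycles(web))
```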
https://en.wikipedia.org/wiki/Nutrient_cycle
Nutrient depletion is a form of resource depletion and refers to the loss of nutrients and micronutrients in a habitat or parts of the biosphere , most often the soil ( soil depletion , soil degradation ). [ 1 ] On the level of a complete ecological niche or ecosystem , nutrient depletion can also come about via the loss of the nutrient substrate ( soil loss , wetland loss , etc.). Nutrients are usually the first link in the food chain , thus a loss of nutrients in a habitat will affect nutrient cycling and eventually the entire food chain. [ 2 ] [ 3 ] Nutrient depletion can refer to shifts in both the relative nutrient composition and the overall nutrient quantity (i.e. food abundance ). Human activity has extensively changed both in the natural environment, usually with negative effects on wild flora and fauna. [ 4 ] [ 5 ] The opposite effect is known as eutrophication or nutrient pollution . [ 6 ] Both depletion and eutrophication lead to shifts in biodiversity and species abundance (usually a decline). [ 7 ] The effects are bidirectional in that a shift in species composition in a habitat may also lead to a shift in the nutrient composition. [ 8 ]
https://en.wikipedia.org/wiki/Nutrient_depletion
Nutrient management is the science and practice directed at linking soil , crop , weather , and hydrologic factors with cultural, irrigation , and soil and water conservation practices to achieve optimal nutrient use efficiency, crop yields , crop quality, and economic returns , while reducing off-site transport of nutrients ( fertilizer ) that may impact the environment . [ 1 ] It involves matching the specific field soil, climate, and crop management conditions to the rate, source, timing, and place of nutrient application (commonly known as the 4R nutrient stewardship ). [ 2 ] Important factors that need to be considered when managing nutrients include (a) the application of nutrients considering the achievable optimum yields and, in some cases, crop quality; (b) the management, application, and timing of nutrients using a budget based on all sources and sinks active at the site; and (c) the management of soil, water, and crop to minimize the off-site transport of nutrients from nutrient leaching out of the root zone, surface runoff , and volatilization (or other gas exchanges). There can be potential interactions because of differences in nutrient pathways and dynamics. For instance, practices that reduce the off-site surface transport of a given nutrient may increase the leaching losses of other nutrients. These complex dynamics present nutrient managers with the difficult task of achieving the best balance between maximizing profit and contributing to the conservation of the biosphere . A crop nutrient management plan is a tool that farmers can use to increase the efficiency of all the nutrient sources a crop uses while reducing production and environmental risk , ultimately increasing profit . Increasingly, growers as well as agronomists use digital tools like SST or Agworld to create their nutrient management plan so they can capitalize on information gathered over a number of years. [ 3 ] It is generally agreed that there are ten fundamental components of a crop nutrient management plan, each critical to helping analyze a field and improve nutrient efficiency for the crops grown. [ 4 ] When such a plan is designed for animal feeding operations (AFO), it may be termed a "manure management plan." In the United States, some regulatory agencies recommend or require that farms implement these plans in order to prevent water pollution . The U.S. Natural Resources Conservation Service (NRCS) has published guidance documents on preparing a comprehensive nutrient management plan (CNMP) for AFOs. [ 5 ] [ 6 ] The International Plant Nutrition Institute has published a 4R plant nutrition manual for improving the management of plant nutrition. The manual outlines the scientific principles behind each of the four Rs or "rights" (right source of nutrient, right application rate, right time, right place) and discusses the adoption of 4R practices on the farm, approaches to nutrient management planning, and measurement of sustainability performance. [ 7 ] Of the 16 essential plant nutrients, nitrogen is usually the most difficult to manage in field crop systems. This is because the quantity of plant-available nitrogen can change rapidly in response to changes in soil water status. Nitrogen can be lost from the plant-soil system by one or more of the following processes: leaching ; surface runoff ; soil erosion ; ammonia volatilization ; and denitrification . [ 8 ] Nitrogen management aims to maximize the efficiency with which crops use applied N.
Improvements in nitrogen use efficiency are associated with decreases in N loss from the soil. Although losses cannot be avoided completely, significant improvements can be realized by applying appropriate management practices in the cropping system. [ 8 ] Nitrate is the form of nitrogen that is most susceptible to loss from the soil, through denitrification and leaching . The amount of N lost via these processes can be limited by restricting soil nitrate concentrations, especially at times of high risk. This can be done in many ways, although these are not always cost-effective. Rates of N application should be high enough to maximize long-term profits while minimizing residual (unused) nitrate in the soil after harvest. Short-term changes in the plant-available N status make accurate seasonal predictions of crop N requirement difficult in most situations. However, models (such as NLEAP [ 13 ] and Adapt-N [ 14 ] ) that use soil, weather, crop, and field management data can be updated with day-to-day changes and thereby improve predictions of the fate of applied N. They allow farmers to make adaptive management decisions that can improve N-use efficiency and minimize N losses and environmental impact while maximizing profitability. [ 15 ] [ 9 ] [ 16 ]
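The sources-and-sinks budgeting described above can be illustrated with a minimal sketch. All coefficients and credit values here are hypothetical placeholders; real plans use locally calibrated figures or decision tools such as the NLEAP and Adapt-N models mentioned in the text:

```python
def nitrogen_rate(yield_goal: float, n_per_unit_yield: float,
                  soil_nitrate_credit: float, manure_credit: float,
                  legume_credit: float, efficiency: float = 0.7) -> float:
    """Minimal nitrogen-budget sketch: crop demand minus credits from
    soil and organic sources, divided by an assumed fertilizer use
    efficiency. Every coefficient below is a hypothetical placeholder,
    not an agronomic recommendation."""
    demand = yield_goal * n_per_unit_yield
    credits = soil_nitrate_credit + manure_credit + legume_credit
    return max(0.0, (demand - credits) / efficiency)

# Hypothetical maize field: 180 bu/ac yield goal at 1.0 lb N per bushel,
# with 40 lb/ac soil nitrate, 25 lb/ac manure and 30 lb/ac legume credits.
print(f"{nitrogen_rate(180, 1.0, 40, 25, 30):.0f} lb N/ac")  # -> 121
```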
https://en.wikipedia.org/wiki/Nutrient_management
Nutrient pollution is a form of water pollution caused by too many nutrients entering the water. It is a primary cause of eutrophication of surface waters (lakes, rivers and coastal waters ), in which excess nutrients, usually nitrogen or phosphorus , stimulate algal growth. [ 1 ] Sources of nutrient pollution include surface runoff from farms, waste from septic tanks and feedlots , and emissions from burning fuels. Raw sewage , which is rich in nutrients, also contributes to the issue when dumped in water bodies. Excess nitrogen causes environmental problems such as harmful algal blooms , hypoxia , acid rain , nitrogen saturation in forests, and climate change . [ 2 ] Agricultural production relies heavily on the use of natural and synthetic fertilizers , which often contain high levels of nitrogen , phosphorus and potassium . [ 3 ] [ 4 ] When nitrogen and phosphorus are not fully used by the growing plants, they can be lost from the farm fields and negatively impact air and downstream water quality. [ 5 ] These nutrients can end up in aquatic ecosystems and contribute to increased eutrophication. [ 6 ] To reduce nutrient pollution, several strategies can be implemented. These include installing buffer zones of vegetation around farms or artificial wetlands to absorb excess nutrients. Additionally, better wastewater treatment and reduced sewage dumping can help limit nutrient discharge into water systems. Finally, countries can create a permit system under the polluter pays principle . The principal sources of nutrient pollution in an individual watershed depend on the prevailing land uses . The sources may be point sources , nonpoint sources , or both. Nutrient pollution from some air pollution sources may occur independently of the local land uses, due to long-range transport of air pollutants from distant sources. [ 9 ] In order to gauge how best to prevent eutrophication from occurring, the specific sources that contribute to nutrient loading must be identified. There are two common sources of nutrients and organic matter: point and nonpoint sources. Use of synthetic fertilizers , burning of fossil fuels , and agricultural animal production , especially concentrated animal feeding operations (CAFO), have added large quantities of reactive nitrogen to the biosphere . [ 10 ] Globally, nitrogen balances are quite inefficiently distributed, with some countries having surpluses and others deficits, causing a range of environmental issues especially in the former. For most countries around the world, the trade-off between closing yield gaps and mitigating nitrogen pollution is small or non-existent. [ 11 ] Phosphorus pollution is caused by excessive use of fertilizers and manure , particularly when compounded by soil erosion . In the European Union, it is estimated that more than 100,000 tonnes of phosphorus may be lost to water bodies and lakes due to water erosion. [ 12 ] Phosphorus is also discharged by municipal sewage treatment plants and some industries. [ 13 ] Point sources are directly attributable to one influence. In point sources the nutrient waste travels directly from source to water. Point sources are relatively easy to regulate. [ 14 ] Nonpoint source pollution (also known as 'diffuse' or 'runoff' pollution) is that which comes from ill-defined and diffuse sources. Nonpoint sources are difficult to regulate and usually vary spatially and temporally (with season , precipitation , and other irregular events ).
[ 15 ] It has been shown that nitrogen transport is correlated with various indices of human activity in watersheds, [ 16 ] [ 17 ] including the amount of development. [ 18 ] Ploughing in agriculture and development are among the activities that contribute most to nutrient loading. [ 8 ] Nutrients from human activities tend to accumulate in soils and remain there for years. It has been shown [ 19 ] that the amount of phosphorus lost to surface waters increases linearly with the amount of phosphorus in the soil. Thus much of the nutrient loading in soil eventually makes its way to water. Nitrogen, similarly, has a turnover time of decades. Nutrients from human activities tend to travel from land to either surface or ground water. Nitrogen in particular is removed through storm drains , sewage pipes, and other forms of surface runoff . Nutrient losses in runoff and leachate are often associated with agriculture . Modern agriculture often involves the application of nutrients onto fields or pastures in order to maximize production. However, farmers frequently apply more nutrients than are needed by crops, and the excess runs off into surface water or groundwater. [ 20 ] Regulations aimed at minimizing nutrient exports from agriculture are typically far less stringent than those placed on sewage treatment plants [ 21 ] and other point source polluters. Lakes within forested land are also subject to surface runoff influences. Runoff can wash mineral nitrogen and phosphorus out of detritus and consequently supply the water bodies, leading to slow, natural eutrophication. [ 22 ] Nitrogen is released into the air through ammonia volatilization and nitrous oxide production. The combustion of fossil fuels is a large human-initiated contributor to atmospheric nitrogen pollution. Atmospheric nitrogen reaches the ground by two different processes, the first being wet deposition such as rain or snow, and the second being dry deposition via particles and gases in the air. [ 23 ] Atmospheric deposition (e.g., in the form of acid rain ) can also affect nutrient concentration in water, [ 24 ] especially in highly industrialized regions. Excess nutrients have been linked to a wide range of harmful effects. Nutrient pollution can have economic impacts due to increasing water treatment costs, commercial fishing and shellfish losses, recreational fishing losses, and reduced tourism income. [ 27 ] Human health effects include excess nitrate in drinking water ( blue baby syndrome ) and disinfection by-products in drinking water. Swimming in water affected by a harmful algal bloom can cause skin rashes and respiratory problems. [ 28 ] Nutrient trading is a type of water quality trading , a market-based policy instrument used to improve or maintain water quality. The concept of water quality trading is based on the fact that different pollution sources in a watershed can face very different costs to control the same pollutant. [ 29 ] Water quality trading involves the voluntary exchange of pollution reduction credits from sources with low costs of pollution control to those with high costs of pollution control, and the same principles apply to nutrient water quality trading. The underlying principle is " polluter pays ", usually linked with a regulatory requirement for participating in the trading program.
[ 30 ] A 2013 Forest Trends report summarized water quality trading programs and found three main types of funders: beneficiaries of watershed protection, polluters compensating for their impacts, and "public good payers" that may not directly benefit but fund the pollution reduction credits on behalf of a government or NGO . As of 2013, payments were overwhelmingly initiated by public good payers like governments and NGOs. [ 30 ] : 11 Nutrient source apportionment is used to estimate the nutrient load from various sectors entering water bodies, following attenuation or treatment. Agriculture is typically the principal source of nitrogen in water bodies in Europe, whereas in many countries households and industries tend to be the dominant contributors of phosphorus. [ 31 ] Where water quality is impacted by excess nutrients, load source apportionment models can support the proportional and pragmatic management of water resources by identifying the pollution sources. There are two broad approaches to load apportionment modelling: (i) load-orientated approaches, which apportion origin based on in-stream monitoring data, [ 32 ] [ 33 ] and (ii) source-orientated approaches, where amounts of diffuse, or nonpoint source pollution , emissions are calculated using models typically based on export coefficients from catchments with similar characteristics. [ 34 ] [ 35 ] For example, the Source Load Apportionment Model (SLAM) takes the latter approach, estimating the relative contribution of sources of nitrogen and phosphorus to surface waters in Irish catchments without in-stream monitoring data by integrating information on point discharges (urban wastewater, industry and septic tank systems), diffuse sources (pasture, arable, forestry, etc.), and catchment data, including hydrogeological characteristics. [ 36 ] Various nature-based solutions exist to tackle nutrient pollution. For instance, farms can create artificial wetlands , which help remove nutrient run-off and can also include a water body. Farms can also create buffer zones to capture nutrients in groundwater or run-off. Finally, vegetating drainage ditches provides another opportunity for excess nutrients to be captured. [ 37 ] Based on surveys by state environmental agencies, agricultural nonpoint source (NPS) pollution is the largest source of water quality impairments throughout the U.S. [ 38 ] : 10 NPS pollution is not subject to discharge permits under the federal Clean Water Act (CWA). [ 39 ] EPA and states have used grants, partnerships and demonstration projects to create incentives for farmers to adjust their practices and reduce surface runoff . [ 38 ] : 10–11 The basic requirements for states to develop nutrient criteria and standards were mandated in the 1972 Clean Water Act. Implementing this water quality program has been a major scientific, technical and resource-intensive challenge for both EPA and the states, and development is continuing well into the 21st century. EPA published a wastewater management regulation in 1978 to address the national nitrogen pollution problem, which had been increasing for decades. [ 40 ] In 1998, the agency published a National Nutrient Strategy with a focus on developing nutrient criteria. [ 41 ] Between 2000 and 2010, the EPA published federal-level nutrient criteria for rivers/streams, lakes/reservoirs, estuaries, and wetlands, and related guidance. "Ecoregional" nutrient criteria for 14 ecoregions across the U.S. were included in these publications.
While states may directly adopt the EPA-published criteria, in many cases they need to modify the criteria to reflect site-specific conditions. In 2004, EPA stated its expectations for numeric criteria (as opposed to less-specific narrative criteria) for total nitrogen (TN), total phosphorus (TP), chlorophyll a (chl-a), and clarity, and established "mutually-agreed upon plans" for state criteria development. In 2007, the agency stated that progress among the states on developing nutrient criteria had been uneven. EPA reiterated its expectations for numeric criteria and promised support for state efforts to develop their criteria. [ 42 ] After the EPA had introduced watershed-based NPDES permitting in 2007, interest in nutrient removal and achieving regional Total Maximum Daily Load (TMDL) limitations led to the development of nutrient trading schemes. [ 43 ] In 2008, the EPA published a progress report on state efforts to develop nutrient standards. Most states had not developed numeric nutrient criteria for rivers and streams; lakes and reservoirs; or wetlands and estuaries (for those states with estuaries). [ 44 ] In the same year, EPA also established a Nutrient Innovations Task Group (NITG), composed of state and EPA experts, to monitor and evaluate the progress of reducing nutrient pollution. [ 45 ] In 2009 the NITG issued a report, "An Urgent Call to Action", expressing concern that water quality continued to deteriorate nationwide due to increasing nutrient pollution, and recommending more vigorous development of nutrient standards by the states. [ 46 ] In 2011 EPA reiterated the need for states to fully develop their nutrient standards, noting that drinking water violations for nitrates had doubled in eight years, that half of all streams nationwide had medium to high levels of nitrogen and phosphorus, and that harmful algal blooms were increasing. The agency set out a framework for states to develop priorities and watershed-level goals for reductions of nutrients. [ 47 ] Many point source dischargers in the U.S., while not necessarily the largest sources of nutrients in their respective watersheds, are required to comply with nutrient effluent limitations in their permits, which are issued through the National Pollutant Discharge Elimination System (NPDES) under the CWA. [ 48 ] Some large municipal sewage treatment plants, such as the Blue Plains Advanced Wastewater Treatment Plant in Washington, D.C., have installed biological nutrient removal (BNR) systems to comply with regulatory requirements. [ 49 ] Other municipalities have made adjustments to the operational practices of their existing secondary treatment systems to control nutrients. [ 50 ] NPDES permits also regulate discharges from large livestock facilities (CAFO). [ 51 ] Surface runoff from farm fields, the principal source of nutrients in many watersheds, [ 52 ] is classified as NPS pollution and is not regulated by NPDES permits. [ 39 ] A Total Maximum Daily Load (TMDL) is a regulatory plan that prescribes the maximum amount of a pollutant (including nutrients) that a body of water can receive while still meeting CWA water quality standards. [ 53 ] Specifically, Section 303 of the Act requires each state to generate a TMDL report for each body of water impaired by pollutants. TMDL reports identify pollutant levels and strategies to accomplish pollutant reduction goals. EPA has described TMDLs as establishing a "pollutant budget" with allocations to each pollutant source.
[ 54 ] For many coastal water bodies, the main pollutant issue is excess nutrients, also termed nutrient over-enrichment. [ 55 ] A TMDL can prescribe the minimum level of dissolved oxygen (DO) available in a body of water, which is directly related to nutrient levels. ( See Aquatic Hypoxia .) TMDLs addressing nutrient pollution are a major component of the U.S. National Nutrient Strategy. [ 56 ] TMDLs identify all point source and nonpoint source pollutants within a watershed. To implement TMDLs for point sources, wasteload allocations are incorporated into their NPDES permits. [ 57 ] NPS discharges are generally addressed through voluntary compliance. [ 53 ] EPA published a TMDL for the Chesapeake Bay in 2010, addressing nitrogen, phosphorus and sediment pollution for the entire watershed, covering an area of 64,000 square miles (170,000 km 2 ). This regulatory plan covers the estuary and its tributaries and is the largest, most complex TMDL document that EPA has issued. [ 58 ] [ 59 ] In Long Island Sound , the TMDL development process enabled the Connecticut Department of Energy and Environmental Protection and the New York State Department of Environmental Conservation to incorporate a 58.5 percent nitrogen reduction target into a regulatory and legal framework. [ 54 ] As in the U.S., nutrient pollution is a dominant form of surface water pollution in China. [ 60 ] Urbanization and agriculture have contributed most notably to nutrient pollution, in particular the practice of treating animal manure as waste and discharging it into water. [ 60 ] Surveys and models have also shown that nitrogen and phosphorus inputs to rivers are high in southern and eastern regions of China. [ 61 ] [ 62 ]
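The "pollutant budget" framing described above is commonly expressed by splitting a water body's loading capacity into wasteload allocations for point sources, load allocations for nonpoint sources, and a margin of safety. A minimal sketch of such a budget check, with all numbers hypothetical:

```python
def tmdl_budget(loading_capacity: float, wasteload_allocs: list[float],
                load_allocs: list[float], margin_of_safety: float) -> float:
    """A TMDL viewed as a pollutant budget: the loading capacity must
    cover the wasteload allocations (point sources, written into NPDES
    permits), the load allocations (nonpoint sources), and a margin of
    safety. Returns the unallocated reserve; a negative value means the
    allocations exceed the budget."""
    allocated = sum(wasteload_allocs) + sum(load_allocs) + margin_of_safety
    return loading_capacity - allocated

# Hypothetical watershed budget for total nitrogen (tons/year):
reserve = tmdl_budget(
    loading_capacity=1000,
    wasteload_allocs=[220, 180],   # two permitted treatment plants
    load_allocs=[450],             # agricultural runoff
    margin_of_safety=100,
)
print(reserve)  # -> 50 tons/year still unallocated
```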
https://en.wikipedia.org/wiki/Nutrient_pollution
Nutriomics is the science that studies the food and nutrition domains comprehensively to improve consumers' well-being and health. [ 1 ] More specifically, nutriomics approaches are used to evaluate the effects of different diets to promote health and modulate the risk of disease development. [ 2 ]
https://en.wikipedia.org/wiki/Nutriomics
Nutritional immunology is a field of immunology that focuses on studying the influence of nutrition on the immune system and its protective functions. Under nutrient-poor conditions, every organism will "fight" for precious micronutrients and conceal them from invading pathogens. For example, bacteria, fungi and plants secrete iron chelators ( siderophores ) to acquire iron from their surroundings. [ 1 ] Part of nutritional immunology involves studying the possible effects of diet on the prevention and management of autoimmune diseases , chronic diseases, allergy , cancer ( diseases of affluence ) and infectious diseases . [ 2 ] Other related topics of nutritional immunology are malnutrition , malabsorption and nutritional metabolic disorders , including the determination of their immune products. [ 3 ] [ 4 ] [ 5 ] [ 6 ] The causes of the development and progression of many autoimmune diseases are generally unknown. The " Western pattern diet " consists of high-fat, high-sugar, low-fiber meals with a surfeit of salt and highly processed food, which have pro-inflammatory effects. These effects may promote Th1 - and Th17 - biased immunity and alter monocyte and neutrophil migration from bone marrow . [ 7 ] [ 8 ] A healthy diet contains a multitude of micronutrients that have anti-inflammatory and immune-boosting effects and can help prevent or treat autoimmune diseases. The impact of diet is studied in relation to a number of these autoimmune diseases. [ 9 ] [ 10 ] [ 11 ] Nutrition can help prevent or promote the development of food allergies. The hygiene hypothesis states that a child's early introduction to certain microorganisms can avert the onset of allergies. Breastfeeding is considered to be the main method of preventing food allergies, because breast milk contains oligosaccharides , secretory IgA , vitamins and antioxidants, and possibly transfers microbiota . [ 12 ] Conversely, a child's lack of exposure to specific microorganisms can establish a vulnerability to food allergies. Diabetes mellitus is a disease in which blood sugar levels are elevated. [ 13 ] There are two forms of diabetes: Type 1 diabetes and Type 2 diabetes . Type 1 is caused by the immune system attacking insulin-producing cells in the pancreas. Type 2 is caused by the underproduction of insulin and the body's cells becoming resistant to insulin. [ 13 ] A low-glycemic diet that is high in fiber is recommended for diabetics because low-glycemic foods digest more slowly in the body. Slower digestion helps stabilize blood glucose levels and prevents spikes in blood sugar. [ 14 ] Cancer is a disease with multifactorial causes. Cigarette smoking, physical activity, viruses, and diet play a role in the development of cancer. [ 15 ] Poor diet has been linked to the development of cancer, while a healthy diet has been shown to have positive effects on preventing and treating cancer. Cruciferous vegetables contain chemicals called isothiocyanates (ITCs). ITCs have immune-boosting effects, as well as anti-cancer activity such as the prevention of angiogenesis, the process by which tumors develop their own blood supply to feed growing cancer cells. The alliinase -containing food group, allium, has anti-cancer and anti-inflammatory properties. Alliinase is an enzyme which acts as an angiogenesis inhibitor and a carcinogen detoxifier. Mushrooms reduce cancer cell and tumor growth and prevent DNA damage.
Mushrooms contain aromatase inhibitors that decrease the levels of estrogen released into the bloodstream, slowing the production of breast tissue. Fruits and vegetables contain flavonoids, which are anti-carcinogens. [ 14 ] Macronutrients are a class of nutrients that the human body needs in larger amounts in order to function properly; the three main classes are proteins, carbohydrates, and fats (lipids). Besides ensuring that the body functions properly, the main role of macronutrients is to provide the body with energy in the form of calories. Proteins are large biomolecules made up of chains of amino acids, the organic compounds that make most bodily functions possible. [ 16 ] Proteins are found naturally within the body and in foods such as meat, fish, dairy products, eggs, seeds and nuts, and beans and legumes. Throughout the body, proteins are found in hair, nails, muscles and bones; they can also function as enzymes and/or hormones. The role of proteins as enzymes and hormones is imperative for cell function and physiological processes as simple as growth. [ 17 ] Proteins aid in muscle growth, speed up metabolism and lower blood pressure. Proteins are imperative for the body's tissues and organs, contributing to their function, structure and regulation. [ 16 ] Proteins protect the immune system in the form of antibodies, Y-shaped proteins that bind to viral, bacterial and parasitic invaders, signaling to the rest of the body that there is a foreign cell to be neutralized. [ 18 ] Without antibodies, the body would not be able to target and fight infection. Carbohydrates are sugars, starches and fibers found in grains, fruits, dairy products and vegetables. Carbohydrates are organic compounds made of carbon, hydrogen and oxygen. They support the body's immunology by maintaining blood sugar, which reduces the body's stress response. [ 19 ] It is common for people to consume carbohydrate-rich foods before working out in order to maintain energy and avoid a crash afterwards, a benefit of maintained blood sugar. Carbohydrates are also an energy source for cells, act as cell receptors for recognition, and function in cell support. [ 17 ] Lipids are macromolecules made up of hydrocarbons; there are three main types: triglycerides, phospholipids, and steroids. Lipids are hydrophobic molecules and are therefore only soluble in non-polar solvents. [ 20 ] Because of this, lipids do not break down in the body without lipase enzymes, which break them down into glycerol and fatty acids. Lipids can be found in oils, dairy products and some meats, along with avocados and nuts. Cholesterol is a type of lipid and an important feature of plasma membranes, which work in regulating immune cell plasticity. [ 17 ] Lipids maintain the structure of cell membranes, act as storehouses of energy, help maintain body temperature and homeostasis, and are important signaling molecules. [ 21 ] Without lipids, bodily cells would not be able to maintain function or survive. While consuming too many lipids can lead to obesity, high cholesterol, type 2 diabetes and other diseases, they are an important class of molecule to consume and maintain within the body. There are also vitamins that dissolve only in fats, such as vitamins A, K, D and E; these vitamins are vital in transporting and metabolizing fatty acids, transporting molecules across membranes and activating enzymes necessary for oxidative phosphorylation.
[ 22 ] Without lipids, cells in the body would not function and the body would fail; lipids are among the most important macromolecules. Eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) are omega-3 fatty acids found in marine fish (primarily salmon, tuna, mackerel, herring, and sardines) and in fish oil. These two fatty acids are important components of cell membranes, and it has been shown that they have anti-inflammatory effects in the body. EPA and DHA inhibit production of pro-inflammatory cytokines such as IL-1β, TNF-α, and IL-6; they reduce the expression of adhesion molecules that are involved in inflammation and may modulate and reduce production of prostaglandins and leukotrienes from the n-6 fatty acid arachidonic acid. These changes are most likely due to alterations in the lipid rafts on cell membranes, which further affect signaling cascades and inhibit activation of the pro-inflammatory transcription factor NF-κB. EPA and DHA can increase the production of the anti-inflammatory cytokine IL-10 and promote production of protective mediators such as resolvins, protectins, and maresins. [ 23 ] Micronutrients are a group of nutrients, needed in smaller amounts, that are vital for the human body to perform various physiological functions properly. They include vitamins, minerals, phytochemicals, and antioxidants. Vitamins and minerals are essential substances that the body needs to grow and function. The body needs thirteen vitamins; of these, vitamin K is produced by the gut microflora and vitamin D is produced in the skin on exposure to sunlight. [ 24 ] It is important to note that a lack of vitamins and minerals such as iron primes the immune system. [ 25 ] Indeed, a lack of iron [ 26 ] [ 27 ] and vitamin A [ 28 ] is associated with all-cause mortality and morbidity. There are two types of vitamins: fat-soluble and water-soluble. Fat-soluble vitamins are vitamins that are soluble in organic solvents, which include vitamins A, K, E, and D. [ 29 ] Water-soluble vitamins are vitamins that are soluble in water and include vitamin C and the B vitamins (thiamine, riboflavin, niacin, pantothenic acid, biotin, vitamin B-6, vitamin B-12, and folate). [ 30 ] Most of the essential vitamins the body needs can be obtained from a balanced diet, although a portion of the population does not get enough micronutrients from their diet or has a health condition that affects their nutritional needs. Like vitamins, minerals are needed for the body to be healthy and function properly; they keep the bones, muscles, heart, and brain working correctly. Minerals also play a crucial role in the regulation and function of the immune system. In the adaptive immune system, [ citation needed ] the mineral zinc is an important structural element of the hormone thymulin, which is produced by the epithelial cells of the thymus [ citation needed ] and mediates the maturation of pre-T lymphocytes into T lymphocytes [ citation needed ] needed to protect the body from infection. [ 31 ] Minerals include phosphorus, calcium, magnesium, sodium, potassium, chloride, and sulfur. There are also trace minerals needed in smaller amounts, which include iron, manganese, copper, iodine, zinc, cobalt, fluoride, and selenium. [ 32 ] Phytochemicals are chemical compounds found in plants. These phytochemicals are present in foods such as fruits, vegetables, whole grains, seeds, nuts, and legumes.
They provide a multitude of health benefits, ranging from small improvements such as lowering blood pressure, reducing inflammation, and lowering LDL cholesterol levels in the blood, to major benefits such as fighting the growth of tumors, protecting against cancer and cardiovascular disease, and boosting the immune system. [ 33 ] Antioxidants are compounds that neutralize unpaired electrons in a molecule or atom and keep it from acting as a free radical. Free radicals are molecules that are either made naturally in the human body, for example after exercise, or result from exposure to environmental factors such as cigarette smoke, pollution, and sunlight. These free radicals are unstable and highly reactive, which produces oxidative stress. This oxidative stress causes reactions that can damage cells in the body, making the cells lose their function and become pathogenic. [ 34 ] Polyphenols are organic substances that occur naturally in plants. They are important antioxidants with anti-inflammatory properties. It has been demonstrated that curcumin can modulate immunity in many ways, mainly via regulation and inhibition of transcription factors such as nuclear factor NF-κB and activator protein 1 (AP-1). [ 35 ] Another polyphenol, resveratrol, also modulates and promotes the immune response. [ 36 ] Dietary prebiotics are fermented ingredients that affect the composition and/or activity of the gut microbiome in a way that is beneficial to the host. [ 37 ] Prebiotics are mainly oligosaccharides and related carbohydrates (fructooligosaccharides, galactooligosaccharides, xylooligosaccharides, mannose oligosaccharides). These substances can modulate immune responses in the gut. Prebiotics regulate the growth of beneficial microbial organisms in the intestine (commensal bacteria). [ 38 ] Probiotics are live microorganisms that, in sufficient amounts, are beneficial to the host. [ 39 ] Probiotics and their metabolites balance and modulate anti-inflammatory or pro-inflammatory immune responses in the gut. [ 40 ] Probiotics induce antimicrobial peptides such as β-defensin-2, increase the production of T regulatory cells, and regulate cytokines and chemokines. [ 41 ] They can also affect the polarization of the immune response (Th1 instead of Th2) and increase the production of IgA in the gut. [ 42 ] The bacterial strains most commonly used as probiotics belong to the Lactobacillus, Enterococcus, and Bifidobacterium groups. [ 43 ]
https://en.wikipedia.org/wiki/Nutritional_immunology
An Nv network is a term used in BEAM robotics for the small electrical neural networks that make up the bulk of BEAM-based robot control mechanisms. The most basic component of Nv networks is the Nv neuron. The purpose of an Nv neuron is simply to take an input, do something with it, and give an output. The most common action of an Nv neuron is to apply a delay. The standard for BEAM-based neurons is a capacitor that has one lead as an input and the other going into the input line of an inverter; that inverter's output is the output of the neuron. The capacitor lead that feeds the inverter is pulled to ground with a resistor. The neuron functions because when an input is received (positive voltage on the input line), it charges the capacitor. Once the input is lost (the input line returns low), the capacitor discharges into the inverter, causing the inverter to produce an output that is passed to the next neuron. The rate at which the capacitor discharges is set by the resistor pulling the inverter's input to ground: the larger the resistor, the longer it takes the capacitor to fully discharge, and the longer it takes that neuron to completely fire. There are many common network topologies used in BEAM robots, the most common of which are listed here. Probably the most utilized Nv net topology in BEAM, the bicore consists of two neurons placed in a loop that alternates current to the output. Input into the loop is given by changing the resistance in each separate neuron, which changes the rate at which that neuron discharges, affecting the pace at which the loop oscillates. Another common topology uses two bicores in a master/slave layout, where the master bicore leads and sets the pace while the slave bicore follows at an offset pace. This layout is most commonly used for dual-motor walkers. Other, larger network topologies include the tricore and quadcore, which are laid out in a similar way to the bicore but with more neurons in the loop. More complex networks exist but are not as common, given the deliberately simple nature of BEAM. A basic Nv network is built from several Nv neurons in a loop. The loop's timing is often varied by input sensors, and this difference in timing is meant to affect the output pattern of the Nv loop. An example can be seen in a simple BEAM walker robot utilizing a bicore network (two neurons). The neural network is set up to alternate current going to the main motor such that, under equal input from the main sensors, the neurons oscillate at an equal pace to each other, producing a steady walking gait. When input (e.g. from light sensors) is present, the timing of each neuron in the loop is varied based on the input from the sensors, affecting the pace at which the loop oscillates. This altered pace is often used to change the walking gait of the robot in order to steer it based on the input from its sensors.
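The delay behavior described above follows directly from the RC discharge law. As a rough illustration, the Python sketch below (with hypothetical component values and an assumed 2:1 supply-to-threshold ratio) estimates single-neuron delays and the resulting bicore period; it is a first-order model that ignores inverter switching characteristics and loading, not a description of any particular BEAM circuit.

```python
import math

def nv_delay(r_ohms, c_farads, v_supply=5.0, v_threshold=2.5):
    """Time for the neuron's capacitor, discharging through the pull-down
    resistor, to fall from the supply rail to the inverter's switching
    threshold: t = RC * ln(V0 / Vth)."""
    return r_ohms * c_farads * math.log(v_supply / v_threshold)

# Two neurons in a loop (a bicore): the oscillation period is roughly the
# sum of the two neuron delays, since the neurons fire alternately.
r1, r2 = 1e6, 2.2e6            # pull-down resistors (hypothetical values)
c = 0.22e-6                    # coupling capacitors (hypothetical value)
d1, d2 = nv_delay(r1, c), nv_delay(r2, c)
print(f"neuron delays: {d1*1e3:.0f} ms, {d2*1e3:.0f} ms")
print(f"approximate bicore period: {(d1 + d2)*1e3:.0f} ms")
```

The model also shows why a sensor that changes a neuron's effective resistance (as in the walker example) changes the oscillation pace: the period scales linearly with each RC product.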
https://en.wikipedia.org/wiki/Nv_network
Nvidia's BR02 "High Speed Interconnect" ("HSI") chip was used in their early PCI Express graphics cards, where it acted as a bridge between the PCI Express connection to the computer and the natively AGP GPU. [ 1 ] This allowed Nvidia to release a PCI Express graphics card without redesigning the graphics card's GPU for the new interface. It was introduced in 2004. [ 1 ] Nvidia has also developed an HSI chip which works in the opposite direction, allowing natively PCI Express GPUs to be used in graphics cards released for the AGP interface. This chip is no longer used on modern Nvidia GPUs due to the obsolescence of the AGP standard.
https://en.wikipedia.org/wiki/Nvidia_BR02
TCL NXTPAPER is a display technology developed by TCL Corporation that attempts to replicate the experience of reading on paper to improve eye comfort. TCL claims that the NXTPAPER technology reduces blue light emissions, which are often linked to digital eye strain and sleep disturbances. [ 1 ] It also includes anti-glare properties, designed to make screen reading more akin to reading paper, further reducing strain on the eyes. [ 2 ] TCL has implemented NXTPAPER technology in various products, including tablets and smartphones. [ 3 ] [ 4 ] Devices featuring NXTPAPER displays include the TCL NXTPAPER 10s [ 5 ] and TCL's 40 NXTPAPER series smartphones.
https://en.wikipedia.org/wiki/Nxtpaper
Cold medicines are a group of medications taken individually or in combination as a treatment for the symptoms of the common cold and similar conditions of the upper respiratory tract. The term encompasses a broad array of drugs, including analgesics, antihistamines, and decongestants, among many others. It also includes drugs which are marketed as cough suppressants or antitussives, though their effectiveness in reducing cough symptoms is unclear or minimal. [ 1 ] [ 2 ] [ 3 ] While they have been used by 10% of American children in any given week, they are not recommended in Canada or the United States for children six years old or younger because of a lack of evidence showing effect and concerns of harm. [ 4 ] [ 5 ] There are a number of different cough and cold medications, which may be used for various coughing symptoms. The commercially available products may include various combinations of any one or more of several types of substances. [ citation needed ] An example combination is guaifenesin with codeine. The efficacy of cough medication is questionable, particularly in children. [ 6 ] [ 3 ] A 2014 Cochrane review concluded that "There is no good evidence for or against the effectiveness of OTC [over the counter] medicines in acute cough". [ 1 ] Some cough medicines may be no more effective than placebos for acute coughs in adults, including coughs related to upper respiratory tract infections. [ 7 ] The American College of Chest Physicians emphasizes that cough medicines are not designed to treat whooping cough, a cough that is caused by bacteria and can last for months. [ 8 ] No over-the-counter cough medicines have been found to be effective in cases of pneumonia. [ 9 ] They are not recommended in those who have COPD, chronic bronchitis, or the common cold. [ 10 ] [ 2 ] There is not enough evidence to make recommendations for those who have a cough in cancer. [ 11 ] A small study found honey may be a minimally effective cough treatment, owing to "well-established antioxidant and antimicrobial effects" and a tendency to soothe irritated tissue. [ 21 ] A Cochrane review found there was weak evidence to recommend for or against the use of honey in children as a cough remedy. [ 22 ] Specifically, the review found honey was better than no treatment, placebo, or diphenhydramine, but not better than dextromethorphan, for relieving cough symptoms. [ 22 ] Honey's use as a cough treatment has been linked on several occasions to infantile botulism, and accordingly it should not be used in children less than one year old. [ 23 ] Many alternative treatments are used to treat the common cold, though data on effectiveness are generally limited. A 2007 review states that "alternative therapies (i.e., Echinacea, vitamin C, and zinc) are not recommended for treating common cold symptoms; however,...Vitamin C prophylaxis may modestly reduce the duration and severity of the common cold in the general population and may reduce the incidence of the illness in persons exposed to physical and environmental stresses." [ 24 ] A 2014 review also found insufficient evidence for Echinacea: despite weak positive trends, no clinically relevant benefit in treating the common cold was demonstrated. [ 25 ] Similarly, a 2014 systematic review showed that garlic may prevent occurrences of the common cold, but there was insufficient evidence for garlic in treating the common cold, and studies reported adverse effects including rash and odour.
[ 26 ] Therefore, more research needs to be done to prove that the benefits outweigh the harms. Evidence supporting the effectiveness of zinc against cough is mixed. [ 12 ] Zinc "administered within 24 hours of onset of symptoms reduces the duration of common cold symptoms in healthy people". [ 27 ] A 2003 review concluded: "Clinical trial data support the value of zinc in reducing the duration and severity of symptoms of the common cold when administered within 24 hours of the onset of common cold symptoms." [ 28 ] Zinc gel in the nose may lead to long-term or permanent loss of smell; the FDA therefore discourages its use. [ 29 ] Cough medicines, especially those containing dextromethorphan and codeine, are often abused as recreational drugs. [ 30 ] [ 31 ] Abuse may result in hallucinations, loss of consciousness, and death. Many cough syrups contain acetaminophen, which can cause liver damage in recreational users who take excessive doses. [ 31 ] A number of accidental overdoses and well-documented adverse effects suggested caution in children. [ 23 ] The FDA in 2015 warned that the use of codeine-containing cough medication in children may cause breathing problems. [ 32 ] Cold syrup overdose has been linked to visual and auditory hallucinations as well as rapid involuntary jaw, tongue, and eye movements in children. [ medical citation needed ] Decongestants are possibly harmful to people with high blood pressure or heart disease because these substances can constrict the blood vessels. [ 33 ] Heroin was originally marketed as a cough suppressant in 1898. [ 34 ] It was, at the time, believed to be a non-addictive alternative to other opiate-containing cough syrups. This was quickly realized not to be true, as heroin readily breaks down into morphine in the body, and morphine was already known to be addictive. [ citation needed ] Some brand names include Benylin, Sudafed, Robitussin, and Vicks, among others. [ 35 ] Most contain a number of active ingredients. [ 4 ] The Thai company Hatakabb produces the Takabb Anti-Cough Pill, which is a Chinese herbal medication. [ 36 ] Sudafed is a brand manufactured by McNeil Laboratories. [ citation needed ] The original formulation contains the active ingredient pseudoephedrine, but formulations without pseudoephedrine are also sold under the brand. [ citation needed ] In 2016, it was one of the biggest-selling branded over-the-counter medications in Great Britain, with sales of £34.4 million. [ 37 ] The effectiveness of phenylephrine by mouth as a nasal decongestant is questionable. [ 38 ] Gee's Linctus is a cough medicine which contains opium tincture. [ 39 ] In 2019, New Zealand made it prescription-only. [ 40 ] Coricidin, Coricidin D, or Coricidin HBP is the brand name of a combination of dextromethorphan and chlorpheniramine maleate (an antihistamine). [ citation needed ] Varieties may also contain acetaminophen and guaifenesin. [ citation needed ] Codral is a brand name manufactured by Johnson & Johnson and sold primarily in Australia and New Zealand. Codral is the highest-selling cold and flu medication in Australia. [ 41 ] In the United States, several billion dollars are spent on over-the-counter cold products per year. [ 42 ] According to The New York Times, at least eight mass poisonings have occurred as a result of counterfeit cough syrup in which medical-grade glycerin has been replaced with diethylene glycol, an inexpensive yet toxic glycerin substitute marketed for industrial use.
In May 2007, 365 deaths associated with cough syrup containing diethylene glycol were reported in Panama. [ 43 ] In 2022, the deaths of 66 children in The Gambia were linked to four pediatric cough syrup medications that contained diethylene glycol and ethylene glycol. [ 44 ] [ 45 ] In 2022, the US Food and Drug Administration issued a warning against cooking foods in cough syrup after a video of someone preparing "NyQuil chicken", sometimes also called "sleepy chicken", became popular on social media. Cough syrup is designed to be stored at room temperature, and its properties can change when it is heated, making it potentially deadly. Heated cough syrup can also vaporize, leading to inhalation hazards. [ 46 ] [ 47 ] [ 48 ] The warning received attention from many news outlets, but some criticized the FDA's handling of the issue for amplifying the attention the topic received online and questioned whether making and eating NyQuil chicken was actually a widespread trend. [ 49 ] [ 50 ] [ 51 ]
https://en.wikipedia.org/wiki/NyQuil_chicken
Nylander's test is a chemical test used for detecting the presence of reducing sugars. Glucose or fructose reduces bismuth oxynitrate to bismuth under alkaline conditions. When Nylander's reagent, which consists of bismuth nitrate, potassium sodium tartrate, and potassium hydroxide, is added to a solution containing reducing sugars, a black precipitate of metallic bismuth is formed. [ 1 ] [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Nylander's_test
Nylatron is a tradename for a family of nylon plastics, typically filled with molybdenum disulfide lubricant powder. It is used to cast plastic parts for machines because of its mechanical properties and wear resistance. [ 1 ] Nylatron is a brand name of Mitsubishi Chemical Advanced Materials, Inc. and was originally developed and manufactured by Nippon Polypenco Limited. [ 2 ] Nylatron is used in several applications.
https://en.wikipedia.org/wiki/Nylatron
Nylon 11 or Polyamide 11 (PA 11) is a polyamide, bioplastic, and a member of the nylon family of polymers, produced by the polymerization of 11-aminoundecanoic acid. It is produced from castor beans by Arkema under the trade name Rilsan. [ 1 ] Nylon 11 is applied in the fields of oil and gas, aerospace, automotive, textiles, electronics, and sports equipment, frequently in tubing, wire sheathing, and metal coatings. [ 2 ] In 1938, Joseph Zeltner, a research director for Thann & Mulhouse, first conceived the idea of Nylon 11, which was suggested in the works of Wallace Carothers. [ 3 ] Thann & Mulhouse had already been involved in processing castor oil for 10-undecenoic acid, which would eventually be converted into the first batch of 11-aminoundecanoic acid in 1940 with the help of coworkers Michel Genas and Marcel Kastner. In 1944, Kastner sufficiently improved the monomer process, and the first patents for Nylon 11 were filed in 1947. [ 4 ] The first Nylon 11 thread was created in 1950, and full industrial production began with the opening of the Marseilles production facility in 1955, which remains the sole producer of 11-aminoundecanoic acid today. Currently Arkema polymerizes Nylon 11 in Birdsboro, PA, Changshu, and Serquigny. [ 5 ] The chemical process of creating Nylon 11 begins with ricinoleic acid, which makes up 85–90% of castor oil. Ricinoleic acid is first transesterified with methanol, creating methyl ricinoleate, which is then cracked to create heptaldehyde and methyl undecylenate. These undergo hydrolysis to create methanol, which is re-used in the initial transesterification of ricinoleic acid, and undecylenic acid, to which hydrogen bromide is added. The resulting bromide then undergoes nucleophilic substitution with ammonia to form 11-aminoundecanoic acid, which is polymerized into Nylon 11. [ 5 ] Compared with Nylon 6, Nylon 11 has lower values of density, flexural and Young's modulus, water absorption, and melting and glass transition temperatures (the source tabulates properties including elongation at break, water absorption at 0.32 cm thickness after 24 h, and glass transition temperature [ 7 ]). Nylon 11 shows increased dimensional stability in the presence of moisture due to its low concentration of amides. Nylon 11 experiences 0.2–0.5% length variation and 1.9% weight variation after 25 weeks of submersion in water, in comparison to 2.2–2.7% elongation variation and 9.5% weight variation for Nylon 6. [ 2 ] When specially oriented into the δ' phase, Nylon 11 can exhibit piezoelectric properties. [ 6 ] Due to its low water absorption, increased dimensional stability when exposed to moisture, heat and chemical resistance, flexibility, and burst strength, Nylon 11 is used in various applications for tubing. In the fields of automotive, aerospace, pneumatics, medical, and oil and gas, Nylon 11 is used in fuel lines, hydraulic hoses, air lines, umbilical hoses, catheters, and beverage tubing. [ 2 ] Nylon 11 is used in cable and wire sheathing as well as electrical housings, connectors, and clips. [ 2 ] Nylon 11 is used in metal coatings for noise reduction and protection against UV exposure, as well as resistance to chemicals, abrasion, and corrosion. [ 9 ] Nylon 11 is used in textiles through brush bristles, lingerie, filters, and woven and technical fabrics. [ 2 ] [ 10 ] Nylon 11 is used in the soles and other mechanical parts of footwear. It is also seen in racket sports for racket strings, eyelets, and badminton shuttlecocks. Nylon 11 is used for the top layering of skis. [ 2 ]
https://en.wikipedia.org/wiki/Nylon_11
The nylon rope trick is a scientific demonstration that illustrates some of the fundamental chemical principles of step-growth polymerization and provides students and other observers with a hands-on demonstration of the preparation of a synthetic polymer. The nylon rope trick typically combines a water solution of an aliphatic diamine with a solution of an aliphatic diacid chloride in a water-immiscible solvent, yielding a synthetic polyamide of the nylon type. Nylon 610 is commonly used, in which hexamethylene diamine is dissolved in water to a concentration of about 0.40 moles / deciliter. A solution of sebacoyl chloride in cyclohexane (0.15 moles / deciliter concentration) is then layered on top of the water solution, the reaction typically being conducted in a beaker. The solution is not agitated; instead the nylon 610 polymer forms as a flexible film at the interface of the water and cyclohexane layers, in an example of an interfacial polymerization. [ 1 ] The experimentalist grasps the polymer film, withdrawing it from the reaction vessel, forming a filament or rope, and collecting it on a rotating rod above the reaction vessel. New polymer forms at the interface as fresh surfaces of the cyclohexane and water layers form. In this way, the demonstration yields a continuous rope that is collected on the rotating rod. Nylon 66 can also be produced at laboratory scale in this way. Representative procedures and equipment lists for conducting the nylon rope trick demonstration are available in the literature. [ 2 ] The nylon rope trick was developed as a scientific demonstration by the American chemist Stephanie Kwolek, who later invented the aramid fiber Kevlar. [ 3 ]
https://en.wikipedia.org/wiki/Nylon_rope_trick
A Nyquist filter is an electronic filter used in TV receivers to equalize the video characteristics. The filter is named after the Swedish-American engineer Harry Nyquist (1889–1976). In analogue TV broadcasting, the visual radio frequency (RF) signal is produced by amplitude modulation (AM); i.e., the video signal (VF) modulates the amplitude of the carrier. In AM, two symmetric sidebands appear, containing identical information, so the RF bandwidth is twice the VF bandwidth. For example, the RF bandwidth of a VF signal with a bandwidth of 4.2 MHz is 8.4 MHz (System M). In order to use the broadcast band more efficiently, one sideband can be suppressed. However, it is impossible to suppress one sideband completely without affecting the other. Furthermore, a very sharp filter edge causes intolerable delay problems. So, as a compromise, a standard filter is used which reduces a considerable portion of one sideband (the lower sideband in RF) without causing extensive delay problems. Such a filter is known as a vestigial sideband (VSB) filter. [ 1 ] In System B the VF bandwidth is 5 MHz. Without any suppression, the corresponding visual RF bandwidth would be 10 MHz. (Here, the presence of the aural signal is omitted for the sake of simplification.) But by using a VSB filter, the visual RF bandwidth is reduced to 6.25 MHz: 5 MHz in one sideband and 1.25 MHz in the other. (The filter characteristic in the suppressed sideband is such that between 0 and 0.75 MHz there is no suppression.) By this method, 3.75 MHz is saved, which means that for the same band allocated to broadcasting, the number of TV services increases roughly one-and-a-half-fold. When a VSB filter is used in broadcasting, a problem arises during demodulation. While the 0–0.75 MHz region has two sidebands, the region beyond 1.25 MHz has only one (i.e., the 0–0.75 MHz region is double sideband and the region beyond 1.25 MHz is single sideband). Thus, the level of the demodulated signal in the 0–0.75 MHz region is 6 dB higher than the level in the region beyond 1.25 MHz. Since high-frequency components of the VF signal correspond to fine details and the color subcarrier, demodulation fades the detailed portions and the color saturation of the picture with respect to the less detailed portions. In order to equalize the low-frequency and high-frequency components of the VF signal, a filter named a Nyquist filter is used in receivers. This filter, which is used before demodulation, is actually a low-pass filter with 6 dB suppression at the intermediate frequency (IF) carrier. Thus, the level of the double-sideband portion of the VF signal is suppressed and the original band characteristic is reconstructed at the output of the demodulator. The specifications below are given for the sound-trap-off case. System B (G or H in the UHF band) refers to the broadcast system used in most countries; System M refers to the broadcast system used in the Americas.
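The equalization argument can be checked numerically. The Python sketch below models an idealized, piecewise-linear Nyquist slope that passes through a factor of 0.5 (−6 dB) at the vision carrier; the 38.9 MHz IF carrier value and the exact linear slope are assumptions for the example, not a broadcast-standard filter specification. Within the double-sideband region the two sideband contributions sum to the same unit gain that single-sideband components receive:

```python
import numpy as np

def nyquist_slope(f, f_c=38.9e6, half_width=0.75e6):
    """Idealized Nyquist-slope response: a linear transition from 1 to 0
    across f_c +/- half_width, passing through 0.5 (-6 dB) at the vision
    carrier f_c (38.9 MHz is assumed here as a typical IF carrier)."""
    return np.clip(0.5 - (f - f_c) / (2 * half_width), 0.0, 1.0)

# Video frequencies up to 0.75 MHz arrive as double sideband: the two
# sideband contributions H(fc - fv) + H(fc + fv) should sum to 1, the
# same gain that single-sideband components (fv > 1.25 MHz) receive.
for fv in [0.1e6, 0.4e6, 0.7e6]:
    total = nyquist_slope(38.9e6 - fv) + nyquist_slope(38.9e6 + fv)
    print(f"video {fv/1e6:.1f} MHz: combined gain = {total:.2f}")
```

Because the slope is antisymmetric about the −6 dB point at the carrier, the double-sideband excess cancels exactly, which is the whole purpose of the filter.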
https://en.wikipedia.org/wiki/Nyquist_filter
The Nyquist–Shannon sampling theorem is an essential principle for digital signal processing linking the frequency range of a signal and the sample rate required to avoid a type of distortion called aliasing. The theorem states that the sample rate must be at least twice the bandwidth of the signal to avoid aliasing. In practice, it is used to select band-limiting filters to keep aliasing below an acceptable amount when an analog signal is sampled or when sample rates are changed within a digital signal processing function. The Nyquist–Shannon sampling theorem is a theorem in the field of signal processing which serves as a fundamental bridge between continuous-time signals and discrete-time signals. It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth. Strictly speaking, the theorem only applies to a class of mathematical functions having a Fourier transform that is zero outside of a finite region of frequencies. Intuitively we expect that when one reduces a continuous function to a discrete sequence and interpolates back to a continuous function, the fidelity of the result depends on the density (or sample rate) of the original samples. The sampling theorem introduces the concept of a sample rate that is sufficient for perfect fidelity for the class of functions that are band-limited to a given bandwidth, such that no actual information is lost in the sampling process. It expresses the sufficient sample rate in terms of the bandwidth for the class of functions. The theorem also leads to a formula for perfectly reconstructing the original continuous-time function from the samples. Perfect reconstruction may still be possible when the sample-rate criterion is not satisfied, provided other constraints on the signal are known (see § Sampling of non-baseband signals below and compressed sensing). In some cases (when the sample-rate criterion is not satisfied), utilizing additional constraints allows for approximate reconstructions. The fidelity of these reconstructions can be verified and quantified utilizing Bochner's theorem. [ 1 ] The name Nyquist–Shannon sampling theorem honours Harry Nyquist and Claude Shannon, but the theorem was also previously discovered by E. T. Whittaker (published in 1915), and Shannon cited Whittaker's paper in his work. The theorem is thus also known by the names Whittaker–Shannon sampling theorem, Whittaker–Shannon, and Whittaker–Nyquist–Shannon, and may also be referred to as the cardinal theorem of interpolation. Sampling is a process of converting a signal (for example, a function of continuous time or space) into a sequence of values (a function of discrete time or space). Shannon's version of the theorem states: [ 2 ] Theorem — If a function x(t) contains no frequencies higher than B hertz, then it can be completely determined from its ordinates at a sequence of points spaced less than 1/(2B) seconds apart. A sufficient sample rate is therefore anything larger than 2B samples per second. Equivalently, for a given sample rate f_s, perfect reconstruction is guaranteed possible for a bandlimit B < f_s/2. When the bandlimit is too high (or there is no bandlimit), the reconstruction exhibits imperfections known as aliasing.
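The consequence of an insufficient sample rate can be verified numerically. In this minimal sketch (Python with NumPy; the particular frequencies are arbitrary choices), a 3 Hz sinusoid and a 13 Hz sinusoid produce identical samples at a 10 Hz sample rate, so no reconstruction method could recover which one was sampled:

```python
import numpy as np

fs = 10.0                      # sample rate in Hz
t = np.arange(0, 1, 1 / fs)    # one second of sample instants

f1 = 3.0                       # below the Nyquist frequency fs/2 = 5 Hz
f2 = f1 + fs                   # 13 Hz: above fs/2, aliases onto 3 Hz

x1 = np.sin(2 * np.pi * f1 * t)
x2 = np.sin(2 * np.pi * f2 * t)

# The two sampled sequences are identical (sin gains a whole number of
# cycles between samples), so the 13 Hz tone is indistinguishable from
# the 3 Hz tone once sampled.
print(np.allclose(x1, x2))     # True
```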
Modern statements of the theorem are sometimes careful to explicitly state that x(t) must contain no sinusoidal component at exactly frequency B, or that B must be strictly less than one half the sample rate. The threshold 2B is called the Nyquist rate and is an attribute of the continuous-time input x(t) to be sampled. The sample rate must exceed the Nyquist rate for the samples to suffice to represent x(t). The threshold f_s/2 is called the Nyquist frequency and is an attribute of the sampling equipment. All meaningful frequency components of the properly sampled x(t) exist below the Nyquist frequency. The condition described by these inequalities is called the Nyquist criterion, or sometimes the Raabe condition. The theorem is also applicable to functions of other domains, such as space, in the case of a digitized image. The only change, in the case of other domains, is the units of measure attributed to t, f_s, and B. The symbol T ≜ 1/f_s is customarily used to represent the interval between adjacent samples and is called the sample period or sampling interval. The samples of function x(t) are commonly denoted by x[n] ≜ T·x(nT) [ 3 ] (alternatively x_n in older signal processing literature), for all integer values of n. The multiplier T is a result of the transition from continuous time to discrete time (see Discrete-time Fourier transform#Relation to Fourier Transform), and it is needed to preserve the energy of the signal as T varies. A mathematically ideal way to interpolate the sequence involves the use of sinc functions. Each sample in the sequence is replaced by a sinc function, centered on the time axis at the original location of the sample nT, with the amplitude of the sinc function scaled to the sample value, x(nT). Subsequently, the sinc functions are summed into a continuous function. A mathematically equivalent method uses the Dirac comb and proceeds by convolving one sinc function with a series of Dirac delta pulses, weighted by the sample values. Neither method is numerically practical. Instead, some type of approximation of the sinc functions, finite in length, is used. The imperfections attributable to the approximation are known as interpolation error. Practical digital-to-analog converters produce neither scaled and delayed sinc functions, nor ideal Dirac pulses. Instead they produce a piecewise-constant sequence of scaled and delayed rectangular pulses (the zero-order hold), usually followed by a lowpass filter (called an "anti-imaging filter") to remove spurious high-frequency replicas (images) of the original baseband signal.
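A minimal sketch of the ideal sinc-based interpolation described above (Python with NumPy; the signal and rates are arbitrary choices) follows. Because the infinite sum must be truncated to finitely many samples, a small interpolation error of the kind just discussed remains, especially near the ends of the record:

```python
import numpy as np

def sinc_interp(samples, fs, t):
    """Sinc interpolation: each sample contributes a sinc centered at its
    sample instant; np.sinc(x) is sin(pi*x)/(pi*x), so the reconstruction
    is sum_n x[n] * sinc(fs*t - n)."""
    n = np.arange(len(samples))
    basis = np.sinc(fs * t[:, None] - n[None, :])   # (len(t), len(samples))
    return basis @ samples

fs = 20.0                                  # Hz, well above twice 1.5 Hz
ts = np.arange(0, 4, 1 / fs)               # sample instants
x = np.cos(2 * np.pi * 1.5 * ts)           # a 1.5 Hz band-limited signal

t_fine = np.linspace(0.5, 3.5, 1000)       # evaluate away from the edges
x_rec = sinc_interp(x, fs, t_fine)
err = np.max(np.abs(x_rec - np.cos(2 * np.pi * 1.5 * t_fine)))
print(f"max reconstruction error: {err:.2e}")   # small: truncation only
```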
When x(t) is a function with a Fourier transform X(f), the samples x[n] of x(t) are sufficient to create a periodic summation of X(f) (see Discrete-time Fourier transform#Relation to Fourier Transform): X_{1/T}(f) ≜ Σ_{k=−∞}^{∞} X(f − k/T) = Σ_{n=−∞}^{∞} x[n]·e^{−i2πfnT} (Eq.1), which is a periodic function and its equivalent representation as a Fourier series, whose coefficients are x[n]. This function is also known as the discrete-time Fourier transform (DTFT) of the sample sequence. As depicted, copies of X(f) are shifted by multiples of the sampling rate f_s = 1/T and combined by addition. For a band-limited function (X(f) = 0 for all |f| ≥ B) and sufficiently large f_s, it is possible for the copies to remain distinct from each other. But if the Nyquist criterion is not satisfied, adjacent copies overlap, and it is not possible in general to discern an unambiguous X(f). Any frequency component above f_s/2 is indistinguishable from a lower-frequency component, called an alias, associated with one of the copies. In such cases, the customary interpolation techniques produce the alias, rather than the original component. When the sample rate is pre-determined by other considerations (such as an industry standard), x(t) is usually filtered to reduce its high frequencies to acceptable levels before it is sampled. The type of filter required is a lowpass filter, and in this application it is called an anti-aliasing filter. When there is no overlap of the copies (also known as "images") of X(f), the k = 0 term of Eq.1 can be recovered by the product X(f) = H(f)·X_{1/T}(f), where H(f) ≜ 1 for |f| < B and 0 for |f| > f_s − B. The sampling theorem is proved since X(f) uniquely determines x(t). All that remains is to derive the formula for reconstruction. H(f) need not be precisely defined in the region [B, f_s − B] because X_{1/T}(f) is zero in that region. However, the worst case is when B = f_s/2, the Nyquist frequency. A function that is sufficient for that and all less severe cases is H(f) = rect(f/f_s), which is 1 for |f| < f_s/2 and 0 for |f| > f_s/2, where rect is the rectangular function. Therefore X(f) = rect(f/f_s)·X_{1/T}(f). The inverse transform of both sides produces the Whittaker–Shannon interpolation formula, x(t) = Σ_{n=−∞}^{∞} x(nT)·sinc((t − nT)/T), which shows how the samples x(nT) can be combined to reconstruct x(t).
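The role of the anti-aliasing filter just described can be illustrated numerically. In the sketch below (Python with NumPy; the tone frequencies are arbitrary), halving the sample rate folds a 420 Hz tone down to a false 80 Hz alias, while an idealized pre-filter that removes content above the new Nyquist frequency prevents this:

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2*np.pi*50*t) + np.sin(2*np.pi*420*t)   # 50 Hz + 420 Hz

def dominant_freqs(sig, rate, k=2):
    """Frequencies of the k largest spectral peaks."""
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    return sorted(np.fft.rfftfreq(len(sig), 1/rate)[np.argsort(spec)[-k:]])

# Decimate by 2 without filtering: the new Nyquist frequency is 250 Hz,
# so the 420 Hz tone folds down to 500 - 420 = 80 Hz, an alias.
print(dominant_freqs(x[::2], fs/2))        # 50 Hz and the 80 Hz alias

# Idealized anti-aliasing: zero all content above the new Nyquist
# frequency in the spectrum before decimating.
X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), 1/fs)
X[f >= fs/4] = 0
x_lp = np.fft.irfft(X, n=len(x))
print(dominant_freqs(x_lp[::2], fs/2, k=1))   # only the 50 Hz tone
```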
Poisson shows that the Fourier series in Eq.1 produces the periodic summation of X(f), regardless of f_s and B. Shannon, however, only derives the series coefficients for the case f_s = 2B. Virtually quoting Shannon's original paper: x(n/(2B)) = (1/(2π)) ∫_{−2πB}^{2πB} X(ω)·e^{iωn/(2B)} dω. Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction via sinc functions, what we now call the Whittaker–Shannon interpolation formula as discussed above. He does not derive or prove the properties of the sinc function, as the Fourier pair relationship between the rect (the rectangular function) and sinc functions was well known by that time. [ 4 ] Let x_n be the nth sample. Then the function x(t) is represented by x(t) = Σ_{n=−∞}^{∞} x_n · sin(π(2Bt − n)) / (π(2Bt − n)). As in the other proof, the existence of the Fourier transform of the original signal is assumed, so the proof does not say whether the sampling theorem extends to bandlimited stationary random processes. The sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals and is normally formulated in that context. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. Grayscale images, for example, are often represented as two-dimensional arrays (or matrices) of real numbers representing the relative intensities of pixels (picture elements) located at the intersections of row and column sample locations. As a result, images require two independent variables, or indices, to specify each pixel uniquely: one for the row, and one for the column. Color images typically consist of a composite of three separate grayscale images, one to represent each of the three primary colors: red, green, and blue, or RGB for short. Other colorspaces using 3-vectors for colors include HSV, CIELAB, XYZ, etc. Some colorspaces such as cyan, magenta, yellow, and black (CMYK) may represent color by four dimensions. All of these are treated as vector-valued functions over a two-dimensional sampled domain. Similar to one-dimensional discrete-time signals, images can also suffer from aliasing if the sampling resolution, or pixel density, is inadequate. For example, a digital photograph of a striped shirt with high frequencies (in other words, the distance between the stripes is small) can exhibit aliasing of the shirt pattern when it is sampled by the camera's image sensor. The aliasing appears as a moiré pattern. The "solution" to higher sampling in the spatial domain for this case would be to move closer to the shirt, use a higher-resolution sensor, or to optically blur the image before acquiring it with the sensor using an optical low-pass filter. A classic illustration uses photographs of brick patterns: downsampling such an image without low-pass filtering violates the sampling theorem's condition and produces aliasing, whereas when software rescales an image (for example, to create a thumbnail) it, in effect, runs the image through a low-pass filter first and then downsamples it, resulting in a smaller image that does not exhibit the moiré pattern.
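A one-dimensional analogue of the striped-shirt example can be computed directly. In the sketch below (Python with NumPy; the stripe period and decimation factor are arbitrary), naive downsampling turns fine stripes into a false, coarser pattern, while pre-blurring, standing in for a crude optical low-pass filter, strongly attenuates the detail instead of letting it alias:

```python
import numpy as np

# Stripes repeating every 3 pixels, "imaged" by keeping every 4th pixel.
width = 240
img = (np.arange(width) % 3 < 1).astype(float)   # period-3 stripes

def dominant_period(x):
    """Period (in samples) of the strongest spectral component."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    return len(x) / np.argmax(spec)

naive = img[::4]   # downsample with no low-pass filtering
# The fine stripes alias into a false, coarser pattern (a moire effect):
print(dominant_period(img))          # 3.0 pixels
print(dominant_period(naive) * 4)    # 12.0 pixels in original units

# Blurring first greatly attenuates the unrepresentable detail:
blurred = np.convolve(img, np.ones(4) / 4, mode="same")
print(np.var(naive), np.var(blurred[::4]))   # aliased energy vs. residue
```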
The sampling theorem applies to camera systems, where the scene and lens constitute an analog spatial signal source, and the image sensor is a spatial sampling device. Each of these components is characterized by a modulation transfer function (MTF), representing the precise resolution (spatial bandwidth) available in that component. Effects of aliasing or blurring can occur when the lens MTF and sensor MTF are mismatched. When the optical image sampled by the sensor contains higher spatial frequencies than the sensor can represent, averaging over the finite sampling spot acts as a low-pass filter that reduces or eliminates aliasing. When the area of the sampling spot (the size of the pixel sensor) is not large enough to provide sufficient spatial anti-aliasing, a separate anti-aliasing filter (optical low-pass filter) may be included in a camera system to reduce the MTF of the optical image. Instead of requiring an optical filter, the graphics processing unit of smartphone cameras performs digital signal processing to remove aliasing with a digital filter. Digital filters also apply sharpening to amplify the contrast from the lens at high spatial frequencies, which otherwise falls off rapidly at diffraction limits. The sampling theorem also applies to post-processing digital images, such as up- or down-sampling. Effects of aliasing, blurring, and sharpening may be adjusted with digital filtering implemented in software, which necessarily follows the theoretical principles. To illustrate the necessity of f_s > 2B, consider the family of sinusoids generated by different values of θ in this formula: x(t) = cos(2πBt + θ) / cos(θ). With f_s = 2B, or equivalently T = 1/(2B), the samples are given by x(nT) = cos(πn + θ)/cos(θ) = (−1)ⁿ, regardless of the value of θ. That sort of ambiguity is the reason for the strict inequality of the sampling theorem's condition. As discussed by Shannon: [ 2 ] A similar result is true if the band does not start at zero frequency but at some higher value, and can be proved by a linear translation (corresponding physically to single-sideband modulation) of the zero-frequency case. In this case the elementary pulse is obtained from sin(x)/x by single-sideband modulation. That is, a sufficient no-loss condition for sampling signals that do not have baseband components exists that involves the width of the non-zero frequency interval as opposed to its highest frequency component. See sampling for more details and examples. For example, in order to sample FM radio signals in the frequency range of 100–102 MHz, it is not necessary to sample at 204 MHz (twice the upper frequency); rather, it is sufficient to sample at 4 MHz (twice the width of the frequency interval). A bandpass condition is that X(f) = 0 for all nonnegative f outside the open band of frequencies (N·f_s/2, (N+1)·f_s/2) for some nonnegative integer N. This formulation includes the normal baseband condition as the case N = 0. The corresponding interpolation function is the impulse response of an ideal brick-wall bandpass filter (as opposed to the ideal brick-wall lowpass filter used above) with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of lowpass impulse responses: (N+1)·sinc((N+1)t/T) − N·sinc(Nt/T). Other generalizations, for example to signals occupying multiple non-contiguous bands, are possible as well. Even the most generalized form of the sampling theorem does not have a provably true converse. That is, one cannot conclude that information is necessarily lost just because the conditions of the sampling theorem are not satisfied; from an engineering perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied then information will most likely be lost.
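The ambiguity at exactly f_s = 2B can be verified numerically. The sketch below (Python with NumPy; B is an arbitrary choice) samples the family of sinusoids above at the critical rate and shows that every member yields the same sample sequence (−1)ⁿ, whatever the value of θ:

```python
import numpy as np

B = 5.0                    # bandlimit in Hz
fs = 2 * B                 # sampling exactly at the critical rate
n = np.arange(8)
t = n / fs                 # sample instants nT with T = 1/(2B)

for theta in [0.0, 0.5, 1.0]:
    # x(t) = cos(2*pi*B*t + theta) / cos(theta); at t = n/(2B) this is
    # cos(pi*n + theta)/cos(theta) = (-1)**n for every theta.
    x = np.cos(2 * np.pi * B * t + theta) / np.cos(theta)
    print(np.round(x, 12))   # 1, -1, 1, -1, ... regardless of theta
```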
The sampling theory of Shannon can be generalized for the case of nonuniform sampling, that is, samples not taken equally spaced in time. The Shannon sampling theory for non-uniform sampling states that a band-limited signal can be perfectly reconstructed from its samples if the average sampling rate satisfies the Nyquist condition. [ 5 ] Therefore, although uniformly spaced samples may result in easier reconstruction algorithms, uniform spacing is not a necessary condition for perfect reconstruction. The general theory for non-baseband and nonuniform samples was developed in 1967 by Henry Landau. [ 6 ] He proved that the average sampling rate (uniform or otherwise) must be twice the occupied bandwidth of the signal, assuming it is a priori known what portion of the spectrum was occupied. In the late 1990s, this work was partially extended to cover signals for which the amount of occupied bandwidth is known but the actual occupied portion of the spectrum is unknown. [ 7 ] In the 2000s, a complete theory was developed (see the section Sampling below the Nyquist rate under additional restrictions below) using compressed sensing. In particular, the theory, using signal processing language, is described in a 2009 paper by Mishali and Eldar. [ 8 ] They show, among other things, that if the frequency locations are unknown, then it is necessary to sample at least at twice the Nyquist rate; in other words, you must pay at least a factor of 2 for not knowing the location of the spectrum. Note that minimum sampling requirements do not necessarily guarantee stability. The Nyquist–Shannon sampling theorem provides a sufficient condition for the sampling and reconstruction of a band-limited signal. When reconstruction is done via the Whittaker–Shannon interpolation formula, the Nyquist criterion is also a necessary condition to avoid aliasing, in the sense that if samples are taken at a slower rate than twice the band limit, then there are some signals that will not be correctly reconstructed. However, if further restrictions are imposed on the signal, then the Nyquist criterion may no longer be a necessary condition. A non-trivial example of exploiting extra assumptions about the signal is given by the recent field of compressed sensing, which allows for full reconstruction with a sub-Nyquist sampling rate. Specifically, this applies to signals that are sparse (or compressible) in some domain. As an example, compressed sensing deals with signals that may have a low overall bandwidth (say, the effective bandwidth EB), but the frequency locations are unknown rather than all together in a single band, so that the passband technique does not apply. In other words, the frequency spectrum is sparse. Traditionally, the necessary sampling rate is thus 2B. Using compressed sensing techniques, the signal could be perfectly reconstructed if it is sampled at a rate slightly lower than 2EB. With this approach, reconstruction is no longer given by a formula, but instead by the solution to a linear optimization program.
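The average-rate claim above can be illustrated with a simple periodic band-limited model: a trigonometric polynomial with K harmonics, hence 2K + 1 degrees of freedom. This finite-dimensional stand-in is an assumption made for the sketch; it is not Landau's construction or the general L² theory. In the Python sketch below, randomly timed samples whose average rate exceeds the degrees-of-freedom count for this class allow exact recovery by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5                                    # highest harmonic: 2K+1 unknowns
coef_true = rng.normal(size=2 * K + 1)

def basis(t):
    """Fourier basis for signals band-limited to K harmonics on [0, 1)."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    return np.column_stack(cols)

def signal(t):
    return basis(t) @ coef_true

# 15 samples at random (nonuniform) instants: an average rate above the
# 2K + 1 = 11 samples per period this signal class requires.
t_s = np.sort(rng.uniform(0.0, 1.0, 15))
coef_est, *_ = np.linalg.lstsq(basis(t_s), signal(t_s), rcond=None)

t_test = np.linspace(0.0, 1.0, 200, endpoint=False)
err = np.max(np.abs(basis(t_test) @ coef_est - signal(t_test)))
print(f"max reconstruction error: {err:.2e}")   # exact up to conditioning
```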
Another example where sub-Nyquist sampling is optimal arises under the additional constraint that the samples are quantized in an optimal manner, as in a combined system of sampling and optimal lossy compression. [ 9 ] This setting is relevant in cases where the joint effect of sampling and quantization is to be considered, and can provide a lower bound for the minimal reconstruction error that can be attained in sampling and quantizing a random signal. [ 10 ] For stationary Gaussian random signals, this lower bound is usually attained at a sub-Nyquist sampling rate, indicating that sub-Nyquist sampling is optimal for this signal model under optimal quantization. [ 10 ] The sampling theorem was implied by the work of Harry Nyquist in 1928, [ 11 ] in which he showed that up to 2B independent pulse samples could be sent through a system of bandwidth B; but he did not explicitly consider the problem of sampling and reconstruction of continuous signals. About the same time, Karl Küpfmüller showed a similar result [ 12 ] and discussed the sinc-function impulse response of a band-limiting filter, via its integral, the step-response sine integral; this bandlimiting and reconstruction filter that is so central to the sampling theorem is sometimes referred to as a Küpfmüller filter (but seldom so in English). The sampling theorem, essentially a dual of Nyquist's result, was proved by Claude E. Shannon. [ 2 ] The mathematician E. T. Whittaker published similar results in 1915, [ 13 ] J. M. Whittaker in 1935, [ 14 ] and Gabor in 1946 ("Theory of communication"). In 1948 and 1949, Claude E. Shannon published the two revolutionary articles in which he founded information theory. [ 15 ] [ 16 ] [ 2 ] In Shannon's "A Mathematical Theory of Communication", the sampling theorem is formulated as "Theorem 13": Let f(t) contain no frequencies over W. Then f(t) = Σ_{n=−∞}^{∞} X_n · sin(π(2Wt − n)) / (π(2Wt − n)), where X_n = f(n/(2W)). It was not until these articles were published that the theorem known as "Shannon's sampling theorem" became common property among communication engineers, although Shannon himself writes that this is a fact which is common knowledge in the communication art. [ B ] A few lines further on, however, he adds: "but in spite of its evident importance, [it] seems not to have appeared explicitly in the literature of communication theory". Despite his sampling theorem being published at the end of the 1940s, Shannon had derived his sampling theorem as early as 1940. [ 17 ] Others who have independently discovered or played roles in the development of the sampling theorem have been discussed in several historical articles, for example, by Jerri [ 18 ] and by Lüke. [ 19 ] For example, Lüke points out that H. Raabe, an assistant to Küpfmüller, proved the theorem in his 1939 Ph.D. dissertation; the term Raabe condition came to be associated with the criterion for unambiguous representation (sampling rate greater than twice the bandwidth).
Meijering [ 20 ] mentions several other discoverers and names in a paragraph and pair of footnotes: As pointed out by Higgins, the sampling theorem should really be considered in two parts, as done above: the first stating the fact that a bandlimited function is completely determined by its samples, the second describing how to reconstruct the function using its samples. Both parts of the sampling theorem were given in a somewhat different form by J. M. Whittaker and before him also by Ogura. They were probably not aware of the fact that the first part of the theorem had been stated as early as 1897 by Borel. [ Meijering 1 ] As we have seen, Borel also used around that time what became known as the cardinal series. However, he appears not to have made the link. In later years it became known that the sampling theorem had been presented before Shannon to the Russian communication community by Kotel'nikov. In more implicit, verbal form, it had also been described in the German literature by Raabe. Several authors have mentioned that Someya introduced the theorem in the Japanese literature parallel to Shannon. In the English literature, Weston introduced it independently of Shannon around the same time. [ Meijering 2 ] In Russian literature it is known as Kotelnikov's theorem, named after Vladimir Kotelnikov, who discovered it in 1933. [ 21 ] Exactly how, when, or why Harry Nyquist had his name attached to the sampling theorem remains obscure. The term Nyquist Sampling Theorem (capitalized thus) appeared as early as 1959 in a book from his former employer, Bell Labs, [ 22 ] and appeared again in 1963, [ 23 ] and not capitalized in 1965. [ 24 ] It had been called the Shannon Sampling Theorem as early as 1954, [ 25 ] but also just the sampling theorem by several other books in the early 1950s. In 1958, Blackman and Tukey cited Nyquist's 1928 article as a reference for the sampling theorem of information theory, [ 26 ] even though that article does not treat sampling and reconstruction of continuous signals as others did. Their glossary of terms includes relevant entries, but exactly what "Nyquist's result" they are referring to remains mysterious. When Shannon stated and proved the sampling theorem in his 1949 article, according to Meijering, [ 20 ] "he referred to the critical sampling interval T = 1/(2W) as the Nyquist interval corresponding to the band W, in recognition of Nyquist's discovery of the fundamental importance of this interval in connection with telegraphy". This explains Nyquist's name on the critical interval, but not on the theorem. Similarly, Nyquist's name was attached to Nyquist rate in 1953 by Harold S. Black: If the essential frequency range is limited to B cycles per second, 2B was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less than half a quantum step. This rate is generally referred to as signaling at the Nyquist rate and 1/(2B) has been termed a Nyquist interval. According to the Oxford English Dictionary, this may be the origin of the term Nyquist rate. In Black's usage, it is not a sampling rate, but a signaling rate.
https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
The Nysted reagent is a reagent used in organic synthesis for the methylenation of a carbonyl group. It was discovered in 1975 by Leonard N. Nysted in Chicago, Illinois. It was originally prepared by reacting dibromomethane and activated zinc in THF. [ 1 ] A mechanism for the methylenation reaction has been proposed. A similar reagent is Tebbe's reagent. [ 2 ] In the Nysted olefination, the Nysted reagent reacts with TiCl 4 to methylenate a carbonyl group. The biggest problem with these reagents is that their reactivity has not been well documented. It is believed that the TiCl 4 acts as a mediator in the reaction. The Nysted reagent can methylenate different carbonyl groups in the presence of different mediators. For example, in the presence of BF 3 •OEt 2 , the reagent will methylenate aldehydes; in the presence of TiCl 4 , TiCl 3 or TiCl 2 and BF 3 •OEt 2 , the reagent can methylenate ketones. Most commonly, it is used to methylenate ketones, which are generally difficult to methylenate owing to crowding around the carbonyl group; the Nysted reagent is able to overcome the additional steric hindrance found in ketones and more easily methylenate the carbonyl group. In contrast to the Wittig reaction, the neutral reaction conditions of the Nysted reagent make it a useful alternative for the methylenation of easily enolizable ketones. [ citation needed ] There is little research on the Nysted reagent because of its hazards and high reactivity and the difficulty of keeping the reagent stable while it is in use. More specifically, it can form explosive peroxides when exposed to air and is extremely flammable; it also reacts violently with water. These properties make the reagent very dangerous to work with. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/Nysted_reagent
In numerical analysis, the Nyström method [1] or quadrature method seeks the numerical solution of an integral equation by replacing the integral with a representative weighted sum. The continuous problem is broken into $n$ discrete intervals; quadrature or numerical integration determines the weights and locations of representative points for the integral. The problem becomes a system of linear equations with $n$ equations and $n$ unknowns, and the underlying function is implicitly represented by an interpolation using the chosen quadrature rule. This discrete problem may be ill-conditioned, depending on the original problem and the chosen quadrature rule. Since the linear equations require $O(n^3)$ [citation needed] operations to solve, high-order quadrature rules perform better, because low-order quadrature rules require large $n$ for a given accuracy. Gaussian quadrature is normally a good choice for smooth, non-singular problems. Standard quadrature methods seek to represent an integral as a weighted sum in the following manner: $\int_a^b f(x)\,dx \approx \sum_{k=1}^{n} w_k f(x_k)$, where the $w_k$ are the weights of the quadrature rule and the points $x_k$ are the abscissas. Applying this to the inhomogeneous Fredholm equation of the second kind, $u(x) = f(x) + \lambda \int_a^b K(x,t)\,u(t)\,dt$, results in $u(x) \approx f(x) + \lambda \sum_{k=1}^{n} w_k K(x, x_k)\,u(x_k)$; evaluating this at the abscissas $x = x_j$ gives an $n \times n$ linear system for the unknown values $u(x_j)$.
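The collocation step described above reduces to a dense linear solve. The following Python/NumPy sketch shows the idea on a toy problem (the kernel, right-hand side, and choice of Gauss–Legendre quadrature are illustrative assumptions, not part of the method itself):

```python
import numpy as np

# Nystrom method for a Fredholm equation of the second kind:
#     u(x) = f(x) + lam * integral_a^b K(x, t) u(t) dt
def nystrom_solve(f, K, lam, a, b, n):
    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [a, b]
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)
    w = 0.5 * (b - a) * w
    # Collocate at the nodes: (I - lam * K(x_i, x_j) * w_j) u = f(x_i)
    A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w[None, :]
    return x, np.linalg.solve(A, f(x))

# Toy problem: K(x, t) = x*t on [0, 1], lam = 1, f(x) = x.
# Writing u(x) = x + x*c with c = integral of t*u(t) gives c = 1/2, so u(x) = 1.5*x.
x, u = nystrom_solve(lambda x: x, lambda x, t: x * t, 1.0, 0.0, 1.0, 8)
print(np.max(np.abs(u - 1.5 * x)))   # ~machine precision for this smooth kernel
```

Because the kernel here is smooth, a small n already integrates it essentially exactly, which is the point made above about preferring high-order rules.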
https://en.wikipedia.org/wiki/Nyström_method
In superparamagnetism (a form of magnetism), the Néel effect appears when a superparamagnetic material inside a conducting coil is subjected to magnetic fields of varying frequencies. The non-linearity of the superparamagnetic material acts as a frequency mixer, so the voltage measured at the coil terminals consists of several frequency components, at the initial frequency and at frequencies that are certain linear combinations of it. This frequency shift of the field to be measured allows a direct-current field to be detected with a standard coil. In 1949 the French physicist Louis Néel (1904–2000) discovered that when they are finely divided, ferromagnetic nanoparticles lose their hysteresis below a certain size; [1][2] this phenomenon is known as superparamagnetism. The magnetization of these materials is a highly non-linear function of the applied field. The magnetization curve is well described by the Langevin function, but for weak fields it can be written more simply as a linear term in the field plus a small cubic correction, where χ0 is the susceptibility at zero field and the coefficient Ne of the cubic term is known as the Néel coefficient. The Néel coefficient reflects the non-linearity of superparamagnetic materials in low fields. Suppose a coil of N turns of cross-section S, carrying an excitation current I_exc, is immersed in a magnetic field H_ext collinear with the axis of the coil, and a superparamagnetic material is deposited inside the coil. The electromotive force e at the terminals of a winding of the coil is then given by Faraday's law, e = −NS·dB/dt, where B is the magnetic induction in the material. In the absence of magnetic material, B = μ0(H_ext + H_exc); differentiating this expression, the voltage contains components only at the frequencies of the excitation current i_exc and of the magnetic field H_ext. In the presence of superparamagnetic material, and neglecting the higher terms of the Taylor expansion, B acquires a first term μ0μr(1 + χ0)(H_ext + H_exc) and a second term proportional to Ne(H_ext + H_exc)³. Differentiating the first term again provides voltage components at the frequencies of the excitation current i_exc and of the magnetic field H_ext. Expanding the second term, (H_ext + H_exc)³ = H_ext³ + 3H_ext²H_exc + 3H_extH_exc² + H_exc³, multiplies the frequency components together; this intermodulation generates components at linear combinations of the original frequencies. The non-linearity of the superparamagnetic material thus acts as a frequency mixer. Calling H(l) the total magnetic field within the coil at abscissa l, integrating the above induction along the coil between 0 and L_p and differentiating with respect to t, with I_exc(t) = I_exc·cos(ω_exc·t), one finds the conventional terms of self-inductance and of the Rogowski effect at the original frequencies, plus a third term due to the Néel effect, which reflects the intermodulation between the excitation current and the external field.
When the excitation current is sinusoidal, the Néel effect is characterized by the appearance of a second harmonic carrying the information about the field to be measured. An important application of the Néel effect is as a current sensor, measuring the magnetic field radiated by a conductor carrying a current; [3] this is the principle of Néel-effect current sensors. [4] The Néel effect allows the accurate, contactless measurement of currents down to very low frequencies, in the manner of a current transformer. The transducer of a Néel-effect current sensor consists of a coil with a core of superparamagnetic nanoparticles. The coil is driven by an excitation current. In the presence of an external magnetic field to be measured, the transducer transposes (via the Néel effect) the information to be measured, H(f), around a carrier frequency, the second harmonic of the excitation current, where it is simpler to detect. The electromotive force generated by the coil is proportional to the magnetic field to be measured and to the square of the excitation current. To improve the measurement's performance (such as linearity and sensitivity to temperature and vibration), the sensor includes a second, feedback winding through which a counter-reaction current permanently cancels the second harmonic. The ratio of the feedback current to the primary current is then proportional to the number of turns of the feedback winding.
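As a rough numerical illustration of the mixing mechanism, the following Python sketch applies a cubic, Néel-type nonlinearity to the sum of a static field and a sinusoidal excitation and inspects the spectrum (all magnitudes are arbitrary illustrative values, not calibrated sensor parameters):

```python
import numpy as np

# Cubic nonlinearity as a frequency mixer: the 3*H_ext*H_exc^2*cos^2 cross-term
# contains cos(2*w*t) scaled by H_ext, so a second harmonic appears only when
# the external (DC) field is nonzero -- the effect the sensor exploits.
fs = 100_000.0                        # sampling rate, Hz (arbitrary)
t = np.arange(0, 0.1, 1 / fs)
f_exc = 1_000.0                       # excitation frequency, Hz (arbitrary)
H_ext, H_exc, chi0, Ne = 0.5, 10.0, 1.0, 1e-3   # illustrative magnitudes

H = H_ext + H_exc * np.cos(2 * np.pi * f_exc * t)
B = chi0 * H + chi0 * Ne * H**3       # linear term plus cubic Neel-type term

spectrum = np.abs(np.fft.rfft(B)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (f_exc, 2 * f_exc):
    k = np.argmin(np.abs(freqs - f))
    print(f"{f:7.0f} Hz : {spectrum[k]:.4e}")
```

Setting H_ext = 0 in the sketch makes the line at 2·f_exc vanish, while its amplitude grows linearly with H_ext, mirroring the proportionality stated above.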
https://en.wikipedia.org/wiki/Néel_effect
In algebraic geometry, the Néron model (or Néron minimal model, or minimal model) for an abelian variety $A_K$ defined over the field of fractions $K$ of a Dedekind domain $R$ is the "push-forward" of $A_K$ from Spec($K$) to Spec($R$), in other words the "best possible" group scheme $A_R$ defined over $R$ corresponding to $A_K$. They were introduced by André Néron (1961, 1964) for abelian varieties over the quotient field of a Dedekind domain $R$ with perfect residue fields, and Raynaud (1966) extended this construction to semiabelian varieties over all Dedekind domains. Suppose that $R$ is a Dedekind domain with field of fractions $K$, and suppose that $A_K$ is a smooth separated scheme over $K$ (such as an abelian variety). Then a Néron model of $A_K$ is defined to be a smooth separated scheme $A_R$ over $R$ with generic fiber $A_K$ that is universal in the following sense (the Néron mapping property): for every smooth separated scheme $X$ over $R$, every $K$-morphism from the generic fiber $X_K$ to $A_K$ extends uniquely to an $R$-morphism from $X$ to $A_R$. In particular, the canonical map $A_R(R) \to A_K(K)$ is an isomorphism. If a Néron model exists then it is unique up to unique isomorphism. In terms of sheaves, any scheme $A$ over Spec($K$) represents a sheaf on the category of schemes smooth over Spec($K$) with the smooth Grothendieck topology, and this has a pushforward by the injection map from Spec($K$) to Spec($R$), which is a sheaf over Spec($R$). If this pushforward is representable by a scheme, then this scheme is the Néron model of $A$. In general the scheme $A_K$ need not have any Néron model. For abelian varieties $A_K$, Néron models exist, are unique (up to unique isomorphism), and are commutative quasi-projective group schemes over $R$. The fiber of a Néron model over a closed point of Spec($R$) is a smooth commutative algebraic group, but need not be an abelian variety: for example, it may be disconnected, or a torus. Néron models exist as well for certain commutative groups other than abelian varieties, such as tori, but these are only locally of finite type. Néron models do not exist for the additive group. The Néron model of an elliptic curve $A_K$ over $K$ can be constructed as follows. First form the minimal model over $R$ in the sense of algebraic (or arithmetic) surfaces. This is a regular proper surface over $R$ but is not in general smooth over $R$ or a group scheme over $R$. Its subscheme of smooth points over $R$ is the Néron model, which is a smooth group scheme over $R$ but not necessarily proper over $R$. The fibers in general may have several irreducible components, and to form the Néron model one discards all multiple components, all points where two components intersect, and all singular points of the components. Tate's algorithm calculates the special fiber of the Néron model of an elliptic curve, or more precisely the fibers of the minimal surface containing the Néron model.
https://en.wikipedia.org/wiki/Néron_model
In number theory, the Néron–Tate height (or canonical height) is a quadratic form on the Mordell–Weil group of rational points of an abelian variety defined over a global field. It is named after André Néron and John Tate. Néron defined the Néron–Tate height as a sum of local heights. [1] Although the global Néron–Tate height is quadratic, the constituent local heights are not quite quadratic. Tate (unpublished) defined it globally by observing that the logarithmic height $h_L$ associated to a symmetric invertible sheaf $L$ on an abelian variety $A$ is "almost quadratic," and used this to show that the limit $\hat{h}_L(P) = \lim_{N\to\infty} h_L(NP)/N^2$ exists, defines a quadratic form on the Mordell–Weil group of rational points, and satisfies $\hat{h}_L(P) = h_L(P) + O(1)$, where the implied $O(1)$ constant is independent of $P$. [2] If $L$ is anti-symmetric, that is $[-1]^*L = L^{-1}$, then the analogous limit $\hat{h}_L(P) = \lim_{N\to\infty} h_L(NP)/N$ converges and satisfies $\hat{h}_L(P) = h_L(P) + O(1)$, but in this case $\hat{h}_L$ is a linear function on the Mordell–Weil group. For general invertible sheaves, one writes $L^{\otimes 2} = (L \otimes [-1]^*L) \otimes (L \otimes [-1]^*L^{-1})$ as a product of a symmetric sheaf and an anti-symmetric sheaf, and then $\hat{h}_L = \tfrac{1}{2}\bigl(\hat{h}_{L\otimes[-1]^*L} + \hat{h}_{L\otimes[-1]^*L^{-1}}\bigr)$ is the unique quadratic function satisfying $\hat{h}_L(P) = h_L(P) + O(1)$. The Néron–Tate height depends on the choice of an invertible sheaf on the abelian variety, although the associated bilinear form depends only on the image of $L$ in the Néron–Severi group of $A$. If the abelian variety $A$ is defined over a number field $K$ and the invertible sheaf is symmetric and ample, then the Néron–Tate height is positive definite in the sense that it vanishes only on torsion elements of the Mordell–Weil group $A(K)$. More generally, $\hat{h}_L$ induces a positive definite quadratic form on the real vector space $A(K) \otimes \mathbb{R}$. On an elliptic curve, the Néron–Severi group is of rank one and has a unique ample generator, so this generator is often used to define the Néron–Tate height, which is denoted $\hat{h}$ without reference to a particular line bundle. (However, the height that naturally appears in the statement of the Birch and Swinnerton-Dyer conjecture is twice this height.) On abelian varieties of higher dimension, there need not be a particular choice of smallest ample line bundle to be used in defining the Néron–Tate height, and the height used in the statement of the Birch–Swinnerton-Dyer conjecture is the Néron–Tate height associated to the Poincaré line bundle on $A \times \hat{A}$, the product of $A$ with its dual. The bilinear form associated to the canonical height $\hat{h}$ on an elliptic curve $E$ is $\langle P, Q\rangle = \tfrac{1}{2}\bigl(\hat{h}(P+Q) - \hat{h}(P) - \hat{h}(Q)\bigr)$. The elliptic regulator of $E/K$ is $\operatorname{Reg}(E/K) = \det\bigl(\langle P_i, P_j\rangle\bigr)_{1\le i,j\le r}$, where $P_1,\ldots,P_r$ is a basis for the Mordell–Weil group $E(K)$ modulo torsion (cf. Gram determinant). The elliptic regulator does not depend on the choice of basis. More generally, let $A/K$ be an abelian variety, let $B \cong \operatorname{Pic}^0(A)$ be the dual abelian variety to $A$, and let $\mathcal{P}$ be the Poincaré line bundle on $A \times B$.
Then the abelian regulator of $A/K$ is defined by choosing a basis $Q_1,\ldots,Q_r$ for the Mordell–Weil group $A(K)$ modulo torsion and a basis $\eta_1,\ldots,\eta_r$ for the Mordell–Weil group $B(K)$ modulo torsion and setting $\operatorname{Reg}(A/K) = \det\bigl(\langle Q_i, \eta_j\rangle_{\mathcal{P}}\bigr)_{1\le i,j\le r}$, where $\langle\cdot,\cdot\rangle_{\mathcal{P}}$ is the height pairing associated to the Poincaré line bundle. (The definitions of elliptic and abelian regulator are not entirely consistent, since if $A$ is an elliptic curve, then the latter is $2^r$ times the former.) The elliptic and abelian regulators appear in the Birch–Swinnerton-Dyer conjecture. There are two fundamental conjectures that give lower bounds for the Néron–Tate height. In the first, the field $K$ is fixed and the elliptic curve $E/K$ and point $P \in E(K)$ vary, while in the second, the elliptic Lehmer conjecture, the curve $E/K$ is fixed while the field of definition of the point $P$ varies. In both conjectures, the constants are positive and depend only on the indicated quantities. (A stronger form of Lang's conjecture asserts that $c$ depends only on the degree $[K:\mathbb{Q}]$.) It is known that the abc conjecture implies Lang's conjecture, and that the analogue of Lang's conjecture over one-dimensional characteristic 0 function fields is unconditionally true. [3][5] The best general result on Lehmer's conjecture is the weaker estimate $\hat{h}(P) \ge c(E/K)/[K(P):K]^{3+\varepsilon}$, due to Masser. [6] When the elliptic curve has complex multiplication, this has been improved to $\hat{h}(P) \ge c(E/K)/[K(P):K]^{1+\varepsilon}$ by Laurent. [7] There are analogous conjectures for abelian varieties, with the nontorsion condition replaced by the condition that the multiples of $P$ form a Zariski dense subset of $A$, and the lower bound in Lang's conjecture replaced by $\hat{h}(P) \ge c(K)\,h(A/K)$, where $h(A/K)$ is the Faltings height of $A/K$. A polarized algebraic dynamical system is a triple $(V, \varphi, L)$ consisting of a (smooth projective) algebraic variety $V$, an endomorphism $\varphi: V \to V$, and a line bundle $L \to V$ with the property that $\varphi^*L = L^{\otimes d}$ for some integer $d > 1$. The associated canonical height is given by the Tate limit [8] $\hat{h}_{V,\varphi,L}(P) = \lim_{n\to\infty} h_L(\varphi^{(n)}(P))/d^n$, where $\varphi^{(n)} = \varphi \circ \cdots \circ \varphi$ is the $n$-fold iteration of $\varphi$. For example, any morphism $\varphi: \mathbb{P}^n \to \mathbb{P}^n$ of degree $d > 1$ yields a canonical height associated to the line bundle relation $\varphi^*\mathcal{O}(1) = \mathcal{O}(d)$. If $V$ is defined over a number field and $L$ is ample, then the canonical height is non-negative, and $\hat{h}(P) = 0$ if and only if $P$ is preperiodic. ($P$ is preperiodic if its forward orbit $P, \varphi(P), \varphi^2(P), \varphi^3(P), \ldots$ contains only finitely many distinct points.)
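Tate's limiting construction can be made concrete with a toy computation. The Python sketch below estimates the canonical height of a point on an elliptic curve over Q by repeated doubling (a pedagogical illustration only: real implementations use local heights, normalization conventions differ by a factor of 2 between references, and the curve and point are arbitrary examples):

```python
from fractions import Fraction
import math

# hat{h}(P) ~= h(2^n P) / 4^n, where h is the naive height of the x-coordinate.
def double(P, a):
    # Elliptic-curve duplication formula on y^2 = x^3 + a*x + b over Q.
    x, y = P
    lam = (3 * x * x + a) / (2 * y)
    x2 = lam * lam - 2 * x
    return (x2, lam * (x - x2) - y)

def naive_height(P):
    # h(P) = log max(|p|, |q|) for x(P) = p/q in lowest terms.
    return math.log(max(abs(P[0].numerator), abs(P[0].denominator)))

def canonical_height_estimates(P, a, steps=5):
    # Successive h(2^n P)/4^n; convergence is geometric since h(2P) = 4 h(P) + O(1).
    out = []
    for n in range(1, steps + 1):
        P = double(P, a)
        out.append(naive_height(P) / 4**n)
    return out

# Example curve y^2 = x^3 - 2 with the rational point P = (3, 5).
a, b = Fraction(0), Fraction(-2)
P = (Fraction(3), Fraction(5))
assert P[1] ** 2 == P[0] ** 3 + a * P[0] + b
print(canonical_height_estimates(P, a))
```

The printed sequence stabilizes quickly, which is exactly the "almost quadratic" behaviour that makes the telescoping limit converge.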
https://en.wikipedia.org/wiki/Néron–Tate_height
Nitrous oxide (dinitrogen oxide or dinitrogen monoxide), commonly known as laughing gas, nitrous, or factitious air, among others, [4] is a chemical compound, an oxide of nitrogen with the formula N2O. At room temperature, it is a colourless non-flammable gas with a slightly sweet scent and taste. [4] At elevated temperatures, nitrous oxide is a powerful oxidiser similar to molecular oxygen. [4] Nitrous oxide has significant medical uses, especially in surgery and dentistry, for its anaesthetic and pain-reducing effects, [5] and it is on the World Health Organization's List of Essential Medicines. [6] Its colloquial name, "laughing gas", coined by Humphry Davy, describes the euphoric effects felt upon inhaling it, which cause it to be used as a recreational drug inducing a brief "high". [5][7] When abused chronically, it may cause neurological damage through inactivation of vitamin B12. It is also used as an oxidiser in rocket propellants and motor racing fuels, and as a frothing gas for whipped cream. Nitrous oxide is also an atmospheric pollutant, with a concentration of 333 parts per billion (ppb) in 2020, increasing at 1 ppb annually. [8][9] It is a major scavenger of stratospheric ozone, with an impact comparable to that of CFCs. [10] About 40% of human-caused emissions are from agriculture, [11][12] as nitrogen fertilisers are digested into nitrous oxide by soil micro-organisms. [13] As the third most important greenhouse gas, nitrous oxide substantially contributes to global warming. [14][15] Reduction of emissions is an important goal in the politics of climate change. [16] The gas was first synthesised in 1772 by the English natural philosopher and chemist Joseph Priestley, who called it dephlogisticated nitrous air (see phlogiston theory) [17] or inflammable nitrous air. [18] Priestley published his discovery in the book Experiments and Observations on Different Kinds of Air (1775), where he described the preparation of "nitrous air diminished" by heating iron filings dampened with nitric acid. [19] The first important use of nitrous oxide was made possible by Thomas Beddoes and James Watt, who worked together to publish the book Considerations on the Medical Use and on the Production of Factitious Airs (1794). This book was important for two reasons. First, James Watt had invented a novel machine to produce "factitious airs" (including nitrous oxide) and a novel "breathing apparatus" to inhale the gas. Second, the book also presented the new medical theories of Thomas Beddoes, that tuberculosis and other lung diseases could be treated by inhalation of "factitious airs". [20] The machine to produce "factitious airs" had three parts: a furnace to burn the needed material, a vessel with water through which the produced gas passed in a spiral pipe (for impurities to be "washed off"), and finally a gas cylinder with a gasometer from which the gas produced, the "air", could be tapped into portable air bags (made of airtight oily silk). The breathing apparatus consisted of one of the portable air bags connected with a tube to a mouthpiece. With this new equipment engineered and produced by 1794, the way was paved for clinical trials, [clarification needed] which began in 1798 when Thomas Beddoes established the "Pneumatic Institution for Relieving Diseases by Medical Airs" in Hotwells (Bristol).
In the basement of the building, a large-scale machine produced the gases under the supervision of a young Humphry Davy, who was encouraged to experiment with new gases for patients to inhale. [20] Davy's first important work was the examination of nitrous oxide, and the publication of his results in the book Researches, Chemical and Philosophical (1800). In that publication, Davy notes the analgesic effect of nitrous oxide on page 465 and its potential to be used for surgical operations on page 556. [21] Davy coined the name "laughing gas" for nitrous oxide. [22] Despite Davy's discovery that inhalation of nitrous oxide could relieve a conscious person from pain, another 44 years elapsed before doctors attempted to use it for anaesthesia. The use of nitrous oxide as a recreational drug at "laughing gas parties", primarily arranged for the British upper class, became an immediate success beginning in 1799. While the effects of the gas generally make the user appear stuporous, dreamy and sedated, some people also "get the giggles" in a state of euphoria, and frequently erupt in laughter. [23] One of the earliest commercial producers in the U.S. was George Poe, cousin of the poet Edgar Allan Poe, who also was the first to liquefy the gas. [24] The first time nitrous oxide was used as an anaesthetic drug in the treatment of a patient was when the dentist Horace Wells, with assistance from Gardner Quincy Colton and John Mankey Riggs, demonstrated insensitivity to pain from a dental extraction on 11 December 1844. [25] In the following weeks, Wells treated the first 12 to 15 patients with nitrous oxide in Hartford, Connecticut, and, according to his own record, only failed in two cases. [26] Although Wells reported these convincing results to the medical society in Boston in December 1844, the new method was not immediately adopted by other dentists. The reason for this was most likely that Wells, in January 1845 at his first public demonstration to the medical faculty in Boston, had been partly unsuccessful, leaving his colleagues doubtful regarding its efficacy and safety. [27] The method did not come into general use until 1863, when Gardner Quincy Colton successfully started to use it in all his "Colton Dental Association" clinics, which he had just established in New Haven and New York City. [20] Over the following three years, Colton and his associates successfully administered nitrous oxide to more than 25,000 patients. [28] Today, nitrous oxide is used in dentistry as an anxiolytic, as an adjunct to local anaesthetic. Nitrous oxide was not found to be a strong enough anaesthetic for use in major surgery in hospital settings, however. Instead, diethyl ether, being a stronger and more potent anaesthetic, was demonstrated and accepted for use in October 1846, along with chloroform in 1847. [20] When Joseph Thomas Clover invented the "gas-ether inhaler" in 1876, however, it became common practice at hospitals to initiate all anaesthetic treatments with a mild flow of nitrous oxide, and then gradually increase the anaesthesia with the stronger ether or chloroform. Clover's gas-ether inhaler was designed to supply the patient with nitrous oxide and ether at the same time, with the exact mixture being controlled by the operator of the device. It remained in use by many hospitals until the 1930s. [28]
Although hospitals today use a more advanced anaesthetic machine, these machines still use the same principle launched with Clover's gas-ether inhaler: to initiate the anaesthesia with nitrous oxide before the administration of a more powerful anaesthetic. Colton's popularisation of nitrous oxide led to its adoption by a number of less than reputable quacksalvers, who touted it as a cure for consumption, scrofula, catarrh and other diseases of the blood, throat and lungs. Nitrous oxide treatment was administered and licensed as a patent medicine by the likes of C. L. Blood and Jerome Harris in Boston and Charles E. Barney of Chicago. [29][30] Nitrous oxide is a colourless gas with a faint, sweet odour. Nitrous oxide supports combustion by releasing the dipolar bonded oxygen radical, and can thus relight a glowing splint. N2O is inert at room temperature and has few reactions. At elevated temperatures, its reactivity increases. For example, nitrous oxide reacts with NaNH2 at 187 °C (369 °F) to give NaN3: 2 NaNH2 + N2O → NaN3 + NaOH + NH3. This reaction is the route adopted by the commercial chemical industry to produce azide salts, which are used as detonators. [31] The pharmacological mechanism of action of inhaled N2O is not fully known. However, it has been shown to directly modulate a broad range of ligand-gated ion channels, which likely plays a major role. It moderately blocks NMDAR and β2-subunit-containing nACh channels, weakly inhibits AMPA, kainate, GABA-C and 5-HT3 receptors, and slightly potentiates GABA-A and glycine receptors. [32][33] It also has been shown to activate two-pore-domain K+ channels. [34] While N2O affects several ion channels, its anaesthetic, hallucinogenic and euphoriant effects are likely caused mainly via inhibition of NMDA receptor-mediated currents. [32][35] In addition to its effects on ion channels, N2O may act similarly to nitric oxide (NO) in the central nervous system. [35] Nitrous oxide is 30 to 40 times more soluble in blood than nitrogen. The effects of inhaling sub-anaesthetic doses of nitrous oxide may vary unpredictably with settings and individual differences; [36][37] however, Jay (2008) [38] suggests that it reliably induces certain characteristic states and sensations. A minority of users also experience uncontrolled vocalisations and muscular spasms. These effects generally disappear minutes after removal of the nitrous oxide source. [38] In behavioural tests of anxiety, a low dose of N2O is an effective anxiolytic. This anti-anxiety effect is associated with enhanced activity of GABA-A receptors, as it is partially reversed by benzodiazepine receptor antagonists. Mirroring this, animals that have developed tolerance to the anxiolytic effects of benzodiazepines are partially tolerant to N2O. [39] Indeed, in humans given 30% N2O, benzodiazepine receptor antagonists reduced the subjective reports of feeling "high", but did not alter psychomotor performance. [40][41] The analgesic effects of N2O are linked to the interaction between the endogenous opioid system and the descending noradrenergic system. When animals are given morphine chronically, they develop tolerance to its pain-killing effects, and this also renders the animals tolerant to the analgesic effects of N2O. [42] Administration of antibodies that bind and block the activity of some endogenous opioids (not β-endorphin) also blocks the antinociceptive effects of N2O. [43]
Drugs that inhibit the breakdown of endogenous opioids also potentiate the antinociceptive effects of N2O. [43] Several experiments have shown that opioid receptor antagonists applied directly to the brain block the antinociceptive effects of N2O, but these drugs have no effect when injected into the spinal cord. Apart from this indirect action, nitrous oxide, like morphine, [44] also interacts directly with the endogenous opioid system by binding at opioid receptor binding sites. [45][46] Conversely, α2-adrenoceptor antagonists block the pain-reducing effects of N2O when given directly to the spinal cord, but not when applied directly to the brain. [47] Indeed, α2B-adrenoceptor knockout mice or animals depleted in norepinephrine are nearly completely resistant to the antinociceptive effects of N2O. [48] Apparently, N2O-induced release of endogenous opioids causes disinhibition of brainstem noradrenergic neurons, which release norepinephrine into the spinal cord and inhibit pain signalling. [49] Exactly how N2O causes the release of endogenous opioid peptides remains uncertain. Various methods of producing nitrous oxide are used. [50] Nitrous oxide is prepared on an industrial scale by carefully heating ammonium nitrate [50] at about 250 °C, at which it decomposes into nitrous oxide and water vapour. [51] The addition of various phosphate salts favours formation of a purer gas at slightly lower temperatures. This reaction may be difficult to control, resulting in detonation. [52] The decomposition of ammonium nitrate is also a common laboratory method for preparing the gas. Equivalently, it can be obtained by heating a mixture of sodium nitrate and ammonium sulfate: [53] 2 NaNO3 + (NH4)2SO4 → 2 N2O + Na2SO4 + 4 H2O. Another method involves the reaction of urea, nitric acid and sulfuric acid: [54] 2 CO(NH2)2 + 2 HNO3 + H2SO4 → 2 N2O + 2 CO2 + (NH4)2SO4 + 2 H2O. Direct oxidation of ammonia with a manganese dioxide–bismuth oxide catalyst has been reported: [55] 2 NH3 + 2 O2 → N2O + 3 H2O (cf. the Ostwald process). Hydroxylammonium chloride reacts with sodium nitrite to give nitrous oxide. If the nitrite is added to the hydroxylamine solution, the only remaining by-product is salt water: NH2OH·HCl + NaNO2 → N2O + NaCl + 2 H2O. If the hydroxylamine solution is added to the nitrite solution (nitrite in excess), however, then toxic higher oxides of nitrogen are also formed. Treating HNO3 with SnCl2 and HCl has also been demonstrated: 2 HNO3 + 8 HCl + 4 SnCl2 → 4 SnCl4 + N2O + 5 H2O. Hyponitrous acid decomposes to N2O and water with a half-life of 16 days at 25 °C at pH 1–3. [56] Nitrous oxide is a minor component of Earth's atmosphere and is an active part of the planetary nitrogen cycle. Based on analysis of air samples gathered from sites around the world, its concentration surpassed 330 ppb in 2017. [8] The growth rate of about 1 ppb per year has also accelerated during recent decades. [9] Nitrous oxide's atmospheric abundance has grown more than 20% from a base level of about 270 ppb in 1750. [58] In 2022 the IPCC reported that: "The human perturbation of the natural nitrogen cycle through the use of synthetic fertilizers and manure, as well as nitrogen deposition resulting from land-based agriculture and fossil fuel burning has been the largest driver of the increase in atmospheric N2O of 31.0 ± 0.5 ppb (10%) between 1980 and 2019." [61] An annual average of 17.0 (12.2 to 23.5) million tonnes of nitrogen in the form of N2O was emitted over 2007–2016. [61] About 40% of N2O emissions are from humans and the rest are part of the natural nitrogen cycle. [62]
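Returning to the production methods above, the main industrial route can be written as a worked equation (standard stoichiometry; the temperature is the one stated in the text): $\mathrm{NH_4NO_3} \xrightarrow{\ \approx 250\,^{\circ}\mathrm{C}\ } \mathrm{N_2O} + 2\,\mathrm{H_2O}$.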
The N2O emitted each year by humans has a greenhouse effect equivalent to about 3 billion tonnes of carbon dioxide; for comparison, humans emitted 37 billion tonnes of actual carbon dioxide in 2019, and methane equivalent to 9 billion tonnes of carbon dioxide. [63] Most of the N2O emitted into the atmosphere, from natural and anthropogenic sources, is produced by microorganisms such as denitrifying bacteria and fungi in soils and oceans. [64] Soils under natural vegetation are an important source of nitrous oxide, accounting for 60% of all naturally produced emissions. Other natural sources include the oceans (35%) and atmospheric chemical reactions (5%). [65] Wetlands can also be emitters of nitrous oxide. [66][67] Emissions from thawing permafrost may be significant, but as of 2022 this is not certain. [61] The main components of anthropogenic emissions are fertilised agricultural soils and livestock manure (42%), runoff and leaching of fertilisers (25%), biomass burning (10%), fossil fuel combustion and industrial processes (10%), biological degradation of other nitrogen-containing atmospheric emissions (9%) and human sewage (5%). [68][69][70][71][72] Agriculture enhances nitrous oxide production through soil cultivation, the use of nitrogen fertilisers and animal waste handling. [73] These activities stimulate naturally occurring bacteria to produce more nitrous oxide. Nitrous oxide emissions from soil can be challenging to measure, as they vary markedly over time and space, [74] and the majority of a year's emissions may occur when conditions are favorable during "hot moments" [75][76] and/or at favorable locations known as "hotspots". [77] Among industrial emissions, the production of nitric acid and adipic acid are the largest sources of nitrous oxide emissions. The adipic acid emissions specifically arise from the degradation of the nitrolic acid intermediate derived from the nitration of cyclohexanone. [68][78][79] Microbial processes that generate nitrous oxide are classified as nitrification and denitrification. These processes are affected by soil chemical and physical properties, such as the availability of mineral nitrogen and organic matter, acidity and soil type, as well as climate-related factors such as soil temperature and water content. The emission of the gas to the atmosphere is limited greatly by its consumption inside the cells, by a process catalysed by the enzyme nitrous oxide reductase. [80] Nitrous oxide may be used as an oxidiser in a rocket motor. Compared to other oxidisers, it is much less toxic and more stable at room temperature, making it easier to store and safer to carry on a flight. Its high density and low storage pressure (when maintained at low temperatures) make it highly competitive with stored high-pressure gas systems. [81] In a 1914 patent, American rocket pioneer Robert Goddard suggested nitrous oxide and gasoline as possible propellants for a liquid-fuelled rocket. [82] Nitrous oxide has been the oxidiser of choice in several hybrid rocket designs (using solid fuel with a liquid or gaseous oxidiser). The combination of nitrous oxide with hydroxyl-terminated polybutadiene fuel has been used by SpaceShipOne and others. It is also notably used in amateur and high-power rocketry with various plastics as the fuel. Nitrous oxide may also be used as a monopropellant.
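The "about 3 billion tonnes of carbon dioxide" figure cited earlier in this section can be cross-checked against numbers appearing elsewhere in this article (roughly 40% of 17 TgN/yr being anthropogenic, and a 100-year global warming potential of 265; the 44/28 molar-mass conversion from N to N2O is standard chemistry):

```python
# Back-of-the-envelope check of the CO2-equivalent of human N2O emissions.
tg_n_per_yr = 17.0 * 0.40               # anthropogenic nitrogen in N2O, Tg/yr
tg_n2o_per_yr = tg_n_per_yr * 44 / 28   # convert mass of N to mass of N2O
gt_co2e = tg_n2o_per_yr * 265 / 1000    # apply GWP-100, gigatonnes CO2-eq/yr
print(round(gt_co2e, 1))                # ~2.8, consistent with "about 3 billion tonnes"
```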
In the presence of a heated catalyst at a temperature of 577 °C (1,071 °F), N2O decomposes exothermically into nitrogen and oxygen. [83] Because of the large heat release, the catalytic action rapidly becomes secondary, as thermal autodecomposition becomes dominant. In a vacuum thruster, this may provide a monopropellant specific impulse (Isp) of up to 180 s. While noticeably less than the Isp available from hydrazine thrusters (monopropellant, or bipropellant with dinitrogen tetroxide), the decreased toxicity makes nitrous oxide a worthwhile option. The ignition of nitrous oxide depends critically on pressure. It deflagrates at approximately 600 °C (1,112 °F) at a pressure of 309 psi (21 atmospheres). [84] At 600 psi, the required ignition energy is only 6 joules, whereas at 130 psi a 2,500-joule ignition energy input is insufficient. [85][86] In vehicle racing, nitrous oxide (often called "nitrous") increases engine power by providing more oxygen during combustion, thus allowing the engine to burn more fuel. It is an oxidising agent roughly equivalent to hydrogen peroxide, and much stronger than molecular oxygen. Nitrous oxide is not flammable at low pressure/temperature, but at about 300 °C (572 °F), its breakdown delivers more oxygen than atmospheric air. It is often mixed with another fuel that is easier to deflagrate. Nitrous oxide is stored as a compressed liquid. In an engine intake manifold, the evaporation and expansion of the liquid causes a large drop in intake charge temperature, resulting in a denser charge and allowing more air/fuel mixture to enter the cylinder. Sometimes nitrous oxide is injected into (or prior to) the intake manifold, whereas other systems inject it directly just before the cylinder (direct port injection). The technique was used during World War II by Luftwaffe aircraft with the GM-1 system to boost the power output of aircraft engines. The system was originally meant to provide standard Luftwaffe aircraft with superior high-altitude performance, but technological considerations limited its use to extremely high altitudes. Accordingly, it was only used by specialised planes such as high-altitude reconnaissance aircraft, high-speed bombers and high-altitude interceptor aircraft. It could sometimes be found on Luftwaffe aircraft also fitted with another engine-boost system, MW 50, a form of water injection for aviation engines that used methanol for its boost capabilities. One of the major problems of using nitrous oxide as an oxidant in a reciprocating engine is excessive power: if the mechanical structure of the engine is not properly reinforced, it may be severely damaged or destroyed. With nitrous oxide augmentation of petrol engines, it is important to maintain proper and evenly spread operating temperatures and fuel levels to prevent pre-ignition (also called detonation or spark knock). [87] However, most problems associated with nitrous oxide come not from excessive power but from excessive pressure, since the gas builds up a much denser charge in the cylinder. The increased pressure and temperature can melt, crack, or warp the piston, valve, and cylinder head. Automotive-grade liquid nitrous oxide differs slightly from medical-grade: a small amount of sulfur dioxide (SO2) is added to prevent substance abuse. [88] The gas is approved for use as a food additive (E number: E942), specifically as an aerosol spray propellant. It is commonly used in aerosol whipped cream canisters and cooking sprays. The gas is extremely soluble in fatty compounds.
In pressurised aerosol whipped cream, it is dissolved in the fatty cream until it leaves the can, when it becomes gaseous and thus creates foam. This produces whipped cream four times the volume of the liquid, whereas whipping air into cream only produces twice the volume. Unlike air, nitrous oxide inhibits rancidification of the butterfat. Carbon dioxide cannot be used for whipped cream because it is acidic in water, which would curdle the cream and give it a seltzer-like "sparkle". Extra-frothed whipped cream produced with nitrous oxide is unstable, however, and will return to liquid within half an hour to one hour. [89] Thus, it is not suitable for decorating food that will not be served immediately. In December 2016, there was a shortage of aerosol whipped cream in the United States, with canned whipped cream use at its peak during the Christmas and holiday season, due to an explosion at the Air Liquide nitrous oxide facility in Florida in late August. The company prioritised the remaining supply of nitrous oxide for medical customers rather than food manufacturing. [90] Cooking spray, made from various oils with a lecithin emulsifier, may also use nitrous oxide as a propellant, or alternatively food-grade alcohol or propane. Nitrous oxide has been used in dentistry and surgery, as an anaesthetic and analgesic, since 1844. [20] In the early days, the gas was administered through simple inhalers consisting of a breathing bag made of rubber cloth. [28] Today, the gas is administered in hospitals by means of an automated relative analgesia machine, with an anaesthetic vaporiser and a medical ventilator, that delivers a precisely dosed and breath-actuated flow of nitrous oxide mixed with oxygen in a 2:1 ratio. Nitrous oxide is a weak general anaesthetic, and so is generally not used alone in general anaesthesia, but used as a carrier gas (mixed with oxygen) for more powerful general anaesthetic drugs such as sevoflurane or desflurane. It has a minimum alveolar concentration of 105% and a blood/gas partition coefficient of 0.46. The use of nitrous oxide in anaesthesia can increase the risk of postoperative nausea and vomiting. [91][92][93] Dentists use a simpler machine which delivers only an N2O/O2 mixture for the patient to inhale while conscious; it must still be a recognised, purpose-designed relative analgesia flowmeter, delivering a minimum of 30% oxygen at all times with an upper limit of 70% nitrous oxide. The patient is kept conscious throughout the procedure, and retains adequate mental faculties to respond to questions and instructions from the dentist. [94] Inhalation of nitrous oxide is used frequently to relieve pain associated with childbirth, trauma, oral surgery and acute coronary syndrome (including heart attacks). Its use during labour has been shown to be a safe and effective aid for birthing women. [95] Its use for acute coronary syndrome is of unknown benefit. [96] In Canada and the UK, Entonox and Nitronox are used commonly by ambulance crews (including unregistered practitioners) as rapid and highly effective analgesic gases. Fifty percent nitrous oxide can be considered for use by trained non-professional first aid responders in prehospital settings, given the relative ease and safety of administering 50% nitrous oxide as an analgesic. Because its effect is rapidly reversible, it also does not mask symptoms and so does not preclude diagnosis. [97]
Recreational inhalation of nitrous oxide, to induce euphoria and slight hallucinations, began with the British upper class in 1799 in gatherings known as "laughing gas parties". [98] From the 19th century, the widespread availability of the gas for medical and culinary purposes allowed recreational use to expand greatly around the world. In the UK as of 2014, nitrous oxide was estimated to be used by almost half a million young people at nightspots, festivals and parties. [99] Widespread recreational use of the drug throughout the UK was featured in the 2017 Vice documentary Inside The Laughing Gas Black Market, in which journalist Matt Shea met with dealers of the drug who stole it from hospitals. [100] A significant issue cited in London's press is the effect of nitrous oxide canister littering, which is highly visible and causes frequent complaints from communities. [101] Prior to 8 November 2023 in the UK, nitrous oxide was subject to the Psychoactive Substances Act 2016, making it illegal to produce, supply, import or export nitrous oxide for recreational use. The updated law prohibited possession of nitrous oxide, classifying it as a Class C drug under the Misuse of Drugs Act 1971. [102] While nitrous oxide is understood by most recreational users to give a "safe high", many are unaware that excessive consumption may cause neurological harm which, if left untreated, can become permanent. [103] In Australia, recreational use became a public health concern following a rise in reports of neurotoxicity and emergency room admissions. In the state of South Australia, legislation was passed in 2020 to restrict canister sales. [104] In 2024, under the street name "Galaxy Gas", nitrous oxide exploded in popularity among young people for recreational use, with most of the popularity fostered through TikTok. [105] Nitrous oxide is a significant occupational hazard for surgeons, dentists and nurses. Because the gas is minimally metabolised in humans (with a rate of 0.004%), it retains its potency when exhaled into the room by the patient, and can intoxicate the clinic staff if the room is poorly ventilated, with potential chronic exposure. A continuous-flow fresh-air ventilation system or N2O scavenger system may be needed to prevent waste-gas buildup. [citation needed] The National Institute for Occupational Safety and Health recommends that workers' exposure to nitrous oxide should be controlled during the administration of anaesthetic gas in medical, dental and veterinary settings. [106] It has set a recommended exposure limit (REL) of 25 ppm (46 mg/m3) for escaped anaesthetic. [107] Exposure to nitrous oxide causes short-term impairment of cognition, audiovisual acuity, and manual dexterity, as well as spatial and temporal disorientation, [108] putting the user at risk of accidental injury. [38] Nitrous oxide is neurotoxic, and medium- or long-term habitual consumption of significant quantities can cause neurological harm with the potential for permanent damage if left untreated. [104][103] It is believed that, like other NMDA receptor antagonists, N2O produces Olney's lesions in rodents upon prolonged (several-hour) exposure. [109][110][111][112] However, because it is normally expelled from the body rapidly, it is less likely to be neurotoxic than other NMDAR antagonists. [113]
In rodents, short-term exposure results in only mild injury that is rapidly reversible, and neuronal death occurs only after constant and sustained exposure. [109] Nitrous oxide may also cause neurotoxicity after extended exposure because of hypoxia. This is especially true of non-medical formulations such as whipped-cream chargers ("whippits" or "nangs"), [114] which contain no oxygen gas. [115] In reports to poison control centers, heavy users (≥400 g or ≥200 L of N2O gas in one session) or frequent users (regular, i.e., daily or weekly) have developed signs of peripheral neuropathy: ataxia (gait abnormalities) or paresthesia (perception of sensations such as tingling, numbness, or prickling, mostly in the extremities). Such early signs of neurological damage indicate chronic toxicity. [116] Nitrous oxide might have therapeutic use in treating stroke. In a rodent model, nitrous oxide at 75% by volume reduced ischemia-induced neuronal death caused by occlusion of the middle cerebral artery, and decreased NMDA-induced Ca2+ influx in neuronal cell cultures, a cause of excitotoxicity. [113] Occupational exposure to ambient nitrous oxide has been associated with DNA damage, due to interruptions in DNA synthesis. [117] This correlation is dose-dependent [118][119] and does not appear to extend to casual recreational use; however, further research is needed to confirm the level of exposure needed to cause damage. Inhalation of pure nitrous oxide causes oxygen deprivation, resulting in low blood pressure, fainting, and even heart attacks. This can occur if the user inhales large quantities continuously, as with a strap-on mask connected to a gas canister or other inhalation system, or with prolonged breath-holding. [citation needed] Long-term exposure to nitrous oxide may cause vitamin B12 deficiency, as it inactivates the cobalamin form of vitamin B12 by oxidation. This can cause serious neurotoxicity if the user has a preexisting vitamin B12 deficiency. [120] Symptoms of vitamin B12 deficiency, including sensory neuropathy, myelopathy and encephalopathy, may occur within days or weeks of exposure to nitrous oxide anaesthesia in people with subclinical vitamin B12 deficiency. Symptoms are treated with high doses of vitamin B12, but recovery can be slow and incomplete. [121] People with normal vitamin B12 levels have sufficient stores to make the effects of nitrous oxide insignificant, unless exposure is repeated and prolonged (nitrous oxide abuse). Vitamin B12 levels should be checked in people with risk factors for vitamin B12 deficiency prior to using nitrous oxide anaesthesia. [122] Several experimental studies in rats indicate that chronic exposure of pregnant females to nitrous oxide may have adverse effects on the developing fetus. [123][124][125] At room temperature (20 °C [68 °F]) the saturated vapour pressure is 50.525 bar, rising to 72.45 bar at 36.4 °C (97.5 °F), the critical temperature. The pressure curve is thus unusually sensitive to temperature. [126] As with many strong oxidisers, contamination of parts with fuels has been implicated in rocketry accidents, where small quantities of nitrous/fuel mixtures explode due to "water hammer"-like effects (sometimes called "dieseling": heating due to adiabatic compression of gases can reach decomposition temperatures). [127]
Some common building materials, such as stainless steel and aluminium, can act as fuels with strong oxidisers such as nitrous oxide, as can contaminants that may ignite due to adiabatic compression. [128] There have also been incidents where nitrous oxide decomposition in plumbing has led to the explosion of large tanks. [84] Global accounting of N2O sources and sinks over the decade ending 2016 indicates that about 40% of the average 17 TgN/yr (teragrams, or million metric tons, of nitrogen per year) of emissions originated from human activity, and shows that emissions growth chiefly came from expanding agriculture. [11][12] Nitrous oxide has significant global warming potential as a greenhouse gas. Per unit mass, considered over a 100-year period, nitrous oxide has 265 times the atmospheric heat-trapping ability of carbon dioxide (CO2). [60] However, because of its low concentration (less than 1/1,000 of that of CO2), its contribution to the greenhouse effect is less than one third that of carbon dioxide, and also less than that of methane. [129] On the other hand, since about 40% of the N2O entering the atmosphere is the result of human activity, [68] control of nitrous oxide is part of efforts to curb greenhouse gas emissions. [130] Most human-caused nitrous oxide is released into the atmosphere as a greenhouse gas emission from agriculture, when farmers add nitrogen-based fertilisers to the fields, and through the breakdown of animal manure. Reduction of emissions is an important topic in the politics of climate change. [131] Nitrous oxide is also released as a by-product of burning fossil fuel, though the amount released depends on which fuel was used. It is also emitted through the manufacture of nitric acid, which is used in the synthesis of nitrogen fertilisers. The production of adipic acid, a precursor to nylon and other synthetic clothing fibres, also releases nitrous oxide. [132] A rise in atmospheric nitrous oxide concentrations has been implicated as a possible contributor to the extremely intense global warming during the Cenomanian–Turonian boundary event. [133] Nitrous oxide has also been implicated in thinning the ozone layer. A 2009 study suggested that N2O emission was the single most important ozone-depleting emission, and it was expected to remain the largest throughout the 21st century. [10][134] In India, the transfer of nitrous oxide from bulk cylinders to smaller, more transportable E-type, 1,590-litre-capacity tanks [135] is legal when intended for medical anaesthesia. The New Zealand Ministry of Health has warned that nitrous oxide is a prescription medicine whose sale or possession without a prescription is an offense under the Medicines Act. [136] This would seemingly prohibit all non-medicinal uses of nitrous oxide, although it is implied that only recreational use will be targeted. In August 2015, the Council of the London Borough of Lambeth (UK) banned the use of the drug for recreational purposes, making offenders liable to an on-the-spot fine of up to £1,000. [137] In September 2023, the UK Government announced that nitrous oxide would be made illegal by the end of the year, with possession potentially carrying up to a two-year prison sentence or an unlimited fine. [138] Possession of nitrous oxide is legal under United States federal law and is not subject to DEA purview. [139]
It is, however, regulated by the Food and Drug Administration under the Federal Food, Drug, and Cosmetic Act; prosecution is possible under its "misbranding" clauses, which prohibit the sale or distribution of nitrous oxide for the purpose of human consumption without a proper medical license. Many states have laws regulating the possession, sale and distribution of nitrous oxide. Such laws usually ban distribution to minors or limit the amount that may be sold without a special license. [citation needed] For example, in California, possession for recreational use is prohibited and qualifies as a misdemeanor. [140]
https://en.wikipedia.org/wiki/N₂O
Nitric oxide (nitrogen oxide or nitrogen monoxide [1]) is a colorless gas with the formula NO. It is one of the principal oxides of nitrogen. Nitric oxide is a free radical: it has an unpaired electron, which is sometimes denoted by a dot in its chemical formula (•N=O or •NO). Nitric oxide is also a heteronuclear diatomic molecule, a class of molecules whose study spawned early modern theories of chemical bonding. [6] An important intermediate in industrial chemistry, nitric oxide forms in combustion systems and can be generated by lightning in thunderstorms. In mammals, including humans, nitric oxide is a signaling molecule in many physiological and pathological processes. [7] It was proclaimed the "Molecule of the Year" in 1992. [8] The 1998 Nobel Prize in Physiology or Medicine was awarded for discovering nitric oxide's role as a cardiovascular signalling molecule. [9] Its impact extends beyond biology, with applications in medicine, such as the development of sildenafil (Viagra), and in industry, including semiconductor manufacturing. [10][11] Nitric oxide should not be confused with nitrogen dioxide (NO2), a brown gas and major air pollutant, or with nitrous oxide (N2O), an anesthetic gas. [6] Nitric oxide (NO) was first identified by Joseph Priestley in the late 18th century, originally seen as merely a toxic byproduct of combustion and an environmental pollutant. [12] Its biological significance was later uncovered in the 1980s, when researchers Robert F. Furchgott, Louis J. Ignarro, and Ferid Murad discovered its critical role as a vasodilator in the cardiovascular system, a breakthrough that earned them the 1998 Nobel Prize in Physiology or Medicine. [13] The ground-state electronic configuration of NO is, in united atom notation: [14] $(1\sigma)^2(2\sigma)^2(3\sigma)^2(4\sigma^*)^2(5\sigma)^2(1\pi)^4(2\pi^*)^1$. The first two orbitals are actually the pure atomic 1s orbitals of oxygen and nitrogen respectively, and are therefore usually not noted in the united atom notation. Orbitals marked with an asterisk are antibonding. The ordering of 5σ and 1π according to their binding energies is subject to discussion. Removal of a 1π electron leads to 6 states whose energies span a range starting at a lower level than that of a 5σ electron and extending to a higher level. This is due to the different orbital momentum couplings between a 1π and a 2π electron. The lone electron in the 2π orbital makes NO a doublet (X ²Π) in its ground state, whose degeneracy is split in the fine structure by spin-orbit coupling, with total momentum J = 3⁄2 or J = 1⁄2. The dipole moment of NO has been measured experimentally as 0.15740 D and is oriented from O to N (⁻NO⁺) due to the transfer of negative electronic charge from oxygen to nitrogen. [15] Upon condensing to a neat liquid, nitric oxide dimerizes to colorless dinitrogen dioxide (O=N–N=O), but the association is weak and reversible. The N–N distance in crystalline NO is 218 pm, nearly twice the N–O distance. Condensation in a highly polar environment instead gives the red alternant isomer O=N–O⁺=N⁻. [6] Since the formation of •NO from the elements is endothermic, NO can be decomposed to the elements.
Catalytic converters in cars exploit this reaction: 2 NO → N2 + O2. When exposed to oxygen, nitric oxide converts into nitrogen dioxide: 2 NO + O2 → 2 NO2. This reaction is thought to occur via the intermediates ONOO• and the red compound ONOONO. [16] In water, nitric oxide reacts with oxygen to form nitrous acid (HNO2). The reaction is thought to proceed via the following stoichiometry: 4 NO + O2 + 2 H2O → 4 HNO2. Nitric oxide reacts with fluorine, chlorine, and bromine to form the nitrosyl halides, such as nitrosyl chloride: 2 NO + Cl2 → 2 NOCl. With NO2, also a radical, NO combines to form the intensely blue dinitrogen trioxide: [6] NO + NO2 ⇌ N2O3. Nitric oxide is rarely used in organic chemistry; most reactions with it produce complex mixtures of salts, separable only through careful recrystallization. [17] The addition of a nitric oxide moiety to another molecule is often referred to as nitrosylation. The Traube reaction is the addition of two equivalents of nitric oxide onto an enolate, giving a diazeniumdiolate (also called a nitrosohydroxylamine). [18] The product can undergo a subsequent retro-aldol reaction, giving an overall process similar to the haloform reaction. For example, nitric oxide reacts with acetone and an alkoxide to form a diazeniumdiolate on each α position, with subsequent loss of methyl acetate as a by-product: [19] This reaction, which was discovered around 1898, remains of interest in nitric oxide prodrug research. Nitric oxide can also react directly with sodium methoxide, ultimately forming sodium formate and nitrous oxide by way of an N-methoxydiazeniumdiolate. [20] Sufficiently basic secondary amines undergo a Traube-like reaction to give NONOates. [21] However, very few nucleophiles undergo the Traube reaction, either failing to form an adduct with NO or immediately decomposing with nitrous oxide release. [17] Nitric oxide reacts with transition metals to give complexes called metal nitrosyls. The most common bonding mode of nitric oxide is the terminal linear type (M−NO). [6] Alternatively, nitric oxide can serve as a one-electron pseudohalide. In such complexes, the M−N−O group is characterized by an angle between 120° and 140°. The NO group can also bridge between metal centers through the nitrogen atom in a variety of geometries. In commercial settings, nitric oxide is produced by the oxidation of ammonia at 750–900 °C (normally at 850 °C) with platinum as catalyst in the Ostwald process: 4 NH3 + 5 O2 → 4 NO + 6 H2O. The uncatalyzed endothermic reaction of oxygen (O2) and nitrogen (N2), which is effected at high temperature (>2000 °C) by lightning, has not been developed into a practical commercial synthesis (see Birkeland–Eyde process): N2 + O2 → 2 NO. In the laboratory, nitric oxide is conveniently generated by reduction of dilute nitric acid with copper: 8 HNO3 + 3 Cu → 3 Cu(NO3)2 + 2 NO + 4 H2O. An alternative route involves the reduction of nitrous acid in the form of sodium nitrite or potassium nitrite. The iron(II) sulfate route is simple and has been used in undergraduate laboratory experiments. So-called NONOate compounds are also used for nitric oxide generation, especially in biological laboratories. However, other Traube adducts may decompose to instead give nitrous oxide. [22] Nitric oxide concentration can be determined using a chemiluminescent reaction involving ozone. [23] A sample containing nitric oxide is mixed with a large quantity of ozone. The nitric oxide reacts with the ozone to produce oxygen and nitrogen dioxide, accompanied by emission of light (chemiluminescence): NO + O3 → NO2* + O2, followed by NO2* → NO2 + hν, which can be measured with a photodetector.
Other methods of testing include electroanalysis (an amperometric approach), in which •NO reacts with an electrode to induce a current or voltage change. The detection of NO radicals in biological tissues is particularly difficult because of the short lifetime and low concentration of these radicals in tissues. One of the few practical methods is spin trapping of nitric oxide with iron–dithiocarbamate complexes and subsequent detection of the mono-nitrosyl-iron complex with electron paramagnetic resonance (EPR). [24][25] A group of fluorescent dye indicators, also available in acetylated form for intracellular measurements, exists; the most common compound is 4,5-diaminofluorescein (DAF-2). [26] Nitric oxide reacts with the hydroperoxyl radical (HO2•) to form nitrogen dioxide (NO2), which can then react with a hydroxyl radical (•OH) to produce nitric acid (HNO3):

NO + HO2• → NO2 + •OH
NO2 + •OH → HNO3

Nitric acid, along with sulfuric acid, contributes to acid rain deposition. •NO participates in ozone layer depletion. Nitric oxide reacts with stratospheric ozone to form O2 and nitrogen dioxide:

NO + O3 → NO2 + O2

This reaction is also utilized to measure concentrations of •NO in control volumes. As seen in the acid deposition section, nitric oxide can transform into nitrogen dioxide (this can happen through reaction with the hydroperoxy radical, HO2•, or with diatomic oxygen, O2). Symptoms of short-term nitrogen dioxide exposure include nausea, dyspnea and headache. Long-term effects could include impaired immune and respiratory function. [27] NO is a gaseous signaling molecule. [28] It is a key vertebrate biological messenger, playing a role in a variety of biological processes. [29] It is a bioproduct in almost all types of organisms, including bacteria, plants, fungi, and animal cells. [30] Nitric oxide, an endothelium-derived relaxing factor (EDRF), is biosynthesized endogenously from L-arginine, oxygen, and NADPH by various nitric oxide synthase (NOS) enzymes. [31] Reduction of inorganic nitrate may also produce nitric oxide. [32] One of the main enzymatic targets of nitric oxide is guanylyl cyclase. [33] The binding of nitric oxide to the heme region of the enzyme leads to its activation in the presence of iron. [33] Nitric oxide is highly reactive (having a lifetime of a few seconds), yet diffuses freely across membranes. These attributes make nitric oxide ideal as a transient paracrine (between adjacent cells) and autocrine (within a single cell) signaling molecule. [32] Once nitric oxide is converted to nitrates and nitrites by oxygen and water, cell signaling is deactivated. [33] The endothelium (inner lining) of blood vessels uses nitric oxide to signal the surrounding smooth muscle to relax, resulting in vasodilation and increased blood flow. [32] Sildenafil (Viagra) is a drug that uses the nitric oxide pathway. Sildenafil does not produce nitric oxide, but enhances the signals downstream of the nitric oxide pathway by protecting cyclic guanosine monophosphate (cGMP) from degradation by cGMP-specific phosphodiesterase type 5 (PDE5) in the corpus cavernosum, allowing the signal to be enhanced and thus producing vasodilation. [31] Another endogenous gaseous transmitter, hydrogen sulfide (H2S), works with NO to induce vasodilation and angiogenesis in a cooperative manner. [34][35] Nasal breathing produces nitric oxide within the body, while oral breathing does not. [36][37]
In the U.S., the Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for nitric oxide exposure in the workplace at 25 ppm (30 mg/m3) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 25 ppm (30 mg/m3) over an 8-hour workday. At levels of 100 ppm, nitric oxide is immediately dangerous to life and health. [38] Liquid nitric oxide is very sensitive to detonation even in the absence of fuel, and can be initiated as readily as nitroglycerin. Detonation of the endothermic liquid oxide close to its boiling point (−152 °C or −241.6 °F or 121.1 K) generated a 100 kbar pulse and fragmented the test equipment. It is the simplest molecule that is capable of detonation in all three phases. The liquid oxide is sensitive and may explode during distillation, which has been the cause of industrial accidents. [39] Gaseous nitric oxide detonates at about 2,300 metres per second (8,300 km/h; 5,100 mph), but as a solid it can reach a detonation velocity of 6,100 metres per second (22,000 km/h; 13,600 mph). [40]
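The ppm-to-mg/m3 equivalence quoted for the exposure limits above is a routine ideal-gas conversion (molar mass of NO about 30.01 g/mol, molar volume about 24.45 L/mol at 25 °C and 1 atm). A quick sketch of the arithmetic:

```python
# Convert a gas concentration from ppm (by volume) to mg/m^3 at 25 °C, 1 atm.
MOLAR_VOLUME_L = 24.45  # litres per mole of an ideal gas at 25 °C and 1 atm

def ppm_to_mg_per_m3(ppm: float, molar_mass_g_per_mol: float) -> float:
    """1 ppm of a gas is 1 mL per m^3 of air, i.e. 1/24.45 mmol,
    which weighs (molar mass)/24.45 mg."""
    return ppm * molar_mass_g_per_mol / MOLAR_VOLUME_L

# For NO (30.01 g/mol), the 25 ppm limit corresponds to roughly 30 mg/m^3:
print(ppm_to_mg_per_m3(25.0, 30.01))  # ~30.7
```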
https://en.wikipedia.org/wiki/Nitric_oxide
The nitrosonium ion is NO+, in which the nitrogen atom is bonded to an oxygen atom with a bond order of 3, and the overall diatomic species bears a positive charge. It can be viewed as nitric oxide with one electron removed. This ion is usually obtained as the following salts: NOClO4, NOSO4H (nitrosylsulfuric acid, more descriptively written ONSO3OH) and NOBF4. The ClO4− and BF4− salts are slightly soluble in acetonitrile, CH3CN. NOBF4 can be purified by sublimation at 200–250 °C and 0.01 mmHg (1.3 Pa). [2] NO+ is isoelectronic with CO, CN− and N2. It arises via protonation of nitrous acid:

HNO2 + H+ ⇌ NO+ + H2O

In the infrared spectra of its salts, νNO appears as a strong peak in the range 2150–2400 cm−1. [3] NO+ reacts readily with water to form nitrous acid:

NO+ + H2O → HNO2 + H+

For this reason, nitrosonium compounds must be protected from water or even moist air. With base, the reaction instead generates nitrite:

NO+ + 2 OH− → NO2− + H2O

NO+ reacts with aryl amines, ArNH2, to give diazonium salts, ArN2+. The resulting diazonium group is easily displaced (unlike the amino group) by a variety of nucleophiles. NO+, e.g. as NOBF4, is a strong oxidizing agent. [4] In organic chemistry, it selectively cleaves ethers and oximes, and couples diarylamines. [5] NOBF4 is a convenient oxidant because the by-product NO is a gas, which can be swept from the reaction using a stream of N2. Upon contact with air, NO forms NO2, which can cause secondary reactions if it is not removed; NO2 is readily detectable by its characteristic orange color. Electron-rich arenes are nitrosylated using NOBF4; [6] one example involves anisole. Nitrosonium, NO+, is sometimes confused with nitronium, NO2+, the active agent in nitrations. These species are quite different, however: nitronium is a more potent electrophile than nitrosonium, as anticipated by the fact that the former is derived from a strong acid (nitric acid) and the latter from a weak acid (nitrous acid). NOBF4 reacts with some metal carbonyl complexes to yield related metal nitrosyl complexes. [7] In some cases, [NO]+ does not bind the metal nucleophile but acts as an oxidant.
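The isoelectronic relationship stated above is a simple electron count, and the bond order of 3 follows from the NO configuration given earlier once the lone antibonding 2π* electron is removed:

{\displaystyle N_{e}(\mathrm{NO^{+}})=7+8-1=14=N_{e}(\mathrm{CO})=N_{e}(\mathrm{CN^{-}})=N_{e}(\mathrm{N_{2}})}

{\displaystyle {\text{bond order}}(\mathrm{NO^{+}})={\tfrac {1}{2}}(8-2)=3}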
https://en.wikipedia.org/wiki/Nitrosonium
Nitrous oxide (dinitrogen oxide or dinitrogen monoxide), commonly known as laughing gas, nitrous, or factitious air, among others, [4] is a chemical compound, an oxide of nitrogen with the formula N2O. At room temperature, it is a colourless non-flammable gas, and has a slightly sweet scent and taste. [4] At elevated temperatures, nitrous oxide is a powerful oxidiser similar to molecular oxygen. [4] Nitrous oxide has significant medical uses, especially in surgery and dentistry, for its anaesthetic and pain-reducing effects, [5] and it is on the World Health Organization's List of Essential Medicines. [6] Its colloquial name, "laughing gas", coined by Humphry Davy, describes the euphoric effects upon inhaling it, which cause it to be used as a recreational drug inducing a brief "high". [5][7] When abused chronically, it may cause neurological damage through inactivation of vitamin B12. It is also used as an oxidiser in rocket propellants and motor racing fuels, and as a frothing gas for whipped cream. Nitrous oxide is also an atmospheric pollutant, with a concentration of 333 parts per billion (ppb) in 2020, increasing at about 1 ppb annually. [8][9] It is a major scavenger of stratospheric ozone, with an impact comparable to that of CFCs. [10] About 40% of human-caused emissions are from agriculture, [11][12] as nitrogen fertilisers are digested into nitrous oxide by soil micro-organisms. [13] As the third most important greenhouse gas, nitrous oxide substantially contributes to global warming. [14][15] Reduction of emissions is an important goal in the politics of climate change. [16] The gas was first synthesised in 1772 by the English natural philosopher and chemist Joseph Priestley, who called it dephlogisticated nitrous air (see phlogiston theory) [17] or inflammable nitrous air. [18] Priestley published his discovery in the book Experiments and Observations on Different Kinds of Air (1775), where he described how to prepare "nitrous air diminished" by heating iron filings dampened with nitric acid. [19] The first important use of nitrous oxide was made possible by Thomas Beddoes and James Watt, who worked together to publish the book Considerations on the Medical Use and on the Production of Factitious Airs (1794). This book was important for two reasons. First, James Watt had invented a novel machine to produce "factitious airs" (including nitrous oxide) and a novel "breathing apparatus" to inhale the gas. Second, the book presented the new medical theories of Thomas Beddoes: that tuberculosis and other lung diseases could be treated by inhalation of "factitious airs". [20] The machine to produce "factitious airs" had three parts: a furnace to burn the needed material, a vessel with water through which the produced gas passed in a spiral pipe (so that impurities could be "washed off"), and finally a gas cylinder with a gasometer from which the gas, or "air", could be tapped into portable air bags (made of airtight oily silk). The breathing apparatus consisted of one of the portable air bags connected with a tube to a mouthpiece. With this new equipment engineered and produced by 1794, the way was paved for clinical trials, [clarification needed] which began in 1798 when Thomas Beddoes established the "Pneumatic Institution for Relieving Diseases by Medical Airs" in Hotwells (Bristol).
In the basement of the building, a large-scale machine produced the gases under the supervision of a young Humphry Davy, who was encouraged to experiment with new gases for patients to inhale. [20] Davy's first important work was an examination of nitrous oxide and the publication of his results in the book Researches, Chemical and Philosophical (1800). In that publication, Davy notes the analgesic effect of nitrous oxide on page 465 and its potential use in surgical operations on page 556. [21] Davy coined the name "laughing gas" for nitrous oxide. [22] Despite Davy's discovery that inhalation of nitrous oxide could relieve a conscious person of pain, another 44 years elapsed before doctors attempted to use it for anaesthesia. The use of nitrous oxide as a recreational drug at "laughing gas parties", primarily arranged for the British upper class, became an immediate success beginning in 1799. While the effects of the gas generally make the user appear stuporous, dreamy and sedated, some people also "get the giggles" in a state of euphoria, and frequently erupt in laughter. [23] One of the earliest commercial producers in the U.S. was George Poe, cousin of the poet Edgar Allan Poe, who was also the first to liquefy the gas. [24] The first time nitrous oxide was used as an anaesthetic drug in the treatment of a patient was when the dentist Horace Wells, with the assistance of Gardner Quincy Colton and John Mankey Riggs, demonstrated insensitivity to pain from a dental extraction on 11 December 1844. [25] In the following weeks, Wells treated the first 12 to 15 patients with nitrous oxide in Hartford, Connecticut, and, according to his own record, failed in only two cases. [26] Although Wells reported these convincing results to the medical society in Boston in December 1844, the new method was not immediately adopted by other dentists. The reason was most likely that Wells, in January 1845 at his first public demonstration to the medical faculty in Boston, had been partly unsuccessful, leaving his colleagues doubtful regarding its efficacy and safety. [27] The method did not come into general use until 1863, when Gardner Quincy Colton successfully started to use it in all his "Colton Dental Association" clinics, which he had just established in New Haven and New York City. [20] Over the following three years, Colton and his associates successfully administered nitrous oxide to more than 25,000 patients. [28] Today, nitrous oxide is used in dentistry as an anxiolytic, as an adjunct to local anaesthetic. Nitrous oxide was not found to be a strong enough anaesthetic for use in major surgery in hospital settings, however. Instead, diethyl ether, being a stronger and more potent anaesthetic, was demonstrated and accepted for use in October 1846, along with chloroform in 1847. [20] When Joseph Thomas Clover invented the "gas-ether inhaler" in 1876, however, it became common practice at hospitals to initiate all anaesthetic treatments with a mild flow of nitrous oxide and then gradually increase the anaesthesia with the stronger ether or chloroform. Clover's gas-ether inhaler was designed to supply the patient with nitrous oxide and ether at the same time, with the exact mixture controlled by the operator of the device. It remained in use by many hospitals until the 1930s. [28]
Although hospitals today use more advanced anaesthetic machines, these machines still use the principle launched with Clover's gas-ether inhaler: initiating the anaesthesia with nitrous oxide before the administration of a more powerful anaesthetic. Colton's popularisation of nitrous oxide led to its adoption by a number of less than reputable quacksalvers, who touted it as a cure for consumption, scrofula, catarrh and other diseases of the blood, throat and lungs. Nitrous oxide treatment was administered and licensed as a patent medicine by the likes of C. L. Blood and Jerome Harris in Boston and Charles E. Barney of Chicago. [29][30] Nitrous oxide is a colourless gas with a faint, sweet odour. Nitrous oxide supports combustion by releasing the dipolar bonded oxygen radical, and can thus relight a glowing splint. N2O is inert at room temperature and has few reactions. At elevated temperatures, its reactivity increases. For example, nitrous oxide reacts with NaNH2 at 187 °C (369 °F) to give NaN3:

2 NaNH2 + N2O → NaN3 + NaOH + NH3

This reaction is the route adopted by the commercial chemical industry to produce azide salts, which are used as detonators. [31] The pharmacological mechanism of action of inhaled N2O is not fully known. However, it has been shown to directly modulate a broad range of ligand-gated ion channels, which likely plays a major role. It moderately blocks NMDAR and β2-subunit-containing nACh channels, weakly inhibits AMPA, kainate, GABA-C and 5-HT3 receptors, and slightly potentiates GABA-A and glycine receptors. [32][33] It has also been shown to activate two-pore-domain K+ channels. [34] While N2O affects several ion channels, its anaesthetic, hallucinogenic and euphoriant effects are likely caused mainly via inhibition of NMDA receptor-mediated currents. [32][35] In addition to its effects on ion channels, N2O may act similarly to nitric oxide (NO) in the central nervous system. [35] Nitrous oxide is 30 to 40 times more soluble in blood than nitrogen. The effects of inhaling sub-anaesthetic doses of nitrous oxide vary unpredictably with setting and individual differences, [36][37] though Jay (2008) [38] suggests that it reliably induces a characteristic set of states and sensations. A minority of users also experience uncontrolled vocalisations and muscular spasms. These effects generally disappear minutes after removal of the nitrous oxide source. [38] In behavioural tests of anxiety, a low dose of N2O is an effective anxiolytic. This anti-anxiety effect is associated with enhanced activity of GABA-A receptors, as it is partially reversed by benzodiazepine receptor antagonists. Mirroring this, animals that have developed tolerance to the anxiolytic effects of benzodiazepines are partially tolerant to N2O. [39] Indeed, in humans given 30% N2O, benzodiazepine receptor antagonists reduced the subjective reports of feeling "high", but did not alter psychomotor performance. [40][41] The analgesic effects of N2O are linked to the interaction between the endogenous opioid system and the descending noradrenergic system. When animals are given morphine chronically, they develop tolerance to its pain-killing effects, and this also renders the animals tolerant to the analgesic effects of N2O. [42] Administration of antibodies that bind and block the activity of some endogenous opioids (not β-endorphin) also blocks the antinociceptive effects of N2O. [43]
Drugs that inhibit the breakdown of endogenous opioids also potentiate the antinociceptive effects of N2O. [43] Several experiments have shown that opioid receptor antagonists applied directly to the brain block the antinociceptive effects of N2O, but these drugs have no effect when injected into the spinal cord. Apart from this indirect action, nitrous oxide, like morphine, [44] also interacts directly with the endogenous opioid system by binding at opioid receptor binding sites. [45][46] Conversely, α2-adrenoceptor antagonists block the pain-reducing effects of N2O when given directly to the spinal cord, but not when applied directly to the brain. [47] Indeed, α2B-adrenoceptor knockout mice and animals depleted in norepinephrine are nearly completely resistant to the antinociceptive effects of N2O. [48] Apparently, N2O-induced release of endogenous opioids causes disinhibition of brainstem noradrenergic neurons, which release norepinephrine into the spinal cord and inhibit pain signalling. [49] Exactly how N2O causes the release of endogenous opioid peptides remains uncertain. Various methods of producing nitrous oxide are used. [50] Nitrous oxide is prepared on an industrial scale by carefully heating ammonium nitrate [50] at about 250 °C, which decomposes it into nitrous oxide and water vapour: [51]

NH4NO3 → N2O + 2 H2O

The addition of various phosphate salts favours formation of a purer gas at slightly lower temperatures. This reaction may be difficult to control, resulting in detonation. [52] The decomposition of ammonium nitrate is also a common laboratory method for preparing the gas. Equivalently, it can be obtained by heating a mixture of sodium nitrate and ammonium sulfate: [53]

2 NaNO3 + (NH4)2SO4 → Na2SO4 + 2 N2O + 4 H2O

Another method involves the reaction of urea, nitric acid and sulfuric acid: [54]

2 (NH2)2CO + 2 HNO3 + H2SO4 → 2 N2O + 2 CO2 + (NH4)2SO4 + 2 H2O

Direct oxidation of ammonia with a manganese dioxide–bismuth oxide catalyst has been reported (cf. the Ostwald process): [55]

2 NH3 + 2 O2 → N2O + 3 H2O

Hydroxylammonium chloride reacts with sodium nitrite to give nitrous oxide. If the nitrite is added to the hydroxylamine solution, the only remaining by-product is salt water:

NH3OHCl + NaNO2 → N2O + NaCl + 2 H2O

If the hydroxylamine solution is added to the nitrite solution (nitrite is in excess), however, then toxic higher oxides of nitrogen are also formed. Treating HNO3 with SnCl2 and HCl has also been demonstrated:

2 HNO3 + 8 HCl + 4 SnCl2 → N2O + 4 SnCl4 + 5 H2O

Hyponitrous acid decomposes to N2O and water with a half-life of 16 days at 25 °C at pH 1–3. [56]
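As a quick illustration of the stoichiometry of the ammonium nitrate route above (1 mol of N2O per mol of NH4NO3), the theoretical N2O yield per unit mass follows directly from the molar masses; a small sketch:

```python
# Theoretical N2O yield from the thermal decomposition of ammonium nitrate:
#   NH4NO3 -> N2O + 2 H2O   (1 mol N2O per mol NH4NO3)
M_NH4NO3 = 80.04  # g/mol
M_N2O = 44.01     # g/mol

def n2o_yield_g(nh4no3_g: float) -> float:
    """Mass of N2O (g) from complete decomposition of a given mass of NH4NO3."""
    return nh4no3_g / M_NH4NO3 * M_N2O

# 1 kg of ammonium nitrate gives at most ~550 g of nitrous oxide:
print(n2o_yield_g(1000.0))  # ~549.9
```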
Nitrous oxide is a minor component of Earth's atmosphere and is an active part of the planetary nitrogen cycle. Based on analysis of air samples gathered from sites around the world, its concentration surpassed 330 ppb in 2017. [8] The growth rate of about 1 ppb per year has accelerated during recent decades. [9] Nitrous oxide's atmospheric abundance has grown more than 20% from a base level of about 270 ppb in 1750. [58] In 2022 the IPCC reported that "the human perturbation of the natural nitrogen cycle through the use of synthetic fertilizers and manure, as well as nitrogen deposition resulting from land-based agriculture and fossil fuel burning, has been the largest driver of the increase in atmospheric N2O of 31.0 ± 0.5 ppb (10%) between 1980 and 2019". [61] An annual average of 17.0 (12.2 to 23.5) million tonnes of nitrogen in the form of N2O was emitted over 2007–2016. [61] About 40% of N2O emissions are from humans and the rest are part of the natural nitrogen cycle. [62] The N2O emitted each year by humans has a greenhouse effect equivalent to about 3 billion tonnes of carbon dioxide; for comparison, humans emitted 37 billion tonnes of actual carbon dioxide in 2019, and methane equivalent to 9 billion tonnes of carbon dioxide. [63] Most of the N2O emitted into the atmosphere, from natural and anthropogenic sources, is produced by microorganisms such as denitrifying bacteria and fungi in soils and oceans. [64] Soils under natural vegetation are an important source of nitrous oxide, accounting for 60% of all naturally produced emissions. Other natural sources include the oceans (35%) and atmospheric chemical reactions (5%). [65] Wetlands can also be emitters of nitrous oxide. [66][67] Emissions from thawing permafrost may be significant, but as of 2022 this is not certain. [61] The main components of anthropogenic emissions are fertilised agricultural soils and livestock manure (42%), runoff and leaching of fertilisers (25%), biomass burning (10%), fossil fuel combustion and industrial processes (10%), biological degradation of other nitrogen-containing atmospheric emissions (9%) and human sewage (5%). [68][69][70][71][72] Agriculture enhances nitrous oxide production through soil cultivation, the use of nitrogen fertilisers and animal waste handling. [73] These activities stimulate naturally occurring bacteria to produce more nitrous oxide. Nitrous oxide emissions from soil can be challenging to measure, as they vary markedly over time and space, [74] and the majority of a year's emissions may occur during favorable periods known as "hot moments" [75][76] and/or at favorable locations known as "hotspots". [77] Among industrial emissions, the production of nitric acid and adipic acid are the largest sources of nitrous oxide emissions; the adipic acid emissions specifically arise from the degradation of the nitrolic acid intermediate derived from the nitration of cyclohexanone. [68][78][79] Microbial processes that generate nitrous oxide may be classified as nitrification and denitrification. These processes are affected by soil chemical and physical properties, such as the availability of mineral nitrogen and organic matter, acidity and soil type, as well as by climate-related factors such as soil temperature and water content. The emission of the gas to the atmosphere is limited greatly by its consumption inside the cells, by a process catalysed by the enzyme nitrous oxide reductase. [80]
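The "about 3 billion tonnes of carbon dioxide" equivalence quoted earlier in this section can be reproduced from the figures given in this article (roughly 40% of 17 TgN/yr being anthropogenic, and a 100-year global warming potential of 265); a rough sketch of that arithmetic:

```python
# Rough CO2-equivalence of anthropogenic N2O emissions, using figures
# quoted in this article: ~17 TgN/yr total, ~40% anthropogenic, GWP100 = 265.
TOTAL_TG_N = 17.0
ANTHRO_FRACTION = 0.40
GWP_100 = 265
N2O_PER_N = 44.01 / 28.02   # mass ratio of N2O to its nitrogen content

anthro_tg_n2o = TOTAL_TG_N * ANTHRO_FRACTION * N2O_PER_N   # ~10.7 Tg N2O/yr
co2e_gt = anthro_tg_n2o * GWP_100 / 1000                   # Tg -> Gt
print(round(co2e_gt, 1))  # ~2.8 Gt CO2-eq, i.e. "about 3 billion tonnes"
```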
Nitrous oxide may be used as an oxidiser in a rocket motor. Compared to other oxidisers, it is much less toxic and more stable at room temperature, making it easier to store and safer to carry on a flight. Its high density and low storage pressure (when maintained at low temperatures) make it highly competitive with stored high-pressure gas systems. [81] In a 1914 patent, the American rocket pioneer Robert Goddard suggested nitrous oxide and gasoline as possible propellants for a liquid-fuelled rocket. [82] Nitrous oxide has been the oxidiser of choice in several hybrid rocket designs (using solid fuel with a liquid or gaseous oxidiser). The combination of nitrous oxide with hydroxyl-terminated polybutadiene fuel has been used by SpaceShipOne and others. It is also notably used in amateur and high-power rocketry with various plastics as the fuel. Nitrous oxide may also be used as a monopropellant. In the presence of a heated catalyst at a temperature of 577 °C (1,071 °F), N2O decomposes exothermically into nitrogen and oxygen: [83]

2 N2O → 2 N2 + O2

Because of the large heat release, the catalytic action rapidly becomes secondary, as thermal autodecomposition becomes dominant. In a vacuum thruster, this may provide a monopropellant specific impulse (Isp) of up to 180 s. While noticeably less than the Isp available from hydrazine thrusters (monopropellant, or bipropellant with dinitrogen tetroxide), the decreased toxicity makes nitrous oxide a worthwhile option. The ignition of nitrous oxide depends critically on pressure: it deflagrates at approximately 600 °C (1,112 °F) at a pressure of 309 psi (21 atmospheres). [84] At 600 psi, the required ignition energy is only 6 joules, whereas at 130 psi even a 2,500-joule ignition energy input is insufficient. [85][86]
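The exothermicity driving this monopropellant behaviour can be estimated from standard enthalpies of formation; taking the textbook value ΔfH°(N2O) ≈ +82 kJ/mol (a figure assumed here, not quoted in this article) and zero for the elements:

{\displaystyle 2\,\mathrm{N_{2}O}\rightarrow 2\,\mathrm{N_{2}}+\mathrm{O_{2}},\qquad \Delta H^{\circ }\approx -2\times 82\ \mathrm{kJ}=-164\ \mathrm{kJ}}

so roughly 82 kJ is released per mole of N2O decomposed, which is what sustains the thermal autodecomposition described above.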
In vehicle racing, nitrous oxide (often called "nitrous") increases engine power by providing more oxygen during combustion, thus allowing the engine to burn more fuel. It is an oxidising agent roughly equivalent to hydrogen peroxide, and much stronger than molecular oxygen. Nitrous oxide is not flammable at low pressure and temperature, but at about 300 °C (572 °F) its breakdown delivers more oxygen than atmospheric air does. It is often mixed with another fuel that is easier to deflagrate. Nitrous oxide is stored as a compressed liquid; in an engine intake manifold, the evaporation and expansion of the liquid causes a large drop in intake charge temperature, resulting in a denser charge and allowing more air/fuel mixture to enter the cylinder. Sometimes nitrous oxide is injected into (or prior to) the intake manifold, whereas other systems inject it directly just before the cylinder (direct port injection). The technique was used during World War II by Luftwaffe aircraft with the GM-1 system to boost the power output of aircraft engines. Originally meant to provide standard Luftwaffe aircraft with superior high-altitude performance, technological considerations limited its use to extremely high altitudes. Accordingly, it was used only by specialised planes such as high-altitude reconnaissance aircraft, high-speed bombers and high-altitude interceptor aircraft. It could sometimes be found on Luftwaffe aircraft also fitted with another engine-boost system, MW 50, a form of water injection for aviation engines that used methanol for its boost capabilities. One of the major problems of nitrous oxide oxidant in a reciprocating engine is excessive power: if the mechanical structure of the engine is not properly reinforced, it may be severely damaged or destroyed. It is important with nitrous oxide augmentation of petrol engines to maintain proper and evenly spread operating temperatures and fuel levels to prevent pre-ignition (also called detonation or spark knock). [87] However, most problems associated with nitrous oxide come not from excessive power but from excessive pressure, since the gas builds up a much denser charge in the cylinder; the increased pressure and temperature can melt, crack, or warp the piston, valve, and cylinder head. Automotive-grade liquid nitrous oxide differs slightly from medical-grade: a small amount of sulfur dioxide (SO2) is added to prevent substance abuse. [88] The gas is approved for use as a food additive (E number: E942), specifically as an aerosol spray propellant. It is commonly used in aerosol whipped cream canisters and cooking sprays. The gas is extremely soluble in fatty compounds. In pressurised aerosol whipped cream, it is dissolved in the fatty cream until it leaves the can, whereupon it becomes gaseous and thus creates foam. This produces whipped cream with four times the volume of the liquid, whereas whipping air into cream only produces twice the volume. Unlike air, nitrous oxide inhibits rancidification of the butterfat. Carbon dioxide cannot be used for whipped cream because it is acidic in water, which would curdle the cream and give it a seltzer-like "sparkle". Extra-frothed whipped cream produced with nitrous oxide is unstable, however, and will return to liquid within half an hour to one hour. [89] Thus, it is not suitable for decorating food that will not be served immediately. In December 2016, there was a shortage of aerosol whipped cream in the United States, with canned whipped cream use at its peak during the Christmas and holiday season, due to an explosion at the Air Liquide nitrous oxide facility in Florida in late August. The company prioritised the remaining supply of nitrous oxide for medical customers rather than for food manufacturing. [90] Cooking spray, made from various oils with a lecithin emulsifier, may also use nitrous oxide as a propellant, or alternatively food-grade alcohol or propane. Nitrous oxide has been used in dentistry and surgery, as an anaesthetic and analgesic, since 1844. [20] In the early days, the gas was administered through simple inhalers consisting of a breathing bag made of rubber cloth. [28] Today, the gas is administered in hospitals by means of an automated relative analgesia machine, with an anaesthetic vaporiser and a medical ventilator, that delivers a precisely dosed and breath-actuated flow of nitrous oxide mixed with oxygen in a 2:1 ratio. Nitrous oxide is a weak general anaesthetic, and so is generally not used alone in general anaesthesia, but rather as a carrier gas (mixed with oxygen) for more powerful general anaesthetic drugs such as sevoflurane or desflurane. It has a minimum alveolar concentration of 105% and a blood/gas partition coefficient of 0.46. The use of nitrous oxide in anaesthesia can increase the risk of postoperative nausea and vomiting. [91][92][93] Dentists use a simpler machine that delivers only an N2O/O2 mixture for the patient to inhale while conscious; it must still be a recognised, purpose-designed relative analgesia flowmeter that guarantees a minimum of 30% oxygen at all times and an upper limit of 70% nitrous oxide (a constraint sketched in code below). The patient is kept conscious throughout the procedure and retains adequate mental faculties to respond to questions and instructions from the dentist. [94] Inhalation of nitrous oxide is used frequently to relieve pain associated with childbirth, trauma, oral surgery and acute coronary syndrome (including heart attacks). Its use during labour has been shown to be a safe and effective aid for birthing women. [95] Its use for acute coronary syndrome is of unknown benefit. [96] In Canada and the UK, Entonox and Nitronox are commonly used by ambulance crews (including unregistered practitioners) as a rapid and highly effective analgesic gas. Fifty percent nitrous oxide can be considered for use by trained non-professional first aid responders in prehospital settings, given the relative ease and safety of administering 50% nitrous oxide as an analgesic; the rapid reversibility of its effect would also prevent it from precluding diagnosis. [97]
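As an illustration of the dental delivery limits mentioned above, a mixture guard might look like the following; the function and its name are hypothetical, not taken from any real device or standard beyond the 30%/70% figures quoted in the text:

```python
# Guard for a dental N2O/O2 relative analgesia mixture:
# oxygen must never fall below 30% and nitrous oxide never exceed 70%.
def mixture_is_allowed(o2_fraction: float, n2o_fraction: float) -> bool:
    """Return True if the gas mixture respects the dental delivery limits."""
    if abs(o2_fraction + n2o_fraction - 1.0) > 1e-9:
        raise ValueError("fractions must sum to 1")
    return o2_fraction >= 0.30 and n2o_fraction <= 0.70

print(mixture_is_allowed(0.30, 0.70))  # True  (limiting case)
print(mixture_is_allowed(0.25, 0.75))  # False (too little oxygen)
```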
Recreational inhalation of nitrous oxide, to induce euphoria and slight hallucinations, began with the British upper class in 1799 in gatherings known as "laughing gas parties". [98] From the 19th century, the widespread availability of the gas for medical and culinary purposes allowed recreational use to expand greatly around the world. In the UK as of 2014, nitrous oxide was estimated to be used by almost half a million young people at nightspots, festivals and parties. [99] Widespread recreational use of the drug throughout the UK was featured in the 2017 Vice documentary Inside The Laughing Gas Black Market, in which journalist Matt Shea met with dealers of the drug who stole it from hospitals. [100] A significant issue cited in London's press is the effect of nitrous oxide canister littering, which is highly visible and causes significant complaints from communities. [101] Prior to 8 November 2023 in the UK, nitrous oxide was subject to the Psychoactive Substances Act 2016, making it illegal to produce, supply, import or export for recreational use. The updated law prohibited possession of nitrous oxide, classifying it as a Class C drug under the Misuse of Drugs Act 1971. [102] While nitrous oxide is understood by most recreational users to give a "safe high", many are unaware that excessive consumption may cause neurological harm which, if left untreated, can become permanent. [103] In Australia, recreational use became a public health concern following a rise in reports of neurotoxicity and emergency room admissions, and in the state of South Australia legislation was passed in 2020 to restrict canister sales. [104] In 2024, under the street name "Galaxy Gas", nitrous oxide surged in popularity among young people for recreational use, fostered largely through TikTok. [105] Nitrous oxide is a significant occupational hazard for surgeons, dentists and nurses. Because the gas is minimally metabolised in humans (at a rate of 0.004%), it retains its potency when exhaled into the room by the patient and can intoxicate the clinic staff if the room is poorly ventilated, creating a risk of chronic exposure. A continuous-flow fresh-air ventilation system or an N2O scavenger system may be needed to prevent waste-gas buildup. [citation needed] The National Institute for Occupational Safety and Health recommends that workers' exposure to nitrous oxide be controlled during the administration of anaesthetic gas in medical, dental and veterinary operatories. [106] It set a recommended exposure limit (REL) of 25 ppm (46 mg/m3) for escaped anaesthetic. [107] Exposure to nitrous oxide causes short-term impairment of cognition, audiovisual acuity and manual dexterity, as well as spatial and temporal disorientation, [108] putting the user at risk of accidental injury. [38] Nitrous oxide is neurotoxic, and medium- or long-term habitual consumption of significant quantities can cause neurological harm with the potential for permanent damage if left untreated. [104][103] It is believed that, like other NMDA receptor antagonists, N2O produces Olney's lesions in rodents upon prolonged (several-hour) exposure. [109][110][111][112] However, because it is normally expelled from the body rapidly, it is less likely to be neurotoxic than other NMDAR antagonists. [113]
In rodents, short-term exposure results in only mild injury that is rapidly reversible, and neuronal death occurs only after constant and sustained exposure. [109] Nitrous oxide may also cause neurotoxicity after extended exposure because of hypoxia. This is especially true of non-medical formulations such as whipped-cream chargers ("whippits" or "nangs"), [114] which contain no oxygen gas. [115] In reports to poison control centers, heavy users (≥400 g or ≥200 L of N2O gas in one session) or frequent users (regular, i.e., daily or weekly, use) have developed signs of peripheral neuropathy: ataxia (gait abnormalities) or paresthesia (perception of sensations such as tingling, numbness or prickling, mostly in the extremities). Such early signs of neurological damage indicate chronic toxicity. [116] Nitrous oxide might have therapeutic use in treating stroke: in a rodent model, nitrous oxide at 75% by volume reduced ischemia-induced neuronal death caused by occlusion of the middle cerebral artery, and decreased NMDA-induced Ca2+ influx in neuronal cell cultures, a cause of excitotoxicity. [113] Occupational exposure to ambient nitrous oxide has been associated with DNA damage, due to interruptions in DNA synthesis. [117] This correlation is dose-dependent [118][119] and does not appear to extend to casual recreational use; however, further research is needed to confirm the level of exposure needed to cause damage. Inhalation of pure nitrous oxide causes oxygen deprivation, resulting in low blood pressure, fainting and even heart attacks. This can occur if the user inhales large quantities continuously, as with a strap-on mask connected to a gas canister, or with prolonged breath-holding. [citation needed] Long-term exposure to nitrous oxide may cause vitamin B12 deficiency, and it can cause serious neurotoxicity if the user has a preexisting vitamin B12 deficiency. [120] It inactivates the cobalamin form of vitamin B12 by oxidation. Symptoms of vitamin B12 deficiency, including sensory neuropathy, myelopathy and encephalopathy, may occur within days or weeks of exposure to nitrous oxide anaesthesia in people with subclinical vitamin B12 deficiency. Symptoms are treated with high doses of vitamin B12, but recovery can be slow and incomplete. [121] People with normal vitamin B12 levels have stores sufficient to make the effects of nitrous oxide insignificant, unless exposure is repeated and prolonged (nitrous oxide abuse). Vitamin B12 levels should be checked in people with risk factors for vitamin B12 deficiency prior to the use of nitrous oxide anaesthesia. [122] Several experimental studies in rats indicate that chronic exposure of pregnant females to nitrous oxide may have adverse effects on the developing fetus. [123][124][125] At room temperature (20 °C [68 °F]) the saturated vapour pressure is 50.525 bar, rising to 72.45 bar at 36.4 °C (97.5 °F), the critical temperature. The pressure curve is thus unusually sensitive to temperature. [126]
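The heavy-use threshold quoted above (≥400 g or ≥200 L in one session) is self-consistent: with a molar mass of 44.01 g/mol and an ideal-gas molar volume of 22.4 L/mol at 0 °C and 1 atm,

{\displaystyle {\frac {400\ \mathrm{g}}{44.01\ \mathrm{g\,mol^{-1}}}}\approx 9.1\ \mathrm{mol},\qquad 9.1\ \mathrm{mol}\times 22.4\ \mathrm{L\,mol^{-1}}\approx 204\ \mathrm{L},}

or closer to 220 L at room temperature, so the two figures describe essentially the same dose.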
As with many strong oxidisers, contamination of parts with fuels has been implicated in rocketry accidents, where small quantities of nitrous/fuel mixtures explode due to "water hammer"-like effects (sometimes called "dieseling": heating due to adiabatic compression of gases can reach decomposition temperatures). [127] Some common building materials such as stainless steel and aluminium can act as fuels with strong oxidisers such as nitrous oxide, as can contaminants that may ignite due to adiabatic compression. [128] There have also been incidents where nitrous oxide decomposition in plumbing has led to the explosion of large tanks. [84] Global accounting of N2O sources and sinks over the decade ending 2016 indicates that about 40% of the average 17 TgN/yr (teragrams, or million metric tons, of nitrogen per year) of emissions originated from human activity, and shows that emissions growth came chiefly from expanding agriculture. [11][12] Nitrous oxide has significant global warming potential as a greenhouse gas. Per unit mass, considered over a 100-year period, nitrous oxide has 265 times the atmospheric heat-trapping ability of carbon dioxide (CO2). [60] However, because of its low concentration (less than 1/1,000 of that of CO2), its contribution to the greenhouse effect is less than one third that of carbon dioxide, and also less than that of methane. [129] On the other hand, since about 40% of the N2O entering the atmosphere is the result of human activity, [68] control of nitrous oxide is part of efforts to curb greenhouse gas emissions. [130] Most human-caused nitrous oxide released into the atmosphere is a greenhouse gas emission from agriculture, arising when farmers add nitrogen-based fertilisers to their fields and through the breakdown of animal manure. Reduction of these emissions is an important topic in the politics of climate change. [131] Nitrous oxide is also released as a by-product of burning fossil fuel, though the amount released depends on which fuel is used. It is also emitted through the manufacture of nitric acid, which is used in the synthesis of nitrogen fertilisers. The production of adipic acid, a precursor to nylon and other synthetic clothing fibres, also releases nitrous oxide. [132] A rise in atmospheric nitrous oxide concentrations has been implicated as a possible contributor to the extremely intense global warming during the Cenomanian–Turonian boundary event. [133] Nitrous oxide has also been implicated in thinning the ozone layer: a 2009 study suggested that N2O emission was the single most important ozone-depleting emission and was expected to remain the largest throughout the 21st century. [10][134] In India, transfer of nitrous oxide from bulk cylinders to smaller, more transportable E-type, 1,590-litre-capacity tanks [135] is legal when intended for medical anaesthesia. The New Zealand Ministry of Health has warned that nitrous oxide is a prescription medicine whose sale or possession without a prescription is an offence under the Medicines Act. [136] This would seemingly prohibit all non-medicinal uses of nitrous oxide, although it is implied that only recreational use will be targeted. In August 2015, the Council of the London Borough of Lambeth (UK) banned the use of the drug for recreational purposes, making offenders liable to an on-the-spot fine of up to £1,000. [137] In September 2023, the UK Government announced that nitrous oxide would be made illegal by the end of the year, with possession potentially carrying up to a two-year prison sentence or an unlimited fine. [138] Possession of nitrous oxide is legal under United States federal law and is not subject to DEA purview. [139]
It is, however, regulated by the Food and Drug Administration under the Food, Drug, and Cosmetic Act; prosecution is possible under its "misbranding" clauses, which prohibit the sale or distribution of nitrous oxide for the purpose of human consumption without a proper medical license. Many states have laws regulating the possession, sale and distribution of nitrous oxide. Such laws usually ban distribution to minors or limit the amount that may be sold without a special license. [citation needed] For example, in California, possession for recreational use is prohibited and qualifies as a misdemeanor. [140]
https://en.wikipedia.org/wiki/Nitrous_oxide
Nitrous oxide (dinitrogen oxide or dinitrogen monoxide), commonly known as laughing gas , nitrous , or factitious air , among others, [ 4 ] is a chemical compound , an oxide of nitrogen with the formula N 2 O . At room temperature, it is a colourless non-flammable gas , and has a slightly sweet scent and taste. [ 4 ] At elevated temperatures, nitrous oxide is a powerful oxidiser similar to molecular oxygen. [ 4 ] Nitrous oxide has significant medical uses , especially in surgery and dentistry , for its anaesthetic and pain-reducing effects, [ 5 ] and it is on the World Health Organization's List of Essential Medicines . [ 6 ] Its colloquial name, "laughing gas", coined by Humphry Davy , describes the euphoric effects upon inhaling it, which cause it to be used as a recreational drug inducing a brief " high ". [ 5 ] [ 7 ] When abused chronically, it may cause neurological damage through inactivation of vitamin B 12 . It is also used as an oxidiser in rocket propellants and motor racing fuels, and as a frothing gas for whipped cream. Nitrous oxide is also an atmospheric pollutant , with a concentration of 333 parts per billion (ppb) in 2020, increasing at 1 ppb annually. [ 8 ] [ 9 ] It is a major scavenger of stratospheric ozone , with an impact comparable to that of CFCs . [ 10 ] About 40% of human-caused emissions are from agriculture , [ 11 ] [ 12 ] as nitrogen fertilisers are digested into nitrous oxide by soil micro-organisms. [ 13 ] As the third most important greenhouse gas , nitrous oxide substantially contributes to global warming . [ 14 ] [ 15 ] Reduction of emissions is an important goal in the politics of climate change . [ 16 ] The gas was first synthesised in 1772 by English natural philosopher and chemist Joseph Priestley who called it dephlogisticated nitrous air (see phlogiston theory ) [ 17 ] or inflammable nitrous air . [ 18 ] Priestley published his discovery in the book Experiments and Observations on Different Kinds of Air (1775) , where he described how to produce the preparation of "nitrous air diminished", by heating iron filings dampened with nitric acid . [ 19 ] The first important use of nitrous oxide was made possible by Thomas Beddoes and James Watt , who worked together to publish the book Considerations on the Medical Use and on the Production of Factitious Airs (1794) . This book was important for two reasons. First, James Watt had invented a novel machine to produce " factitious airs " (including nitrous oxide) and a novel "breathing apparatus" to inhale the gas. Second, the book also presented the new medical theories by Thomas Beddoes, that tuberculosis and other lung diseases could be treated by inhalation of "Factitious Airs". [ 20 ] The machine to produce "Factitious Airs" had three parts: a furnace to burn the needed material, a vessel with water where the produced gas passed through in a spiral pipe (for impurities to be "washed off"), and finally the gas cylinder with a gasometer where the gas produced, "air", could be tapped into portable air bags (made of airtight oily silk). The breathing apparatus consisted of one of the portable air bags connected with a tube to a mouthpiece. With this new equipment being engineered and produced by 1794, the way was paved for clinical trials , [ clarification needed ] which began in 1798 when Thomas Beddoes established the " Pneumatic Institution for Relieving Diseases by Medical Airs" in Hotwells ( Bristol ). 
In the basement of the building, a large-scale machine was producing the gases under the supervision of a young Humphry Davy, who was encouraged to experiment with new gases for patients to inhale. [ 20 ] The first important work of Davy was examination of the nitrous oxide, and the publication of his results in the book: Researches, Chemical and Philosophical (1800) . In that publication, Davy notes the analgesic effect of nitrous oxide at page 465 and its potential to be used for surgical operations at page 556. [ 21 ] Davy coined the name "laughing gas" for nitrous oxide. [ 22 ] Despite Davy's discovery that inhalation of nitrous oxide could relieve a conscious person from pain, another 44 years elapsed before doctors attempted to use it for anaesthesia . The use of nitrous oxide as a recreational drug at "laughing gas parties", primarily arranged for the British upper class , became an immediate success beginning in 1799. While the effects of the gas generally make the user appear stuporous, dreamy and sedated, some people also "get the giggles" in a state of euphoria, and frequently erupt in laughter. [ 23 ] One of the earliest commercial producers in the U.S. was George Poe , cousin of the poet Edgar Allan Poe , who also was the first to liquefy the gas. [ 24 ] The first time nitrous oxide was used as an anaesthetic drug in the treatment of a patient was when dentist Horace Wells , with assistance by Gardner Quincy Colton and John Mankey Riggs , demonstrated insensitivity to pain from a dental extraction on 11 December 1844. [ 25 ] In the following weeks, Wells treated the first 12 to 15 patients with nitrous oxide in Hartford, Connecticut , and, according to his own record, only failed in two cases. [ 26 ] In spite of these convincing results having been reported by Wells to the medical society in Boston in December 1844, this new method was not immediately adopted by other dentists. The reason for this was most likely that Wells, in January 1845 at his first public demonstration to the medical faculty in Boston, had been partly unsuccessful, leaving his colleagues doubtful regarding its efficacy and safety. [ 27 ] The method did not come into general use until 1863, when Gardner Quincy Colton successfully started to use it in all his "Colton Dental Association" clinics, that he had just established in New Haven and New York City . [ 20 ] Over the following three years, Colton and his associates successfully administered nitrous oxide to more than 25,000 patients. [ 28 ] Today, nitrous oxide is used in dentistry as an anxiolytic , as an adjunct to local anaesthetic . Nitrous oxide was not found to be a strong enough anaesthetic for use in major surgery in hospital settings, however. Instead, diethyl ether , being a stronger and more potent anaesthetic, was demonstrated and accepted for use in October 1846, along with chloroform in 1847. [ 20 ] When Joseph Thomas Clover invented the "gas-ether inhaler" in 1876, however, it became a common practice at hospitals to initiate all anaesthetic treatments with a mild flow of nitrous oxide, and then gradually increase the anaesthesia with the stronger ether or chloroform. Clover's gas-ether inhaler was designed to supply the patient with nitrous oxide and ether at the same time, with the exact mixture being controlled by the operator of the device. It remained in use by many hospitals until the 1930s. 
[ 28 ] Although hospitals today use a more advanced anaesthetic machine , these machines still use the same principle launched with Clover's gas-ether inhaler, to initiate the anaesthesia with nitrous oxide, before the administration of a more powerful anaesthetic. Colton's popularisation of nitrous oxide led to its adoption by a number of less than reputable quacksalvers , who touted it as a cure for consumption , scrofula , catarrh and other diseases of the blood, throat and lungs. Nitrous oxide treatment was administered and licensed as a patent medicine by the likes of C. L. Blood and Jerome Harris in Boston and Charles E. Barney of Chicago. [ 29 ] [ 30 ] Nitrous oxide is a colourless gas with a faint, sweet odour. Nitrous oxide supports combustion by releasing the dipolar bonded oxygen radical, and can thus relight a glowing splint . N 2 O is inert at room temperature and has few reactions. At elevated temperatures, its reactivity increases. For example, nitrous oxide reacts with NaNH 2 at 187 °C (369 °F) to give NaN 3 : This reaction is the route adopted by the commercial chemical industry to produce azide salts, which are used as detonators. [ 31 ] The pharmacological mechanism of action of inhaled N 2 O is not fully known. However, it has been shown to directly modulate a broad range of ligand-gated ion channels , which likely plays a major role. It moderately blocks NMDAR and β 2 -subunit -containing nACh channels , weakly inhibits AMPA , kainate , GABA C and 5-HT 3 receptors , and slightly potentiates GABA A and glycine receptors . [ 32 ] [ 33 ] It also has been shown to activate two-pore-domain K + channels . [ 34 ] While N 2 O affects several ion channels, its anaesthetic, hallucinogenic and euphoriant effects are likely caused mainly via inhibition of NMDA receptor-mediated currents. [ 32 ] [ 35 ] In addition to its effects on ion channels, N 2 O may act similarly to nitric oxide (NO) in the central nervous system. [ 35 ] Nitrous oxide is 30 to 40 times more soluble than nitrogen. The effects of inhaling sub-anaesthetic doses of nitrous oxide may vary unpredictably with settings and individual differences; [ 36 ] [ 37 ] however, Jay (2008) [ 38 ] suggests that it reliably induces the following states and sensations: A minority of users also experience uncontrolled vocalisations and muscular spasms. These effects generally disappear minutes after removal of the nitrous oxide source. [ 38 ] In behavioural tests of anxiety , a low dose of N 2 O is an effective anxiolytic . This anti-anxiety effect is associated with enhanced activity of GABA A receptors, as it is partially reversed by benzodiazepine receptor antagonists . Mirroring this, animals that have developed tolerance to the anxiolytic effects of benzodiazepines are partially tolerant to N 2 O . [ 39 ] Indeed, in humans given 30% N 2 O , benzodiazepine receptor antagonists reduced the subjective reports of feeling "high", but did not alter psychomotor performance. [ 40 ] [ 41 ] The analgesic effects of N 2 O are linked to the interaction between the endogenous opioid system and the descending noradrenergic system. When animals are given morphine chronically, they develop tolerance to its pain-killing effects, and this also renders the animals tolerant to the analgesic effects of N 2 O . [ 42 ] Administration of antibodies that bind and block the activity of some endogenous opioids (not β-endorphin ) also block the antinociceptive effects of N 2 O . 
[ 43 ] Drugs that inhibit the breakdown of endogenous opioids also potentiate the antinociceptive effects of N 2 O . [ 43 ] Several experiments have shown that opioid receptor antagonists applied directly to the brain block the antinociceptive effects of N 2 O , but these drugs have no effect when injected into the spinal cord . Apart from an indirect action, nitrous oxide, like morphine [ 44 ] also interacts directly with the endogenous opioid system by binding at opioid receptor binding sites. [ 45 ] [ 46 ] Conversely, α 2 -adrenoceptor antagonists block the pain-reducing effects of N 2 O when given directly to the spinal cord, but not when applied directly to the brain. [ 47 ] Indeed, α 2B -adrenoceptor knockout mice or animals depleted in norepinephrine are nearly completely resistant to the antinociceptive effects of N 2 O . [ 48 ] Apparently N 2 O -induced release of endogenous opioids causes disinhibition of brainstem noradrenergic neurons, which release norepinephrine into the spinal cord and inhibit pain signalling. [ 49 ] Exactly how N 2 O causes the release of endogenous opioid peptides remains uncertain. Various methods of producing nitrous oxide are used. [ 50 ] Nitrous oxide is prepared on an industrial scale by carefully heating ammonium nitrate [ 50 ] at about 250 °C, which decomposes into nitrous oxide and water vapour. [ 51 ] The addition of various phosphate salts favours formation of a purer gas at slightly lower temperatures. This reaction may be difficult to control, resulting in detonation . [ 52 ] The decomposition of ammonium nitrate is also a common laboratory method for preparing the gas. Equivalently, it can be obtained by heating a mixture of sodium nitrate and ammonium sulfate : [ 53 ] Another method involves the reaction of urea, nitric acid and sulfuric acid: [ 54 ] Direct oxidation of ammonia with a manganese dioxide - bismuth oxide catalyst has been reported: [ 55 ] cf. Ostwald process . Hydroxylammonium chloride reacts with sodium nitrite to give nitrous oxide. If the nitrite is added to the hydroxylamine solution, the only remaining by-product is salt water. If the hydroxylamine solution is added to the nitrite solution (nitrite is in excess), however, then toxic higher oxides of nitrogen also are formed: Treating HNO 3 with SnCl 2 and HCl also has been demonstrated: Hyponitrous acid decomposes to N 2 O and water with a half-life of 16 days at 25 °C at pH 1–3. [ 56 ] Nitrous oxide is a minor component of Earth's atmosphere and is an active part of the planetary nitrogen cycle . Based on analysis of air samples gathered from sites around the world, its concentration surpassed 330 ppb in 2017. [ 8 ] The growth rate of about 1 ppb per year has also accelerated during recent decades. [ 9 ] Nitrous oxide's atmospheric abundance has grown more than 20% from a base level of about 270 ppb in 1750. [ 58 ] Important atmospheric properties of N 2 O are summarized in the following table: In 2022 the IPCC reported that: "The human perturbation of the natural nitrogen cycle through the use of synthetic fertilizers and manure, as well as nitrogen deposition resulting from land-based agriculture and fossil fuel burning has been the largest driver of the increase in atmospheric N2O of 31.0 ± 0.5 ppb (10%) between 1980 and 2019." [ 61 ] 17.0 (12.2 to 23.5) million tonnes total annual average nitrogen in N 2 O was emitted in 2007–2016. [ 61 ] About 40% of N 2 O emissions are from humans and the rest are part of the natural nitrogen cycle . 
[ 62 ] The N 2 O emitted each year by humans has a greenhouse effect equivalent to about 3 billion tonnes of carbon dioxide: for comparison humans emitted 37 billion tonnes of actual carbon dioxide in 2019, and methane equivalent to 9 billion tonnes of carbon dioxide. [ 63 ] Most of the N 2 O emitted into the atmosphere, from natural and anthropogenic sources, is produced by microorganisms such as denitrifying bacteria and fungi in soils and oceans. [ 64 ] Soils under natural vegetation are an important source of nitrous oxide, accounting for 60% of all naturally produced emissions. Other natural sources include the oceans (35%) and atmospheric chemical reactions (5%). [ 65 ] Wetlands can also be emitters of nitrous oxide . [ 66 ] [ 67 ] Emissions from thawing permafrost may be significant, but as of 2022 this is not certain. [ 61 ] The main components of anthropogenic emissions are fertilised agricultural soils and livestock manure (42%), runoff and leaching of fertilisers (25%), biomass burning (10%), fossil fuel combustion and industrial processes (10%), biological degradation of other nitrogen-containing atmospheric emissions (9%) and human sewage (5%). [ 68 ] [ 69 ] [ 70 ] [ 71 ] [ 72 ] Agriculture enhances nitrous oxide production through soil cultivation, the use of nitrogen fertilisers and animal waste handling. [ 73 ] These activities stimulate naturally occurring bacteria to produce more nitrous oxide. Nitrous oxide emissions from soil can be challenging to measure as they vary markedly over time and space, [ 74 ] and the majority of a year's emissions may occur when conditions are favorable during "hot moments" [ 75 ] [ 76 ] and/or at favorable locations known as "hotspots". [ 77 ] Among industrial emissions, the production of nitric acid and adipic acid are the largest sources of nitrous oxide emissions. The adipic acid emissions specifically arise from the degradation of the nitrolic acid intermediate derived from the nitration of cyclohexanone . [ 68 ] [ 78 ] [ 79 ] Microbial processes that generate nitrous oxide may be classified as nitrification and denitrification . Specifically, they include: These processes are affected by soil chemical and physical properties such as the availability of mineral nitrogen and organic matter , acidity and soil type, as well as climate-related factors such as soil temperature and water content. The emission of the gas to the atmosphere is limited greatly by its consumption inside the cells, by a process catalysed by the enzyme nitrous oxide reductase . [ 80 ] Nitrous oxide may be used as an oxidiser in a rocket motor. Compared to other oxidisers, it is much less toxic and more stable at room temperature, making it easier to store and safer to carry on a flight. Its high density and low storage pressure (when maintained at low temperatures) make it highly competitive with stored high-pressure gas systems. [ 81 ] In a 1914 patent, American rocket pioneer Robert Goddard suggested nitrous oxide and gasoline as possible propellants for a liquid-fuelled rocket. [ 82 ] Nitrous oxide has been the oxidiser of choice in several hybrid rocket designs (using solid fuel with a liquid or gaseous oxidiser). The combination of nitrous oxide with hydroxyl-terminated polybutadiene fuel has been used by SpaceShipOne and others. It also is notably used in amateur and high power rocketry with various plastics as the fuel. Nitrous oxide may also be used as a monopropellant . 
In the presence of a heated catalyst at a temperature of 577 °C (1,071 °F), N 2 O decomposes exothermically into nitrogen and oxygen (2 N 2 O → 2 N 2 + O 2 ). [ 83 ] Because of the large heat release, the catalytic action rapidly becomes secondary, as thermal autodecomposition becomes dominant. In a vacuum thruster, this may provide a monopropellant specific impulse ( I sp ) of up to 180 s. While noticeably less than the I sp available from hydrazine thrusters (monopropellant, or bipropellant with dinitrogen tetroxide ), the decreased toxicity makes nitrous oxide a worthwhile option. The ignition of nitrous oxide depends critically on pressure. It deflagrates at approximately 600 °C (1,112 °F) at a pressure of 309 psi (21 atmospheres). [ 84 ] At 600 psi, the required ignition energy is only 6 joules, whereas at 130 psi a 2,500-joule ignition energy input is insufficient. [ 85 ] [ 86 ] In vehicle racing, nitrous oxide (often called " nitrous ") increases engine power by providing more oxygen during combustion, thus allowing the engine to burn more fuel. It is an oxidising agent roughly equivalent to hydrogen peroxide, and much stronger than molecular oxygen. Nitrous oxide is not flammable at low pressure/temperature, but at about 300 °C (572 °F), its breakdown delivers more oxygen than atmospheric air. It often is mixed with another fuel that is easier to deflagrate. Nitrous oxide is stored as a compressed liquid. In an engine intake manifold, the evaporation and expansion of the liquid causes a large drop in intake charge temperature, resulting in a denser charge and allowing more air/fuel mixture to enter the cylinder. Sometimes nitrous oxide is injected into (or prior to) the intake manifold, whereas other systems inject it directly just before the cylinder (direct port injection). The technique was used during World War II by Luftwaffe aircraft with the GM-1 system to boost the power output of aircraft engines. Originally meant to provide standard Luftwaffe aircraft with superior high-altitude performance, technological considerations limited its use to extremely high altitudes. Accordingly, it was only used by specialised planes such as high-altitude reconnaissance aircraft, high-speed bombers and high-altitude interceptor aircraft. It sometimes could be found on Luftwaffe aircraft also fitted with another engine-boost system, MW 50, a form of water injection for aviation engines that used methanol for its boost capabilities. One of the major problems of using nitrous oxide as an oxidant in a reciprocating engine is excessive power: if the mechanical structure of the engine is not properly reinforced, it may be severely damaged or destroyed. With nitrous oxide augmentation of petrol engines, it is important to maintain proper, evenly spread operating temperatures and fuel levels to prevent pre-ignition (also called detonation or spark knock). [ 87 ] However, most problems associated with nitrous oxide come not from excessive power but from excessive pressure, since the gas builds up a much denser charge in the cylinder. The increased pressure and temperature can melt, crack, or warp the piston, valve, and cylinder head. Automotive-grade liquid nitrous oxide differs slightly from medical-grade nitrous oxide: a small amount of sulfur dioxide ( SO 2 ) is added to prevent substance abuse. [ 88 ] The gas is approved for use as a food additive ( E number : E942), specifically as an aerosol spray propellant. It is commonly used in aerosol whipped cream canisters and cooking sprays. The gas is extremely soluble in fatty compounds.
In pressurised aerosol whipped cream, it is dissolved in the fatty cream until it leaves the can, when it becomes gaseous and thus creates foam. This produces whipped cream four times the volume of the liquid, whereas whipping air into cream only produces twice the volume. Unlike air, nitrous oxide inhibits rancidification of the butterfat. Carbon dioxide cannot be used for whipped cream because it is acidic in water, which would curdle the cream and give it a seltzer-like "sparkle". Extra-frothed whipped cream produced with nitrous oxide is unstable, however, and will return to liquid within half an hour to one hour. [ 89 ] Thus, it is not suitable for decorating food that will not be served immediately. In December 2016, when canned whipped cream use is at its peak during the Christmas and holiday season, there was a shortage of aerosol whipped cream in the United States due to an explosion at the Air Liquide nitrous oxide facility in Florida the previous August. The company prioritised the remaining supply of nitrous oxide for medical customers rather than food manufacturing. [ 90 ] Cooking spray, made from various oils with a lecithin emulsifier, may also use nitrous oxide as a propellant, or alternatively food-grade alcohol or propane. Nitrous oxide has been used in dentistry and surgery, as an anaesthetic and analgesic, since 1844. [ 20 ] In the early days, the gas was administered through simple inhalers consisting of a breathing bag made of rubber cloth. [ 28 ] Today, the gas is administered in hospitals by means of an automated relative analgesia machine, with an anaesthetic vaporiser and a medical ventilator, that delivers a precisely dosed and breath-actuated flow of nitrous oxide mixed with oxygen in a 2:1 ratio. Nitrous oxide is a weak general anaesthetic, and so is generally not used alone in general anaesthesia, but rather as a carrier gas (mixed with oxygen) for more powerful general anaesthetic drugs such as sevoflurane or desflurane. It has a minimum alveolar concentration of 105% and a blood/gas partition coefficient of 0.46. The use of nitrous oxide in anaesthesia can increase the risk of postoperative nausea and vomiting. [ 91 ] [ 92 ] [ 93 ] Dentists use a simpler machine that delivers only an N 2 O / O 2 mixture for the patient to inhale while conscious; it must still be a recognised, purpose-designed relative analgesia flowmeter that maintains a minimum of 30% oxygen at all times and an upper limit of 70% nitrous oxide. The patient is kept conscious throughout the procedure, and retains adequate mental faculties to respond to questions and instructions from the dentist. [ 94 ] Inhalation of nitrous oxide is used frequently to relieve pain associated with childbirth, trauma, oral surgery and acute coronary syndrome (including heart attacks). Its use during labour has been shown to be a safe and effective aid for birthing women. [ 95 ] Its use for acute coronary syndrome is of unknown benefit. [ 96 ] In Canada and the UK, Entonox and Nitronox are commonly used by ambulance crews (including unregistered practitioners) as a rapid and highly effective analgesic gas. Fifty percent nitrous oxide can be considered for use by trained non-professional first aid responders in prehospital settings, given the relative ease and safety of administering 50% nitrous oxide as an analgesic. Because its effect is rapidly reversible, its use would also not mask symptoms and so would not impede later diagnosis.
[ 97 ] Recreational inhalation of nitrous oxide, to induce euphoria and slight hallucinations, began with the British upper class in 1799 in gatherings known as "laughing gas parties". [ 98 ] From the 19th century, the widespread availability of the gas for medical and culinary purposes allowed recreational use to expand greatly around the world. In the UK as of 2014, nitrous oxide was estimated to be used by almost half a million young people at nightspots, festivals and parties. [ 99 ] Widespread recreational use of the drug throughout the UK was featured in the 2017 Vice documentary Inside The Laughing Gas Black Market, in which journalist Matt Shea met with dealers of the drug who stole it from hospitals. [ 100 ] A significant issue cited in London's press is the effect of nitrous oxide canister littering, which is highly visible and causes significant complaints from communities. [ 101 ] Prior to 8 November 2023 in the UK, nitrous oxide was subject to the Psychoactive Substances Act 2016, making it illegal to produce, supply, import or export nitrous oxide for recreational use. The updated law prohibited possession of nitrous oxide, classifying it as a Class C drug under the Misuse of Drugs Act 1971. [ 102 ] While nitrous oxide is understood by most recreational users to give a "safe high", many are unaware that excessive consumption may cause neurological harm which, if left untreated, can cause permanent neurological damage. [ 103 ] In Australia, recreational use became a public health concern following a rise in reports of neurotoxicity and emergency room admissions. In the state of South Australia, legislation was passed in 2020 to restrict canister sales. [ 104 ] In 2024, under the street name "Galaxy Gas", nitrous oxide surged in popularity among young people for recreational use, a rise fostered largely through TikTok. [ 105 ] Nitrous oxide is a significant occupational hazard for surgeons, dentists and nurses. Because the gas is minimally metabolised in humans (with a rate of 0.004%), it retains its potency when exhaled into the room by the patient, and can intoxicate the clinic staff if the room is poorly ventilated, with potential chronic exposure. A continuous-flow fresh-air ventilation system or N 2 O scavenger system may be needed to prevent waste-gas buildup. [ citation needed ] The National Institute for Occupational Safety and Health recommends that workers' exposure to nitrous oxide should be controlled during the administration of anaesthetic gas in medical, dental and veterinary operatories. [ 106 ] It set a recommended exposure limit (REL) of 25 ppm (46 mg/m 3 ) to escaped anaesthetic. [ 107 ] Exposure to nitrous oxide causes short-term impairment of cognition, audiovisual acuity, and manual dexterity, as well as spatial and temporal disorientation, [ 108 ] putting the user at risk of accidental injury. [ 38 ] Nitrous oxide is neurotoxic, and medium or long-term habitual consumption of significant quantities can cause neurological harm with the potential for permanent damage if left untreated. [ 104 ] [ 103 ] It is believed that, like other NMDA receptor antagonists, N 2 O produces Olney's lesions in rodents upon prolonged (several hour) exposure. [ 109 ] [ 110 ] [ 111 ] [ 112 ] However, because it is normally expelled from the body rapidly, it is less likely to be neurotoxic than other NMDAR antagonists. [ 113 ]
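The NIOSH recommended exposure limit quoted above is stated both in ppm and in mg/m 3 ; the two are related through the ideal-gas molar volume. A minimal conversion sketch (the 20 °C, 1 atm reference conditions are an assumption, since the exact convention is not stated here):

```python
# Convert a gas-phase exposure limit from ppm (by volume) to mg/m^3,
# reproducing the NIOSH REL for N2O quoted above: 25 ppm ~ 46 mg/m^3.

R = 8.314462     # J/(mol*K), gas constant
T = 293.15       # K (20 degC, assumed reference temperature)
P = 101325.0     # Pa (1 atm, assumed reference pressure)
MW_N2O = 44.013  # g/mol

molar_volume_l = R * T / P * 1000.0  # ~24.1 L/mol at 20 degC and 1 atm

def ppm_to_mg_per_m3(ppm: float, molar_mass_g: float) -> float:
    # 1 ppm = 1 mL of gas per m^3 of air, i.e. (molar_mass / molar_volume) mg/m^3
    return ppm * molar_mass_g / molar_volume_l

print(f"{ppm_to_mg_per_m3(25.0, MW_N2O):.0f} mg/m^3")  # -> 46 mg/m^3
```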
In rodents, short-term exposure results in only mild injury that is rapidly reversible, and neuronal death occurs only after constant and sustained exposure. [ 109 ] Nitrous oxide may also cause neurotoxicity after extended exposure because of hypoxia. This is especially true of non-medical formulations such as whipped-cream chargers ("whippits" or "nangs"), [ 114 ] which contain no oxygen gas. [ 115 ] In reports to poison control centers, heavy users (≥400 g or ≥200 L of N 2 O gas in one session) or frequent users (regular, i.e., daily or weekly) have developed signs of peripheral neuropathy : ataxia (gait abnormalities) or paresthesia (perception of sensations such as tingling, numbness, or prickling, mostly in the extremities). Such early signs of neurological damage indicate chronic toxicity. [ 116 ] Nitrous oxide might have therapeutic use in treating stroke. In a rodent model, nitrous oxide at 75% by volume reduced neuronal death induced by occlusion of the middle cerebral artery, and decreased NMDA-induced Ca 2+ influx in neuronal cell cultures, a cause of excitotoxicity. [ 113 ] Occupational exposure to ambient nitrous oxide has been associated with DNA damage, due to interruptions in DNA synthesis. [ 117 ] This correlation is dose-dependent [ 118 ] [ 119 ] and does not appear to extend to casual recreational use; however, further research is needed to confirm the level of exposure needed to cause damage. Inhalation of pure nitrous oxide causes oxygen deprivation, resulting in low blood pressure, fainting, and even heart attacks. This can occur if the user inhales large quantities continuously, as with a strap-on mask connected to a gas canister or other inhalation system, or with prolonged breath-holding. [ citation needed ] Long-term exposure to nitrous oxide may cause vitamin B 12 deficiency. This can cause serious neurotoxicity if the user has a preexisting vitamin B 12 deficiency. [ 120 ] It inactivates the cobalamin form of vitamin B 12 by oxidation. Symptoms of vitamin B 12 deficiency, including sensory neuropathy, myelopathy and encephalopathy, may occur within days or weeks of exposure to nitrous oxide anaesthesia in people with subclinical vitamin B 12 deficiency. Symptoms are treated with high doses of vitamin B 12, but recovery can be slow and incomplete. [ 121 ] People with normal vitamin B 12 levels have sufficient stores to render the effects of nitrous oxide insignificant, unless exposure is repeated and prolonged (nitrous oxide abuse). Vitamin B 12 levels should be checked in people with risk factors for vitamin B 12 deficiency prior to using nitrous oxide anaesthesia. [ 122 ] Several experimental studies in rats indicate that chronic exposure of pregnant females to nitrous oxide may have adverse effects on the developing fetus. [ 123 ] [ 124 ] [ 125 ] At room temperature (20 °C [68 °F]) the saturated vapour pressure is 50.525 bar, rising to 72.45 bar at 36.4 °C (97.5 °F), the critical temperature. The pressure curve is thus unusually sensitive to temperature. [ 126 ] As with many strong oxidisers, contamination of parts with fuels has been implicated in rocketry accidents, where small quantities of nitrous/fuel mixtures explode due to " water hammer "-like effects (sometimes called "dieseling": heating due to adiabatic compression of gases can reach decomposition temperatures). [ 127 ]
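The steepness of the vapour-pressure curve noted above can be illustrated with a two-point Clausius–Clapeyron fit through the quoted values (50.525 bar at 20 °C and 72.45 bar at the critical temperature of 36.4 °C). Treating the effective heat of vaporisation as constant, and using this simple relation so close to the critical point, are rough approximations, so the interpolated values are indicative only:

```python
import math

# Rough two-point Clausius-Clapeyron fit (illustrative) to the N2O vapour
# pressures quoted above: ln(P2/P1) = -(L/R) * (1/T2 - 1/T1).

T1, P1 = 293.15, 50.525  # 20.0 degC, bar (from the text)
T2, P2 = 309.55, 72.45   # 36.4 degC, bar (critical point, from the text)

L_OVER_R = -math.log(P2 / P1) / (1.0 / T2 - 1.0 / T1)  # effective L/R, ~1990 K

def vapour_pressure_bar(t_kelvin: float) -> float:
    return P1 * math.exp(-L_OVER_R * (1.0 / t_kelvin - 1.0 / T1))

for t_c in (20.0, 25.0, 30.0, 36.4):
    print(f"{t_c:5.1f} degC: ~{vapour_pressure_bar(t_c + 273.15):.1f} bar")
# Roughly 6-7 bar of extra pressure for every 5 degC of warming.
```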
Some common building materials such as stainless steel and aluminium can act as fuels with strong oxidisers such as nitrous oxide, as can contaminants that may ignite due to adiabatic compression. [ 128 ] There also have been incidents where nitrous oxide decomposition in plumbing has led to the explosion of large tanks. [ 84 ] Global accounting of N 2 O sources and sinks over the decade ending 2016 indicates that about 40% of the average 17 TgN/yr ( teragrams, or million metric tons, of nitrogen per year) of emissions originated from human activity, and shows that emissions growth chiefly came from expanding agriculture. [ 11 ] [ 12 ] Nitrous oxide has significant global warming potential as a greenhouse gas. On a per-molecule basis, considered over a 100-year period, nitrous oxide has 265 times the atmospheric heat-trapping ability of carbon dioxide ( CO 2 ). [ 60 ] However, because of its low concentration (less than 1/1,000 of that of CO 2 ), its contribution to the greenhouse effect is less than one third that of carbon dioxide, and also less than methane. [ 129 ] On the other hand, since about 40% of the N 2 O entering the atmosphere is the result of human activity, [ 68 ] control of nitrous oxide is part of efforts to curb greenhouse gas emissions. [ 130 ] Most human-caused nitrous oxide is released from agriculture, when farmers add nitrogen-based fertilisers to fields and through the breakdown of animal manure. Reduction of emissions is an important goal in the politics of climate change. [ 131 ] Nitrous oxide is also released as a by-product of burning fossil fuel, though the amount released depends on which fuel was used. It is also emitted through the manufacture of nitric acid, which is used in the synthesis of nitrogen fertilizers. The production of adipic acid, a precursor to nylon and other synthetic clothing fibres, also releases nitrous oxide. [ 132 ] A rise in atmospheric nitrous oxide concentrations has been implicated as a possible contributor to the extremely intense global warming during the Cenomanian-Turonian boundary event. [ 133 ] Nitrous oxide has also been implicated in thinning the ozone layer. A 2009 study suggested that N 2 O emission was the single most important ozone-depleting emission and that it was expected to remain the largest throughout the 21st century. [ 10 ] [ 134 ] In India, transfer of nitrous oxide from bulk cylinders to smaller, more transportable E-type, 1,590-litre-capacity tanks [ 135 ] is legal when intended for medical anaesthesia. The New Zealand Ministry of Health has warned that nitrous oxide is a prescription medicine whose sale or possession without a prescription is an offense under the Medicines Act. [ 136 ] This would seemingly prohibit all non-medicinal uses of nitrous oxide, although it is implied that only recreational use will be targeted. In August 2015, the Council of the London Borough of Lambeth ( UK ) banned the use of the drug for recreational purposes, making offenders liable to an on-the-spot fine of up to £1,000. [ 137 ] In September 2023, the UK Government announced that nitrous oxide would be made illegal by the end of the year, with possession potentially carrying up to a two-year prison sentence or an unlimited fine. [ 138 ] Possession of nitrous oxide is legal under United States federal law and is not subject to DEA purview.
[ 139 ] It is, however, regulated by the Food and Drug Administration under the Federal Food, Drug, and Cosmetic Act; prosecution is possible under its "misbranding" clauses, prohibiting the sale or distribution of nitrous oxide for the purpose of human consumption without a proper medical license. Many states have laws regulating the possession, sale and distribution of nitrous oxide. Such laws usually ban distribution to minors or limit the amount that may be sold without a special license. [ citation needed ] For example, in California, possession for recreational use is prohibited and qualifies as a misdemeanor. [ 140 ]
https://en.wikipedia.org/wiki/Nitrous_oxide
Nitric oxide ( nitrogen oxide or nitrogen monoxide [ 1 ] ) is a colorless gas with the formula NO. It is one of the principal oxides of nitrogen. Nitric oxide is a free radical : it has an unpaired electron, which is sometimes denoted by a dot in its chemical formula ( • N=O or • NO). Nitric oxide is also a heteronuclear diatomic molecule, a class of molecules whose study spawned early modern theories of chemical bonding. [ 6 ] An important intermediate in industrial chemistry, nitric oxide forms in combustion systems and can be generated by lightning in thunderstorms. In mammals, including humans, nitric oxide is a signaling molecule in many physiological and pathological processes. [ 7 ] It was proclaimed the " Molecule of the Year " in 1992. [ 8 ] The 1998 Nobel Prize in Physiology or Medicine was awarded for discovering nitric oxide's role as a cardiovascular signalling molecule. [ 9 ] Its impact extends beyond biology, with applications in medicine, such as the development of sildenafil (Viagra), and in industry, including semiconductor manufacturing. [ 10 ] [ 11 ] Nitric oxide should not be confused with nitrogen dioxide (NO 2 ), a brown gas and major air pollutant, or with nitrous oxide (N 2 O), an anesthetic gas. [ 6 ] Nitric oxide was first identified by Joseph Priestley in the late 18th century, originally seen as merely a toxic byproduct of combustion and an environmental pollutant. [ 12 ] Its biological significance was uncovered in the 1980s when researchers Robert F. Furchgott, Louis J. Ignarro and Ferid Murad discovered its critical role as a vasodilator in the cardiovascular system, a breakthrough that earned them the 1998 Nobel Prize in Physiology or Medicine. [ 13 ] The ground state electronic configuration of NO is, in united atom notation: [ 14 ] (1σ)² (2σ)² (3σ)² (4σ*)² (5σ)² (1π)⁴ (2π*)¹. The first two orbitals are actually the pure atomic 1s orbitals of oxygen and nitrogen respectively, and are therefore usually not noted in the united atom notation. Orbitals marked with an asterisk are antibonding. The ordering of 5σ and 1π according to their binding energies is subject to discussion. Removal of a 1π electron leads to six states whose energies span a range starting below that of a 5σ electron and extending above it; this is due to the different orbital angular momentum couplings between a 1π and a 2π electron. The lone electron in the 2π orbital makes NO a doublet (X ²Π) in its ground state, whose degeneracy is split into fine-structure levels by spin-orbit coupling, with total angular momentum J = 3⁄2 or J = 1⁄2. The dipole moment of NO has been measured experimentally as 0.15740 D and is oriented from O to N (⁻NO⁺) due to the transfer of negative electronic charge from oxygen to nitrogen. [ 15 ] Upon condensing to a neat liquid, nitric oxide dimerizes to colorless dinitrogen dioxide (O=N–N=O), but the association is weak and reversible. The N–N distance in crystalline NO is 218 pm, nearly twice the N–O distance. Condensation in a highly polar environment instead gives the red alternant isomer O=N–O + =N − . [ 6 ]
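From the configuration given above, the bond order follows from the standard molecular-orbital electron count (a textbook result, stated here for illustration): the 3σ, 5σ and 1π orbitals are bonding (8 electrons), while 4σ* and 2π* are antibonding (3 electrons), so

```latex
\text{bond order} = \frac{n_{\text{bonding}} - n_{\text{antibonding}}}{2} = \frac{8 - 3}{2} = 2.5
```

The fractional bond order of 2.5 reflects the single unpaired 2π* electron that gives the molecule its radical character.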
Since the formation of • NO from the elements is endothermic (its heat of formation is positive), NO can decompose to the elements. Catalytic converters in cars exploit this reaction: 2 NO → N 2 + O 2 . When exposed to oxygen, nitric oxide converts into nitrogen dioxide: 2 NO + O 2 → 2 NO 2 . This reaction is thought to occur via the intermediates ONOO • and the red compound ONOONO. [ 16 ] In water, nitric oxide reacts with oxygen to form nitrous acid (HNO 2 ). The reaction is thought to proceed via the following stoichiometry: 4 NO + O 2 + 2 H 2 O → 4 HNO 2 . Nitric oxide reacts with fluorine, chlorine, and bromine to form the nitrosyl halides, such as nitrosyl chloride: 2 NO + Cl 2 → 2 NOCl. With NO 2 , also a radical, NO combines to form the intensely blue dinitrogen trioxide: [ 6 ] NO + NO 2 ⇌ N 2 O 3 . Nitric oxide is rarely used in organic chemistry. Most reactions with it produce complex mixtures of salts, separable only through careful recrystallization. [ 17 ] The addition of a nitric oxide moiety to another molecule is often referred to as nitrosylation. The Traube reaction is the addition of two equivalents of nitric oxide onto an enolate, giving a diazeniumdiolate (also called a nitrosohydroxylamine ). [ 18 ] The product can undergo a subsequent retro- aldol reaction, giving an overall process similar to the haloform reaction. For example, nitric oxide reacts with acetone and an alkoxide to form a diazeniumdiolate on each α position, with subsequent loss of methyl acetate as a by-product. [ 19 ] This reaction, which was discovered around 1898, remains of interest in nitric oxide prodrug research. Nitric oxide can also react directly with sodium methoxide, ultimately forming sodium formate and nitrous oxide by way of an N -methoxydiazeniumdiolate. [ 20 ] Sufficiently basic secondary amines undergo a Traube-like reaction to give NONOates . [ 21 ] However, very few nucleophiles undergo the Traube reaction, either failing to add NO or immediately decomposing with release of nitrous oxide. [ 17 ] Nitric oxide reacts with transition metals to give complexes called metal nitrosyls. The most common bonding mode of nitric oxide is the terminal linear type (M−NO). [ 6 ] Alternatively, nitric oxide can serve as a one-electron pseudohalide. In such complexes, the M−N−O group is characterized by an angle between 120° and 140°. The NO group can also bridge between metal centers through the nitrogen atom in a variety of geometries. In commercial settings, nitric oxide is produced by the oxidation of ammonia at 750–900 °C (normally at 850 °C) with platinum as catalyst in the Ostwald process : 4 NH 3 + 5 O 2 → 4 NO + 6 H 2 O. The uncatalyzed endothermic reaction of oxygen (O 2 ) and nitrogen (N 2 ), N 2 + O 2 → 2 NO, which is effected at high temperature (>2000 °C) by lightning, has not been developed into a practical commercial synthesis (see Birkeland–Eyde process ). In the laboratory, nitric oxide is conveniently generated by reduction of dilute nitric acid with copper: 8 HNO 3 + 3 Cu → 3 Cu(NO 3 ) 2 + 4 H 2 O + 2 NO. An alternative route involves the reduction of nitrous acid in the form of sodium nitrite or potassium nitrite. The iron(II) sulfate route is simple and has been used in undergraduate laboratory experiments. So-called NONOate compounds are also used for nitric oxide generation, especially in biological laboratories. However, other Traube adducts may decompose to instead give nitrous oxide. [ 22 ] Nitric oxide concentration can be determined using a chemiluminescent reaction involving ozone. [ 23 ] A sample containing nitric oxide is mixed with a large quantity of ozone. The nitric oxide reacts with the ozone to produce oxygen and nitrogen dioxide, accompanied by the emission of light ( chemiluminescence ): NO + O 3 → NO 2 * + O 2 ; the excited NO 2 * then relaxes by emitting light (NO 2 * → NO 2 + hν), which can be measured with a photodetector. The amount of light produced is proportional to the amount of nitric oxide in the sample.
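Because the emitted light scales linearly with the amount of NO, a single standard of known concentration fixes the instrument's calibration line through zero. A minimal sketch of that bookkeeping (all numbers hypothetical, for illustration only):

```python
# Minimal sketch (illustrative) of quantifying NO by ozone chemiluminescence:
# detector signal is proportional to NO, so one known standard calibrates it.

def make_calibration(signal_standard: float, conc_standard_ppb: float):
    """Return a function mapping detector signal to NO concentration in ppb."""
    counts_per_ppb = signal_standard / conc_standard_ppb
    return lambda signal: signal / counts_per_ppb

# Hypothetical numbers: a 100 ppb NO standard reads 12,500 counts.
to_ppb = make_calibration(signal_standard=12_500.0, conc_standard_ppb=100.0)
print(f"{to_ppb(4_300.0):.1f} ppb")  # a sample reading 4,300 counts -> 34.4 ppb
```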
Other methods of testing include electroanalysis (the amperometric approach), where • NO reacts with an electrode to induce a current or voltage change. The detection of NO radicals in biological tissues is particularly difficult due to the short lifetime and low concentration of these radicals in tissues. One of the few practical methods is spin trapping of nitric oxide with iron- dithiocarbamate complexes and subsequent detection of the mono-nitrosyl-iron complex with electron paramagnetic resonance (EPR). [ 24 ] [ 25 ] A group of fluorescent dye indicators, which are also available in acetylated form for intracellular measurements, exists; the most common compound is 4,5-diaminofluorescein (DAF-2). [ 26 ] Nitric oxide reacts with the hydroperoxyl radical ( HO • 2 ) to form nitrogen dioxide (NO 2 ), which then can react with a hydroxyl radical ( • OH) to produce nitric acid (HNO 3 ): NO + HO • 2 → NO 2 + • OH, followed by NO 2 + • OH → HNO 3 . Nitric acid, along with sulfuric acid, contributes to acid rain deposition. • NO also participates in ozone layer depletion : nitric oxide reacts with stratospheric ozone to form O 2 and nitrogen dioxide, NO + O 3 → NO 2 + O 2 . This reaction is also utilized to measure concentrations of • NO in control volumes. As seen in the acid deposition section, nitric oxide can transform into nitrogen dioxide (this can happen with the hydroperoxyl radical, HO • 2 , or diatomic oxygen, O 2 ). Symptoms of short-term nitrogen dioxide exposure include nausea, dyspnea and headache. Long-term effects could include impaired immune and respiratory function. [ 27 ] NO is a gaseous signaling molecule. [ 28 ] It is a key vertebrate biological messenger, playing a role in a variety of biological processes. [ 29 ] It is a bioproduct in almost all types of organisms, including bacteria, plants, fungi, and animal cells. [ 30 ] Nitric oxide, an endothelium-derived relaxing factor (EDRF), is biosynthesized endogenously from L -arginine, oxygen, and NADPH by various nitric oxide synthase (NOS) enzymes. [ 31 ] Reduction of inorganic nitrate may also produce nitric oxide. [ 32 ] One of the main enzymatic targets of nitric oxide is guanylyl cyclase. [ 33 ] The binding of nitric oxide to the heme region of the enzyme leads to activation in the presence of iron. [ 33 ] Nitric oxide is highly reactive (having a lifetime of a few seconds), yet diffuses freely across membranes. These attributes make nitric oxide ideal as a transient paracrine (between adjacent cells) and autocrine (within a single cell) signaling molecule. [ 32 ] Once nitric oxide is converted to nitrates and nitrites by oxygen and water, cell signaling is deactivated. [ 33 ] The endothelium (inner lining) of blood vessels uses nitric oxide to signal the surrounding smooth muscle to relax, resulting in vasodilation and increased blood flow. [ 32 ] Sildenafil (Viagra) is a drug that uses the nitric oxide pathway. Sildenafil does not produce nitric oxide, but enhances the signals downstream of the nitric oxide pathway by protecting cyclic guanosine monophosphate (cGMP) from degradation by cGMP-specific phosphodiesterase type 5 (PDE5) in the corpus cavernosum, thereby strengthening the signal and thus vasodilation. [ 31 ] Another endogenous gaseous transmitter, hydrogen sulfide (H 2 S), works with NO to induce vasodilation and angiogenesis in a cooperative manner. [ 34 ] [ 35 ] Nasal breathing produces nitric oxide within the body, while oral breathing does not.
[ 36 ] [ 37 ] In the U.S., the Occupational Safety and Health Administration (OSHA) has set the legal limit ( permissible exposure limit ) for nitric oxide exposure in the workplace at 25 ppm (30 mg/m 3 ) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 25 ppm (30 mg/m 3 ) over an 8-hour workday. At levels of 100 ppm, nitric oxide is immediately dangerous to life and health. [ 38 ] Liquid nitric oxide is very sensitive to detonation even in the absence of fuel, and can be initiated as readily as nitroglycerin. Detonation of the endothermic liquid oxide close to its boiling point (−152 °C or −241.6 °F or 121.1 K) generated a 100 kbar pulse and fragmented the test equipment. It is the simplest molecule that is capable of detonation in all three phases. The liquid oxide is sensitive and may explode during distillation, and this has been the cause of industrial accidents. [ 39 ] Gaseous nitric oxide detonates at about 2,300 metres per second (8,300 km/h; 5,100 mph), but as a solid it can reach a detonation velocity of 6,100 metres per second (22,000 km/h; 13,600 mph). [ 40 ]
https://en.wikipedia.org/wiki/Nitric_oxide
[ 43 ] Drugs that inhibit the breakdown of endogenous opioids also potentiate the antinociceptive effects of N 2 O . [ 43 ] Several experiments have shown that opioid receptor antagonists applied directly to the brain block the antinociceptive effects of N 2 O , but these drugs have no effect when injected into the spinal cord . Apart from an indirect action, nitrous oxide, like morphine [ 44 ] also interacts directly with the endogenous opioid system by binding at opioid receptor binding sites. [ 45 ] [ 46 ] Conversely, α 2 -adrenoceptor antagonists block the pain-reducing effects of N 2 O when given directly to the spinal cord, but not when applied directly to the brain. [ 47 ] Indeed, α 2B -adrenoceptor knockout mice or animals depleted in norepinephrine are nearly completely resistant to the antinociceptive effects of N 2 O . [ 48 ] Apparently N 2 O -induced release of endogenous opioids causes disinhibition of brainstem noradrenergic neurons, which release norepinephrine into the spinal cord and inhibit pain signalling. [ 49 ] Exactly how N 2 O causes the release of endogenous opioid peptides remains uncertain. Various methods of producing nitrous oxide are used. [ 50 ] Nitrous oxide is prepared on an industrial scale by carefully heating ammonium nitrate [ 50 ] at about 250 °C, which decomposes into nitrous oxide and water vapour. [ 51 ] The addition of various phosphate salts favours formation of a purer gas at slightly lower temperatures. This reaction may be difficult to control, resulting in detonation . [ 52 ] The decomposition of ammonium nitrate is also a common laboratory method for preparing the gas. Equivalently, it can be obtained by heating a mixture of sodium nitrate and ammonium sulfate : [ 53 ] Another method involves the reaction of urea, nitric acid and sulfuric acid: [ 54 ] Direct oxidation of ammonia with a manganese dioxide - bismuth oxide catalyst has been reported: [ 55 ] cf. Ostwald process . Hydroxylammonium chloride reacts with sodium nitrite to give nitrous oxide. If the nitrite is added to the hydroxylamine solution, the only remaining by-product is salt water. If the hydroxylamine solution is added to the nitrite solution (nitrite is in excess), however, then toxic higher oxides of nitrogen also are formed: Treating HNO 3 with SnCl 2 and HCl also has been demonstrated: Hyponitrous acid decomposes to N 2 O and water with a half-life of 16 days at 25 °C at pH 1–3. [ 56 ] Nitrous oxide is a minor component of Earth's atmosphere and is an active part of the planetary nitrogen cycle . Based on analysis of air samples gathered from sites around the world, its concentration surpassed 330 ppb in 2017. [ 8 ] The growth rate of about 1 ppb per year has also accelerated during recent decades. [ 9 ] Nitrous oxide's atmospheric abundance has grown more than 20% from a base level of about 270 ppb in 1750. [ 58 ] Important atmospheric properties of N 2 O are summarized in the following table: In 2022 the IPCC reported that: "The human perturbation of the natural nitrogen cycle through the use of synthetic fertilizers and manure, as well as nitrogen deposition resulting from land-based agriculture and fossil fuel burning has been the largest driver of the increase in atmospheric N2O of 31.0 ± 0.5 ppb (10%) between 1980 and 2019." [ 61 ] 17.0 (12.2 to 23.5) million tonnes total annual average nitrogen in N 2 O was emitted in 2007–2016. [ 61 ] About 40% of N 2 O emissions are from humans and the rest are part of the natural nitrogen cycle . 
[ 62 ] The N 2 O emitted each year by humans has a greenhouse effect equivalent to about 3 billion tonnes of carbon dioxide: for comparison humans emitted 37 billion tonnes of actual carbon dioxide in 2019, and methane equivalent to 9 billion tonnes of carbon dioxide. [ 63 ] Most of the N 2 O emitted into the atmosphere, from natural and anthropogenic sources, is produced by microorganisms such as denitrifying bacteria and fungi in soils and oceans. [ 64 ] Soils under natural vegetation are an important source of nitrous oxide, accounting for 60% of all naturally produced emissions. Other natural sources include the oceans (35%) and atmospheric chemical reactions (5%). [ 65 ] Wetlands can also be emitters of nitrous oxide . [ 66 ] [ 67 ] Emissions from thawing permafrost may be significant, but as of 2022 this is not certain. [ 61 ] The main components of anthropogenic emissions are fertilised agricultural soils and livestock manure (42%), runoff and leaching of fertilisers (25%), biomass burning (10%), fossil fuel combustion and industrial processes (10%), biological degradation of other nitrogen-containing atmospheric emissions (9%) and human sewage (5%). [ 68 ] [ 69 ] [ 70 ] [ 71 ] [ 72 ] Agriculture enhances nitrous oxide production through soil cultivation, the use of nitrogen fertilisers and animal waste handling. [ 73 ] These activities stimulate naturally occurring bacteria to produce more nitrous oxide. Nitrous oxide emissions from soil can be challenging to measure as they vary markedly over time and space, [ 74 ] and the majority of a year's emissions may occur when conditions are favorable during "hot moments" [ 75 ] [ 76 ] and/or at favorable locations known as "hotspots". [ 77 ] Among industrial emissions, the production of nitric acid and adipic acid are the largest sources of nitrous oxide emissions. The adipic acid emissions specifically arise from the degradation of the nitrolic acid intermediate derived from the nitration of cyclohexanone . [ 68 ] [ 78 ] [ 79 ] Microbial processes that generate nitrous oxide may be classified as nitrification and denitrification . Specifically, they include: These processes are affected by soil chemical and physical properties such as the availability of mineral nitrogen and organic matter , acidity and soil type, as well as climate-related factors such as soil temperature and water content. The emission of the gas to the atmosphere is limited greatly by its consumption inside the cells, by a process catalysed by the enzyme nitrous oxide reductase . [ 80 ] Nitrous oxide may be used as an oxidiser in a rocket motor. Compared to other oxidisers, it is much less toxic and more stable at room temperature, making it easier to store and safer to carry on a flight. Its high density and low storage pressure (when maintained at low temperatures) make it highly competitive with stored high-pressure gas systems. [ 81 ] In a 1914 patent, American rocket pioneer Robert Goddard suggested nitrous oxide and gasoline as possible propellants for a liquid-fuelled rocket. [ 82 ] Nitrous oxide has been the oxidiser of choice in several hybrid rocket designs (using solid fuel with a liquid or gaseous oxidiser). The combination of nitrous oxide with hydroxyl-terminated polybutadiene fuel has been used by SpaceShipOne and others. It also is notably used in amateur and high power rocketry with various plastics as the fuel. Nitrous oxide may also be used as a monopropellant . 
In the presence of a heated catalyst at a temperature of 577 °C (1,071 °F), N 2 O decomposes exothermically into nitrogen and oxygen. [ 83 ] Because of the large heat release, the catalytic action rapidly becomes secondary, as thermal autodecomposition becomes dominant. In a vacuum thruster, this may provide a monopropellant specific impulse ( I sp ) up to 180 s. While noticeably less than the I sp available from hydrazine thrusters (monopropellant, or bipropellant with dinitrogen tetroxide ), the decreased toxicity makes nitrous oxide a worthwhile option. The ignition of nitrous oxide depends critically on pressure. It deflagrates at approximately 600 °C (1,112 °F) at a pressure of 309 psi (21 atmospheres). [ 84 ] At 600 psi , the required ignition energy is only 6 joules, whereas at 130 psi a 2,500-joule ignition energy input is insufficient. [ 85 ] [ 86 ] In vehicle racing , nitrous oxide (often called " nitrous ") increases engine power by providing more oxygen during combustion, thus allowing the engine to burn more fuel. It is an oxidising agent roughly equivalent to hydrogen peroxide, and much stronger than molecular oxygen. Nitrous oxide is not flammable at low pressure/temperature, but at about 300 °C (572 °F), its breakdown delivers more oxygen than atmospheric air. It often is mixed with another fuel that is easier to deflagrate. Nitrous oxide is stored as a compressed liquid. In an engine intake manifold , the evaporation and expansion of the liquid causes a large drop in intake charge temperature, resulting in a denser charge and allowing more air/fuel mixture to enter the cylinder. Sometimes nitrous oxide is injected into (or prior to) the intake manifold, whereas other systems directly inject it just before the cylinder (direct port injection). The technique was used during World War II by Luftwaffe aircraft with the GM-1 system to boost the power output of aircraft engines . Originally meant to provide the Luftwaffe standard aircraft with superior high-altitude performance, technological considerations limited its use to extremely high altitudes. Accordingly, it was only used by specialised planes such as high-altitude reconnaissance aircraft , high-speed bombers and high-altitude interceptor aircraft . It sometimes could be found on Luftwaffe aircraft also fitted with another engine-boost system, MW 50 , a form of water injection for aviation engines that used methanol for its boost capabilities. One of the major problems of nitrous oxide oxidant in a reciprocating engine is excessive power: if the mechanical structure of the engine is not properly reinforced, it may be severely damaged or destroyed. It is important with nitrous oxide augmentation of petrol engines to maintain proper and evenly spread operating temperatures and fuel levels to prevent pre-ignition (also called detonation or spark knock). [ 87 ] However, most problems associated with nitrous oxide come not from excessive power but from excessive pressure, since the gas builds up a much denser charge in the cylinder. The increased pressure and temperature can melt, crack, or warp the piston, valve, and cylinder head. Automotive-grade liquid nitrous oxide differs slightly from medical-grade. A small amount of sulfur dioxide ( SO 2 ) is added to prevent substance abuse. [ 88 ] The gas is approved for use as a food additive ( E number : E942), specifically as an aerosol spray propellant . It is commonly used in aerosol whipped cream canisters and cooking sprays . The gas is extremely soluble in fatty compounds. 
In pressurised aerosol whipped cream, it is dissolved in the fatty cream until it leaves the can, when it becomes gaseous and thus creates foam. This produces whipped cream four times the volume of the liquid, whereas whipping air into cream only produces twice the volume. Unlike air, nitrous oxide inhibits rancidification of the butterfat. Carbon dioxide cannot be used for whipped cream because it is acidic in water, which would curdle the cream and give it a seltzer-like "sparkle". Extra-frothed whipped cream produced with nitrous oxide is unstable, and will return to liquid within half an hour to one hour. [ 89 ] Thus, it is not suitable for decorating food that will not be served immediately. In December 2016, there was a shortage of aerosol whipped cream in the United States, with canned whipped cream use at its peak during the Christmas and holiday season , due to an explosion at the Air Liquide nitrous oxide facility in Florida in late August. The company prioritized the remaining supply of nitrous oxide to medical customers rather than to food manufacturing. [ 90 ] Also, cooking spray, made from various oils with lecithin emulsifier , may use nitrous oxide propellant , or alternatively food-grade alcohol or propane . Nitrous oxide has been used in dentistry and surgery, as an anaesthetic and analgesic, since 1844. [ 20 ] In the early days, the gas was administered through simple inhalers consisting of a breathing bag made of rubber cloth. [ 28 ] Today, the gas is administered in hospitals by means of an automated relative analgesia machine , with an anaesthetic vaporiser and a medical ventilator , that delivers a precisely dosed and breath-actuated flow of nitrous oxide mixed with oxygen in a 2:1 ratio. Nitrous oxide is a weak general anaesthetic , and so is generally not used alone in general anaesthesia, but used as a carrier gas (mixed with oxygen) for more powerful general anaesthetic drugs such as sevoflurane or desflurane . It has a minimum alveolar concentration of 105% and a blood/gas partition coefficient of 0.46. The use of nitrous oxide in anaesthesia can increase the risk of postoperative nausea and vomiting. [ 91 ] [ 92 ] [ 93 ] Dentists use a simpler machine which only delivers an N 2 O / O 2 mixture for the patient to inhale while conscious but must still be a recognised purpose designed dedicated relative analgesic flowmeter with a minimum 30% of oxygen at all times and a maximum upper limit of 70% nitrous oxide. The patient is kept conscious throughout the procedure, and retains adequate mental faculties to respond to questions and instructions from the dentist. [ 94 ] Inhalation of nitrous oxide is used frequently to relieve pain associated with childbirth , trauma , oral surgery and acute coronary syndrome (including heart attacks). Its use during labour has been shown to be a safe and effective aid for birthing women. [ 95 ] Its use for acute coronary syndrome is of unknown benefit. [ 96 ] In Canada and the UK, Entonox and Nitronox are used commonly by ambulance crews (including unregistered practitioners) as rapid and highly effective analgesic gas. Fifty percent nitrous oxide can be considered for use by trained non-professional first aid responders in prehospital settings, given the relative ease and safety of administering 50% nitrous oxide as an analgesic. The rapid reversibility of its effect would also prevent it from precluding diagnosis. 
[ 97 ] Recreational inhalation of nitrous oxide , to induce euphoria and slight hallucinations , began with the British upper class in 1799 in gatherings known as "laughing gas parties". [ 98 ] From the 19th century, the widespread availability of the gas for medical and culinary purposes allowed for recreational use to greatly expand globally. In the UK as of 2014, nitrous oxide was estimated to be used by almost half a million young people at nightspots, festivals and parties. [ 99 ] Widespread recreational use of the drug throughout the UK was featured in the 2017 Vice documentary Inside The Laughing Gas Black Market , in which journalist Matt Shea met with dealers of the drug who stole it from hospitals. [ 100 ] A significant issue cited in London's press is the effect of nitrous oxide canister littering, which is highly visible and causes significant complaints from communities. [ 101 ] Prior to 8 November 2023 in the UK, nitrous oxide was subject to the Psychoactive Substances Act 2016, making it illegal to produce, supply, import or export nitrous oxide for recreational use. The updated law prohibited possession of nitrous oxide, classifying it as a Class C drug under the Misuse of Drugs Act 1971. [ 102 ] While nitrous oxide is understood by most recreational users to give a "safe high", many are unaware that excessive consumption may cause neurological harm which, if left untreated, can cause permanent neurological damage. [ 103 ] In Australia, recreation use became a public health concern following a rise in reports of neurotoxicity and emergency room admissions. In the state of South Australia, legislation was passed in 2020 to restrict canister sales. [ 104 ] In 2024, under the street name "Galaxy Gas", nitrous oxide has exploded in popularity among young people for recreational use. Most of the popularity has been fostered through TikTok . [ 105 ] Nitrous oxide is a significant occupational hazard for surgeons, dentists and nurses. Because the gas is minimally metabolised in humans (with a rate of 0.004%), it retains its potency when exhaled into the room by the patient, and can intoxicate the clinic staff if the room is poorly ventilated, with potential chronic exposure. A continuous-flow fresh-air ventilation system or N 2 O scavenger system may be needed to prevent waste-gas buildup. [ citation needed ] The National Institute for Occupational Safety and Health recommends that workers' exposure to nitrous oxide should be controlled during the administration of anaesthetic gas in medical, dental and veterinary operators. [ 106 ] It set a recommended exposure limit (REL) of 25 ppm (46 mg/m 3 ) to escaped anaesthetic. [ 107 ] Exposure to nitrous oxide causes short-term impairment of cognition, audiovisual acuity, and manual dexterity, as well as spatial and temporal disorientation, [ 108 ] putting the user at risk of accidental injury. [ 38 ] Nitrous oxide is neurotoxic , and medium or long-term habitual consumption of significant quantities can cause neurological harm with the potential for permanent damage if left untreated. [ 104 ] [ 103 ] It is believed that, like other NMDA receptor antagonists , N 2 O produces Olney's lesions in rodents upon prolonged (several hour) exposure. [ 109 ] [ 110 ] [ 111 ] [ 112 ] However, because it is normally expelled from the body rapidly, it is less likely to be neurotoxic than other NMDAR antagonists. 
Exposure to nitrous oxide causes short-term impairment of cognition, audiovisual acuity, and manual dexterity, as well as spatial and temporal disorientation, [ 108 ] putting the user at risk of accidental injury. [ 38 ] Nitrous oxide is neurotoxic , and medium- or long-term habitual consumption of significant quantities can cause neurological harm with the potential for permanent damage if left untreated. [ 104 ] [ 103 ] It is believed that, like other NMDA receptor antagonists , N 2 O produces Olney's lesions in rodents upon prolonged (several-hour) exposure. [ 109 ] [ 110 ] [ 111 ] [ 112 ] However, because it is normally expelled from the body rapidly, it is less likely to be neurotoxic than other NMDAR antagonists. [ 113 ] In rodents, short-term exposure results in only mild injury that is rapidly reversible, and neuronal death occurs only after constant and sustained exposure. [ 109 ] Nitrous oxide may also cause neurotoxicity after extended exposure because of hypoxia . This is especially true of non-medical formulations such as whipped-cream chargers ("whippits" or "nangs"), [ 114 ] which contain no oxygen gas. [ 115 ] In reports to poison control centers, heavy users (≥400 g or ≥200 L of N 2 O gas in one session) or frequent users (regular, i.e., daily or weekly) have developed signs of peripheral neuropathy : ataxia (gait abnormalities) or paresthesia (sensations such as tingling, numbness, or prickling, mostly in the extremities). Such early signs of neurological damage indicate chronic toxicity . [ 116 ]
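The two heavy-use thresholds quoted above, 400 g and roughly 200 L, are approximately the same quantity expressed as mass and as gas volume, as a quick ideal-gas estimate shows. The following sketch is illustrative arithmetic only.

```python
# Sketch: check that the two heavy-use thresholds quoted above
# (>= 400 g, >= 200 L of N2O gas) are roughly consistent via the
# ideal gas law. Purely illustrative arithmetic.

MOLAR_MASS_N2O = 44.013   # g/mol
MOLAR_VOLUME_0C = 22.414  # L/mol at 0 C and 1 atm

mass_g = 400.0
moles = mass_g / MOLAR_MASS_N2O     # ~9.1 mol
volume_l = moles * MOLAR_VOLUME_0C  # ~204 L at 0 C
print(f"{volume_l:.0f} L")          # ~204 L, close to the 200 L figure
```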
Nitrous oxide might have therapeutic use in treating stroke . In a rodent model, nitrous oxide at 75% by volume reduced neuronal death induced by occlusion of the middle cerebral artery, and decreased NMDA-induced Ca 2+ influx in neuronal cell cultures, a cause of excitotoxicity . [ 113 ] Occupational exposure to ambient nitrous oxide has been associated with DNA damage, due to interruptions in DNA synthesis. [ 117 ] This correlation is dose-dependent [ 118 ] [ 119 ] and does not appear to extend to casual recreational use; however, further research is needed to confirm the level of exposure needed to cause damage. Inhalation of pure nitrous oxide causes oxygen deprivation, resulting in low blood pressure, fainting, and even heart attacks. This can occur if the user inhales large quantities continuously, as with a strap-on mask connected to a gas canister or other inhalation system, or through prolonged breath-holding. [ citation needed ] Long-term exposure to nitrous oxide may cause vitamin B 12 deficiency . This can cause serious neurotoxicity if the user has a preexisting vitamin B 12 deficiency. [ 120 ] It inactivates the cobalamin form of vitamin B 12 by oxidation. Symptoms of vitamin B 12 deficiency, including sensory neuropathy , myelopathy and encephalopathy , may occur within days or weeks of exposure to nitrous oxide anaesthesia in people with subclinical vitamin B 12 deficiency. Symptoms are treated with high doses of vitamin B 12 , but recovery can be slow and incomplete. [ 121 ] People with normal vitamin B 12 levels have sufficient stores to render the effects of nitrous oxide insignificant, unless exposure is repeated and prolonged (nitrous oxide abuse). Vitamin B 12 levels should be checked in people with risk factors for vitamin B 12 deficiency prior to nitrous oxide anaesthesia. [ 122 ] Several experimental studies in rats indicate that chronic exposure of pregnant females to nitrous oxide may have adverse effects on the developing fetus. [ 123 ] [ 124 ] [ 125 ] At room temperature (20 °C [68 °F]) the saturated vapour pressure is 50.525 bar, rising to 72.45 bar at 36.4 °C (97.5 °F)—the critical temperature . The pressure curve is thus unusually sensitive to temperature. [ 126 ] As with many strong oxidisers, contamination of parts with fuels has been implicated in rocketry accidents, where small quantities of nitrous/fuel mixtures explode due to " water hammer "-like effects (sometimes called "dieseling"—heating due to adiabatic compression of gases can reach decomposition temperatures). [ 127 ] Some common building materials such as stainless steel and aluminium can act as fuels with strong oxidisers such as nitrous oxide, as can contaminants that may ignite due to adiabatic compression. [ 128 ] There have also been incidents where nitrous oxide decomposition in plumbing has led to the explosion of large tanks. [ 84 ] Global accounting of N 2 O sources and sinks over the decade ending 2016 indicates that about 40% of the average 17 TgN/yr ( teragrams , or million metric tons, of nitrogen per year) of emissions originated from human activity, and shows that emissions growth chiefly came from expanding agriculture . [ 11 ] [ 12 ] Nitrous oxide has significant global warming potential as a greenhouse gas . On a per-molecule basis, considered over a 100-year period, nitrous oxide has 265 times the atmospheric heat-trapping ability of carbon dioxide ( CO 2 ). [ 60 ] However, because of its low concentration (less than 1/1,000 of that of CO 2 ), its contribution to the greenhouse effect is less than one third that of carbon dioxide, and also less than that of methane . [ 129 ]
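Combining the figures quoted above (a 17 TgN/yr budget, a roughly 40% human share, and a 100-year GWP of 265) gives a back-of-the-envelope CO 2 -equivalent for anthropogenic N 2 O emissions. The sketch below is a rough estimate under those assumptions, not an authoritative inventory calculation.

```python
# Sketch: rough CO2-equivalent of anthropogenic N2O emissions, using
# only figures quoted above (17 TgN/yr total, ~40% human share,
# 100-year GWP of 265). Back-of-the-envelope, not an inventory method.

TOTAL_TGN_PER_YR = 17.0      # Tg of nitrogen per year, all sources
HUMAN_SHARE = 0.40           # ~40% attributed to human activity
GWP_100 = 265                # 100-year global warming potential
N2O_PER_N = 44.013 / 28.014  # mass ratio converting Tg N to Tg N2O

tg_n2o = TOTAL_TGN_PER_YR * HUMAN_SHARE * N2O_PER_N  # ~10.7 Tg N2O/yr
tg_co2e = tg_n2o * GWP_100                           # ~2830 Tg CO2e/yr

# ~2.8 Gt, in line with the roughly 3 billion tonnes of CO2-equivalent
# attributed to human N2O emissions elsewhere in this article.
print(f"{tg_co2e / 1000:.1f} Gt CO2e per year")
```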
On the other hand, since about 40% of the N 2 O entering the atmosphere is the result of human activity, [ 68 ] control of nitrous oxide is part of efforts to curb greenhouse gas emissions. [ 130 ] Most human-caused nitrous oxide is released from agriculture , when farmers add nitrogen-based fertilizers to their fields and through the breakdown of animal manure. Reduction of emissions is accordingly a prominent topic in the politics of climate change . [ 131 ] Nitrous oxide is also released as a by-product of burning fossil fuel, though the amount released depends on which fuel was used. It is also emitted through the manufacture of nitric acid , which is used in the synthesis of nitrogen fertilizers. The production of adipic acid, a precursor to nylon and other synthetic clothing fibres, also releases nitrous oxide. [ 132 ] A rise in atmospheric nitrous oxide concentrations has been implicated as a possible contributor to the extremely intense global warming during the Cenomanian-Turonian boundary event . [ 133 ] Nitrous oxide has also been implicated in thinning the ozone layer . A 2009 study suggested that N 2 O emission was the single most important ozone-depleting emission and was expected to remain the largest throughout the 21st century. [ 10 ] [ 134 ] In India, transfer of nitrous oxide from bulk cylinders to smaller, more transportable E-type, 1,590-litre-capacity tanks [ 135 ] is legal when intended for medical anaesthesia. The New Zealand Ministry of Health has warned that nitrous oxide is a prescription medicine whose sale or possession without a prescription is an offence under the Medicines Act. [ 136 ] This would seemingly prohibit all non-medicinal uses of nitrous oxide, although it is implied that only recreational use will be targeted. In August 2015, the Council of the London Borough of Lambeth ( UK ) banned the use of the drug for recreational purposes, making offenders liable to an on-the-spot fine of up to £1,000. [ 137 ] In September 2023, the UK Government announced that nitrous oxide would be made illegal by the end of the year, with possession potentially carrying up to a two-year prison sentence or an unlimited fine. [ 138 ] Possession of nitrous oxide is legal under United States federal law and is not subject to DEA purview. [ 139 ] It is, however, regulated by the Food and Drug Administration under the Food, Drug, and Cosmetic Act; prosecution is possible under its "misbranding" clauses, which prohibit the sale or distribution of nitrous oxide for the purpose of human consumption without a proper medical license. Many states have laws regulating the possession, sale and distribution of nitrous oxide. Such laws usually ban distribution to minors or limit the amount that may be sold without a special license. [ citation needed ] For example, in California, possession for recreational use is prohibited and qualifies as a misdemeanor. [ 140 ]
https://en.wikipedia.org/wiki/Nitrous_oxide
Nitric oxide ( nitrogen oxide or nitrogen monoxide [ 1 ] ) is a colorless gas with the formula NO . It is one of the principal oxides of nitrogen . Nitric oxide is a free radical : it has an unpaired electron , which is sometimes denoted by a dot in its chemical formula ( • N=O or • NO). Nitric oxide is also a heteronuclear diatomic molecule , a class of molecules whose study spawned early modern theories of chemical bonding . [ 6 ] An important intermediate in industrial chemistry , nitric oxide forms in combustion systems and can be generated by lightning in thunderstorms. In mammals, including humans, nitric oxide is a signaling molecule in many physiological and pathological processes. [ 7 ] It was proclaimed the " Molecule of the Year " in 1992. [ 8 ] The 1998 Nobel Prize in Physiology or Medicine was awarded for discovering nitric oxide's role as a cardiovascular signalling molecule. [ 9 ] Its impact extends beyond biology, with applications in medicine, such as the development of sildenafil (Viagra), and in industry, including semiconductor manufacturing. [ 10 ] [ 11 ] Nitric oxide should not be confused with nitrogen dioxide (NO 2 ), a brown gas and major air pollutant , or with nitrous oxide (N 2 O), an anesthetic gas. [ 6 ] Nitric oxide (NO) was first identified by Joseph Priestley in the late 18th century, originally seen as merely a toxic byproduct of combustion and an environmental pollutant. [ 12 ] Its biological significance was later uncovered in the 1980s when researchers Robert F. Furchgott , Louis J. Ignarro , and Ferid Murad discovered its critical role as a vasodilator in the cardiovascular system, a breakthrough that earned them the 1998 Nobel Prize in Physiology or Medicine. [ 13 ] The ground-state electronic configuration of NO is, in united atom notation: [ 14 ] $(1\sigma)^2(2\sigma)^2(3\sigma)^2(4\sigma^*)^2(5\sigma)^2(1\pi)^4(2\pi^*)^1$ . The first two orbitals are actually pure atomic 1s O and 1s N orbitals from oxygen and nitrogen respectively and therefore are usually not noted in the united atom notation. Orbitals noted with an asterisk are antibonding. The ordering of 5σ and 1π according to their binding energies is subject to discussion. Removal of a 1π electron leads to six states whose energies span a range starting at a lower level than that of a 5σ electron and extending to a higher level. This is due to the different orbital momentum couplings between a 1π and a 2π electron. The lone electron in the 2π orbital makes NO a doublet (X ²Π) in its ground state, whose degeneracy is split in the fine structure by spin-orbit coupling, with a total momentum J = 3/2 or J = 1/2. The dipole of NO has been measured experimentally as 0.15740 D and is oriented from O to N (⁻NO⁺) due to the transfer of negative electronic charge from oxygen to nitrogen. [ 15 ] Upon condensing to a neat liquid, nitric oxide dimerizes to colorless dinitrogen dioxide (O=N–N=O), but the association is weak and reversible. The N–N distance in crystalline NO is 218 pm, nearly twice the N–O distance. Condensation in a highly polar environment instead gives the red alternant isomer O=N–O + =N − . [ 6 ] Since the heat of formation of • NO is endothermic , NO can be decomposed to the elements.
Catalytic converters in cars exploit this reaction: 2 NO → N 2 + O 2 . When exposed to oxygen , nitric oxide converts into nitrogen dioxide : 2 NO + O 2 → 2 NO 2 . This reaction is thought to occur via the intermediates ONOO • and the red compound ONOONO. [ 16 ] In water, nitric oxide reacts with oxygen to form nitrous acid (HNO 2 ). The reaction is thought to proceed via the following stoichiometry : 4 NO + O 2 + 2 H 2 O → 4 HNO 2 . Nitric oxide reacts with fluorine , chlorine , and bromine to form the nitrosyl halides, such as nitrosyl chloride : 2 NO + Cl 2 → 2 NOCl. With NO 2 , also a radical, NO combines to form the intensely blue dinitrogen trioxide : NO + NO 2 ⇌ N 2 O 3 . [ 6 ] Nitric oxide rarely sees use in organic chemistry. Most reactions with it produce complex mixtures of salts, separable only through careful recrystallization . [ 17 ] The addition of a nitric oxide moiety to another molecule is often referred to as nitrosylation . The Traube reaction is the addition of two equivalents of nitric oxide onto an enolate , giving a diazeniumdiolate (also called a nitrosohydroxylamine ). [ 18 ] The product can undergo a subsequent retro- aldol reaction , giving an overall process similar to the haloform reaction . For example, nitric oxide reacts with acetone and an alkoxide to form a diazeniumdiolate on each α position , with subsequent loss of methyl acetate as a by-product. [ 19 ] This reaction, which was discovered around 1898, remains of interest in nitric oxide prodrug research. Nitric oxide can also react directly with sodium methoxide , ultimately forming sodium formate and nitrous oxide by way of an N -methoxydiazeniumdiolate. [ 20 ] Sufficiently basic secondary amines undergo a Traube-like reaction to give NONOates . [ 21 ] However, very few nucleophiles undergo the Traube reaction, either failing to form an adduct with NO or immediately decomposing with nitrous oxide release. [ 17 ] Nitric oxide reacts with transition metals to give complexes called metal nitrosyls . The most common bonding mode of nitric oxide is the terminal linear type (M−NO). [ 6 ] Alternatively, nitric oxide can serve as a one-electron pseudohalide. In such complexes, the M−N−O group is characterized by an angle between 120° and 140°. The NO group can also bridge between metal centers through the nitrogen atom in a variety of geometries. In commercial settings, nitric oxide is produced by the oxidation of ammonia at 750–900 °C (normally at 850 °C) with platinum as catalyst in the Ostwald process : 4 NH 3 + 5 O 2 → 4 NO + 6 H 2 O. The uncatalyzed endothermic reaction of oxygen (O 2 ) and nitrogen (N 2 ), N 2 + O 2 → 2 NO, which is effected at high temperature (>2000 °C) by lightning, has not been developed into a practical commercial synthesis (see Birkeland–Eyde process ). In the laboratory, nitric oxide is conveniently generated by reduction of dilute nitric acid with copper : 8 HNO 3 + 3 Cu → 3 Cu(NO 3 ) 2 + 2 NO + 4 H 2 O. An alternative route involves the reduction of nitrous acid in the form of sodium nitrite or potassium nitrite . The iron(II) sulfate route is simple and has been used in undergraduate laboratory experiments. So-called NONOate compounds are also used for nitric oxide generation, especially in biological laboratories. However, other Traube adducts may decompose to instead give nitrous oxide . [ 22 ] Nitric oxide concentration can be determined using a chemiluminescent reaction involving ozone . [ 23 ] A sample containing nitric oxide is mixed with a large quantity of ozone. The nitric oxide reacts with the ozone to produce oxygen and nitrogen dioxide , accompanied by emission of light ( chemiluminescence ): NO + O 3 → NO 2 + O 2 + hν, which can be measured with a photodetector . The amount of light produced is proportional to the amount of nitric oxide in the sample.
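Because the emitted light is proportional to the NO concentration, an instrument can be calibrated against known samples and the calibration inverted for unknowns. Below is a minimal sketch of such a linear calibration; all numbers and helper names are invented for illustration and are not taken from any real analyser.

```python
# Sketch: turning chemiluminescence detector counts into an NO
# concentration via a linear calibration, as the proportionality
# above suggests. All data here are made up for illustration.

# Calibration: detector response (counts/s) at known NO levels (ppb)
known_ppb = [0.0, 50.0, 100.0, 200.0]
counts = [12.0, 1060.0, 2110.0, 4205.0]

# Least-squares fit of counts = slope * ppb + offset
n = len(known_ppb)
mean_x = sum(known_ppb) / n
mean_y = sum(counts) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(known_ppb, counts)) \
        / sum((x - mean_x) ** 2 for x in known_ppb)
offset = mean_y - slope * mean_x

def counts_to_ppb(c: float) -> float:
    """Invert the calibration to estimate NO concentration."""
    return (c - offset) / slope

print(f"{counts_to_ppb(1580.0):.0f} ppb")  # ~75 ppb for this fake data
```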
Other methods of testing include electroanalysis (an amperometric approach), where • NO reacts with an electrode to induce a current or voltage change. The detection of NO radicals in biological tissues is particularly difficult due to the short lifetime and low concentration of these radicals in tissues. One of the few practical methods is spin trapping of nitric oxide with iron- dithiocarbamate complexes and subsequent detection of the mono-nitrosyl-iron complex with electron paramagnetic resonance (EPR). [ 24 ] [ 25 ] A group of fluorescent dye indicators, also available in acetylated form for intracellular measurements, exists. The most common compound is 4,5-diaminofluorescein (DAF-2). [ 26 ] Nitric oxide reacts with the hydroperoxyl radical ( HO 2 • ) to form nitrogen dioxide (NO 2 ), which then can react with a hydroxyl radical (HO • ) to produce nitric acid (HNO 3 ): NO + HO 2 • → NO 2 + HO • , followed by NO 2 + HO • → HNO 3 . Nitric acid, along with sulfuric acid , contributes to acid rain deposition. • NO participates in ozone layer depletion . Nitric oxide reacts with stratospheric ozone to form O 2 and nitrogen dioxide: NO + O 3 → NO 2 + O 2 . This reaction is also utilized to measure concentrations of • NO in control volumes. As seen in the acid deposition section, nitric oxide can transform into nitrogen dioxide (this can happen with the hydroperoxyl radical, HO 2 • , or diatomic oxygen, O 2 ). Symptoms of short-term nitrogen dioxide exposure include nausea, dyspnea and headache. Long-term effects could include impaired immune and respiratory function. [ 27 ] NO is a gaseous signaling molecule . [ 28 ] It is a key vertebrate biological messenger , playing a role in a variety of biological processes. [ 29 ] It is a bioproduct in almost all types of organisms, including bacteria, plants, fungi, and animal cells. [ 30 ] Nitric oxide, an endothelium-derived relaxing factor (EDRF), is biosynthesized endogenously from L -arginine , oxygen , and NADPH by various nitric oxide synthase (NOS) enzymes . [ 31 ] Reduction of inorganic nitrate may also produce nitric oxide. [ 32 ] One of the main enzymatic targets of nitric oxide is guanylyl cyclase . [ 33 ] The binding of nitric oxide to the heme region of the enzyme leads to activation, in the presence of iron. [ 33 ] Nitric oxide is highly reactive (having a lifetime of a few seconds), yet diffuses freely across membranes. These attributes make nitric oxide ideal for a transient paracrine (between adjacent cells) and autocrine (within a single cell) signaling molecule. [ 32 ] Once nitric oxide is converted to nitrates and nitrites by oxygen and water, cell signaling is deactivated. [ 33 ] The endothelium (inner lining) of blood vessels uses nitric oxide to signal the surrounding smooth muscle to relax, resulting in vasodilation and increased blood flow. [ 32 ] Sildenafil (Viagra) is a drug that uses the nitric oxide pathway. Sildenafil does not produce nitric oxide, but enhances the signals that are downstream of the nitric oxide pathway by protecting cyclic guanosine monophosphate (cGMP) from degradation by cGMP-specific phosphodiesterase type 5 (PDE5) in the corpus cavernosum , allowing the signal to be enhanced, and thus producing vasodilation . [ 31 ] Another endogenous gaseous transmitter, hydrogen sulfide (H 2 S), works with NO to induce vasodilation and angiogenesis in a cooperative manner. [ 34 ] [ 35 ] Nasal breathing produces nitric oxide within the body, while oral breathing does not. [ 36 ] [ 37 ]
In the U.S., the Occupational Safety and Health Administration (OSHA) has set the legal limit ( permissible exposure limit ) for nitric oxide exposure in the workplace at 25 ppm (30 mg/m 3 ) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 25 ppm (30 mg/m 3 ) over an 8-hour workday. At levels of 100 ppm, nitric oxide is immediately dangerous to life and health . [ 38 ] Liquid nitric oxide is very sensitive to detonation even in the absence of fuel, and can be initiated as readily as nitroglycerin. Detonation of the endothermic liquid oxide close to its boiling point (−152 °C or −241.6 °F or 121.1 K) generated a 100 kbar pulse and fragmented the test equipment. It is the simplest molecule that is capable of detonation in all three phases. The liquid oxide is sensitive and may explode during distillation, which has been the cause of industrial accidents. [ 39 ] Gaseous nitric oxide detonates at about 2,300 metres per second (8,300 km/h; 5,100 mph), but as a solid it can reach a detonation velocity of 6,100 metres per second (22,000 km/h; 13,600 mph). [ 40 ]
https://en.wikipedia.org/wiki/Nitric_oxide
The reflected binary code ( RBC ), also known as reflected binary ( RB ) or Gray code after Frank Gray , is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit). For example, the representation of the decimal value "1" in binary would normally be " 001 ", and "2" would be " 010 ". In Gray code, these values are represented as " 001 " and " 011 ". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two. Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. [ 3 ] Many devices indicate position by closing and opening switches. If that device uses natural binary codes , positions 3 and 4 are next to each other but all three bits of the binary representation differ: 011 versus 100 . The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce , the transition might look like 011 — 001 — 101 — 100 . When the switches appear to be in position 001 , the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic , then the sequential system may store a false value. This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position. The result is a code assigning to each of a contiguous set of integers , or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] single-distance , single-step , monostrophic [ 9 ] [ 10 ] [ 7 ] [ 8 ] or syncopic codes , [ 9 ] in reference to the Hamming distance of 1 between adjacent codes. In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code , or BRGC . Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. [ 11 ] [ 12 ] [ 13 ] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". [ 14 ] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process". In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off (... 11001100 ...); the next digit a pattern of 4 on, 4 off; and the i -th least significant bit a pattern of 2^i on, 2^i off. The most significant digit is an exception to this: for an n -bit Gray code, the most significant digit follows the pattern 2^(n−1) on, 2^(n−1) off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2^(n−2) places. The four-bit version of this is shown below:

Decimal  Binary  Gray
0        0000    0000
1        0001    0001
2        0010    0011
3        0011    0010
4        0100    0110
5        0101    0111
6        0110    0101
7        0111    0100
8        1000    1100
9        1001    1101
10       1010    1111
11       1011    1110
12       1100    1010
13       1101    1011
14       1110    1001
15       1111    1000

For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ]
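The conversion between natural binary and the binary-reflected Gray code in the table is a standard bit manipulation. The sketch below implements both directions and checks the adjacency and rollover properties; it is a generic illustration, not code from any particular source.

```python
# Sketch: the standard binary-reflected Gray code conversion in both
# directions, plus a check of the single-bit-change (adjacency)
# property shown in the table above.

def binary_to_gray(n: int) -> int:
    """BRGC: XOR the number with itself shifted right by one."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert by XOR-folding successive right shifts back in."""
    n = g
    g >>= 1
    while g:
        n ^= g
        g >>= 1
    return n

# 4-bit sequence: every neighbour (including the 15 -> 0 rollover)
# differs in exactly one bit, and the conversion round-trips.
codes = [binary_to_gray(i) for i in range(16)]
assert all(bin(codes[i] ^ codes[(i + 1) % 16]).count("1") == 1
           for i in range(16))
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(16))
print([f"{c:04b}" for c in codes])
```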
In the four-bit version of this code (regenerated by the sketch following this passage), the code for decimal 15 rolls over to the code for decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ] In modern digital communications , Gray codes play an important role in error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise . Despite the fact that Stibitz described this code [ 11 ] [ 12 ] [ 13 ] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; [ 16 ] [ 17 ] one of those also lists "minimum error code" and "cyclic permutation code" among the names. [ 17 ] A 1954 patent application refers to "the Bell Telephone Gray code". [ 18 ] Other names include "cyclic binary code", [ 12 ] "cyclic progression code", [ 19 ] [ 12 ] "cyclic permuting binary" [ 20 ] or "cyclic permuted binary" (CPB). [ 21 ] [ 22 ] The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray . [ 13 ] [ 23 ] [ 24 ] [ 25 ] Reflected binary codes were applied to mathematical puzzles before they became known to engineers. The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle , a sequential mechanical puzzle mechanism described by the Frenchman Louis Gros in 1872. [ 26 ] [ 13 ] It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the Frenchman Édouard Lucas in 1883. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. [ 31 ] Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American . [ 32 ] The code also forms a Hamiltonian cycle on a hypercube , where each bit is seen as one dimension. When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to a 5-unit code for his printing telegraph system, in 1875 [ 33 ] or 1876, [ 34 ] [ 35 ] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, [ 36 ] [ 37 ] [ 38 ] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. [ 13 ] This code became known as Baudot code [ 39 ] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. [ 40 ] [ 41 ] [ 38 ] Around the same time, in 1874, the German-Austrian Otto Schäffler [ de ] [ 42 ] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose. [ 43 ] [ 13 ] Frank Gray , who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube -based apparatus.
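As a sketch, the following C program regenerates the four-bit table using the n ⊕ ⌊n/2⌋ encoding described later in the article (nothing here is assumed beyond that formula):

#include <stdio.h>

int main(void) {
    /* Print the 4-bit binary-reflected Gray code table. */
    for (unsigned i = 0; i < 16; i++) {
        unsigned g = i ^ (i >> 1);      /* binary-reflected Gray encoding */
        printf("%2u: ", i);
        for (int b = 3; b >= 0; b--)    /* most significant bit first */
            putchar((g >> b) & 1 ? '1' : '0');
        putchar('\n');
    }
    /* The code for 15 is 1000 and the code for 0 is 0000: they differ
       in a single bit, the cyclic (adjacency) property noted above. */
    return 0;
}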
Filed in 1947, the method and apparatus were granted a patent in 1953, [ 14 ] and the name of Gray stuck to the codes. The " PCM tube " apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. [ 44 ] Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose. Gray codes are used in linear and rotary position encoders ( absolute encoders and quadrature encoders ) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread could result from some of the bits changing before others. For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals. Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking. In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position. Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms . [ 15 ] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties. Gray codes are also used in labelling the axes of Karnaugh maps since 1953 [ 45 ] [ 46 ] [ 47 ] as well as in Händler circle graphs since 1958, [ 48 ] [ 49 ] [ 50 ] [ 51 ] both graphical methods for logic circuit minimization . In modern digital communications , 1D- and 2D-Gray codes play an important role in error prevention before error correction is applied. For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise .
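To make the QAM arrangement concrete, here is a small C sketch (an illustration, not the mapping of any particular standard) that labels a 16-QAM grid by Gray-coding each axis independently and verifies that horizontally and vertically adjacent points differ in exactly one bit:

#include <stdio.h>

/* 16-QAM with Gray-labelled axes: each axis level 0..3 is labelled with
   the 2-bit Gray code level ^ (level >> 1), and a symbol's 4 bits are
   the concatenation of its I and Q labels. */
static unsigned label(unsigned i, unsigned q) {
    return ((i ^ (i >> 1)) << 2) | (q ^ (q >> 1));
}

/* Count the one-bits of a small value. */
static int popcount4(unsigned x) {
    int c = 0;
    for (; x; x >>= 1) c += x & 1;
    return c;
}

int main(void) {
    for (unsigned i = 0; i < 4; i++)
        for (unsigned q = 0; q < 4; q++) {
            if (i + 1 < 4 && popcount4(label(i, q) ^ label(i + 1, q)) != 1)
                printf("horizontal neighbour mismatch at (%u,%u)\n", i, q);
            if (q + 1 < 4 && popcount4(label(i, q) ^ label(i, q + 1)) != 1)
                printf("vertical neighbour mismatch at (%u,%u)\n", i, q);
        }
    printf("all adjacent 16-QAM points differ in exactly one bit\n");
    return 0;
}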
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered to be operating in different "clock domains". The technique is fundamental to the design of large chips that operate with many different clocking frequencies. If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves. A balanced Gray code can be constructed [ 52 ] that flips every bit equally often. Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit. George R. Stibitz utilized a reflected binary code in a binary pulse counting device as early as 1941. [ 11 ] [ 12 ] [ 13 ] A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. [ 53 ] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used. Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders; however, the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, [ nb 1 ] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous.
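A small simulation makes the sampling argument concrete. Assuming each bit of a 4-bit value is sampled independently while it is changing (the usual model for an asynchronous clock-domain crossing), this sketch enumerates every possible mixture of old and new bits for the binary step 7 → 8 and for the corresponding Gray step:

#include <stdio.h>

/* Enumerate every value a sampler might see while oldv changes to newv,
   assuming each bit independently shows its old or its new state. */
static void enumerate(unsigned oldv, unsigned newv, const char *label) {
    unsigned diff = oldv ^ newv;
    unsigned count = 0;
    for (unsigned sub = diff; ; sub = (sub - 1) & diff) {
        unsigned sampled = oldv ^ sub;   /* the bits in sub have already flipped */
        printf("%s possible sample: %2u\n", label, sampled);
        count++;
        if (sub == 0)
            break;
    }
    printf("%s -> %u distinct sampled values\n", label, count);
}

int main(void) {
    enumerate(7, 8, "binary");                       /* 0111 -> 1000: 16 values */
    enumerate(7 ^ (7 >> 1), 8 ^ (8 >> 1), "gray");   /* 0100 -> 1100: only 2    */
    return 0;
}

The binary transition admits 16 possible sampled values, most of them far from either endpoint; the Gray transition admits only the old and the new value, which is exactly the guarantee the FIFO pointers rely on.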
[ 54 ] To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, [ 55 ] add one to it with a standard binary adder, and then convert the result back to Gray code [ 56 ] (a sketch of this method follows this passage). Other methods of counting in Gray code are discussed in a report by Robert W. Doran , including taking the output from the first latches of the master-slave flip-flops in a binary ripple counter. [ 57 ] As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. [ 58 ] [ 59 ] The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0 , prefixing the entries in the reflected list with a binary 1 , and then concatenating the original list with the reversed list. [ 13 ] For example, the n = 2 list ( 00, 01, 11, 10 ) yields the n = 3 list ( 000, 001, 011, 010, 110, 111, 101, 100 ). The one-bit Gray code is G 1 = ( 0,1 ). This can be thought of as built recursively as above from a zero-bit Gray code G 0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating G n+1 from G n makes several properties of the standard reflecting code clear, and these characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the n th Gray code is obtained by computing n ⊕ ⌊n/2⌋, that is, n XOR (n >> 1). Prepending a 0 bit leaves the order of the code words unchanged; prepending a 1 bit reverses the order of the code words. If the bits at position i of the codewords are inverted, the order of neighbouring blocks of 2^i codewords is reversed: if bit 0 is inverted in a 3-bit codeword sequence, the order of each pair of neighbouring codewords is reversed; if bit 1 is inverted, blocks of 2 codewords change order; if bit 2 is inverted, blocks of 4 codewords reverse order. Thus, performing an exclusive or on a bit b_i at position i with the bit b_{i+1} at position i + 1 leaves the order of codewords intact if b_{i+1} = 0 , and reverses the order of blocks of 2^(i+1) codewords if b_{i+1} = 1 . Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code. A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit, so it cannot be performed in parallel.
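Here is a minimal C sketch of the decode, add one, re-encode method just described, together with the shift-XOR encoder and a logarithmic XOR-prefix decoder (the function names are illustrative, not from any particular library):

#include <stdint.h>
#include <stdio.h>

/* Binary-reflected Gray encoding: g = n XOR floor(n/2). */
static uint32_t bin2gray(uint32_t n) { return n ^ (n >> 1); }

/* Gray decoding via a logarithmic XOR-prefix over a 32-bit word;
   equivalent to the bitwise recurrence b_i = g_i XOR b_(i-1). */
static uint32_t gray2bin(uint32_t g) {
    g ^= g >> 16;
    g ^= g >> 8;
    g ^= g >> 4;
    g ^= g >> 2;
    g ^= g >> 1;
    return g;
}

/* Step a Gray-code counter: decode to binary, add one, re-encode. */
static uint32_t gray_increment(uint32_t g) {
    return bin2gray(gray2bin(g) + 1);
}

int main(void) {
    uint32_t g = 0;
    for (int i = 0; i < 8; i++) {   /* prints 0 1 3 2 6 7 5 4 */
        printf("%u ", g);
        g = gray_increment(g);
    }
    printf("\n");
    return 0;
}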
Assuming g_i is the i -th Gray-coded bit ( g_0 being the most significant bit), and b_i is the i -th binary-coded bit ( b_0 being the most significant bit), the reverse translation can be given recursively: b_0 = g_0 , and b_i = g_i ⊕ b_{i−1} . Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two. To construct the binary-reflected Gray code iteratively, at step 0 start with code_0 = 0 , and at step i > 0 find the bit position of the least significant 1 in the binary representation of i and flip the bit at that position in the previous code code_{i−1} to get the next code code_i . The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... [ nb 2 ] See find first set for efficient algorithms to compute these values. Functions along the lines of those sketched above convert in C between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. [ 60 ] [ 55 ] [ nb 1 ] On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set . If MASK is the constant binary string of ones ended with a single zero digit, then carryless multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation. In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word). It is possible to construct binary Gray codes with n bits with a length of less than 2^n , if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle. [ 61 ] OEIS sequence A290772 [ 62 ] gives the number of possible Gray sequences of length 2n that include zero and use the minimum number of bits. There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n -ary Gray code , also known as a non-Boolean Gray code . As the name implies, this type of Gray code uses non- Boolean values in its encodings. For example, a 3-ary ( ternary ) Gray code would use the values 0, 1, 2. [ 31 ] The mapping from ternary counting to the reflected ternary Gray code illustrates this:

0 → 000, 1 → 001, 2 → 002, 10 → 012, 11 → 011, 12 → 010, 20 → 020, 21 → 021, 22 → 022,
100 → 122, 101 → 121, 102 → 120, 110 → 110, 111 → 111, 112 → 112, 120 → 102, 121 → 101, 122 → 100,
200 → 200, 201 → 201, 202 → 202, 210 → 212, 211 → 211, 212 → 210, 220 → 220, 221 → 221, 222 → 222

The ( n , k )- Gray code is the n -ary Gray code with k digits. [ 63 ] The sequence of elements in the (3, 2)-Gray code is: 00, 01, 02, 12, 11, 10, 20, 21, 22. The ( n , k )-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively (a C sketch of one iterative construction follows this passage). There are other Gray code algorithms for ( n , k )-Gray codes.
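As one concrete possibility for the iterative construction, the following hedged C sketch implements the reflected n-ary Gray code, in which a digit counts upward when the sum of the more significant base-n digits is even and downward when it is odd. It reproduces the (3, 2) sequence quoted above; note that, like Guan's method, this variant rises and falls rather than wrapping, so the cyclic, wrapping behaviour described in the next paragraph belongs to a modular variant of the construction rather than to this sketch.

#include <stdio.h>

/* Reflected n-ary Gray code (a sketch): digit i of the Gray word counts
   upward when the sum of the more significant base-n digits of t is even,
   and downward when it is odd.  Assumes k <= 32 digits. */
static void t_to_gray(unsigned t, unsigned base, unsigned k, unsigned *g) {
    unsigned digits[32];
    for (unsigned i = 0; i < k; i++) {   /* base-n digits, least significant first */
        digits[i] = t % base;
        t /= base;
    }
    unsigned sum = 0;                    /* running sum of more significant digits */
    for (int i = (int)k - 1; i >= 0; i--) {
        g[i] = (sum % 2 == 0) ? digits[i] : base - 1 - digits[i];
        sum += digits[i];
    }
}

int main(void) {
    unsigned g[2];
    for (unsigned t = 0; t < 9; t++) {   /* prints 00 01 02 12 11 10 20 21 22 */
        t_to_gray(t, 3, 2, g);
        printf("%u%u ", g[1], g[0]);
    }
    printf("\n");
    return 0;
}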
The ( n , k )-Gray code produced by the iterative algorithm referenced above is always cyclic; some algorithms, such as that by Guan, [ 63 ] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one. Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods. See also the skew binary number system , a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation. Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". [ 52 ] In balanced Gray codes , the numbers of changes in different coordinate positions are as close as possible. To make this more precise, let G be an R -ary complete Gray cycle having transition sequence ( δ_k ); the transition counts ( spectrum ) of G are the collection of integers defined by

λ_k = |{ j ∈ Z_{R^n} : δ_j = k }| , for k ∈ Z_n .

A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have λ_k = R^n / n for all k . Clearly, when R = 2 , such codes exist only if n is a power of 2. [ 64 ] If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either 2⌊2^n / (2n)⌋ or 2⌈2^n / (2n)⌉. [ 52 ] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. [ 65 ] For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced, [ 52 ] whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions: four positions have six transitions each, and one has eight. [ 52 ] We will now show a construction [ 66 ] and implementation [ 67 ] for well-balanced binary Gray codes which allows us to generate an n -digit balanced Gray code for every n . The main principle is to inductively construct an ( n + 2)-digit Gray code G′ given an n -digit Gray code G in such a way that the balanced property is preserved.
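The transition counts are easy to compute mechanically. The following C sketch measures the spectrum of the cyclic 4-bit BRGC and prints 8 4 2 2, illustrating the lack of uniformity that balanced codes are designed to remove (a uniformly balanced 4-bit code achieves 4 4 4 4, as stated above):

#include <stdio.h>

/* Compute the transition counts (spectrum) lambda_k of a cyclic code:
   how often bit k changes over one full cycle, including the wrap. */
int main(void) {
    enum { N = 4 };
    unsigned lambda[N] = { 0 };
    for (unsigned i = 0; i < (1u << N); i++) {
        unsigned j  = (i + 1) & ((1u << N) - 1);   /* next index, with wrap */
        unsigned g  = i ^ (i >> 1);                /* BRGC word i           */
        unsigned g2 = j ^ (j >> 1);                /* BRGC word i + 1       */
        unsigned diff = g ^ g2;
        for (unsigned k = 0; k < N; k++)
            if (diff & (1u << k))
                lambda[k]++;
    }
    for (unsigned k = 0; k < N; k++)   /* prints 8 4 2 2 for the 4-bit BRGC */
        printf("%u ", lambda[k]);
    printf("\n");
    return 0;
}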
To do this, we consider partitions of G = g_0 , …, g_{2^n−1} into an even number L of non-empty blocks of the form

{ g_0 }, { g_1 , …, g_{k_2} }, { g_{k_2+1} , …, g_{k_3} }, …, { g_{k_{L−2}+1} , …, g_{−2} }, { g_{−1} }

where k_1 = 0 , k_{L−1} = −2 , and k_L ≡ −1 (mod 2^n ), with indices taken modulo 2^n . This partition induces an ( n + 2)-digit Gray code G′ . If we define the transition multiplicities

m_i = |{ j : δ_{k_j} = i , 1 ≤ j ≤ L }|

to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the ( n + 2)-digit Gray code induced by this partition the transition spectrum λ′_i is

λ′_i = 4λ_i − 2m_i , if 0 ≤ i < n ; λ′_i = L , otherwise.

The delicate part of this construction is to find an adequate partitioning of a balanced n -digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit i transition and splitting another block at another digit i transition produces a different Gray code with exactly the same transition spectrum λ′_i , so one may for example [ 65 ] designate the first m_i transitions at digit i as those that fall between two blocks. Uniform codes can be found when R ≡ 0 (mod 4) and R^n ≡ 0 (mod n ), and this construction can be extended to the R -ary case as well. [ 66 ] Long run (or maximum gap ) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. [ 68 ] Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. [ 69 ] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one. We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube Q_n = ( V_n , E_n ) into levels of vertices that have equal weight, i.e. V_n(i) = { v ∈ V_n : v has weight i } for 0 ≤ i ≤ n . These levels satisfy |V_n(i)| = ( n choose i ). Let Q_n(i) be the subgraph of Q_n induced by V_n(i) ∪ V_n(i+1) , and let E_n(i) be the edges in Q_n(i) .
A monotonic Gray code is then a Hamiltonian path in Q_n such that whenever δ_1 ∈ E_n(i) comes before δ_2 ∈ E_n(j) in the path, then i ≤ j . An elegant construction of monotonic n -digit Gray codes for any n is based on the idea of recursively building subpaths P_{n,j} of length 2( n choose j ) having edges in E_n(j) . [ 69 ] We define P_{1,0} = ( 0 , 1 ), P_{n,j} = ∅ whenever j < 0 or j ≥ n , and P_{n+1,j} = 1P^{π_n}_{n,j−1} , 0P_{n,j} otherwise. Here, π_n is a suitably defined permutation and P^π refers to the path P with its coordinates permuted by π . These paths give rise to two monotonic n -digit Gray codes G_n^(1) and G_n^(2) given by

G_n^(1) = P_{n,0} P^R_{n,1} P_{n,2} P^R_{n,3} ⋯ and G_n^(2) = P^R_{n,0} P_{n,1} P^R_{n,2} P_{n,3} ⋯

The choice of π_n which ensures that these codes are indeed Gray codes turns out to be π_n = E^{−1}(π²_{n−1}) . The first few values of P_{n,j} can be computed directly from this recurrence. These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O ( n ) time. The algorithm is most easily described using coroutines . Monotonic codes have an interesting connection to the Lovász conjecture , which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph Q_{2n+1}(n) is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism ) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for n ≤ 15 , and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839 N , where N is the number of vertices in the middle-level subgraph. [ 70 ] Another type of Gray code, the Beckett–Gray code , is named for Irish playwright Samuel Beckett , who was interested in symmetry . His play " Quad " features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. [ 71 ] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code.
Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue , so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. [ 71 ] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth 's Art of Computer Programming . [ 13 ] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found. [ 72 ] Snake-in-the-box codes, or snakes , are the sequences of nodes of induced paths in an n -dimensional hypercube graph , and coil-in-the-box codes, [ 73 ] or coils , are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; [ 5 ] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension. Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding [ 74 ] [ 75 ] and refined by Hiltgen, Paterson and Brandestini in "Single-track Gray Codes" (1996). [ 76 ] [ 77 ] The STGC is a cyclic list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix , each column is a cyclic shift of the first column. [ 78 ] The name comes from their use with rotary encoders , where a number of tracks are being sensed by contacts, each producing an output of 0 or 1 . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts. If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC.
For many years, Torsten Sillke [ 79 ] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders. Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. [ 74 ] Although it is not possible to distinguish 2^n positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most 2^n − 2n positions and that for prime n the limit is 2^n − 2 positions. [ 80 ] The authors went on to generate a 504-position single-track code of length 9 which they believe is optimal. Since this number is larger than 2^8 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors. In the published STGC for P = 30 and n = 5, each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. [ 81 ] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size. The Gray code nature is useful (compared to chain codes , also called De Bruijn sequences ), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving. [ 82 ] Since this 30-position example appeared, there has been considerable interest in examples with higher angular resolution. In 2008, Gary Williams, [ 83 ] [ user-generated source? ] based on previous work, [ 80 ] discovered a 9-bit single-track Gray code that gives 1° resolution. This Gray code was used to design an actual device which was published on the site Thingiverse . This device [ 84 ] was designed by etzenseep (Florian Bauer) in September 2022. A corresponding STGC for P = 360 and n = 9 has also been published. Two-dimensional Gray codes are used in communication to minimize the number of bit errors between adjacent points in a quadrature amplitude modulation (QAM) constellation . In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits. [ 85 ] Two-dimensional Gray codes also have uses in location identification schemes, where the code would be applied to area maps such as a Mercator projection of the earth's surface and an appropriate cyclic two-dimensional distance function such as the Mannheim metric can be used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. [ 86 ] If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code will be an "excess Gray code". This code shows the property of counting backwards in those extracted bits if the original value is further increased. The reason is that Gray-encoded values do not overflow in the way classic binary-encoded values do when increased past the "highest" value.
Example: the highest 3-bit Gray code, 7, is encoded as (0)100. Adding 1 results in the number 8, encoded in Gray as 1100. The last 3 bits do not overflow but count backwards if the original 4-bit code is increased further. When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore pay attention to whether the sensor produces those multiple values encoded in a single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected. The bijective mapping { 0 ↔ 00 , 1 ↔ 01 , 2 ↔ 11 , 3 ↔ 10 } establishes an isometry between the metric space over Z_2^2 with the metric given by the Hamming distance and the metric space over the finite ring Z_4 (the usual modular arithmetic ) with the metric given by the Lee distance . The mapping is suitably extended to an isometry of the Hamming spaces Z_2^{2m} and Z_4^m . Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in Z_2^2 of ring-linear codes from Z_4 . [ 87 ] [ 88 ] There are a number of binary codes similar to Gray codes, and several binary-coded decimal (BCD) codes are Gray code variants as well.
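The isometry can be checked exhaustively in a few lines of C (a sketch; the array below is exactly the { 0 ↔ 00 , 1 ↔ 01 , 2 ↔ 11 , 3 ↔ 10 } Gray map, with bit pairs written as the integers 0, 1, 3, 2):

#include <stdio.h>
#include <stdlib.h>

/* The Gray map from Z4 to bit pairs: 0->00, 1->01, 2->11, 3->10. */
static const unsigned graymap[4] = { 0u, 1u, 3u, 2u };

/* Lee distance on Z4: the shorter way around the 4-cycle. */
static int lee(int a, int b) {
    int d = abs(a - b);
    return d < 4 - d ? d : 4 - d;
}

/* Hamming distance between two 2-bit words. */
static int hamming2(unsigned x, unsigned y) {
    unsigned d = x ^ y;
    return (int)(((d >> 1) & 1u) + (d & 1u));
}

int main(void) {
    for (int a = 0; a < 4; a++)
        for (int b = 0; b < 4; b++)
            if (lee(a, b) != hamming2(graymap[a], graymap[b]))
                printf("mismatch at (%d, %d)\n", a, b);
    printf("Lee distance on Z4 matches Hamming distance of the Gray-map images\n");
    return 0;
}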
https://en.wikipedia.org/wiki/O'Brien_code_I
The reflected binary code ( RBC ), also known as reflected binary ( RB ) or Gray code after Frank Gray , is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit). For example, the representation of the decimal value "1" in binary would normally be " 001 ", and "2" would be " 010 ". In Gray code, these values are represented as " 001 " and " 011 ". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two. Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. [ 3 ] Many devices indicate position by closing and opening switches. If that device uses natural binary codes , positions 3 and 4 are next to each other but all three bits of the binary representation differ: The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce , the transition might look like 011 — 001 — 101 — 100 . When the switches appear to be in position 001 , the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic , then the sequential system may store a false value. This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers , or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] single-distance , single-step , monostrophic [ 9 ] [ 10 ] [ 7 ] [ 8 ] or syncopic codes , [ 9 ] in reference to the Hamming distance of 1 between adjacent codes. In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code , or BRGC . Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. [ 11 ] [ 12 ] [ 13 ] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". [ 14 ] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process". In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off (... 11001100 ...); the next digit a pattern of 4 on, 4 off; the i -th least significant bit a pattern of 2 i on 2 i off. The most significant digit is an exception to this: for an n -bit Gray code, the most significant digit follows the pattern 2 n −1 on, 2 n −1 off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2 n −2 places. 
The four-bit version of this is shown below: For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ] In modern digital communications , Gray codes play an important role in error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise . Despite the fact that Stibitz described this code [ 11 ] [ 12 ] [ 13 ] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; [ 16 ] [ 17 ] one of those also lists "minimum error code" and "cyclic permutation code" among the names. [ 17 ] A 1954 patent application refers to "the Bell Telephone Gray code". [ 18 ] Other names include "cyclic binary code", [ 12 ] "cyclic progression code", [ 19 ] [ 12 ] "cyclic permuting binary" [ 20 ] or "cyclic permuted binary" (CPB). [ 21 ] [ 22 ] The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray . [ 13 ] [ 23 ] [ 24 ] [ 25 ] Reflected binary codes were applied to mathematical puzzles before they became known to engineers. The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle , a sequential mechanical puzzle mechanism described by the French Louis Gros in 1872. [ 26 ] [ 13 ] It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the French Édouard Lucas in 1883. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. [ 31 ] Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American . [ 32 ] The code also forms a Hamiltonian cycle on a hypercube , where each bit is seen as one dimension. When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to 5-unit code for his printing telegraph system, in 1875 [ 33 ] or 1876, [ 34 ] [ 35 ] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, [ 36 ] [ 37 ] [ 38 ] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. [ 13 ] This code became known as Baudot code [ 39 ] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. [ 40 ] [ 41 ] [ 38 ] About the same time, the German-Austrian Otto Schäffler [ de ] [ 42 ] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose, in 1874. [ 43 ] [ 13 ] Frank Gray , who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube -based apparatus. 
Filed in 1947, the method and apparatus were granted a patent in 1953, [ 14 ] and the name of Gray stuck to the codes. The " PCM tube " apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. [ 44 ] Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose. Gray codes are used in linear and rotary position encoders ( absolute encoders and quadrature encoders ) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others. For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals. Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking. In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position. Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms . [ 15 ] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties. Gray codes are also used in labelling the axes of Karnaugh maps since 1953 [ 45 ] [ 46 ] [ 47 ] as well as in Händler circle graphs since 1958, [ 48 ] [ 49 ] [ 50 ] [ 51 ] both graphical methods for logic circuit minimization . In modern digital communications , 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise . 
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies. If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves. A balanced Gray code can be constructed, [ 52 ] that flips every bit equally often. Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit. George R. Stibitz utilized a reflected binary code in a binary pulse counting device in 1941 already. [ 11 ] [ 12 ] [ 13 ] A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. [ 53 ] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used. Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, [ nb 1 ] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. 
[ 54 ] To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, [ 55 ] add one to it with a standard binary adder, and then convert the result back to Gray code. [ 56 ] Other methods of counting in Gray code are discussed in a report by Robert W. Doran , including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. [ 57 ] As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. [ 58 ] [ 59 ] The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0 , prefixing the entries in the reflected list with a binary 1 , and then concatenating the original list with the reversed list. [ 13 ] For example, generating the n = 3 list from the n = 2 list: The one-bit Gray code is G 1 = ( 0,1 ). This can be thought of as built recursively as above from a zero-bit Gray code G 0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating G n +1 from G n makes the following properties of the standard reflecting code clear: These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the n th Gray code is obtained by computing n ⊕ ⌊ n 2 ⌋ {\displaystyle n\oplus \left\lfloor {\tfrac {n}{2}}\right\rfloor } . Prepending a 0 bit leaves the order of the code words unchanged, prepending a 1 bit reverses the order of the code words. If the bits at position i {\displaystyle i} of codewords are inverted, the order of neighbouring blocks of 2 i {\displaystyle 2^{i}} codewords is reversed. For example, if bit 0 is inverted in a 3 bit codeword sequence, the order of two neighbouring codewords is reversed If bit 1 is inverted, blocks of 2 codewords change order: If bit 2 is inverted, blocks of 4 codewords reverse order: Thus, performing an exclusive or on a bit b i {\displaystyle b_{i}} at position i {\displaystyle i} with the bit b i + 1 {\displaystyle b_{i+1}} at position i + 1 {\displaystyle i+1} leaves the order of codewords intact if b i + 1 = 0 {\displaystyle b_{i+1}={\mathtt {0}}} , and reverses the order of blocks of 2 i + 1 {\displaystyle 2^{i+1}} codewords if b i + 1 = 1 {\displaystyle b_{i+1}={\mathtt {1}}} . Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code. A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit so it cannot be performed in parallel. 
Assuming g i {\displaystyle g_{i}} is the i {\displaystyle i} th Gray-coded bit ( g 0 {\displaystyle g_{0}} being the most significant bit), and b i {\displaystyle b_{i}} is the i {\displaystyle i} th binary-coded bit ( b 0 {\displaystyle b_{0}} being the most-significant bit), the reverse translation can be given recursively: b 0 = g 0 {\displaystyle b_{0}=g_{0}} , and b i = g i ⊕ b i − 1 {\displaystyle b_{i}=g_{i}\oplus b_{i-1}} . Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two. To construct the binary-reflected Gray code iteratively, at step 0 start with the c o d e 0 = 0 {\displaystyle \mathrm {code} _{0}={\mathtt {0}}} , and at step i > 0 {\displaystyle i>0} find the bit position of the least significant 1 in the binary representation of i {\displaystyle i} and flip the bit at that position in the previous code c o d e i − 1 {\displaystyle \mathrm {code} _{i-1}} to get the next code c o d e i {\displaystyle \mathrm {code} _{i}} . The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... [ nb 2 ] See find first set for efficient algorithms to compute these values. The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. [ 60 ] [ 55 ] [ nb 1 ] On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set . If MASK is the constant binary string of ones ended with a single zero digit, then carryless multiplication of MASK with the grey encoding of x will always give either x or its bitwise negation. In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word). It is possible to construct binary Gray codes with n bits with a length of less than 2 n , if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle. [ 61 ] OEIS sequence A290772 [ 62 ] gives the number of possible Gray sequences of length 2 n that include zero and use the minimum number of bits. 0 → 000 1 → 001 2 → 002 10 → 012 11 → 011 12 → 010 20 → 020 21 → 021 22 → 022 100 → 122 101 → 121 102 → 120 110 → 110 111 → 111 112 → 112 120 → 102 121 → 101 122 → 100 200 → 200 201 → 201 202 → 202 210 → 212 211 → 211 212 → 210 220 → 220 221 → 221 There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n -ary Gray code , also known as a non-Boolean Gray code . As the name implies, this type of Gray code uses non- Boolean values in its encodings. For example, a 3-ary ( ternary ) Gray code would use the values 0,1,2. [ 31 ] The ( n , k )- Gray code is the n -ary Gray code with k digits. [ 63 ] The sequence of elements in the (3, 2)-Gray code is: 00,01,02,12,11,10,20,21,22. The ( n , k )-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively . An algorithm to iteratively generate the ( N , k )-Gray code is presented (in C ): There are other Gray code algorithms for ( n , k )-Gray codes. 
The ( n , k )-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, [ 63 ] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one. Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods. See also Skew binary number system , a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation. Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". [ 52 ] In balanced Gray codes , the number of changes in different coordinate positions are as close as possible. To make this more precise, let G be an R -ary complete Gray cycle having transition sequence ( δ k ) {\displaystyle (\delta _{k})} ; the transition counts ( spectrum ) of G are the collection of integers defined by λ k = | { j ∈ Z R n : δ j = k } | , for k ∈ Z n {\displaystyle \lambda _{k}=|\{j\in \mathbb {Z} _{R^{n}}:\delta _{j}=k\}|\,,{\text{ for }}k\in \mathbb {Z} _{n}} A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have λ k = R n n {\displaystyle \lambda _{k}={\tfrac {R^{n}}{n}}} for all k . Clearly, when R = 2 {\displaystyle R=2} , such codes exist only if n is a power of 2. [ 64 ] If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either 2 ⌊ 2 n 2 n ⌋ {\displaystyle 2\left\lfloor {\tfrac {2^{n}}{2n}}\right\rfloor } or 2 ⌈ 2 n 2 n ⌉ {\displaystyle 2\left\lceil {\tfrac {2^{n}}{2n}}\right\rceil } . [ 52 ] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. [ 65 ] For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced: [ 52 ] whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight: [ 52 ] We will now show a construction [ 66 ] and implementation [ 67 ] for well-balanced binary Gray codes which allows us to generate an n -digit balanced Gray code for every n . The main principle is to inductively construct an ( n + 2)-digit Gray code G ′ {\displaystyle G'} given an n -digit Gray code G in such a way that the balanced property is preserved. 
We will now show a construction [66] and implementation [67] for well-balanced binary Gray codes which allows us to generate an n-digit balanced Gray code for every n. The main principle is to inductively construct an (n + 2)-digit Gray code $G'$ given an n-digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of $G = g_0, \ldots, g_{2^n - 1}$ into an even number L of non-empty blocks of the form

$$\{g_0\},\ \{g_1, \ldots, g_{k_2}\},\ \{g_{k_2+1}, \ldots, g_{k_3}\},\ \ldots,\ \{g_{k_{L-2}+1}, \ldots, g_{-2}\},\ \{g_{-1}\}$$

where $k_1 = 0$, $k_{L-1} = -2$, and $k_L \equiv -1 \pmod{2^n}$ (negative indices are taken modulo $2^n$, so that $g_{-1}$ denotes $g_{2^n-1}$). This partition induces an (n + 2)-digit Gray code $G'$. If we define the transition multiplicities

$$m_i = \left|\left\{ j : \delta_{k_j} = i,\ 1 \le j \le L \right\}\right|$$

to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the (n + 2)-digit Gray code induced by this partition the transition spectrum $\lambda'_i$ is

$$\lambda'_i = \begin{cases} 4\lambda_i - 2m_i, & \text{if } 0 \le i < n, \\ L, & \text{otherwise.} \end{cases}$$

The delicate part of this construction is to find an adequate partitioning of a balanced n-digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit-$i$ transition and splitting another block at another digit-$i$ transition produces a different Gray code with exactly the same transition spectrum $\lambda'_i$, so one may, for example, [65] designate the first $m_i$ transitions at digit $i$ as those that fall between two blocks. Uniform codes can be found when $R \equiv 0 \pmod{4}$ and $R^n \equiv 0 \pmod{n}$, and this construction can be extended to the R-ary case as well. [66]

Long run (or maximum gap) Gray codes maximize the distance between consecutive changes of digits in the same position; that is, the minimum run-length of any bit remains unchanged for as long as possible. [68]

Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. [69] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one. We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube $Q_n = (V_n, E_n)$ into levels of vertices that have equal weight, i.e.

$$V_n(i) = \{ v \in V_n : v \text{ has weight } i \}$$

for $0 \le i \le n$. These levels satisfy $|V_n(i)| = \binom{n}{i}$. Let $Q_n(i)$ be the subgraph of $Q_n$ induced by $V_n(i) \cup V_n(i+1)$, and let $E_n(i)$ be the edges in $Q_n(i)$.
A monotonic Gray code is then a Hamiltonian path in $Q_n$ such that whenever $\delta_1 \in E_n(i)$ comes before $\delta_2 \in E_n(j)$ in the path, then $i \le j$. An elegant construction of monotonic n-digit Gray codes for any n is based on the idea of recursively building subpaths $P_{n,j}$ of length $2\binom{n}{j}$ having edges in $E_n(j)$. [69] We define $P_{1,0} = (0, 1)$, $P_{n,j} = \emptyset$ whenever $j < 0$ or $j \ge n$, and

$$P_{n+1,j} = 1P_{n,j-1}^{\pi_n},\ 0P_{n,j}$$

otherwise. Here, $\pi_n$ is a suitably defined permutation and $P^{\pi}$ refers to the path P with its coordinates permuted by $\pi$. These paths give rise to two monotonic n-digit Gray codes $G_n^{(1)}$ and $G_n^{(2)}$ given by

$$G_n^{(1)} = P_{n,0} P_{n,1}^R P_{n,2} P_{n,3}^R \cdots \quad \text{and} \quad G_n^{(2)} = P_{n,0}^R P_{n,1} P_{n,2}^R P_{n,3} \cdots$$

The choice of $\pi_n$ which ensures that these codes are indeed Gray codes turns out to be $\pi_n = E^{-1}\left(\pi_{n-1}^2\right)$. These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O(n) time; the algorithm is most easily described using coroutines.

Monotonic codes have an interesting connection to the Lovász conjecture, which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph $Q_{2n+1}(n)$ is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism), and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for $n \le 15$, and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839N, where N is the number of vertices in the middle-level subgraph. [70]

Another type of Gray code, the Beckett–Gray code, is named for Irish playwright Samuel Beckett, who was interested in symmetry. His play "Quad" features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. [71] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code.
Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue , so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. [ 71 ] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth 's Art of Computer Programming . [ 13 ] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found. [ 72 ] Snake-in-the-box codes, or snakes , are the sequences of nodes of induced paths in an n -dimensional hypercube graph , and coil-in-the-box codes, [ 73 ] or coils , are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; [ 5 ] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension. Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding [ 74 ] [ 75 ] and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). [ 76 ] [ 77 ] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix , each column is a cyclic shift of the first column. [ 78 ] The name comes from their use with rotary encoders , where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1 . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts. If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC. 
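The defining matrix property of an STGC introduced above, that every column of the P × n codeword matrix is a cyclic shift of the first column, can be checked mechanically. The sketch below is illustrative only (the function name is_single_track is ours); it tests the property for a code supplied as an array of n-bit words, trying every cyclic offset for each column.

    #include <stdbool.h>

    // code[j] holds the j-th n-bit codeword; column k is the sequence of
    // bit k over all P words. Returns true if every column is a cyclic
    // shift of column 0.
    bool is_single_track(const unsigned *code, unsigned P, unsigned n)
    {
        for (unsigned k = 1; k < n; k++) {
            bool matches = false;
            for (unsigned s = 0; s < P && !matches; s++) {  // candidate offset
                matches = true;
                for (unsigned j = 0; j < P; j++) {
                    unsigned bit0 = code[(j + s) % P] & 1u;  // column 0, shifted by s
                    unsigned bitk = (code[j] >> k) & 1u;     // column k
                    if (bit0 != bitk) { matches = false; break; }
                }
            }
            if (!matches)
                return false;
        }
        return true;
    }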
For many years, Torsten Sillke [79] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders. Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. [74] Although it is not possible to distinguish $2^n$ positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most $2^n - 2n$ positions and that for prime n the limit is $2^n - 2$ positions. [80] The authors went on to generate a 504-position single-track code of length 9 which they believe is optimal. Since this number is larger than $2^8 = 256$, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors.

In an STGC for P = 30 and n = 5, each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. [81] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size. The Gray code nature is useful (compared to chain codes, also called De Bruijn sequences), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of the angular measurement that the device is capable of resolving. [82]

There has since been considerable interest in examples with higher angular resolution than this 30-position example provides. In 2008, Gary Williams, [83] based on previous work, [80] discovered a 9-bit single-track Gray code that gives a 1-degree resolution. This Gray code was used to design an actual device, which was published on the site Thingiverse. The device [84] was designed by etzenseep (Florian Bauer) in September 2022; it realizes an STGC with P = 360 and n = 9.

Two-dimensional Gray codes are used in communication to minimize the number of bit errors between adjacent points in the constellation of quadrature amplitude modulation (QAM). In a typical encoding, horizontally and vertically adjacent constellation points differ by a single bit, and diagonally adjacent points differ by 2 bits. [85] Two-dimensional Gray codes also have uses in location identification schemes, where the code would be applied to area maps such as a Mercator projection of the Earth's surface and an appropriate cyclic two-dimensional distance function such as the Mannheim metric used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. [86]

If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code is an "excess Gray code". This code counts backward in the extracted bits when the original value is increased further. The reason is that Gray-encoded values do not overflow, in the way classic binary encodings do, when incremented past the "highest" value.
Example: the highest 3-bit Gray code, 7, is encoded as (0)100. Adding 1 results in the number 8, encoded in Gray as 1100. The last 3 bits do not overflow but count backward as the original 4-bit code is increased further. When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore pay attention to whether the sensor produces those multiple values encoded in a single Gray code or as separate ones, as otherwise the values might appear to be counting backward when an "overflow" is expected.

The bijective mapping {0 ↔ 00, 1 ↔ 01, 2 ↔ 11, 3 ↔ 10} establishes an isometry between the metric space over $\mathbb{Z}_2^2$ with the metric given by the Hamming distance and the metric space over the finite ring $\mathbb{Z}_4$ (the usual modular arithmetic) with the metric given by the Lee distance. The mapping is suitably extended to an isometry of the Hamming spaces $\mathbb{Z}_2^{2m}$ and $\mathbb{Z}_4^m$. Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in $\mathbb{Z}_2^2$ of ring-linear codes from $\mathbb{Z}_4$. [87][88]

There are a number of binary codes similar to Gray codes, and several binary-coded decimal (BCD) codes are Gray code variants as well.
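Two of the properties discussed above lend themselves to short demonstrations. First, a sketch of the excess-code behaviour (the helper names bin_to_gray and print_bits are ours): as a 4-bit value is incremented past 7, the low 3 bits of its Gray code run backward through the 3-bit Gray sequence instead of overflowing.

    #include <stdio.h>

    static unsigned bin_to_gray(unsigned x) { return x ^ (x >> 1); }

    static void print_bits(unsigned x, unsigned n)
    {
        for (unsigned i = n; i-- > 0; )
            putchar((x >> i) & 1u ? '1' : '0');
    }

    int main(void)
    {
        // Low 3 bits of the 4-bit Gray code count backward past value 7.
        for (unsigned v = 6; v <= 11; v++) {
            unsigned g = bin_to_gray(v);
            printf("value %2u  gray ", v);
            print_bits(g, 4);
            printf("  low 3 bits ");
            print_bits(g & 7u, 3);
            putchar('\n');
        }
        return 0;
    }

Second, the claim that the Gray map {0 ↔ 00, 1 ↔ 01, 2 ↔ 11, 3 ↔ 10} is an isometry can be verified exhaustively; the sketch below (the function names lee and hamming2 are ours) checks that the Lee distance between any two $\mathbb{Z}_4$ symbols equals the Hamming distance between their images.

    #include <stdio.h>

    // Gray map: 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10 (pairs packed as 2-bit values).
    static const unsigned gray_map[4] = {0u, 1u, 3u, 2u};

    // Lee distance on Z4: min(|a - b|, 4 - |a - b|).
    static unsigned lee(unsigned a, unsigned b)
    {
        unsigned d = (a > b) ? a - b : b - a;
        return (d < 4 - d) ? d : 4 - d;
    }

    // Hamming distance between two 2-bit values.
    static unsigned hamming2(unsigned x, unsigned y)
    {
        unsigned d = x ^ y;
        return (d & 1u) + ((d >> 1) & 1u);
    }

    int main(void)
    {
        for (unsigned a = 0; a < 4; a++)
            for (unsigned b = 0; b < 4; b++)
                if (lee(a, b) != hamming2(gray_map[a], gray_map[b]))
                    printf("mismatch at (%u,%u)\n", a, b);
        printf("checked all 16 pairs\n");
        return 0;
    }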
https://en.wikipedia.org/wiki/O'Brien_code_II
The O'Connell effect is an asymmetry in the photometric light curve of certain close eclipsing binary stars. It is named after the astronomer Daniel Joseph Kelly O'Connell, SJ, [1] of Riverview College in New South Wales, who studied the phenomenon in 1951 and distinguished it from the so-called periastron effect described by earlier authors, as it does not necessarily appear near the periastron, when tidal effects and an increase in mutual radiation may cause an increase in luminosity. [2] The out-of-eclipse brightness maxima of some binary stars are unequally high. This is contrary to the expectation that the observed luminosity of an eclipsing binary should be the same when its components switch positions every half period. The maximum following the primary minimum is nearly always brighter than the preceding one; this is called the positive O'Connell effect, and the reverse case is referred to as the negative O'Connell effect. The difference increases with the ellipticity of the stars and with the differences in their sizes and densities. [3] Spectral differences have also been observed between subsequent maxima. [4] In some systems where the phenomenon has been observed, such as CG Cygni, RT Lacertae, XY Ursae Majoris, or YY Eridani, the luminosity difference between subsequent maxima has been found to be variable; in others it is relatively stable. Furthermore, the effect has been observed in a variety of configurations: over-contact, semi-detached, and near-contact systems alike. These factors make an explanation difficult and suggest that various mechanisms may be responsible for the effect. Several causes have thus been proposed: an asymmetric distribution of starspots, impacts of one-way gas streams between the components of the binary system, or the flow of circumstellar matter asymmetrically deflected by Coriolis forces. [5] The O'Connell effect has been observed, among others, in the binary systems W Crucis, [2] RT Lacertae, [1] CX Canis Majoris, TU Crucis, AQ Monocerotis, DQ Velorum, [6] and CG Cygni. [7]
https://en.wikipedia.org/wiki/O'Connell_effect
Diphenyl ether is the organic compound with the formula (C6H5)2O. It is a colorless, low-melting solid. This, the simplest diaryl ether, has a variety of niche applications. [5] Diphenyl ether was discovered by Heinrich Limpricht and Karl List in 1855, when they reproduced Carl Ettling's destructive distillation of copper benzoate and separated it from the low-melting oily distillate components ignored by previous researchers. They named the compound phenyl oxide (German: Phenyloxyd) and studied some of its derivatives. [6] It is now synthesized by a modification of the Williamson ether synthesis, the reaction of phenol and bromobenzene in the presence of base and a catalytic amount of copper. Via similar chemistry, diphenyl ether forms as a significant side product in the high-pressure hydrolysis of chlorobenzene during the production of phenol. [7] Related compounds are prepared by Ullmann reactions. [8] The compound undergoes reactions typical of other phenyl rings, including hydroxylation, nitration, halogenation, sulfonation, and Friedel–Crafts alkylation or acylation. [5]

The main application of diphenyl ether is as a eutectic mixture with biphenyl, used as a heat transfer fluid. Such a mixture is well suited to heat transfer applications because of the relatively large temperature range of its liquid state. The eutectic mixture (commercially, Dowtherm A) is 73.5% diphenyl ether and 26.5% biphenyl. [9][10] Diphenyl ether is a starting material in the production of phenoxathiin via the Ferrario reaction. [11] Phenoxathiin is used in polyamide and polyimide production. [12] Because of its odor reminiscent of scented geranium, as well as its stability and low price, diphenyl ether is used widely in soap perfumes. Diphenyl ether is also used as a processing aid in the production of polyesters. [5] Its diaryl ether core also occurs in the important thyroid hormone triiodothyronine (T3).

Several polybrominated diphenyl ethers (PBDEs) are useful flame retardants. Of penta-, octa-, and decaBDE, the three most common PBDEs, only decaBDE remains in widespread use, the other two having been banned in the European Union in 2003. [13] DecaBDE, also known as decabromodiphenyl oxide, [14] is a high-volume industrial chemical with over 450,000 kilograms produced annually in the United States. Decabromodiphenyl oxide is sold under the trade name Saytex 102 as a flame retardant in the manufacture of paints and reinforced plastics.
https://en.wikipedia.org/wiki/O(C6H5)2
In mathematics, an O*-algebra is an algebra of possibly unbounded operators defined on a dense subspace of a Hilbert space. The original examples were described by Borchers (1962) and Uhlmann (1962), who studied some examples of O*-algebras, called Borchers algebras, arising from the Wightman axioms of quantum field theory. Powers (1971) and Lassner (1972) began the systematic study of algebras of unbounded operators.
https://en.wikipedia.org/wiki/O*-algebra
An O-acylpseudotropine is any derivative of pseudotropine in which the alcohol group is substituted with an acyl group. Acylpseudotropines are formed by the action of the enzyme pseudotropine acyltransferase on pseudotropine. [1][2][3][4]
https://en.wikipedia.org/wiki/O-Acylpseudotropine
o-Cresolphthalein is a phthalein dye used as a pH indicator in titrations. It is insoluble in water but soluble in ethanol. Its solution is colourless below pH 8.2 and purple above 9.8. Its molecular formula is C22H18O4. It is used medically to determine calcium levels in the human body, and to synthesize polyamides or polyimides. o-Cresolphthalein is not produced industrially; rather, it is commercially available. [1] It can be prepared by the method generally used to synthesize phthalein dyes, the same route used for phenolphthalein and thymolphthalein: two molar equivalents of a phenol or a substituted phenol are combined with one molar equivalent of a phthalic anhydride. [1]

The compound has uses ranging from medicine to laboratory syntheses of chemically similar compounds. o-Cresolphthalein has been used to derive polyamides and polyimides, to colorimetrically estimate calcium in serum, and to predict the amount of time to wait before blood collection after a patient receives gadodiamide. Aromatic polyamides and polyimides are practical compounds due to their temperature resistance, their electrical or insulating characteristics, and their mechanical strength. Some of the polyamides and polyimides that can be synthesized from o-cresolphthalein are polycarbonate, polyacrylate, and epoxy resin. [2]

The dinitro diether 3,3-bis[4-(4-nitrophenoxy)-3-methylphenyl]phthalide, or BNMP, is synthesized from 12 g o-cresolphthalein, 11.5 g p-chloronitrobenzene, 5.1 g anhydrous potassium carbonate, and 55 mL of DMF. The mixture should be refluxed at 160 °C for eight hours. Once the reaction is done and has cooled, it should be mixed with 0.3 L methanol. A precipitate forms and should be vacuum filtered to obtain a solid, which is washed with water and dried, yielding a yellow product. Recrystallization from glacial acetic acid yields yellow needles of BNMP. The reaction can be taken further by combining 15.5 g of BNMP with 0.18 g 10% Pd/C and 50 mL ethanol and stirring at 80 °C. 7 mL of hydrazine monohydrate should be added dropwise over one hour, and the solution then mixed for eight hours. It should then be filtered to separate it from the Pd/C and concentrated. The concentrated solution should be added to water, and the precipitate that forms should be vacuum filtered to isolate the solid, yielding 3,3-bis[4-(4-aminophenoxy)-3-methylphenyl]phthalide, or BAMP, as a white solid. It should then be purified with water and ethanol. [2]

Calcium in a blood sample should be estimated when medically required. Calcium is precipitated out of 0.1 mL of the blood serum as calcium oxalate, and the calcium oxalate is then decomposed by heat. The sample is then estimated colorimetrically with o-cresolphthalein complexone. The required liquid complexone is made by dissolving 10 mg o-cresolphthalein complexone in 50 mL alkaline borate, after which 50 mL of 0.05 N HCl are added to bring the solution's pH to 8.5. This method for calcium determination is efficient and effective, requiring a minimal amount of blood serum sample and a reasonable amount of time. [3]

Gadolinium is given to patients as a contrast agent for magnetic resonance imaging (MRI) to improve the clarity of the images formed. However, it can react in the human body and have detrimental effects, so the agent should be removed.
One of these gadolinium-based agents is gadodiamide. Calcium in the body should be determined accurately to ensure that gadodiamide does not have adverse effects on the patient. There are two o-cresolphthalein methods to determine the amount of calcium; they are effective because o-cresolphthalein is a calcium-binding dye. The gadolinium ion, with a charge of +3, can be removed from gadodiamide using o-cresolphthalein. For these methods, the glomerular filtration rate (GFR) and the time since gadodiamide was given should be recorded. Ultimately, these two factors and the impact of gadodiamide on calcium levels calculated by the o-cresolphthalein method help to reveal the amount of time that patients must wait after receiving gadodiamide before having blood drawn again, in order to avoid pseudohypocalcemia. [4] According to the NFPA diamond given in the Safety Data Sheet (SDS) from Fisher Scientific, [5] there is minimal risk in handling the chemical.
https://en.wikipedia.org/wiki/O-Cresolphthalein
O-GlcNAc (short for O-linked GlcNAc or O-linked β-N-acetylglucosamine) is a reversible enzymatic post-translational modification that is found on serine and threonine residues of nucleocytoplasmic proteins. The modification is characterized by a β-glycosidic bond between the hydroxyl group of serine or threonine side chains and N-acetylglucosamine (GlcNAc). O-GlcNAc differs from other forms of protein glycosylation: (i) O-GlcNAc is not elongated or modified to form more complex glycan structures, (ii) O-GlcNAc is almost exclusively found on nuclear and cytoplasmic proteins rather than membrane proteins and secretory proteins, and (iii) O-GlcNAc is a highly dynamic modification that turns over more rapidly than the proteins it modifies. O-GlcNAc is conserved across metazoans. [1]

Due to the dynamic nature of O-GlcNAc and its presence on serine and threonine residues, O-GlcNAcylation is similar to protein phosphorylation in some respects. While there are roughly 500 kinases and 150 phosphatases that regulate protein phosphorylation in humans, there are only two enzymes that regulate the cycling of O-GlcNAc: O-GlcNAc transferase (OGT) and O-GlcNAcase (OGA) catalyze the addition and removal of O-GlcNAc, respectively. [2] OGT utilizes UDP-GlcNAc as the donor sugar for sugar transfer. [3]

First reported in 1984, this post-translational modification has since been identified on over 9,000 proteins in H. sapiens. [4][5] Numerous functional roles for O-GlcNAcylation have been reported, including crosstalk with serine/threonine phosphorylation, regulating protein-protein interactions, altering protein structure or enzyme activity, changing protein subcellular localization, and modulating protein stability and degradation. [1][6] Numerous components of the cell's transcription machinery have been identified as being modified by O-GlcNAc, and many studies have reported links between O-GlcNAc, transcription, and epigenetics. [7][8] Many other cellular processes are influenced by O-GlcNAc, such as apoptosis, the cell cycle, and stress responses. [9] As UDP-GlcNAc is the final product of the hexosamine biosynthetic pathway, which integrates amino acid, carbohydrate, fatty acid, and nucleotide metabolism, it has been suggested that O-GlcNAc acts as a "nutrient sensor" and responds to the cell's metabolic status. [10] Dysregulation of O-GlcNAc has been implicated in many pathologies including Alzheimer's disease, cancer, diabetes, and neurodegenerative disorders. [11][12]

In 1984, the Hart lab was probing for terminal GlcNAc residues on the surfaces of thymocytes and lymphocytes. Bovine milk β-1,4-galactosyltransferase, which reacts with terminal GlcNAc residues, was used to perform radiolabeling with UDP-[3H]galactose. β-Elimination of serine and threonine residues demonstrated that most of the [3H]galactose was attached to proteins O-glycosidically; chromatography revealed that the major β-elimination product was Galβ1-4GlcNAcitol. Insensitivity to peptide N-glycosidase treatment provided additional evidence for O-linked GlcNAc. Permeabilizing cells with detergent prior to radiolabeling greatly increased the amount of [3H]galactose incorporated into Galβ1-4GlcNAcitol, leading the authors to conclude that most of the O-linked GlcNAc monosaccharide residues were intracellular. [13] O-GlcNAc is generally a dynamic modification that can be cycled on and off various proteins.
Some residues are thought to be constitutively modified by O-GlcNAc. [14][15] The O-GlcNAc modification is installed by OGT in a sequential bi-bi mechanism where the donor sugar, UDP-GlcNAc, binds to OGT first, followed by the substrate protein. [16] The O-GlcNAc modification is removed by OGA in a hydrolysis mechanism involving anchimeric assistance (substrate-assisted catalysis) to yield the unmodified protein and GlcNAc. [17] While crystal structures have been reported for both OGT [16] and OGA, [18][19][20] the exact mechanisms by which OGT and OGA recognize substrates have not been completely elucidated. Unlike N-linked glycosylation, for which glycosylation occurs at a specific consensus sequence (Asn-X-Ser/Thr, where X is any amino acid except Pro), no definitive consensus sequence has been identified for O-GlcNAc. Consequently, predicting sites of O-GlcNAc modification is challenging, and identifying modification sites generally requires mass spectrometry methods. For OGT, studies have shown that substrate recognition is regulated by a number of factors including aspartate [21] and asparagine [22] ladder motifs in the lumen of the superhelical TPR domain, active site residues, [23] and adaptor proteins. [24] As crystal structures have shown that OGT requires its substrate to be in an extended conformation, it has been proposed that OGT has a preference for flexible substrates. [23] In in vitro kinetic experiments measuring OGT and OGA activity on a panel of protein substrates, kinetic parameters for OGT varied between proteins, while kinetic parameters for OGA were relatively constant. This result suggested that OGT is the "senior partner" in regulating O-GlcNAc and that OGA primarily recognizes substrates via the presence of O-GlcNAc rather than the identity of the modified protein. [14]

Several methods exist to detect the presence of O-GlcNAc and characterize the specific residues modified. Wheat germ agglutinin, a plant lectin, is able to recognize terminal GlcNAc residues and is thus often used for detection of O-GlcNAc. This lectin has been applied in lectin affinity chromatography for the enrichment and detection of O-GlcNAc. [25] Pan-O-GlcNAc antibodies that recognize the O-GlcNAc modification largely irrespective of the modified protein's identity are commonly used. These include RL2, [26] an IgG antibody raised against O-GlcNAcylated nuclear pore complex proteins, and CTD110.6, [27] an IgM antibody raised against an immunogenic peptide with a single serine O-GlcNAc modification. Other O-GlcNAc-specific antibodies have been reported and demonstrated to have some dependence on the identity of the modified protein. [28]

Many metabolic chemical reporters have been developed to identify O-GlcNAc. Metabolic chemical reporters are generally sugar analogues that bear an additional chemical moiety allowing for additional reactivity. For example, peracetylated GlcNAz (Ac4GlcNAz) is a cell-permeable azido sugar that is de-esterified intracellularly by esterases to GlcNAz and converted to UDP-GlcNAz in the hexosamine salvage pathway. UDP-GlcNAz can be utilized as a sugar donor by OGT to yield the O-GlcNAz modification. [29] The presence of the azido sugar can then be visualized via alkyne-containing bioorthogonal chemical probes in an azide-alkyne cycloaddition reaction.
These probes can incorporate easily identifiable tags such as the FLAG peptide, biotin, and dye molecules. [29][30] Mass tags based on polyethylene glycol (PEG) have also been used to measure O-GlcNAc stoichiometry. Conjugation of 5 kDa PEG molecules leads to a mass shift for modified proteins: more heavily O-GlcNAcylated proteins carry more PEG molecules and thus migrate more slowly in gel electrophoresis. [31] Other metabolic chemical reporters bearing azides or alkynes (generally at the 2 or 6 positions) have been reported. [32] Instead of GlcNAc analogues, GalNAc analogues may be used as well, as UDP-GalNAc is in equilibrium with UDP-GlcNAc in cells due to the action of UDP-galactose-4'-epimerase (GALE). Ac4GalNAz shows enhanced labeling of O-GlcNAc versus Ac4GlcNAz, possibly due to a bottleneck in UDP-GlcNAc pyrophosphorylase processing of GlcNAz-1-P to UDP-GlcNAz. [33] Ac3GlcN-β-Ala-NBD-α-1-P(Ac-SATE)2, a metabolic chemical reporter that is processed intracellularly to a fluorophore-labeled UDP-GlcNAc analogue, has been shown to achieve one-step fluorescent labeling of O-GlcNAc in live cells. [34]

Metabolic labeling may also be used to identify binding partners of O-GlcNAcylated proteins. The N-acetyl group may be elongated to incorporate a diazirine moiety. Treatment of cells with peracetylated, phosphate-protected Ac3GlcNDAz-1-P(Ac-SATE)2 leads to modification of proteins with O-GlcNDAz. UV irradiation then induces photocrosslinking between proteins bearing the O-GlcNDAz modification and interacting proteins. [35] Some issues have been identified with various metabolic chemical reporters, e.g., their use may inhibit the hexosamine biosynthetic pathway, [32] they may not be recognized by OGA and therefore are not able to capture O-GlcNAc cycling, [36] or they may be incorporated into glycosylation modifications besides O-GlcNAc, as seen in secreted proteins. [37] Metabolic chemical reporters with chemical handles at the N-acetyl position may also label acetylated proteins, as the acetyl group may be hydrolyzed into acetate analogues that can be utilized for protein acetylation. [38] Additionally, per-O-acetylated monosaccharides have been identified to react with cysteines, leading to artificial S-glycosylation via an elimination-addition mechanism. [39][40][41] Next-generation metabolic chemical reporters have been developed to overcome this off-target reactivity. [42][43][44][45]

Chemoenzymatic labeling provides an alternative strategy to incorporate handles for click chemistry. The Click-IT O-GlcNAc Enzymatic Labeling System, developed by the Hsieh-Wilson group and subsequently commercialized by Invitrogen, utilizes a mutant GalT Y289L enzyme that is able to transfer the azido sugar GalNAz onto O-GlcNAc. [30][46] The presence of GalNAz (and therefore also O-GlcNAc) can be detected with various alkyne-containing probes bearing identifiable tags such as biotin, [46] dye molecules, [30] and PEG. [31]

An engineered protein biosensor has been developed that can detect changes in O-GlcNAc levels using Förster resonance energy transfer. This sensor consists of four components linked together in the following order: cyan fluorescent protein (CFP), an O-GlcNAc binding domain (based on GafD, a lectin specific for terminal β-O-GlcNAc), a CKII peptide that is a known OGT substrate, and yellow fluorescent protein (YFP).
Upon O-GlcNAcylation of the CKII peptide, the GafD domain binds the O-GlcNAc moiety, bringing the CFP and YFP domains into close proximity and generating a FRET signal. Generation of this signal is reversible and can be used to monitor O-GlcNAc dynamics in response to various treatments. This sensor may be genetically encoded and used in cells. [47] Addition of a localization sequence allows for targeting of this O-GlcNAc sensor to the nucleus, cytoplasm, or plasma membrane. [48]

Biochemical approaches such as Western blotting may provide supporting evidence that a protein is modified by O-GlcNAc; mass spectrometry (MS) is able to provide definitive evidence as to the presence of O-GlcNAc. Glycoproteomic studies applying MS have contributed to the identification of proteins modified by O-GlcNAc. As O-GlcNAc is substoichiometric and ion suppression occurs in the presence of unmodified peptides, an enrichment step is usually performed prior to mass spectrometry analysis. This may be accomplished using lectins, antibodies, or chemical tagging. The O-GlcNAc modification is labile under collision-induced fragmentation methods such as collision-induced dissociation (CID) and higher-energy collisional dissociation (HCD), so these methods in isolation are not readily applicable for O-GlcNAc site mapping. HCD generates fragment ions characteristic of N-acetylhexosamines that can be used to determine O-GlcNAcylation status. [49] In order to facilitate site mapping with HCD, β-elimination followed by Michael addition with dithiothreitol (BEMAD) may be used to convert the labile O-GlcNAc modification into a more stable mass tag. For BEMAD mapping of O-GlcNAc, the sample must be treated with phosphatase, as otherwise other serine/threonine post-translational modifications such as phosphorylation may be detected. [50] Electron-transfer dissociation (ETD) is used for site mapping, as ETD causes peptide backbone cleavage while leaving post-translational modifications such as O-GlcNAc intact. [51]

Traditional proteomic studies perform tandem MS on the most abundant species in the full-scan mass spectra, prohibiting full characterization of lower-abundance species. One modern strategy for targeted proteomics uses isotopic labels, e.g., dibromide, to tag O-GlcNAcylated proteins. This method allows for algorithmic detection of low-abundance species, which are then sequenced by tandem MS. [52] Directed tandem MS and targeted glycopeptide assignment allow for identification of O-GlcNAcylated peptide sequences. One example probe, used in the isotope-targeted glycoproteomics (IsoTaG) method, consists of a biotin affinity tag, an acid-cleavable silane, an isotopic recoding motif, and an alkyne. [53][54][55] Unambiguous site mapping is possible for peptides with only one serine/threonine residue. [56] Other methodologies have been developed for quantitative profiling of O-GlcNAc using differential isotopic labeling. [57] Example probes generally consist of a biotin affinity tag, a cleavable linker (acid- or photo-cleavable), a heavy or light isotopic tag, and an alkyne. [58][59] O-GlcNAc modification has also recently been reported on tyrosine residues, though these represent roughly 5% of all O-GlcNAc modifications. [60]

Various chemical and genetic strategies have been developed to manipulate O-GlcNAc, both on a proteome-wide basis and on specific proteins.
Small molecule inhibitors have been reported for both OGT [ 61 ] [ 62 ] and OGA [ 63 ] [ 64 ] that function in cells or in vivo . OGT inhibitors result in a global decrease of O -GlcNAc while OGA inhibitors result in a global increase of O -GlcNAc; these inhibitors are not able to modulate O -GlcNAc on specific proteins. Inhibition of the hexosamine biosynthetic pathway is also able to decrease O -GlcNAc levels. For instance, glutamine analogues azaserine and 6-diazo-5-oxo-L-norleucine (DON) can inhibit GFAT , though these molecules may also non-specifically affect other pathways. [ 65 ] Expressed protein ligation has been used to prepare O -GlcNAc-modified proteins in a site-specific manner. Methods exist for solid-phase peptide synthesis incorporation of GlcNAc-modified serine, threonine, or cysteine. [ 66 ] [ 67 ] Site-directed mutagenesis of O -GlcNAc-modified serine or threonine residues to alanine may be used to evaluate the function of O -GlcNAc at specific residues. As alanine's side chain is a methyl group and is thus not able to act as an O -GlcNAc site, this mutation effectively permanently removes O -GlcNAc at a specific residue. While serine/threonine phosphorylation may be modeled by mutagenesis to aspartate or glutamate , which have negatively charged carboxylate side chains, none of the 20 canonical amino acids sufficiently recapitulate the properties of O -GlcNAc. [ 68 ] Mutagenesis to tryptophan has been used to mimic the steric bulk of O -GlcNAc, though tryptophan is much more hydrophobic than O -GlcNAc. [ 69 ] [ 70 ] Mutagenesis may also perturb other post-translational modifications, e.g., if a serine is alternatively phosphorylated or O -GlcNAcylated, alanine mutagenesis permanently eliminates the possibilities of both phosphorylation and O -GlcNAcylation. Mass spectrometry identified S -GlcNAc as a post-translational modification found on cysteine residues. In vitro experiments demonstrated that OGT could catalyze the formation of S -GlcNAc and that OGA is incapable of hydrolyzing S -GlcNAc. [ 71 ] Though a previous report suggested that OGA is capable of hydrolyzing thioglycosides, this was only demonstrated on the aryl thioglycoside para -nitrophenol- S -GlcNAc; para -nitrothiophenol is a more activated leaving group than a cysteine residue. [ 72 ] Recent studies have supported the use of S -GlcNAc as an enzymatically stable structural model of O -GlcNAc that can be incorporated through solid-phase peptide synthesis or site-directed mutagenesis. [ 73 ] [ 68 ] [ 66 ] [ 74 ] Fusion constructs of a nanobody and TPR-truncated OGT allow for proximity-induced protein-specific O -GlcNAcylation in cells. The nanobody may be directed towards protein tags, e.g., GFP , that are fused to the target protein, or the nanobody may be directed towards endogenous proteins. For example, a nanobody recognizing a C-terminal EPEA sequence can direct OGT enzymatic activity to α-synuclein . [ 75 ] Apoptosis, a form of controlled cell death, has been suggested to be regulated by O -GlcNAc. In various cancers, elevated O -GlcNAc levels have been reported to suppress apoptosis. [ 76 ] [ 77 ] Caspase-3 , caspase-8 , and caspase-9 have been reported to be modified by O -GlcNAc. Caspase-8 is modified near its cleavage/activation sites; O -GlcNAc modification may block caspase-8 cleavage and activation by steric hindrance. Pharmacological lowering of O -GlcNAc with 5 S -GlcNAc accelerated caspase activation while pharmacological raising of O -GlcNAc with thiamet-G inhibited caspase activation. 
[70] The proteins that regulate epigenetics are often categorized as writers, readers, and erasers, i.e., enzymes that install epigenetic modifications, proteins that recognize these modifications, and enzymes that remove these modifications. [78] To date, O-GlcNAc has been identified on writer and eraser enzymes. O-GlcNAc is found in multiple locations on EZH2, the catalytic methyltransferase subunit of PRC2, and is thought to stabilize EZH2 prior to PRC2 complex formation and to regulate di- and tri-methyltransferase activity. [79][80] All three members of the ten-eleven translocation (TET) family of dioxygenases (TET1, TET2, and TET3) are known to be modified by O-GlcNAc. [81] O-GlcNAc has been suggested to cause nuclear export of TET3, reducing its enzymatic activity by depleting it from the nucleus. [82] O-GlcNAcylation of HDAC1 is associated with elevated activating phosphorylation of HDAC1. [83]

Histone proteins, the primary protein component of chromatin, have been reported to be modified by O-GlcNAc, [8][84][85] though other studies have not been able to detect histone O-GlcNAc. [86][87] The presence of O-GlcNAc on histones has been suggested to affect gene transcription as well as other histone marks such as acetylation [8] and monoubiquitination. [85] TET2 has been reported to interact with the TPR domain of OGT and facilitate recruitment of OGT to histones. [88] Phosphorylation of OGT T444 via AMPK has been found to inhibit the OGT-chromatin association and downregulate H2B S112 O-GlcNAc. [89]

The hexosamine biosynthetic pathway's product, UDP-GlcNAc, is utilized by OGT to catalyze the addition of O-GlcNAc. This pathway integrates information about the concentrations of various metabolites, including amino acids, carbohydrates, fatty acids, and nucleotides. Consequently, UDP-GlcNAc levels are sensitive to cellular metabolite levels. OGT activity is in part regulated by UDP-GlcNAc concentration, making a link between cellular nutrient status and O-GlcNAc. [90] Glucose deprivation causes a decline in UDP-GlcNAc levels and an initial decline in O-GlcNAc, but counterintuitively, O-GlcNAc is later significantly upregulated. This later increase has been shown to be dependent on AMPK and p38 MAPK activation, and this effect is partially due to increases in OGT mRNA and protein levels. [91] It has also been suggested that this effect is dependent on calcium and CaMKII. [92] Activated p38 is able to recruit OGT to specific protein targets, including neurofilament H; O-GlcNAc modification of neurofilament H enhances its solubility. [91] During glucose deprivation, glycogen synthase is modified by O-GlcNAc, which inhibits its activity. [93]

NRF2, a transcription factor associated with the cellular response to oxidative stress, has been found to be indirectly regulated by O-GlcNAc. KEAP1, an adaptor protein for the cullin 3-dependent E3 ubiquitin ligase complex, mediates the degradation of NRF2; oxidative stress leads to conformational changes in KEAP1 that repress degradation of NRF2. O-GlcNAc modification of KEAP1 at S104 is required for efficient ubiquitination and subsequent degradation of NRF2, linking O-GlcNAc to oxidative stress. Glucose deprivation leads to a reduction in O-GlcNAc and reduces NRF2 degradation. Cells expressing a KEAP1 S104A mutant are resistant to erastin-induced ferroptosis, consistent with higher NRF2 levels upon removal of S104 O-GlcNAc.
[94] Elevated O-GlcNAc levels have been associated with diminished synthesis of hepatic glutathione, an important cellular antioxidant. Acetaminophen overdose leads to accumulation of the strongly oxidizing metabolite NAPQI in the liver, which is detoxified by glutathione. In mice, OGT knockout has a protective effect against acetaminophen-induced liver injury, while OGA inhibition with thiamet-G exacerbates acetaminophen-induced liver injury. [95]

O-GlcNAc has been found to slow protein aggregation, though the generality of this phenomenon is unknown. Solid-phase peptide synthesis was used to prepare full-length α-synuclein with an O-GlcNAc modification at T72. Thioflavin T aggregation assays and transmission electron microscopy demonstrated that this modified α-synuclein does not readily form aggregates. [67] Treatment of JNPL3 tau transgenic mice with an OGA inhibitor was shown to increase microtubule-associated protein tau O-GlcNAcylation. Immunohistochemistry analysis of the brainstem revealed decreased formation of neurofibrillary tangles. Recombinant O-GlcNAcylated tau was shown to aggregate more slowly than unmodified tau in an in vitro thioflavin S aggregation assay. Similar results were obtained for a recombinantly prepared O-GlcNAcylated TAB1 construct versus its unmodified form. [96]

Many known phosphorylation sites and O-GlcNAcylation sites are near each other or overlapping. [56] As protein O-GlcNAcylation and phosphorylation both occur on serine and threonine residues, these post-translational modifications can regulate each other. For example, in CKIIα, S347 O-GlcNAc has been shown to antagonize T344 phosphorylation. [66] Reciprocal inhibition, i.e., phosphorylation inhibiting O-GlcNAcylation and O-GlcNAcylation inhibiting phosphorylation, has been observed on other proteins including murine estrogen receptor β, [97] RNA Pol II, [98] tau, [99] p53, [100] CaMKIV, [101] p65, [102] β-catenin, [103] and α-synuclein. [67] Positive cooperativity has also been observed between these two post-translational modifications, i.e., phosphorylation inducing O-GlcNAcylation or O-GlcNAcylation inducing phosphorylation. This has been demonstrated on MeCP2 [31] and HDAC1. [83] In other proteins, e.g., cofilin, phosphorylation and O-GlcNAcylation appear to occur independently of each other. [104] In some cases, therapeutic strategies are under investigation to modulate O-GlcNAcylation so as to have a downstream effect on phosphorylation. For instance, elevating tau O-GlcNAcylation may offer therapeutic benefit by inhibiting pathological tau hyperphosphorylation. [105] Besides phosphorylation, O-GlcNAc has been found to influence other post-translational modifications such as lysine acetylation [102] and monoubiquitination. [85]

Protein kinases are the enzymes responsible for phosphorylation of serine and threonine residues. [106] O-GlcNAc has been identified on over 100 kinases (roughly 20% of the human kinome), and this modification is often associated with alterations in kinase activity or kinase substrate scope. [107] O-GlcNAc may have diverse functional consequences on kinases, such as interfering with ATP binding, [101] altering substrate recognition, [66] or regulating other PTMs on kinases. [108][109] Complex cross-talk relations can also exist where OGT and a kinase, e.g., AMPK, modify each other. [110][89][111][112] Protein phosphatase 1 subunits PP1β and PP1γ have been shown to form functional complexes with OGT.
A synthetic phosphopeptide was able to be dephosphorylated and O-GlcNAcylated by an OGT immunoprecipitate. This complex has been referred to as a "yin-yang complex" as it replaces a phosphate modification with an O-GlcNAc modification. [113] PP1γ also exists in a heterotrimer with OGT and URI under high-glucose conditions. [114] MYPT1 is another protein phosphatase subunit that forms complexes with OGT and is itself O-GlcNAcylated. MYPT1 appears to have a role in directing OGT towards specific substrates. [115]

O-GlcNAcylation of a protein can alter its interactome. As O-GlcNAc is highly hydrophilic, its presence may disrupt hydrophobic protein-protein interactions. For example, O-GlcNAc disrupts Sp1 interaction with TAF II 110, [116] and O-GlcNAc disrupts CREB interaction with TAF II 130 and CRTC. [117][118] Some studies have also identified instances where protein-protein interactions are induced by O-GlcNAc. Metabolic labeling with the diazirine-containing O-GlcNDAz has been applied to identify protein-protein interactions induced by O-GlcNAc. [35] Using a bait glycopeptide based roughly on a consensus sequence for O-GlcNAc, α-enolase, EBP1, and 14-3-3 were identified as potential O-GlcNAc readers. X-ray crystallography showed that 14-3-3 recognized O-GlcNAc through an amphipathic groove that also binds phosphorylated ligands. [119] Hsp70 has also been proposed to act as a lectin to recognize O-GlcNAc. [120] It has been suggested that O-GlcNAc plays a role in the interaction of α-catenin and β-catenin. [103]

Co-translational O-GlcNAc has been identified on Sp1 and Nup62. This modification suppresses co-translational ubiquitination and thus protects nascent polypeptides from proteasomal degradation. Similar protective effects of O-GlcNAc on full-length Sp1 have been observed. It is unknown whether this pattern is universal or only applicable to specific proteins. [15] Protein phosphorylation is often used as a mark for subsequent degradation. Tumor suppressor protein p53 is targeted for proteasomal degradation via COP9 signalosome-mediated phosphorylation of T155. O-GlcNAcylation of p53 S149 has been associated with decreased T155 phosphorylation and protection of p53 from degradation. [100] β-Catenin O-GlcNAcylation competes with T41 phosphorylation, which signals β-catenin for degradation, thereby stabilizing the protein. [103]

O-GlcNAcylation of the Rpt2 ATPase subunit of the 26S proteasome has been shown to inhibit proteasome activity. Testing various peptide sequences revealed that this modification slows proteasomal degradation of hydrophobic peptides; degradation of hydrophilic peptides does not appear to be affected. [121] This modification has been shown to suppress other pathways that activate the proteasome, such as Rpt6 phosphorylation by cAMP-dependent protein kinase. [122] OGA-S localizes to lipid droplets and has been proposed to locally activate the proteasome to promote remodeling of lipid droplet surface proteins. [123]

Various cellular stress stimuli have been associated with changes in O-GlcNAc. Treatment with hydrogen peroxide, cobalt(II) chloride, UVB light, ethanol, sodium chloride, heat shock, or sodium arsenite results in elevated O-GlcNAc. Knockout of OGT sensitizes cells to thermal stress. Elevated O-GlcNAc has been associated with expression of Hsp40 and Hsp70. [124] Pathological protein aggregation is a major hallmark of multiple neurodegenerative diseases.
[125][126][127] O-GlcNAc on various proteins has been found to play roles in suppressing protein aggregation, motivating clinical efforts to inhibit OGA and elevate cellular O-GlcNAc levels. This strategy is being evaluated by companies for Alzheimer's disease, Parkinson's disease, progressive supranuclear palsy, and amyotrophic lateral sclerosis (ALS). [128][129][130][131][132][133][134][135][136] Multiple companies have advanced OGA inhibitors into the clinic, including Alectos Therapeutics, Asceneuron, Biogen, Eli Lilly, and Merck.

Numerous studies have identified aberrant phosphorylation of tau as a hallmark of Alzheimer's disease. [137] O-GlcNAcylation of bovine tau was first characterized in 1996. [138] A subsequent report in 2004 demonstrated that human brain tau is also modified by O-GlcNAc. O-GlcNAcylation of tau was demonstrated to regulate tau phosphorylation, with hyperphosphorylation of tau observed in the brains of mice lacking OGT, [139] a state associated with the formation of neurofibrillary tangles. Analysis of brain samples showed that protein O-GlcNAcylation is compromised in Alzheimer's disease, and paired helical filament-tau was not recognized by traditional O-GlcNAc detection methods, suggesting that pathological tau has impaired O-GlcNAcylation relative to tau isolated from control brain samples. Elevating tau O-GlcNAcylation was proposed as a therapeutic strategy for reducing tau phosphorylation. [99] To test this therapeutic hypothesis, a selective and blood-brain barrier-permeable OGA inhibitor, thiamet-G, was developed. Thiamet-G treatment was able to increase tau O-GlcNAcylation and suppress tau phosphorylation in cell culture and in vivo in healthy Sprague-Dawley rats. [64] A subsequent study showed that thiamet-G treatment also increased tau O-GlcNAcylation in a JNPL3 tau transgenic mouse model. In this model, tau phosphorylation was not significantly affected by thiamet-G treatment, though decreased numbers of neurofibrillary tangles and slower motor neuron loss were observed. Additionally, O-GlcNAcylation of tau was noted to slow tau aggregation in vitro. [96] OGA inhibition with MK-8719 is being investigated in clinical trials as a potential treatment strategy for Alzheimer's disease and other tauopathies, including progressive supranuclear palsy. [105][140][141] Parkinson's disease is associated with aggregation of α-synuclein. [142] As O-GlcNAc modification of α-synuclein has been found to inhibit its aggregation, elevating α-synuclein O-GlcNAc is being explored as a therapeutic strategy to treat Parkinson's disease. [67][143]

Dysregulation of O-GlcNAc is associated with cancer cell proliferation and tumor growth. O-GlcNAcylation of the glycolytic enzyme PFK1 at S529 has been found to inhibit PFK1 enzymatic activity, reducing glycolytic flux and redirecting glucose towards the pentose phosphate pathway. Structural modeling and biochemical experiments suggested that O-GlcNAc at S529 would inhibit PFK1 allosteric activation by fructose 2,6-bisphosphate and oligomerization into active forms. In a mouse model, mice injected with cells expressing the PFK1 S529A mutant showed lower tumor growth than mice injected with cells expressing wild-type PFK1. Additionally, OGT overexpression enhanced tumor growth in the latter system but had no significant effect on the system with mutant PFK1.
Hypoxia induces PFK1 S529 O-GlcNAc and increases flux through the pentose phosphate pathway to generate more NADPH, which maintains glutathione levels and detoxifies reactive oxygen species , imparting a growth advantage to cancer cells. PFK1 was found to be glycosylated in human breast and lung tumor tissues. [ 144 ] OGT has also been reported to positively regulate HIF-1α . HIF-1α is normally degraded under normoxic conditions by prolyl hydroxylases that utilize α-ketoglutarate as a co-substrate. OGT suppresses α-ketoglutarate levels, protecting HIF-1α from pVHL-mediated proteasomal degradation and promoting aerobic glycolysis . In contrast with the previous study on PFK1, this study found that elevating OGT or O-GlcNAc upregulated PFK1, though the two studies are consistent in finding that O-GlcNAc levels are positively associated with flux through the pentose phosphate pathway. This study also found that decreasing O-GlcNAc selectively killed cancer cells via ER stress -induced apoptosis. [ 76 ] Human pancreatic ductal adenocarcinoma (PDAC) cell lines have higher O-GlcNAc levels than human pancreatic duct epithelial (HPDE) cells. PDAC cells have some dependency upon O-GlcNAc for survival, as OGT knockdown selectively inhibited PDAC cell proliferation (OGT knockdown did not significantly affect HPDE cell proliferation), and inhibition of OGT with 5S-GlcNAc showed the same result. Hyper-O-GlcNAcylation in PDAC cells appeared to be anti-apoptotic, inhibiting cleavage and activation of caspase-3 and caspase-9 . Numerous sites on the p65 subunit of NF-κB were found to be modified by O-GlcNAc in a dynamic manner; O-GlcNAc at p65 T305 and S319 in turn positively regulates other modifications associated with NF-κB activation, such as p300 -mediated K310 acetylation and IKK -mediated S536 phosphorylation. These results suggested that NF-κB is constitutively activated by O-GlcNAc in pancreatic cancer. [ 77 ] [ 102 ] OGT stabilization of EZH2 in various breast cancer cell lines has been found to inhibit expression of tumor suppressor genes. [ 79 ] In hepatocellular carcinoma models, O-GlcNAc is associated with activating phosphorylation of HDAC1, which in turn regulates expression of the cell cycle regulator p21 Waf1/Cip1 and the cell motility regulator E-cadherin . [ 83 ] OGT has been found to stabilize SREBP-1 and activate lipogenesis in breast cancer cell lines. This stabilization was dependent on the proteasome and AMPK. OGT knockdown resulted in decreased nuclear SREBP-1, but proteasomal inhibition with MG132 blocked this effect. OGT knockdown also increased the interaction between SREBP-1 and the E3 ubiquitin ligase FBW7. AMPK is activated by T172 phosphorylation upon OGT knockdown, and AMPK phosphorylates SREBP-1 S372 to inhibit its cleavage and maturation. OGT knockdown had a diminished effect on SREBP-1 levels in AMPK-null cell lines. In a mouse model, OGT knockdown inhibited tumor growth, but SREBP-1 overexpression partly rescued this effect. [ 112 ] These results contrast with those of a previous study, which found that OGT knockdown/inhibition inhibited AMPK T172 phosphorylation and increased lipogenesis. [ 89 ] In breast and prostate cancer cell lines, high levels of OGT and O-GlcNAc have been associated, both in vitro and in vivo , with processes associated with disease progression, e.g., angiogenesis , invasion , and metastasis .
OGT knockdown or inhibition was found to downregulate the transcription factor FoxM1 and upregulate the cell-cycle inhibitor p27 Kip1 (which is regulated by FoxM1-dependent expression of the E3 ubiquitin ligase component Skp2 ), causing G1 cell cycle arrest. This appeared to be dependent on proteasomal degradation of FoxM1, as expression of a FoxM1 mutant lacking a degron rescued the effects of OGT knockdown. FoxM1 was found not to be directly modified by O-GlcNAc, suggesting that hyper-O-GlcNAcylation of FoxM1 regulators impairs FoxM1 degradation. Targeting OGT also lowered levels of FoxM1-regulated proteins associated with cancer invasion and metastasis ( MMP-2 and MMP-9 ) and with angiogenesis ( VEGF ). [ 145 ] [ 146 ] O-GlcNAc modification of cofilin S108 has also been reported to be important for breast cancer cell invasion by regulating cofilin subcellular localization in invadopodia . [ 104 ] Dysregulation of O-GlcNAc has been associated with diabetes and associated diabetic complications. In general, elevated O-GlcNAc is associated with an insulin-resistance phenotype. [ 147 ] [ 148 ] [ 149 ] [ 150 ] [ 151 ] [ 152 ] [ 153 ] [ 154 ] [ 155 ] [ 156 ] [ 157 ] [ 158 ] [ 159 ] Pancreatic β cells synthesize and secrete insulin to regulate blood glucose levels. One study found that inhibition of OGA with streptozotocin followed by glucosamine treatment resulted in O-GlcNAc accumulation and apoptosis in β cells; [ 160 ] a subsequent study showed that a galactose-based analogue of streptozotocin was unable to inhibit OGA but still resulted in apoptosis, suggesting that the apoptotic effects of streptozotocin are not directly due to OGA inhibition. [ 161 ] O-GlcNAc has been suggested to attenuate insulin signaling . In 3T3-L1 adipocytes , OGA inhibition with PUGNAc inhibited insulin-mediated glucose uptake. PUGNAc treatment also inhibited insulin-stimulated Akt T308 phosphorylation and downstream GSK3β S9 phosphorylation. [ 162 ] In a later study, insulin stimulation of COS-7 cells caused OGT to localize to the plasma membrane. Inhibition of PI3K with wortmannin reversed this effect, suggesting dependence on phosphatidylinositol (3,4,5)-trisphosphate . Increasing O-GlcNAc levels by subjecting cells to high glucose conditions or PUGNAc treatment inhibited insulin-stimulated phosphorylation of Akt T308 and Akt activity. IRS1 phosphorylation at S307 and S632/S635, which is associated with attenuated insulin signaling, was enhanced. Subsequent experiments in mice with adenoviral delivery of OGT showed that OGT overexpression negatively regulated insulin signaling in vivo . Many components of the insulin signaling pathway, including β-catenin , [ 162 ] IR-β , IRS1, Akt, PDK1 , and the p110α subunit of PI3K, were found to be directly modified by O-GlcNAc. [ 163 ] Insulin signaling has also been reported to lead to OGT tyrosine phosphorylation and OGT activation, resulting in increased O-GlcNAc levels. [ 164 ] As PUGNAc also inhibits lysosomal β-hexosaminidases , the OGA-selective inhibitor NButGT was developed to further probe the relationship between O-GlcNAc and insulin signaling in 3T3-L1 adipocytes. This study also found that PUGNAc impaired insulin signaling, but NButGT did not, as measured by changes in phosphorylation of Akt T308, suggesting that the effects observed with PUGNAc may be due to off-target effects other than OGA inhibition.
[ 165 ] Treatment of macrophages with lipopolysaccharide (LPS) , a major component of the Gram-negative bacterial outer membrane, results in elevated O-GlcNAc in cellular and mouse models. During infection, cytosolic OGT was de-S-nitrosylated and activated. Suppressing O-GlcNAc with DON inhibited the O-GlcNAcylation and nuclear translocation of NF-κB, as well as the downstream induction of inducible nitric oxide synthase and IL-1β production. DON treatment also improved cell survival during LPS treatment. [ 166 ] O-GlcNAc has been implicated in influenza A virus (IAV) -induced cytokine storm . Specifically, O-GlcNAcylation of S430 on interferon regulatory factor-5 (IRF5) has been shown to promote its interaction with TNF receptor-associated factor 6 (TRAF6) in cellular and mouse models. TRAF6 mediates K63-linked ubiquitination of IRF5, which is necessary for IRF5 activity and subsequent cytokine production. Analysis of clinical samples showed that blood glucose levels were elevated in IAV-infected patients compared to healthy individuals. In IAV-infected patients, blood glucose levels positively correlated with IL-6 and IL-8 levels. O-GlcNAcylation of IRF5 was also relatively higher in peripheral blood mononuclear cells of IAV-infected patients. [ 167 ] Peptide therapeutics are attractive for their high specificity and potency, but they often have poor pharmacokinetic profiles due to their degradation by serum proteases . [ 168 ] Though O-GlcNAc is generally associated with intracellular proteins, it has been found that engineered peptide therapeutics modified by O-GlcNAc have enhanced serum stability in a mouse model and have similar structure and activity compared to the corresponding unmodified peptides. This method has been applied to engineer GLP-1 and PTH peptides. [ 169 ]
https://en.wikipedia.org/wiki/O-GlcNAc
The nitrite ion has the chemical formula NO − 2 . Nitrite (mostly sodium nitrite ) is widely used throughout the chemical and pharmaceutical industries. [ 1 ] The nitrite anion is a pervasive intermediate in the nitrogen cycle in nature. The name nitrite also refers to organic compounds having the –ONO group, which are esters of nitrous acid . Sodium nitrite is made industrially by passing a mixture of nitrogen oxides into aqueous sodium hydroxide or sodium carbonate solution: [ 2 ] [ 1 ] NO + NO 2 + 2 NaOH → 2 NaNO 2 + H 2 O The product is purified by recrystallization. Alkali metal nitrites are thermally stable up to and beyond their melting point (441 °C for KNO 2 ). Ammonium nitrite can be made from dinitrogen trioxide , N 2 O 3 , which is formally the anhydride of nitrous acid: N 2 O 3 + 2 NH 3 + H 2 O → 2 NH 4 NO 2 The nitrite ion has a symmetrical structure (C 2v symmetry ), with both N–O bonds having equal length and a bond angle of about 115°. In valence bond theory , it is described as a resonance hybrid with equal contributions from two canonical forms that are mirror images of each other. In molecular orbital theory , there is a sigma bond between each oxygen atom and the nitrogen atom, and a delocalized pi bond made from the p orbitals on the nitrogen and oxygen atoms which is perpendicular to the plane of the molecule. The negative charge of the ion is equally distributed on the two oxygen atoms. Both the nitrogen and the oxygen atoms carry a lone pair of electrons; therefore, the nitrite ion is a Lewis base . In the gas phase it exists predominantly as a trans -planar molecule. Nitrite is the conjugate base of the weak acid nitrous acid : HNO 2 ⇌ H + + NO − 2 Nitrous acid is also highly unstable, tending to disproportionate : 3 HNO 2 → HNO 3 + 2 NO + H 2 O This reaction is slow at 0 °C. [ 2 ] Addition of acid to a solution of a nitrite in the presence of a reducing agent , such as iron(II), is a way to make nitric oxide (NO) in the laboratory. The formal oxidation state of the nitrogen atom in nitrite is +3, meaning that it can be either oxidized to oxidation states +4 and +5, or reduced to oxidation states as low as −3. Standard reduction potentials for reactions directly involving nitrous acid are shown in the table below: [ 4 ] The data can be extended to include products in lower oxidation states. For example: Oxidation reactions usually result in the formation of the nitrate ion, with nitrogen in oxidation state +5. For example, oxidation with permanganate ion can be used for the quantitative analysis of nitrite (by titration ): 5 NO − 2 + 2 MnO − 4 + 6 H + → 5 NO − 3 + 2 Mn 2+ + 3 H 2 O The products of reduction reactions with the nitrite ion vary depending on the reducing agent used and its strength. With sulfur dioxide , the products are NO and N 2 O; with tin(II) (Sn 2+ ) the product is hyponitrous acid (H 2 N 2 O 2 ); reduction all the way to ammonia (NH 3 ) occurs with hydrogen sulfide . With the hydrazinium cation ( N 2 H + 5 ) the product of nitrite reduction is hydrazoic acid (HN 3 ), an unstable and explosive compound: HNO 2 + N 2 H + 5 → HN 3 + H 2 O + H 3 O + which can also further react with nitrite: HNO 2 + HN 3 → N 2 O + N 2 + H 2 O This reaction is unusual in that it involves compounds with nitrogen in four different oxidation states. [ 2 ] Nitrite is detected and analyzed by the Griess reaction , involving the formation of a deep red azo dye upon treatment of a NO − 2 -containing sample with sulfanilic acid and naphthyl-1-amine in the presence of acid. [ 5 ]
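As a worked illustration of the permanganate titration above, the 5:2 nitrite-to-permanganate stoichiometry converts a titre directly into an amount of nitrite. The sketch below uses hypothetical volumes and concentrations, not values from this article:

```python
# Worked stoichiometry for the permanganate titration of nitrite:
#   5 NO2- + 2 MnO4- + 6 H+ -> 5 NO3- + 2 Mn2+ + 3 H2O
# The titre volume and molarity below are hypothetical, for illustration only.
v_titrant_l = 20.0e-3      # L of KMnO4 solution delivered at the endpoint
c_titrant = 0.020          # mol/L KMnO4

n_mno4 = v_titrant_l * c_titrant   # mol permanganate consumed
n_no2 = n_mno4 * 5.0 / 2.0         # 5 mol nitrite react per 2 mol permanganate
print(f"nitrite found: {n_no2:.2e} mol")  # 1.00e-03 mol
```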
Nitrite is an ambidentate ligand and can form a wide variety of coordination complexes by binding to metal ions in several ways. [ 2 ] Two examples are the red nitrito complex [Co(NH 3 ) 5 (ONO)] 2+ and the yellow nitro complex [Co(NH 3 ) 5 (NO 2 )] 2+ ; the nitrito complex is metastable and isomerizes to the nitro complex. Nitrite is processed by several enzymes, all of which utilize coordination complexes. In nitrification , ammonium is converted to nitrite; important species include Nitrosomonas . Other bacterial species, such as Nitrobacter , are responsible for the oxidation of nitrite into nitrate. Nitrite can be reduced to nitric oxide or ammonia by many species of bacteria. Under hypoxic conditions, nitrite may release nitric oxide, which causes potent vasodilation . Several mechanisms for nitrite conversion to NO have been described, including enzymatic reduction by xanthine oxidoreductase , nitrite reductase , and NO synthase (NOS), as well as nonenzymatic acidic disproportionation reactions. Azo dyes and other colorants are prepared by the process called diazotization , which requires nitrite. [ 1 ] The addition of nitrites and nitrates to processed meats such as ham, bacon, and sausages speeds up the curing of the meat and also imparts an attractive colour. [ 8 ] The academic and industrial consensus is that nitrite also reduces the growth and toxin production of Clostridium botulinum . [ 9 ] [ 10 ] [ 11 ] On the other hand, a 2018 study by the British Meat Producers Association determined that legally permitted levels of nitrite do not affect the growth of C. botulinum . [ 12 ] In the U.S., meat cannot be labeled as "cured" without the addition of nitrite. [ 13 ] [ 14 ] [ 15 ] In some countries, cured-meat products are manufactured without nitrate or nitrite, and without nitrite from vegetable sources. Parma ham , produced without nitrite since 1993, was reported in 2018 to have caused no cases of botulism; this is because the interior of the muscle is sterile while the surface is exposed to oxygen. [ 8 ] Other manufacturing processes do not assure these conditions, and there a reduction of nitrite can result in toxin production. [ 16 ] In mice, food rich in nitrites together with unsaturated fats can prevent hypertension by forming nitro fatty acids that inhibit soluble epoxide hydrolase , which is one explanation for the apparent health effects of the Mediterranean diet . [ 17 ] Adding nitrites to meat has been shown to generate known carcinogens ; the World Health Organization (WHO) advises that eating 50 g (1.8 oz) of nitrite-processed meat a day would raise the risk of getting bowel cancer by 18% over a lifetime. [ 8 ] The maximum limits recommended by the World Health Organization for drinking water are 3 mg L −1 for nitrite ions and 50 mg L −1 for nitrate ions. [ 18 ] Ingesting too much nitrite and/or nitrate through well water is suspected to cause methemoglobinemia . [ 19 ] 95% of the nitrite ingested in modern diets comes from bacterial conversion of nitrates naturally found in vegetables. [ 20 ] However, potentially cancer-causing nitroso compounds are not made in the pH-neutral colon; they are mostly made in the acidic stomach. [ 21 ] [ 22 ] Nitrite reacts with the meat's myoglobin by attaching to the heme iron atom, forming reddish-brown nitrosomyoglobin and, upon cooking, the characteristic pink "fresh" color of nitrosohemochrome or nitrosyl-heme. [ 23 ] In the US, nitrite has been formally used since 1925. According to scientists working for the industry group American Meat Institute , this use of nitrite started in the Middle Ages.
[ 24 ] Historians and epidemiologists argue that the widespread use of nitrite in meat-curing is closely linked to the development of industrial meat-processing. [ 25 ] [ 26 ] French investigative journalist Guillaume Coudray asserts that the meat industry chooses to cure its meats with nitrite even though it is established that this chemical gives rise to cancer-causing nitroso compounds. [ 27 ] Some traditional and artisanal producers avoid nitrites. Addition of ascorbic acid , erythorbic acid , or one of their salts enhances the binding of nitrite to the iron atom in myoglobin. [ 23 ] These chemicals also reduce the formation of nitrosamine in the stomach, but only when the fat content of a meal is less than 10%; beyond that, they instead increase the formation of nitrosamine. [ 28 ] [ 29 ] Nitrites in the form of sodium nitrite and amyl nitrite are components of many cyanide antidote kits. [ 30 ] Both of these compounds bind to hemoglobin and oxidize its Fe 2+ ions to Fe 3+ ions, forming methemoglobin . Methemoglobin, in turn, binds cyanide (CN), creating cyanmethemoglobin and effectively removing cyanide from complex IV of the electron transport chain (ETC) in mitochondria , which is the primary site of disruption caused by cyanide. Another mechanism by which nitrites help treat cyanide toxicity is the generation of nitric oxide (NO). NO displaces CN from cytochrome c oxidase (ETC complex IV), making it available for methemoglobin to bind. [ 31 ] In organic chemistry , alkyl nitrites are esters of nitrous acid and contain the nitrosoxy functional group, whereas nitro compounds contain the C–NO 2 group. Nitrites have the general formula RONO, where R is an aryl or alkyl group. Amyl nitrite and other alkyl nitrites have a vasodilating action and must be handled in the laboratory with caution. They are sometimes used in medicine for the treatment of heart diseases. A classic named reaction for the synthesis of alkyl nitrites is the Meyer synthesis, [ 32 ] [ 33 ] in which alkyl halides react with metal nitrites to give a mixture of nitroalkanes and nitrites. Nitrite salts can react with secondary amines to produce N-nitrosamines , which are suspected of causing stomach cancer . The World Health Organization (WHO) advises that each 50 g (1.8 oz) of processed meat eaten a day would raise the risk of getting bowel cancer by 18% over a lifetime; processed meat refers to meat that has been transformed through fermentation, nitrite curing, salting, smoking, or other processes to enhance flavor or improve preservation. The World Health Organization's review of more than 400 studies concluded in 2015 that there was sufficient evidence that processed meats cause cancer, particularly colon cancer; the WHO's International Agency for Research on Cancer (IARC) classified processed meats as carcinogenic to humans ( Group 1 ). [ 8 ] [ 34 ] Ingested nitrite, under conditions that result in endogenous nitrosation , specifically the production of nitrosamine , has been classified as probably carcinogenic to humans ( Group 2A ) by the IARC. [ 35 ]
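For scale, the WHO's 18% figure above is a relative increase, so its absolute impact depends on the baseline lifetime risk. A minimal sketch, assuming an illustrative 6% baseline (an assumption, not a figure from this article):

```python
# Convert the WHO's 18% *relative* increase in lifetime bowel-cancer risk
# into absolute terms. The 6% baseline lifetime risk is an assumed
# illustrative value, not a figure from this article.
baseline_risk = 0.06
relative_increase = 0.18
risk_with_daily_processed_meat = baseline_risk * (1 + relative_increase)
print(f"{baseline_risk:.1%} -> {risk_with_daily_processed_meat:.1%}")  # 6.0% -> 7.1%
```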
https://en.wikipedia.org/wiki/O-N=O
Ozone ( / ˈ oʊ z oʊ n / ) (or trioxygen ) is an inorganic molecule with the chemical formula O 3 . It is a pale blue gas with a distinctively pungent smell. It is an allotrope of oxygen that is much less stable than the diatomic allotrope O 2 , breaking down in the lower atmosphere to O 2 ( dioxygen ). Ozone is formed from dioxygen by the action of ultraviolet (UV) light and electrical discharges within the Earth's atmosphere . It is present in very low concentrations throughout the atmosphere, with its highest concentration high in the ozone layer of the stratosphere , which absorbs most of the Sun 's ultraviolet (UV) radiation. Ozone's odor is reminiscent of chlorine , and detectable by many people at concentrations of as little as 0.1 ppm in air. Ozone's O 3 structure was determined in 1865. The molecule was later proven to have a bent structure and to be weakly diamagnetic . At standard temperature and pressure , ozone is a pale blue gas that condenses at cryogenic temperatures to a dark blue liquid and finally a violet-black solid . Ozone's instability with regard to more common dioxygen is such that both concentrated gas and liquid ozone may decompose explosively at elevated temperatures, physical shock, or fast warming to the boiling point. [ 5 ] [ 6 ] It is therefore used commercially only in low concentrations. Ozone is a powerful oxidizing agent (far more so than dioxygen) and has many industrial and consumer applications related to oxidation. This same high oxidizing potential, however, causes ozone to damage mucous and respiratory tissues in animals, and also tissues in plants, above concentrations of about 0.1 ppm . While this makes ozone a potent respiratory hazard and pollutant near ground level , a higher concentration in the ozone layer (from two to eight ppm) is beneficial, preventing damaging UV light from reaching the Earth's surface. The trivial name ozone is the most commonly used and preferred IUPAC name . The systematic names 2λ 4 -trioxidiene and catena-trioxygen , valid IUPAC names, are constructed according to the substitutive and additive nomenclatures , respectively. The name ozone derives from ozein (ὄζειν), the Greek neuter present participle for smell, [ 7 ] referring to ozone's distinctive smell. In appropriate contexts, ozone can be viewed as trioxidane with two hydrogen atoms removed, and as such, trioxidanylidene may be used as a systematic name, according to substitutive nomenclature. By default, these names pay no regard to the radicality of the ozone molecule. In an even more specific context, this can also name the non-radical singlet ground state, whereas the diradical state is named trioxidanediyl . Trioxidanediyl (or ozonide ) is used, non-systematically, to refer to the substituent group (-OOO-). Care should be taken to avoid confusing the name of the group for the context-specific name for the ozone given above. In 1785, Dutch chemist Martinus van Marum was conducting experiments involving electrical sparking above water when he noticed an unusual smell, which he attributed to the electrical reactions, failing to realize that he had in fact produced ozone. [ 8 ] [ 9 ] A half century later, Christian Friedrich Schönbein noticed the same pungent odour and recognized it as the smell often following a bolt of lightning . In 1839, he succeeded in isolating the gaseous chemical and named it "ozone", from the Greek word ozein ( ὄζειν ) meaning "to smell".
[ 10 ] [ 11 ] For this reason, Schönbein is generally credited with the discovery of ozone. [ 12 ] [ 13 ] [ 14 ] [ 8 ] He also noted the similarity of the ozone smell to the smell of phosphorus, and in 1844 proved that the product of the reaction of white phosphorus with air is identical. [ 10 ] He ridiculed a subsequent effort to call ozone "electrified oxygen" by proposing to call the ozone from white phosphorus "phosphorized oxygen". [ 10 ] The chemical formula for ozone, O 3 , was not determined until 1865, by Jacques-Louis Soret , [ 15 ] and was confirmed by Schönbein in 1867. [ 10 ] [ 16 ] For much of the second half of the 19th century and well into the 20th, ozone was considered a healthy component of the environment by naturalists and health-seekers. Beaumont, California , had as its official slogan "Beaumont: Zone of Ozone", as evidenced on postcards and Chamber of Commerce letterhead. [ 17 ] Naturalists working outdoors often considered the higher elevations beneficial because of their ozone content, which was readily monitored. [ 18 ] "There is quite a different atmosphere [at higher elevation] with enough ozone to sustain the necessary energy [to work]", wrote naturalist Henry Henshaw , working in Hawaii. [ 19 ] Seaside air was considered to be healthy because of its believed ozone content; the smell giving rise to this belief is in fact that of halogenated seaweed metabolites [ 20 ] and dimethyl sulfide . [ 21 ] Much of ozone's appeal seems to have resulted from its "fresh" smell, which evoked associations with purifying properties. Scientists, however, noted its harmful effects. In 1873 James Dewar and John Gray McKendrick documented that frogs grew sluggish, birds gasped for breath, and rabbits' blood showed decreased levels of oxygen after exposure to "ozonized air", which "exercised a destructive action". [ 22 ] [ 12 ] Schönbein himself reported that chest pains, irritation of the mucous membranes , and difficulty breathing occurred as a result of inhaling ozone, and small mammals died. [ 23 ] In 1911, Leonard Hill and Martin Flack stated in the Proceedings of the Royal Society B that ozone's healthful effects "have, by mere iteration, become part and parcel of common belief; and yet exact physiological evidence in favour of its good effects has been hitherto almost entirely wanting ... The only thoroughly well-ascertained knowledge concerning the physiological effect of ozone, so far attained, is that it causes irritation and œdema of the lungs, and death if inhaled in relatively strong concentration for any time." [ 12 ] [ 24 ] During World War I , ozone was tested at Queen Alexandra Military Hospital in London as a possible disinfectant for wounds. The gas was applied directly to wounds for as long as 15 minutes. This resulted in damage to both bacterial cells and human tissue. Other sanitizing techniques, such as irrigation with antiseptics , were found preferable. [ 12 ] [ 25 ] Until the 1920s, it was not certain whether small amounts of oxozone , O 4 , were also present in ozone samples, owing to the difficulty of applying analytical chemistry techniques to the explosive concentrated chemical. [ 26 ] [ 27 ] In 1923, Georg-Maria Schwab (working for his doctoral thesis under Ernst Hermann Riesenfeld ) was the first to successfully solidify ozone and perform accurate analysis that conclusively refuted the oxozone hypothesis. [ 26 ] [ 27 ] Further hitherto unmeasured physical properties of pure concentrated ozone were determined by the Riesenfeld group in the 1920s.
[ 26 ] Ozone is a colourless or pale blue gas, slightly soluble in water and much more soluble in inert non-polar solvents such as carbon tetrachloride or fluorocarbons, in which it forms a blue solution. At 161 K (−112 °C; −170 °F), it condenses to form a dark blue liquid . It is dangerous to allow this liquid to warm to its boiling point, because both concentrated gaseous ozone and liquid ozone can detonate. At temperatures below 80 K (−193.2 °C; −315.7 °F), it forms a violet-black solid . [ 28 ] Ozone has a very specific sharp odour somewhat resembling chlorine bleach . Most people can detect it at the 0.01 μmol/mol level in air. Exposure to 0.1 to 1 μmol/mol produces headaches and burning eyes and irritates the respiratory passages. [ 29 ] Even low concentrations of ozone in air are very destructive to organic materials such as latex, plastics, and animal lung tissue. The ozone molecule is weakly diamagnetic . [ 30 ] According to experimental evidence from microwave spectroscopy , ozone is a bent molecule with C 2v symmetry (similar to the water molecule). [ 31 ] The O–O distances are 127.2 pm (1.272 Å ). The O–O–O angle is 116.78°. [ 32 ] The central atom is sp 2 hybridized with one lone pair. Ozone is a polar molecule with a dipole moment of 0.53 D . [ 33 ] The molecule can be represented as a resonance hybrid with two contributing structures, each with a single bond on one side and a double bond on the other; the arrangement possesses an overall bond order of 1.5 for both sides. It is isoelectronic with the nitrite anion . Naturally occurring ozone can be composed of substituted isotopes ( 16 O, 17 O, 18 O). A cyclic form has been predicted but not observed. Ozone is among the most powerful oxidizing agents known, far stronger than O 2 . It is also unstable at high concentrations, decaying into ordinary diatomic oxygen. Its half-life varies with atmospheric conditions such as temperature, humidity, and air movement. Under laboratory conditions, the half-life averages ~1500 minutes (25 hours) in still air at room temperature (24 °C), zero humidity, and zero air changes per hour. [ 34 ] This decay proceeds more rapidly with increasing temperature.
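Treating this decay as first-order, the quoted half-life translates into remaining fractions as in the short sketch below (the first-order assumption and the ~25 h figure apply only to the still-air laboratory conditions above):

```python
# First-order decay implied by the ~25 h laboratory half-life quoted above
# (still air, 24 C, zero humidity); real-world half-lives are usually shorter.
HALF_LIFE_H = 25.0

def fraction_remaining(t_hours: float) -> float:
    return 0.5 ** (t_hours / HALF_LIFE_H)

for t in (1, 8, 25, 72):
    print(f"{t:>3} h: {fraction_remaining(t):.2f}")  # 25 h -> 0.50, 72 h -> 0.14
```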
Deflagration of ozone can be triggered by a spark and can occur in ozone concentrations of 10 wt% or higher. [ 35 ] Ozone can also be produced from oxygen at the anode of an electrochemical cell; this reaction can create small quantities of ozone for research purposes. [ 36 ] It can be observed as an unwanted reaction in a Hofmann apparatus during the electrolysis of water when the voltage is set well above the voltage necessary for electrolysis. Ozone oxidizes most metals (except gold , platinum , and iridium ) to oxides of the metals in their highest oxidation state . For example: Ozone oxidizes nitric oxide to nitrogen dioxide : NO + O 3 → NO 2 + O 2 This reaction is accompanied by chemiluminescence . The NO 2 can be further oxidized to the nitrate radical : NO 2 + O 3 → NO 3 + O 2 The NO 3 formed can react with NO 2 to form dinitrogen pentoxide ( N 2 O 5 ). Solid nitronium perchlorate can be made from NO 2 , ClO 2 , and O 3 gases: Ozone does not react with ammonium salts , but it oxidizes ammonia to ammonium nitrate : 2 NH 3 + 4 O 3 → NH 4 NO 3 + 4 O 2 + H 2 O Ozone reacts with carbon to form carbon dioxide , even at room temperature: C + 2 O 3 → CO 2 + 2 O 2 Ozone oxidizes sulfides to sulfates . For example, lead(II) sulfide is oxidized to lead(II) sulfate : PbS + 4 O 3 → PbSO 4 + 4 O 2 Sulfuric acid can be produced from ozone, water, and either elemental sulfur or sulfur dioxide : In the gas phase , ozone reacts with hydrogen sulfide to form sulfur dioxide: H 2 S + O 3 → SO 2 + H 2 O In an aqueous solution, however, two competing simultaneous reactions occur, one producing elemental sulfur and one producing sulfuric acid : Alkenes can be oxidatively cleaved by ozone, in a process called ozonolysis , giving alcohols, aldehydes, ketones, or carboxylic acids, depending on the second step of the workup. Ozone can also cleave alkynes to form an acid anhydride or diketone product. [ 38 ] If the reaction is performed in the presence of water, the anhydride hydrolyzes to give two carboxylic acids . Usually ozonolysis is carried out in a solution of dichloromethane , at a temperature of −78 °C. After a sequence of cleavage and rearrangement, an organic ozonide is formed. With reductive workup (e.g. zinc in acetic acid or dimethyl sulfide ), ketones and aldehydes will be formed; with oxidative workup (e.g. aqueous or alcoholic hydrogen peroxide ), carboxylic acids will be formed. [ 39 ] All three atoms of ozone may also react, as in the reaction of tin(II) chloride with hydrochloric acid and ozone: Iodine perchlorate can be made by treating iodine dissolved in cold anhydrous perchloric acid with ozone: Ozone also reacts with potassium iodide to give oxygen and iodine, which can be titrated for quantitative determination: [ 40 ] O 3 + 2 KI + H 2 O → I 2 + 2 KOH + O 2 Ozone can be used in combustion reactions with combustible gases, providing higher temperatures than combustion in dioxygen ( O 2 ). The following is a reaction for the combustion of carbon subnitride , which can also achieve higher temperatures: Ozone can react at cryogenic temperatures. At 77 K (−196.2 °C; −321.1 °F), atomic hydrogen reacts with liquid ozone to form a hydrogen superoxide radical , which dimerizes : [ 41 ] Ozone is a toxic substance, [ 42 ] [ 43 ] commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers, ...). The catalytic decomposition of ozone is therefore very important for reducing pollution. Decomposition over solid catalysts is the most widely used approach; it offers advantages such as higher conversion at lower temperature, and the product and the catalyst can be separated instantaneously, so the catalyst can easily be recovered without any dedicated separation operation. The most-used materials in the catalytic decomposition of ozone in the gas phase are manganese dioxide , transition metals such as Mn, Co, Cu, Fe, Ni, or Ag, and noble metals such as Pt, Rh, or Pd. Free radicals of chlorine (Cl · ), formed by the action of ultraviolet radiation on chlorofluorocarbons (CFCs) and sea salt, are known to catalyze the breakdown of ozone in the atmosphere. There are two other possibilities for decomposing ozone in the gas phase: The uncatalyzed process of ozone decomposition in the gas phase is a complex reaction involving two elementary reactions that finally lead to molecular oxygen, [ 45 ] which means that the reaction order and the rate law cannot be determined from the stoichiometry of the overall reaction.
Overall reaction: 2 O 3 → 3 O 2 . Rate law (observed): $V = k_{\text{obs}}[\mathrm{O_3}]^2/[\mathrm{O_2}]$, where $k_{\text{obs}}$ is the observed rate constant and $V$ is the reaction rate. From this rate law it can be determined that the partial order with respect to molecular oxygen is −1 and with respect to ozone is 2; the overall reaction order is therefore 1. The first step is a unimolecular reaction, wherein one molecule of ozone decomposes into two products (molecular oxygen and atomic oxygen). The oxygen atom from the first step is a reactive intermediate, because it participates as a reactant in the second step, a bimolecular reaction in which two different reactants (ozone and atomic oxygen) give rise to molecular oxygen. Step 1 (unimolecular): O 3 → O 2 + O. Step 2 (bimolecular): O 3 + O → 2 O 2 . These two steps have different reaction rates and rate constants: $V_1 = k_1[\mathrm{O_3}]$ for the forward reaction of step 1, $V_{-1} = k_{-1}[\mathrm{O_2}][\mathrm{O}]$ for its reverse, and $V_2 = k_2[\mathrm{O_3}][\mathrm{O}]$ for step 2. This mechanism explains the rate law of ozone decomposition observed experimentally and allows the reaction orders with respect to ozone and oxygen, and hence the overall reaction order, to be determined. The first step is assumed to be reversible and fast compared with the second, so the slower second reaction is the rate-determining step. It determines the rate of product formation, and so $V = V_2$. However, this expression depends on the concentration of the oxygen-atom intermediate, which does not appear in the observed rate law. Because the first step is a rapid equilibrium, the intermediate concentration can be expressed as $[\mathrm{O}] = \dfrac{k_1[\mathrm{O_3}]}{k_{-1}[\mathrm{O_2}]}$. Substituting into $V = V_2$ gives the formation rate of molecular oxygen as $V = \dfrac{k_2 k_1}{k_{-1}}\cdot\dfrac{[\mathrm{O_3}]^2}{[\mathrm{O_2}]}$. The mechanism is therefore consistent with the experimentally observed rate law if the observed rate constant is given in terms of the individual mechanistic steps' rate constants as $k_{\text{obs}} = \dfrac{k_2 k_1}{k_{-1}}$. [ 46 ]
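A quick numerical sanity check of this pre-equilibrium result can be sketched as below; the rate constants are arbitrary illustrative values (not measured ones), chosen so that the first step equilibrates much faster than the second:

```python
# Numerical check (illustrative rate constants, not measured values) of the
# two-step mechanism above:
#   step 1 (fast, reversible): O3 <-> O2 + O     (k1 forward, km1 reverse)
#   step 2 (slow):             O + O3 -> 2 O2    (k2)
# Under the pre-equilibrium approximation the rate should follow
#   V = k_obs [O3]^2 / [O2]   with   k_obs = k2 * k1 / km1.
from scipy.integrate import solve_ivp

k1, km1, k2 = 1.0, 1.0e4, 10.0  # step 1 equilibrates much faster than step 2

def rhs(t, y):
    o3, o2, o = y
    r1f, r1b, r2 = k1 * o3, km1 * o2 * o, k2 * o * o3
    return [-r1f + r1b - r2,       # d[O3]/dt
            r1f - r1b + 2.0 * r2,  # d[O2]/dt
            r1f - r1b - r2]        # d[O]/dt

sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 0.5, 0.0], method="LSODA",
                dense_output=True, rtol=1e-10, atol=1e-12)

k_obs = k2 * k1 / km1
for t in (5.0, 10.0, 20.0, 40.0):
    o3, o2, o = sol.sol(t)
    # rate of the rate-determining step vs. the derived rate law
    print(f"t={t:>4}: mechanism {k2 * o * o3:.3e}  rate law {k_obs * o3**2 / o2:.3e}")
```

The two printed columns agree closely, which is exactly what the pre-equilibrium derivation predicts.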
Reduction of ozone gives the ozonide anion, O − 3 . Derivatives of this anion are explosive and must be stored at cryogenic temperatures. Ozonides for all the alkali metals are known. KO 3 , RbO 3 , and CsO 3 can be prepared from their respective superoxides: Although KO 3 can be formed as above, it can also be formed from potassium hydroxide and ozone: [ 47 ] NaO 3 and LiO 3 must be prepared by the action of CsO 3 in liquid NH 3 on an ion-exchange resin containing Na + or Li + ions: [ 48 ] A solution of calcium in ammonia reacts with ozone to give ammonium ozonide and not calcium ozonide: [ 41 ] Ozone can be used to remove iron and manganese from water , forming a precipitate which can be filtered: Ozone also oxidizes dissolved hydrogen sulfide in water to sulfurous acid : These three reactions are central in the use of ozone-based well water treatment. Ozone detoxifies cyanides by converting them to cyanates : CN − + O 3 → CNO − + O 2 Ozone completely decomposes urea : [ 49 ] Ozone is a bent triatomic molecule with three vibrational modes: the symmetric stretch (1103.157 cm −1 ), the bend (701.42 cm −1 ), and the antisymmetric stretch (1042.096 cm −1 ). [ 50 ] The symmetric stretch and the bend are weak absorbers, but the antisymmetric stretch is strong and is responsible for ozone being an important minor greenhouse gas . This IR band is also used to detect ambient and atmospheric ozone, although UV-based measurements are more common. [ 51 ] The electromagnetic spectrum of ozone is quite complex; an overview can be seen at the MPI Mainz UV/VIS Spectral Atlas of Gaseous Molecules of Atmospheric Interest. [ 52 ] All of the bands are dissociative, meaning that the molecule falls apart to O + O 2 after absorbing a photon. The most important absorption is the Hartley band, extending from slightly above 300 nm down to slightly above 200 nm; it is this band that is responsible for absorbing UV-C in the stratosphere. On the long-wavelength side, the Hartley band transitions into the so-called Huggins band, which falls off rapidly until disappearing by ~360 nm. Above 400 nm, extending well out into the near infrared, are the Chappuis and Wulf bands; these unstructured absorption bands are useful for detecting high ambient concentrations of ozone but are so weak that they have little practical effect. There are additional absorption bands in the far UV, which increase slowly from 200 nm down to a maximum at ~120 nm. The standard way to express total ozone levels (the amount of ozone in a given vertical column) in the atmosphere is in Dobson units . Point measurements are reported as mole fractions in nmol/mol (parts per billion, ppb) or as concentrations in μg/m 3 . The study of ozone concentration in the atmosphere started in the 1920s. [ 53 ] The highest levels of ozone in the atmosphere are in the stratosphere , in a region also known as the ozone layer , between about 10 and 50 km above the surface (between about 6 and 31 miles). However, even in this "layer", the ozone concentrations are only two to eight parts per million, so most of the oxygen there is dioxygen, O 2 , at about 210,000 parts per million by volume. [ 54 ] Ozone in the stratosphere is mostly produced by short-wave ultraviolet rays between 240 and 160 nm. Oxygen starts to absorb weakly at 240 nm in the Herzberg bands, but most of the oxygen is dissociated by absorption in the strong Schumann–Runge bands between 200 and 160 nm, where ozone does not absorb. While shorter-wavelength light, extending even to the X-ray limit, is energetic enough to dissociate molecular oxygen, there is relatively little of it, and the strong solar emission at Lyman-alpha, 121 nm, falls at a point where molecular oxygen absorption is at a minimum. [ 55 ] The process of ozone creation and destruction is called the Chapman cycle . It starts with the photolysis of molecular oxygen, O 2 + hv → 2 O, followed by reaction of the oxygen atom with another molecule of oxygen to form ozone: O + O 2 + M → O 3 + M where "M" denotes the third body that carries off the excess energy of the reaction. The ozone molecule can then absorb a UV-C photon and dissociate: O 3 + hv → O 2 + O The excess kinetic energy heats the stratosphere when the O atoms and the molecular oxygen fly apart and collide with other molecules; this conversion of UV light into kinetic energy warms the stratosphere. The oxygen atoms produced in the photolysis of ozone then react back with other oxygen molecules, as in the previous step, to form more ozone. In the clear atmosphere, with only nitrogen and oxygen, ozone can react with atomic oxygen to form two molecules of O 2 : O 3 + O → 2 O 2 An estimate of the rate of this termination step relative to the cycling of atomic oxygen back to ozone can be found simply by taking the ratio of the concentrations of O 2 and O 3 .
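A back-of-the-envelope version of that estimate, crudely assuming comparable rate coefficients for the two fates of an O atom (an assumption made only for illustration; the real rate coefficients differ):

```python
# Order-of-magnitude branching estimate for a stratospheric O atom: does it
# re-form ozone (O + O2 + M) or terminate the cycle (O + O3)? Assumes
# comparable rate coefficients for both channels, for illustration only.
o2_ppm = 210_000    # stratospheric O2 mixing ratio quoted above
o3_ppm = 8          # upper end of the quoted ozone mixing ratio

terminating_fraction = o3_ppm / (o3_ppm + o2_ppm)
print(f"{terminating_fraction:.1e}")  # ~3.8e-05: almost all O atoms recycle to O3
```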
The termination reaction is catalysed by the presence of certain free radicals, of which the most important are hydroxyl (OH), nitric oxide (NO), and atomic chlorine (Cl) and bromine (Br). In the second half of the 20th century, the amount of ozone in the stratosphere was discovered to be declining , mostly because of increasing concentrations of chlorofluorocarbons (CFCs) and similar chlorinated and brominated organic molecules . Concern over the health effects of the decline led to the 1987 Montreal Protocol and the ban on the production of many ozone-depleting chemicals, and in the first and second decades of the 21st century to the beginning of the recovery of stratospheric ozone concentrations. Ozone in the ozone layer filters out ultraviolet wavelengths from about 200 nm to 315 nm, with the peak ozone absorption at about 250 nm. [ 56 ] This ozone UV absorption is important to life, since it extends the absorption of UV by ordinary oxygen and nitrogen in air (which absorb all wavelengths < 200 nm) through the lower UV-C (200–280 nm) and the entire UV-B band (280–315 nm). The small unabsorbed part of UV-B that remains after passage through ozone causes sunburn in humans and direct DNA damage in living tissues in both plants and animals. Ozone's effect on mid-range UV-B rays is illustrated by its effect on UV-B at 290 nm, whose radiation intensity is 350 million times as strong at the top of the atmosphere as at the surface. Nevertheless, enough UV-B radiation at similar wavelengths reaches the ground to cause some sunburn, and these same wavelengths are also among those responsible for the production of vitamin D in humans. The ozone layer has little effect on the longer UV wavelengths called UV-A (315–400 nm), but this radiation does not cause sunburn or direct DNA damage. While UV-A probably does cause long-term skin damage in certain humans, it is not as dangerous to plants or to the health of surface-dwelling organisms on Earth in general (see ultraviolet for more information on near ultraviolet). Ground-level ozone (or tropospheric ozone) is an atmospheric pollutant. [ 57 ] It is not emitted directly by car engines or by industrial operations but is formed by the reaction of sunlight on air containing hydrocarbons and nitrogen oxides , which react to form ozone directly at the source of the pollution or many kilometers downwind. Ozone reacts directly with some hydrocarbons, such as aldehydes , and thus begins their removal from the air, but the products are themselves key components of smog . Ozone photolysis by UV light leads to production of the hydroxyl radical HO•, which plays a part in the removal of hydrocarbons from the air but is also the first step in the creation of components of smog such as peroxyacyl nitrates , which can be powerful eye irritants. The atmospheric lifetime of tropospheric ozone is about 22 days; its main removal mechanisms are deposition to the ground, the above-mentioned reaction giving HO•, and reactions with OH and the peroxy radical HO 2 •. [ 58 ] There is evidence of significant reduction in agricultural yields because of increased ground-level ozone and pollution, which interferes with photosynthesis and stunts overall growth of some plant species. [ 59 ] [ 60 ] The United States Environmental Protection Agency (EPA) has proposed a secondary regulation to reduce crop damage, in addition to the primary regulation designed for the protection of human health.
Examples of cities with elevated ozone readings are Denver, Colorado ; Houston, Texas ; and Mexico City , Mexico . Houston has a reading of around 41 nmol/mol, while Mexico City is far more hazardous, with a reading of about 125 nmol/mol. [ 60 ] Ground-level ozone, or tropospheric ozone, is the type of ozone pollution of most concern in urban areas, and it is increasing in general. [ 61 ] Ozone pollution in urban areas affects denser populations and is worsened by high numbers of vehicles, which emit the pollutants NO 2 and VOCs , the main contributors to problematic ozone levels. [ 62 ] Ozone pollution in urban areas is especially concerning with increasing temperatures, raising heat-related mortality during heat waves . [ 63 ] During heat waves in urban areas, ground-level ozone pollution can be 20% higher than usual. [ 64 ] Ozone pollution in urban areas reaches higher levels of exceedance in the summer and autumn, which may be explained by weather patterns and traffic patterns. [ 62 ] People experiencing poverty are more affected by pollution in general, even though these populations are less likely to be contributing to pollution levels. [ 65 ] As mentioned above, Denver, Colorado, is one of the many cities in the U.S. with high amounts of ozone. According to the American Lung Association , the Denver–Aurora area is the 14th most ozone-polluted area in the U.S. [ 66 ] The problem of high ozone levels is not new to this area. In 2004, the EPA designated the Denver Metro /North Front Range as a non-attainment area under the 1997 8-hour ozone standard, [ 67 ] but later deferred this status until 2007. The non-attainment designation indicates that an area does not meet the EPA's air quality standards. The Colorado Ozone Action Plan was created in response, and numerous changes were implemented from this plan. The first major change was that car emission testing was expanded across the state to counties that had not previously mandated emissions testing, like areas of Larimer and Weld County. There have also been changes made to decrease nitrogen oxide (NOx) and volatile organic compound (VOC) emissions, which should help lower ozone levels. One large contributor to high ozone levels in the area is the oil and natural gas industry situated in the Denver-Julesburg Basin (DJB), which overlaps with a majority of Colorado's metropolitan areas. Ozone is produced naturally in the Earth's stratosphere but is also produced in the troposphere from human activities. Briefly mentioned above, NOx and VOCs react with sunlight to create ozone through a process called photochemistry. One-hour elevated ozone events (>75 ppb) "occur during June–August indicating that elevated ozone levels are driven by regional photochemistry". [ 68 ] According to an article from the University of Colorado-Boulder, "Oil and natural gas VOC emission have a major role in ozone production and bear the potential to contribute to elevated O 3 levels in the Northern Colorado Front Range (NCFR)". [ 68 ] Using complex analyses to research wind patterns and emissions from large oil and natural gas operations, the authors concluded that "elevated O 3 levels in the NCFR are predominantly correlated with air transport from N– ESE, which are the upwind sectors where the O&NG operations in the Wattenberg Field area of the DJB are located".
[ 68 ] Contained in the Colorado Ozone Action Plan, created in 2008, are plans to evaluate "emission controls for large industrial sources of NOx" and "statewide control requirements for new oil and gas condensate tanks and pneumatic valves". [ 69 ] In 2011, the Regional Haze Plan was released, which included a more specific plan to help decrease NOx emissions. These efforts are increasingly difficult to implement and take many years to come to pass. There are also other reasons that ozone levels remain high, including a growing population (meaning more car emissions) and the mountains along the NCFR, which can trap emissions. Daily air quality readings can be found at the Colorado Department of Public Health and Environment's website. [ 70 ] As noted earlier, Denver continues to experience high levels of ozone to this day, and it will take many years and a systems-thinking approach to combat the issue of high ozone levels in the Front Range of Colorado. Ozone gas attacks any polymer possessing olefinic or double bonds within its chain structure, such as natural rubber , nitrile rubber , and styrene-butadiene rubber . Products made using these polymers are especially susceptible to attack, which causes cracks to grow longer and deeper with time, the rate of crack growth depending on the load carried by the rubber component and the concentration of ozone in the atmosphere. Such materials can be protected by adding antiozonants , such as waxes, which bond to the surface to create a protective film or blend with the material and provide long-term protection. Ozone cracking used to be a serious problem in car tires, [ 71 ] for example, but it is not an issue with modern tires. On the other hand, many critical products, like gaskets and O-rings , may be attacked by ozone produced within compressed air systems. Fuel lines made of reinforced rubber are also susceptible to attack, especially within the engine compartment, where some ozone is produced by electrical components. Storing rubber products in close proximity to a DC electric motor can accelerate ozone cracking, since the commutator of the motor generates sparks which in turn produce ozone. Although ozone was present at ground level before the Industrial Revolution , peak concentrations are now far higher than the pre-industrial levels, and even background concentrations well away from sources of pollution are substantially higher. [ 73 ] [ 74 ] Ozone acts as a greenhouse gas , absorbing some of the infrared energy emitted by the earth. Quantifying the greenhouse gas potency of ozone is difficult because it is not present in uniform concentrations across the globe. However, the most widely accepted scientific assessments relating to climate change (e.g. the Intergovernmental Panel on Climate Change Third Assessment Report ) [ 75 ] suggest that the radiative forcing of tropospheric ozone is about 25% that of carbon dioxide . The annual global warming potential of tropospheric ozone is between 918 and 1022 tons carbon dioxide equivalent per ton of tropospheric ozone. This means that, on a per-molecule basis, ozone in the troposphere has a radiative forcing effect roughly 1,000 times as strong as carbon dioxide . However, tropospheric ozone is a short-lived greenhouse gas, which decays in the atmosphere much more quickly than carbon dioxide. This means that over a 20-year span, the global warming potential of tropospheric ozone is much less, roughly 62 to 69 tons carbon dioxide equivalent per ton of tropospheric ozone. [ 76 ]
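The per-molecule comparison above follows from the per-mass figure once the molar masses are taken into account; a minimal check with approximate values:

```python
# Consistency check for the per-molecule comparison: a ~1000x per-mass
# forcing ratio (tons CO2-eq per ton O3) converts to a per-molecule ratio
# by multiplying by the molar-mass ratio. Values are approximate.
per_mass_ratio = 1000.0      # roughly the midpoint of the 918-1022 range above
m_o3, m_co2 = 48.0, 44.0     # g/mol
per_molecule_ratio = per_mass_ratio * m_o3 / m_co2
print(round(per_molecule_ratio))  # ~1091, i.e. "roughly 1,000 times"
```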
Because of its short-lived nature, tropospheric ozone does not have strong global effects, but it has very strong radiative forcing effects on regional scales. In fact, there are regions of the world where tropospheric ozone has a radiative forcing up to 150% that of carbon dioxide . [ 77 ] For example, the increase in tropospheric ozone has been shown to be responsible for ~30% of the upper Southern Ocean interior warming between 1955 and 2000. [ 78 ] Filters containing an adsorbent or catalyst such as charcoal (carbon) may be used to remove odors and gaseous pollutants such as volatile organic compounds or ozone. [ 79 ] For the last few decades, scientists have studied the effects of acute and chronic ozone exposure on human health. Hundreds of studies suggest that ozone is harmful to people at levels currently found in urban areas. [ 80 ] [ 81 ] Ozone has been shown to affect the respiratory, cardiovascular, and central nervous systems. Early death and problems in reproductive health and development have also been shown to be associated with ozone exposure. [ 82 ] The American Lung Association has identified five populations who are especially vulnerable to the effects of breathing ozone: [ 83 ] Additional evidence suggests that women, those with obesity, and low-income populations may also face higher risk from ozone, although more research is needed. [ 83 ] Acute ozone exposure ranges from hours to a few days. Because ozone is a gas, it directly affects the lungs and the entire respiratory system. Inhaled ozone causes inflammation and acute but reversible changes in lung function, as well as airway hyperresponsiveness. [ 84 ] These changes lead to shortness of breath, wheezing, and coughing, which may exacerbate lung diseases like asthma or chronic obstructive pulmonary disease (COPD), resulting in the need to receive medical treatment. [ 85 ] [ 86 ] Acute and chronic exposure to ozone has been shown to cause an increased risk of respiratory infections, due to the following mechanism. [ 87 ] Multiple studies have been conducted to determine the mechanism behind ozone's harmful effects, particularly in the lungs. These studies have shown that exposure to ozone causes changes in the immune response within the lung tissue, resulting in disruption of both the innate and adaptive immune responses, as well as altering the protective function of lung epithelial cells. [ 88 ] It is thought that these changes in immune response and the related inflammatory response are factors that likely contribute to the increased risk of lung infections and the worsening or triggering of asthma and reactive airways after exposure to ground-level ozone pollution. [ 88 ] [ 89 ] The innate (cellular) immune system consists of various chemical signals and cell types that work broadly against multiple pathogen types, typically bacteria or foreign bodies/substances in the host. [ 89 ] [ 90 ] The cells of the innate system include phagocytes and neutrophils, [ 90 ] both thought to contribute to the mechanism of ozone pathology in the lungs, as the functioning of these cell types has been shown to change after exposure to ozone. [ 89 ] Macrophages, cells that serve the purpose of eliminating pathogens or foreign material through the process of "phagocytosis", [ 90 ] have been shown to change the level of inflammatory signals they release in response to ozone, either up-regulating and resulting in an inflammatory response in the lung, or down-regulating and reducing immune protection.
[ 88 ] Neutrophils, another important cell type of the innate immune system that primarily targets bacterial pathogens, [ 90 ] are found in the airways within 6 hours of exposure to high ozone levels. Despite their high levels in the lung tissue, however, their ability to clear bacteria appears impaired by exposure to ozone. [ 88 ] The adaptive immune system, the branch of immunity that provides long-term protection via the development of antibodies targeting specific pathogens, is also impacted by high ozone exposure. [ 89 ] [ 90 ] Lymphocytes, a cellular component of the adaptive immune response, produce an increased amount of inflammatory chemicals called "cytokines" after exposure to ozone, which may contribute to airway hyperreactivity and worsening asthma symptoms. [ 88 ] The airway epithelial cells also play an important role in protecting individuals from pathogens. In normal tissue, the epithelial layer forms a protective barrier and also contains specialized ciliary structures that work to clear foreign bodies, mucus, and pathogens from the lungs. When exposed to ozone, the cilia become damaged and mucociliary clearance of pathogens is reduced. Furthermore, the epithelial barrier becomes weakened, allowing pathogens to cross the barrier, proliferate, and spread into deeper tissues. Together, these changes in the epithelial barrier make individuals more susceptible to pulmonary infections. [ 88 ] Inhaling ozone affects not only the immune system and lungs but the heart as well. Ozone causes short-term autonomic imbalance, leading to changes in heart rate and reduction in heart rate variability, [ 91 ] and exposure to high levels for as little as one hour results in supraventricular arrhythmia in the elderly; [ 92 ] both increase the risk of premature death and stroke. Ozone may also lead to vasoconstriction, resulting in increased systemic arterial pressure and contributing to increased risk of cardiac morbidity and mortality in patients with pre-existing cardiac diseases. [ 93 ] [ 94 ] Breathing ozone for periods longer than eight hours at a time, for weeks, months, or years, defines chronic exposure. Numerous studies suggest a serious impact on the health of various populations from this exposure. One study found significant positive associations between chronic ozone exposure and all-cause, circulatory, and respiratory mortality, with 2%, 3%, and 12% increases in risk per 10 ppb, [ 95 ] and reported an association (95% CI) of annual ozone and all-cause mortality with a hazard ratio of 1.02 (1.01–1.04), and with cardiovascular mortality of 1.03 (1.01–1.05). A similar study found similar associations with all-cause mortality and even larger effects for cardiovascular mortality. [ 96 ] An increased risk of mortality from respiratory causes is associated with long-term chronic exposure to ozone. [ 97 ]
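Hazard ratios quoted "per 10 ppb", as above, are conventionally compounded multiplicatively, assuming the usual log-linear (Cox) exposure-response model; a sketch with a hypothetical exposure increment:

```python
# Compounding a hazard ratio quoted "per 10 ppb", assuming a log-linear
# (Cox) exposure-response model: HR(delta) = HR10 ** (delta / 10).
# The 20 ppb increment is hypothetical, for illustration only.
hr_per_10_ppb = 1.02     # all-cause mortality, as quoted above
delta_ppb = 20.0
print(f"{hr_per_10_ppb ** (delta_ppb / 10.0):.4f}")  # 1.0404 -> ~4% higher risk
```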
Chronic ozone exposure has detrimental effects on children, especially those with asthma. The risk of hospitalization in children with asthma increases with chronic exposure to ozone; younger children and those with low-income status are at even greater risk. [ 98 ] Adults suffering from respiratory diseases (asthma, [ 99 ] COPD, [ 100 ] lung cancer [ 101 ] ) are at a higher risk of mortality and morbidity, and critically ill patients have an increased risk of developing acute respiratory distress syndrome with chronic ozone exposure as well. [ 102 ] Ozone generators sold as air cleaners intentionally produce the gas ozone. [ 43 ] These are often marketed to control indoor air pollution , and use misleading terms to describe ozone, for example describing it as "energized oxygen" or "pure air", suggesting that ozone is a healthy or "better" kind of oxygen. [ 43 ] However, according to the EPA , "There is evidence to show that at concentrations that do not exceed public health standards, ozone is not effective at removing many odor-causing chemicals" and "If used at concentrations that do not exceed public health standards, ozone applied to indoor air does not effectively remove viruses, bacteria, mold, or other biological pollutants". [ 43 ] Furthermore, another report states that "results of some controlled studies show that concentrations of ozone considerably higher than these [human safety] standards are possible even when a user follows the manufacturer's operating instructions". [ 103 ] The California Air Resources Board has a page listing air cleaners (many with ionizers ) meeting its indoor ozone limit of 0.050 parts per million. [ 104 ] From that page: "All portable indoor air cleaning devices sold in California must be certified by the California Air Resources Board (CARB). To be certified, air cleaners must be tested for electrical safety and ozone emissions, and meet an ozone emission concentration limit of 0.050 parts per million." Ozone precursors are a group of pollutants, predominantly those emitted during the combustion of fossil fuels . Ground-level ozone pollution (tropospheric ozone) is produced near the Earth's surface by the action of daylight UV rays on these precursors. Ozone at ground level is primarily from fossil fuel precursors, though methane is a natural precursor, and the very low natural background level of ozone at ground level is considered safe. This section examines the health impacts of fossil fuel burning, which raises ground-level ozone far above background levels. There is a great deal of evidence to show that ground-level ozone can harm lung function and irritate the respiratory system . [ 57 ] [ 106 ] Exposure to ozone (and the pollutants that produce it) is linked to premature death , asthma , bronchitis , heart attack , and other cardiopulmonary problems. [ 107 ] [ 108 ] Long-term exposure to ozone has been shown to increase the risk of death from respiratory illness . [ 43 ] A study of 450,000 people living in U.S. cities found a significant correlation between ozone levels and respiratory illness over the 18-year follow-up period. The study revealed that people living in cities with high ozone levels, such as Houston or Los Angeles, had an over 30% increased risk of dying from lung disease. [ 109 ] [ 110 ] Air quality guidelines such as those from the World Health Organization , the U.S. Environmental Protection Agency (EPA), and the European Union are based on detailed studies designed to identify the levels that can cause measurable ill health effects . According to scientists with the EPA, susceptible people can be adversely affected by ozone levels as low as 40 nmol/mol. [ 108 ] [ 111 ] [ 112 ] In the EU, the current target value for ozone concentrations is 120 μg/m 3 , which is about 60 nmol/mol. This target applies to all member states in accordance with Directive 2008/50/EC. [ 113 ] Ozone concentration is measured as a maximum daily mean of 8-hour averages, and the target should not be exceeded on more than 25 calendar days per year, starting from January 2010.
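The equivalence between the mass-based and mole-fraction units used here (120 μg/m 3 ≈ 60 nmol/mol) can be sketched with the ideal gas law; the 20 °C / 1013.25 hPa reference conditions below are an assumption, since regulations define their own:

```python
# Converting an ozone mass concentration (ug/m3) to a mole fraction
# (nmol/mol = ppb), assuming an ideal gas at 20 C and 1013.25 hPa.
# Reference conditions are an assumption here, not taken from the directive.
M_O3 = 48.0                      # g/mol
R, T, P = 8.314, 293.15, 101325.0

molar_volume_l = R * T / P * 1000.0   # ~24.06 L/mol at these conditions

def ug_m3_to_nmol_mol(c_ug_m3: float) -> float:
    # ug/m3 -> nmol/mol: c * Vm / M
    return c_ug_m3 * molar_volume_l / M_O3

print(f"{ug_m3_to_nmol_mol(120.0):.0f} nmol/mol")  # ~60, matching the text
```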
The directive also foresees strict compliance with the 120 μg/m³ limit in the future (i.e., a mean ozone concentration not to be exceeded on any day of the year), but no date has been set for this requirement, which is treated as a long-term objective. [ 114 ] In the US, the Clean Air Act directs the EPA to set National Ambient Air Quality Standards for several pollutants, including ground-level ozone, and counties out of compliance with these standards are required to take steps to reduce their levels. In May 2008, under a court order, the EPA lowered its ozone standard from 80 nmol/mol to 75 nmol/mol. The move proved controversial, since the Agency's own scientists and advisory board had recommended lowering the standard to 60 nmol/mol. [ 108 ] Many public health and environmental groups also supported the 60 nmol/mol standard, [ 115 ] and the World Health Organization recommends 100 μg/m³ (51 nmol/mol). [ 116 ] On January 7, 2010, the U.S. Environmental Protection Agency (EPA) announced proposed revisions to the National Ambient Air Quality Standard (NAAQS) for the pollutant ozone, the principal component of smog: ... EPA proposes that the level of the 8-hour primary standard, which was set at 0.075 μmol/mol in the 2008 final rule, should instead be set at a lower level within the range of 0.060 to 0.070 μmol/mol, to provide increased protection for children and other at-risk populations against an array of O 3 -related adverse health effects that range from decreased lung function and increased respiratory symptoms to serious indicators of respiratory morbidity including emergency department visits and hospital admissions for respiratory causes, and possibly cardiovascular-related morbidity as well as total non-accidental and cardiopulmonary mortality ... [ 117 ] On October 26, 2015, the EPA published a final rule with an effective date of December 28, 2015, that revised the 8-hour primary NAAQS from 0.075 ppm to 0.070 ppm. [ 118 ] The EPA has developed an air quality index (AQI) to help explain air pollution levels to the general public. Under the current standards, eight-hour average ozone mole fractions of 85 to 104 nmol/mol are described as "unhealthy for sensitive groups", 105 nmol/mol to 124 nmol/mol as "unhealthy", and 125 nmol/mol to 404 nmol/mol as "very unhealthy". [ 119 ]
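Those AQI bands translate directly into a lookup; a minimal sketch using only the three bands quoted above (behaviour outside them is my own placeholder, not from the text):

```python
# 8-hour average ozone bands (nmol/mol) quoted above, with their AQI labels.
BANDS = [
    (85, 104, "unhealthy for sensitive groups"),
    (105, 124, "unhealthy"),
    (125, 404, "very unhealthy"),
]

def aqi_category(o3_nmol_per_mol):
    for low, high, label in BANDS:
        if low <= o3_nmol_per_mol <= high:
            return label
    return "outside the bands quoted in the text"

print(aqi_category(110))  # 'unhealthy'
```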
Ozone can also be present in indoor air pollution , partly as a result of electronic equipment such as photocopiers. A connection is also known to exist between the increased pollen, fungal spores, and ozone caused by thunderstorms and hospital admissions of asthma sufferers. [ 120 ] In the Victorian era , one British folk myth held that the smell of the sea was caused by ozone. In fact, the characteristic "smell of the sea" is caused by dimethyl sulfide , a chemical generated by phytoplankton . Victorian Britons considered the resulting smell "bracing". [ 121 ] An investigation to assess the joint mortality effects of ozone and heat during the European heat waves in 2003 concluded that these appear to be additive. [ 122 ] Ozone, along with reactive forms of oxygen such as superoxide , singlet oxygen , hydrogen peroxide , and hypochlorite ions, is produced by white blood cells and other biological systems (such as the roots of marigolds ) as a means of destroying foreign bodies. Ozone reacts directly with organic double bonds. Also, when ozone breaks down to dioxygen it gives rise to oxygen free radicals , which are highly reactive and capable of damaging many organic molecules . Moreover, it is believed that the powerful oxidizing properties of ozone may be a contributing factor in inflammation . How ozone is created in the body and what it does there remain under investigation and subject to various interpretations, since other chemical processes in the body can trigger some of the same reactions. There is evidence linking the antibody-catalyzed water-oxidation pathway of the human immune response to the production of ozone. In this system, ozone is produced by antibody-catalyzed production of trioxidane from water and neutrophil-produced singlet oxygen. [ 123 ] When inhaled, ozone reacts with compounds lining the lungs to form specific, cholesterol-derived metabolites that are thought to facilitate the build-up and pathogenesis of atherosclerotic plaques (a form of heart disease ). These metabolites have been confirmed as naturally occurring in human atherosclerotic arteries and are categorized into a class of secosterols termed atheronals , generated by ozonolysis of cholesterol's double bond to form a 5,6 secosterol [ 124 ] as well as a secondary condensation product via aldolization. [ 125 ] Ozone has been shown to have an adverse effect on plant growth: "... ozone reduced total chlorophylls, carotenoid and carbohydrate concentration, and increased 1-aminocyclopropane-1-carboxylic acid (ACC) content and ethylene production. In treated plants, the ascorbate leaf pool was decreased, while lipid peroxidation and solute leakage were significantly higher than in ozone-free controls. The data indicated that ozone triggered protective mechanisms against oxidative stress in citrus." [ 126 ] Studies using pepper plants as a model have shown that ozone decreased fruit yield and changed fruit quality; [ 127 ] [ 128 ] a decrease in chlorophyll levels and antioxidant defences was also observed in the leaves, along with increased reactive oxygen species (ROS) levels and lipid and protein damage. [ 127 ] [ 128 ] A 2022 study concludes that East Asia loses 63 billion dollars in crops per year due to ozone pollution, a byproduct of fossil fuel combustion; China loses about one-third of its potential wheat production and one-fourth of its rice production. [ 129 ] [ 130 ] Because of its strongly oxidizing properties, ozone is a primary irritant, affecting especially the eyes and respiratory system, and it can be hazardous at even low concentrations. The Canadian Centre for Occupational Safety and Health reports that "even very low concentrations of ozone can be harmful to the upper respiratory tract and the lungs. The severity of injury depends on both the concentration of ozone and the duration of exposure. Severe and permanent lung injury or death could result from even a very short-term exposure to relatively low concentrations." [ 131 ] To protect workers potentially exposed to ozone, the U.S. Occupational Safety and Health Administration has established a permissible exposure limit (PEL) of 0.1 μmol/mol (29 CFR 1910.1000, Table Z-1), calculated as an 8-hour time-weighted average. Higher concentrations are especially hazardous, and NIOSH has established an Immediately Dangerous to Life and Health limit (IDLH) of 5 μmol/mol. [ 132 ] Work environments where ozone is used or where it is likely to be produced should have adequate ventilation, and it is prudent to have a monitor for ozone that will alarm if the concentration exceeds the OSHA PEL.
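An 8-hour time-weighted average is simple arithmetic; a minimal sketch of the OSHA-style computation (the sample exposures are hypothetical):

```python
def eight_hour_twa(samples):
    """OSHA-style 8-hour TWA: sum of concentration * duration over the
    8-hour reference period. samples: list of (umol/mol, hours) pairs."""
    return sum(conc * hours for conc, hours in samples) / 8.0

# Hypothetical shift: 0.3 umol/mol for 1 h, then 0.05 umol/mol for 7 h.
twa = eight_hour_twa([(0.3, 1.0), (0.05, 7.0)])
print(round(twa, 3))  # 0.081 -> below the 0.1 umol/mol PEL
```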
Continuous monitors for ozone are available from several suppliers. Elevated ozone exposure can occur on passenger aircraft , with levels depending on altitude and atmospheric turbulence. [ 133 ] U.S. Federal Aviation Administration regulations set a limit of 250 nmol/mol with a maximum four-hour average of 100 nmol/mol. [ 134 ] Some planes are equipped with ozone converters in the ventilation system to reduce passenger exposure. [ 133 ] Ozone generators , or ozonators , [ 135 ] are used to produce ozone for cleaning air or removing smoke odours in unoccupied rooms. These ozone generators can produce over 3 g of ozone per hour. Ozone often forms in nature under conditions where O 2 will not react. [ 29 ] Ozone used in industry is measured in μmol/mol (ppm, parts per million), nmol/mol (ppb, parts per billion), μg/m³, mg/h (milligrams per hour) or weight percent. The regime of applied concentrations ranges from 1% to 5% (in air) and from 6% to 14% (in oxygen) for older generation methods; new electrolytic methods can achieve dissolved ozone concentrations of 20% to 30% in output water. Temperature and humidity play a large role in how much ozone is produced using traditional generation methods (such as corona discharge and ultraviolet light): older methods will produce less than 50% of nominal capacity if operated with humid ambient air rather than very dry air, whereas newer generators using electrolytic methods can achieve higher purity and dissolution by using water molecules as the source of ozone production. Corona discharge generators are the most common type of ozone generator for most industrial and personal uses. While variations of the "hot spark" corona discharge method of ozone production exist, including medical-grade and industrial-grade ozone generators, these units usually work by means of a corona discharge tube or ozone plate. [ 136 ] [ 137 ] They are typically cost-effective and require no oxygen source other than ambient air to produce ozone concentrations of 3–6%. Fluctuations in ambient air, due to weather or other environmental conditions, cause variability in ozone production. However, these generators also produce nitrogen oxides as a by-product. Use of an air dryer can reduce or eliminate nitric acid formation by removing water vapor, and can increase ozone production. At room temperature, nitric acid forms a vapour that is hazardous if inhaled; symptoms can include chest pain, shortness of breath, headaches and a dry nose and throat causing a burning sensation. Use of an oxygen concentrator can further increase ozone production and further reduce the risk of nitric acid formation by removing not only the water vapor but also the bulk of the nitrogen. UV ozone generators, or vacuum-ultraviolet (VUV) ozone generators, employ a light source that generates narrow-band ultraviolet light, a subset of that produced by the Sun. The Sun's UV sustains the ozone layer in the stratosphere of Earth. [ 138 ] UV ozone generators use ambient air for ozone production and no air-preparation systems (air dryer or oxygen concentrator), so these generators tend to be less expensive. However, UV ozone generators usually produce ozone with a concentration of about 0.5% or lower, which limits the potential ozone production rate. Another disadvantage of this method is that it requires the ambient air (oxygen) to be exposed to the UV source for a longer amount of time, and any gas that is not exposed to the UV source will not be treated.
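Output ratings in g/h relate to feed-gas flow and weight-percent concentration through the gas density; a rough sketch, where the density value and the example flow rate are my own assumptions rather than figures from the text:

```python
def ozone_output_g_per_h(flow_l_min, weight_percent, gas_density_g_per_l=1.20):
    """Approximate ozone mass output from feed-gas flow and concentration.
    1.20 g/L is roughly the density of air near 20 degrees C (an assumption)."""
    return flow_l_min * 60.0 * gas_density_g_per_l * weight_percent / 100.0

# A corona discharge unit at 3 wt% ozone in air, fed at 2 L/min:
print(round(ozone_output_g_per_h(2.0, 3.0), 1))  # ~4.3 g/h
```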
These constraints make UV generators impractical for use in situations that deal with rapidly moving air or water streams (in-duct air sterilization , for example). Production of ozone is one of the potential dangers of ultraviolet germicidal irradiation . VUV ozone generators are used in swimming pool and spa applications ranging up to millions of gallons of water. Unlike corona discharge generators, VUV ozone generators do not produce harmful nitrogen by-products, and they also work extremely well in humid air environments. There is also not normally a need for expensive off-gas mechanisms or for air dryers or oxygen concentrators, which require extra cost and maintenance. In the cold plasma method, pure oxygen gas is exposed to a plasma created by DBD . The diatomic oxygen is split into single atoms, which then recombine in triplets to form ozone. It is common in the industry to mislabel some DBD ozone generators as corona discharge (CD) generators; typically, all solid flat metal electrode ozone generators produce ozone using the dielectric barrier discharge method. Cold plasma machines use pure oxygen as the input source and produce a maximum concentration of about 24% ozone. They produce far greater quantities of ozone in a given time than ultraviolet production, which has about 2% efficiency. The discharges manifest as filamentary transfer of electrons (micro-discharges) in a gap between two electrodes; in order to distribute the micro-discharges evenly, a dielectric insulator must be used to separate the metallic electrodes and to prevent arcing. Electrolytic ozone generation (EOG) splits water molecules into H 2 , O 2 , and O 3 . In most EOG methods, the hydrogen gas is removed to leave oxygen and ozone as the only reaction products. EOG can therefore achieve higher dissolution in water without the competing gases found in the corona discharge method, such as the nitrogen present in ambient air. This method of generation can achieve concentrations of 20–30% and is independent of air quality because water is used as the source material. Producing ozone electrolytically is typically unfavorable because of the high overpotential required to produce ozone compared with oxygen; this is why ozone is not produced during typical water electrolysis. However, it is possible to raise the oxygen overpotential by careful catalyst selection such that ozone is preferentially produced under electrolysis. Catalysts typically chosen for this approach are lead dioxide [ 139 ] or boron-doped diamond. [ 140 ] The ozone-to-oxygen ratio is improved by increasing current density at the anode, cooling the electrolyte around the anode to close to 0 °C, using an acidic electrolyte (such as dilute sulfuric acid) instead of a basic solution, and by applying pulsed current instead of DC. [ 141 ]
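Faraday's law bounds how much ozone a given electrolysis current can make, since the anodic reaction transfers six electrons per O 3 ; a minimal sketch, where the 15% current efficiency is a placeholder assumption (real anodes split the charge between O 2 and O 3 ):

```python
F = 96485.0   # Faraday constant, C/mol
M_O3 = 48.0   # molar mass of ozone, g/mol

def eog_ozone_grams(current_a, hours, current_efficiency=0.15):
    """Ozone mass from Faraday's law for 3 H2O -> O3 + 6 H+ + 6 e-.
    current_efficiency is the assumed fraction of charge that makes O3."""
    charge_c = current_a * hours * 3600.0
    return charge_c / (6.0 * F) * M_O3 * current_efficiency

print(round(eog_ozone_grams(10.0, 1.0), 2))  # ~0.45 g from 10 A over 1 h
```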
Ozone cannot be stored and transported like other industrial gases (because it quickly decays into diatomic oxygen) and must therefore be produced on site. Available ozone generators vary in the arrangement and design of the high-voltage electrodes. At production capacities higher than 20 kg per hour, a gas/water tube heat-exchanger may be utilized as ground electrode and assembled with tubular high-voltage electrodes on the gas side. The regime of typical gas pressures is around 2 bars (200 kPa ) absolute in oxygen and 3 bars (300 kPa) absolute in air. Several megawatts of electrical power may be installed in large facilities, applied as single-phase AC current at 50 to 8000 Hz and peak voltages between 3,000 and 20,000 volts; applied voltage is usually inversely related to the applied frequency. The dominating parameter influencing ozone generation efficiency is the gas temperature, which is controlled by cooling water temperature and/or gas velocity: the cooler the water, the better the ozone synthesis, and the lower the gas velocity, the higher the concentration (but the lower the net ozone produced). At typical industrial conditions, almost 90% of the effective power is dissipated as heat and needs to be removed by a sufficient cooling-water flow. Because of the high reactivity of ozone, only a few materials may be used, such as stainless steel (quality 316L), titanium , aluminium (as long as no moisture is present), glass , polytetrafluorethylene , or polyvinylidene fluoride . Viton may be used with the restriction of constant mechanical forces and absence of humidity (humidity limitations apply depending on the formulation). Hypalon may be used with the restriction that no water comes in contact with it, except for normal atmospheric levels. Embrittlement or shrinkage is the common mode of failure of elastomers exposed to ozone, and ozone cracking is the common mode of failure of elastomer seals like O-rings . Silicone rubbers are usually adequate for use as gaskets in ozone concentrations below 1 wt%, such as in equipment for accelerated aging of rubber samples. Ozone may be formed from O 2 by electrical discharges and by the action of high-energy electromagnetic radiation . Unsuppressed arcing in electrical contacts, motor brushes, or mechanical switches breaks down the chemical bonds of the atmospheric oxygen surrounding the contacts (O 2 → 2 O); the free radicals of oxygen in and around the arc then recombine to create ozone (O 3 ). [ 142 ] Certain electrical equipment generates significant levels of ozone. This is especially true of devices using high voltages , such as ionic air purifiers , laser printers , photocopiers , tasers , and arc welders . Electric motors using brushes can generate ozone from repeated sparking inside the unit, and large motors that use brushes, such as those used by elevators or hydraulic pumps, will generate more ozone than smaller motors. Ozone is similarly formed in the Catatumbo lightning storms phenomenon on the Catatumbo River in Venezuela , though ozone's instability makes it dubious that this has any effect on the ozonosphere. [ 143 ] It is the world's largest single natural generator of ozone, prompting calls for it to be designated a UNESCO World Heritage Site . [ 144 ] In the laboratory, ozone can be produced by electrolysis using a 9-volt battery , a pencil graphite rod cathode , a platinum wire anode , and a 3 molar sulfuric acid electrolyte . [ 145 ] The half-cell reactions taking place are 3 H 2 O → O 3 + 6 H + + 6 e − (anode, E° = 1.51 V) and 2 H + + 2 e − → H 2 (cathode, E° = 0.00 V), where E° represents the standard electrode potential . In the net reaction, three equivalents of water are converted into one equivalent of ozone and three equivalents of hydrogen . Oxygen formation is a competing reaction.
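From those standard potentials, the thermodynamic minimum driving voltage follows directly; a short check, using standard-table values and ignoring overpotentials:

```latex
% Minimum cell voltage for electrolytic ozone generation (standard conditions)
E^{\circ}_{\mathrm{cell}} = E^{\circ}_{\mathrm{cathode}} - E^{\circ}_{\mathrm{anode}}
                          = 0.00\ \mathrm{V} - 1.51\ \mathrm{V} = -1.51\ \mathrm{V}
```

A negative cell potential means the reaction is non-spontaneous, so at least roughly 1.51 V must be applied (more in practice, because of overpotentials and resistive losses); a 9-volt battery comfortably exceeds this minimum.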
It can also be generated by a high-voltage arc . In its simplest form, high-voltage AC, such as the output of a neon-sign transformer , is connected to two metal rods with the ends placed sufficiently close to each other to allow an arc; the resulting arc will convert atmospheric oxygen to ozone. It is often desirable to contain the ozone. This can be done with an apparatus consisting of two concentric glass tubes sealed together at the top, with gas ports at the top and bottom of the outer tube. The inner core should have a length of metal foil inserted into it connected to one side of the power source, and the other side of the power source should be connected to another piece of foil wrapped around the outer tube. A source of dry O 2 is applied to the bottom port. When high voltage is applied to the foil leads, electricity discharges through the dry dioxygen in the gap, forming O 3 and O 2 , which flow out the top port. This device is called a Siemens ozoniser. The reaction can be summarized as 3 O 2 → 2 O 3 . [ 29 ] The largest use of ozone is in the preparation of pharmaceuticals , synthetic lubricants , and many other commercially useful organic compounds , where it is used to sever carbon -carbon bonds. [ 29 ] It can also be used for bleaching substances and for killing microorganisms in air and water sources. [ 146 ] Many municipal drinking water systems kill bacteria with ozone instead of the more common chlorine . [ 147 ] Ozone has a very high oxidation potential . [ 148 ] Ozone does not form organochlorine compounds, nor does it remain in the water after treatment, but it can form the suspected carcinogen bromate in source water with high bromide concentrations. The U.S. Safe Drinking Water Act mandates that these systems introduce an amount of chlorine to maintain a minimum of 0.2 μmol/mol residual free chlorine in the pipes, based on results of regular testing. Where electrical power is abundant, ozone is a cost-effective method of treating water, since it is produced on demand and does not require transportation and storage of hazardous chemicals. Once it has decayed, it leaves no taste or odour in drinking water. Although low levels of ozone have been advertised to be of some disinfectant use in residential homes, the concentration of ozone in dry air required to have a rapid, substantial effect on airborne pathogens exceeds safe levels recommended by the U.S. Occupational Safety and Health Administration and Environmental Protection Agency . Humidity control can vastly improve both the killing power of the ozone and the rate at which it decays back to oxygen (more humidity allows more effectiveness). Spore forms of most pathogens are very tolerant of atmospheric ozone at concentrations at which asthma patients start to have issues. In 1908, artificial ozonisation of the Central Line of the London Underground was introduced for aerial disinfection. The process was found to be worthwhile but was phased out by 1956; however, the beneficial effect was maintained by the ozone created incidentally from the electrical discharges of the train motors (see above: Incidental production ). [ 149 ] Ozone generators were made available to schools and universities in Wales for the autumn term of 2021, to disinfect classrooms after COVID-19 outbreaks. [ 150 ] Industrially, ozone has a wide range of uses. It is a reagent in many organic reactions in the laboratory and in industry; ozonolysis is the cleavage of an alkene to carbonyl compounds. Many hospitals around the world use large ozone generators to decontaminate operating rooms between surgeries: the rooms are cleaned and then sealed airtight before being filled with ozone, which effectively kills or neutralizes all remaining bacteria. [ 156 ] Ozone is used as an alternative to chlorine or chlorine dioxide in the bleaching of wood pulp .
[ 157 ] It is often used in conjunction with oxygen and hydrogen peroxide to eliminate the need for chlorine-containing compounds in the manufacture of high-quality, white paper . [ 158 ] Ozone can be used to detoxify cyanide wastes (for example from gold and silver mining ) by oxidizing cyanide to cyanate and eventually to carbon dioxide . [ 159 ] Since the invention of dielectric barrier discharge (DBD) plasma reactors, they have been employed for water treatment with ozone. [ 160 ] However, with cheaper alternative disinfectants like chlorine, such applications of DBD ozone water decontamination have been limited by high power consumption and bulky equipment. [ 161 ] [ 162 ] Despite this, with research revealing the negative impacts of common disinfectants like chlorine with respect to toxic residuals and ineffectiveness in killing certain micro-organisms, [ 163 ] DBD plasma-based ozone decontamination is of interest among currently available technologies. Although ozonation of water with a high bromide concentration does lead to the formation of undesirable brominated disinfection byproducts, ozonation can generally be applied without concern for these byproducts unless the drinking water is produced by desalination. [ 162 ] [ 164 ] [ 165 ] [ 166 ] Advantages of ozone include a high thermodynamic oxidation potential, less sensitivity to organic material and better tolerance for pH variations, while retaining the ability to kill bacteria, fungi and viruses, as well as spores and cysts. [ 167 ] [ 168 ] [ 169 ] Although ozone has been widely accepted in Europe for decades, it is sparingly used for decontamination in the U.S. due to limitations of high power consumption, bulky installation and the stigma attached to ozone toxicity. [ 161 ] [ 170 ] Considering this, recent research efforts have been directed toward the study of effective ozone water treatment systems: [ 171 ] researchers have looked into lightweight and compact low-power surface DBD reactors, [ 172 ] [ 173 ] energy-efficient volume DBD reactors [ 174 ] and low-power micro-scale DBD reactors. [ 175 ] [ 176 ] Such studies can help pave the path to re-acceptance of DBD plasma-based ozone decontamination of water, especially in the U.S. Ozone levels which are safe for people are ineffective at killing fungi and bacteria, [ 177 ] and some consumer disinfection and cosmetic products emit ozone at levels harmful to human health. [ 177 ] Devices generating high levels of ozone, some of which use ionization, are used to sanitize and deodorize uninhabited buildings, rooms, ductwork, woodsheds, boats, and other vehicles. Ozonated water is used to launder clothes and to sanitize food, drinking water, and surfaces in the home. According to the U.S. Food and Drug Administration (FDA), it is "amending the food additive regulations to provide for the safe use of ozone in gaseous and aqueous phases as an antimicrobial agent on food, including meat and poultry." Studies at California Polytechnic University demonstrated that 0.3 μmol/mol levels of ozone dissolved in filtered tapwater can produce a reduction of more than 99.99% in such food-borne microorganisms as salmonella, E. coli O157:H7 and Campylobacter . This quantity is 20,000 times the WHO-recommended limits stated above. [ 152 ] [ 178 ]
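The "more than 99.99%" reduction quoted above is what disinfection work usually reports as a 4-log reduction; a quick check of that arithmetic:

```python
import math

def log_reduction(fraction_killed):
    """99.99% killed leaves 10**-4 of the organisms: a 4-log reduction."""
    return -math.log10(1.0 - fraction_killed)

print(round(log_reduction(0.9999), 2))  # 4.0
```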
Ozone can be used to remove pesticide residues from fruits and vegetables . [ 179 ] [ 180 ] Ozone is used in homes and hot tubs to kill bacteria in the water and to reduce the amount of chlorine or bromine required by reactivating them to their free state. Since ozone does not persist in the water long enough, ozone by itself is ineffective at preventing cross-contamination among bathers and must be used in conjunction with halogens . Gaseous ozone created by ultraviolet light or by corona discharge is injected into the water. [ 181 ] Ozone is also widely used in the treatment of water in aquariums and fishponds. Its use can minimize bacterial growth, control parasites, eliminate transmission of some diseases, and reduce or eliminate "yellowing" of the water. Ozone must not come in contact with fishes' gill structures. Natural saltwater (with life forms) provides enough "instantaneous demand" that controlled amounts of ozone activate bromide ions to hypobromous acid , and the ozone entirely decays in a few seconds to minutes. If oxygen-fed ozone is used, the water will be higher in dissolved oxygen and fishes' gill structures will atrophy, making them dependent on oxygen-enriched water. Ozonation – a process of infusing water with ozone – can be used in aquaculture to facilitate organic breakdown. Ozone is also added to recirculating systems to reduce nitrite levels [ 182 ] through conversion into nitrate . If nitrite levels in the water are high, nitrites will also accumulate in the blood and tissues of fish, where they interfere with oxygen transport (they cause oxidation of the heme group of haemoglobin from ferrous ( Fe 2+ ) to ferric ( Fe 3+ ), making haemoglobin unable to bind O 2 ). [ 183 ] Despite these apparent positive effects, ozone use in recirculation systems has been linked to reducing the level of bioavailable iodine in salt water systems, resulting in iodine deficiency symptoms such as goitre and decreased growth in Senegalese sole ( Solea senegalensis ) larvae. [ 184 ] Ozonated seawater is used for surface disinfection of haddock and Atlantic halibut eggs against nodavirus, a lethal and vertically transmitted virus that causes severe mortality in fish. Haddock eggs should not be treated with high ozone levels, however, as eggs so treated failed to hatch and died after 3–4 days. [ 185 ] Ozone application on freshly cut pineapple and banana shows an increase in flavonoids and total phenol content when exposure is up to 20 minutes. A decrease in ascorbic acid (one form of vitamin C ) content is observed, but the positive effect on total phenol content and flavonoids can outweigh the negative effect. [ 186 ] Tomatoes upon treatment with ozone show an increase in β-carotene, lutein and lycopene, [ 187 ] whereas ozone application on strawberries in the pre-harvest period shows a decrease in ascorbic acid content. [ 188 ] Ozone facilitates the extraction of some heavy metals from soil using EDTA . EDTA forms strong, water-soluble coordination compounds with some heavy metals ( Pb and Zn ), thereby making it possible to dissolve them out of contaminated soil. If contaminated soil is pre-treated with ozone, the extraction efficacy of Pb , Am , and Pu increases by 11.0–28.9%, [ 189 ] 43.5% [ 190 ] and 50.7%, [ 190 ] respectively. Crop pollination, in which pollinators carry pollen from one plant to another, is an essential part of an ecosystem, and ozone can have detrimental effects on plant-pollinator interactions. [ 191 ] Changes in atmospheric conditions around pollination sites, or the presence of xenobiotics, could cause unknown changes to the natural cycles of pollinators and flowering plants.
In a study conducted in North-Western Europe, crop pollinators were more negatively affected when ozone levels were higher. [ 192 ] The use of ozone for the treatment of medical conditions is not supported by high-quality evidence and is generally considered alternative medicine . [ 193 ]
https://en.wikipedia.org/wiki/O-O=O
This page provides supplementary chemical data on o -Xylene . The handling of this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet ( MSDS ) for this chemical from a reliable source such as eChemPortal , and follow its directions. An MSDS is available from MATHESON TRI-GAS, INC. in the SDSdata.org database. Table data obtained from CRC Handbook of Chemistry and Physics , 44th ed.
https://en.wikipedia.org/wiki/O-Xylene_(data_page)
In mathematical logic , and more specifically in model theory , an infinite structure ( M ,<,...) that is totally ordered by < is called an o-minimal structure if and only if every definable subset X ⊆ M (with parameters taken from M ) is a finite union of intervals and points. O-minimality can be regarded as a weak form of quantifier elimination . A structure M is o-minimal if and only if every formula with one free variable and parameters in M is equivalent to a quantifier-free formula involving only the ordering, also with parameters in M . This is analogous to minimal structures, which have the analogous property with respect to equality. A theory T is an o-minimal theory if every model of T is o-minimal. It is known that the complete theory T of an o-minimal structure is an o-minimal theory. [ 1 ] This result is remarkable because, in contrast, the complete theory of a minimal structure need not be a strongly minimal theory ; that is, there may be an elementarily equivalent structure that is not minimal. O-minimal structures can be defined without recourse to model theory. Here we define a structure on a nonempty set M in a set-theoretic manner, as a sequence S = ( S n ), n = 0,1,2,..., such that: S n is a boolean algebra of subsets of M n ; if A ∈ S n , then M × A and A × M are in S n+1 ; the set {( x 1 ,..., x n ) ∈ M n : x 1 = x n } is in S n ; and if A ∈ S n+1 and π : M n+1 → M n is the projection map onto the first n coordinates, then π( A ) ∈ S n . For a subset A of M , we consider the smallest structure S ( A ) containing S such that every finite subset of A is contained in S 1 . A subset D of M n is called A -definable if it is contained in S n ( A ); in that case A is called a set of parameters for D . A subset is called definable if it is A -definable for some A . If M has a dense linear order without endpoints on it, say <, then a structure S on M is called o-minimal (with respect to <) if it satisfies the extra axioms that the set {( x , y ) ∈ M 2 : x < y } is in S 2 , and that the sets in S 1 are exactly the finite unions of intervals and points. The "o" stands for "order", since any o-minimal structure requires an ordering on the underlying set. O-minimal structures originated in model theory and so have a simpler (but equivalent) definition using the language of model theory. [ 2 ] Specifically, if L is a language including a binary relation <, and ( M ,<,...) is an L -structure where < is interpreted to satisfy the axioms of a dense linear order, [ 3 ] then ( M ,<,...) is called an o-minimal structure if for any definable set X ⊆ M there are finitely many open intervals I 1 ,..., I r in M ∪ {±∞} and a finite set X 0 such that X = X 0 ∪ I 1 ∪ ... ∪ I r . Examples of o-minimal theories are the complete theory of dense linear orders in the language with just the ordering, and RCF, the theory of real closed fields . In the case of RCF, the definable sets are the semialgebraic sets ; thus the study of o-minimal structures and theories generalises real algebraic geometry . A major line of current research is based on discovering expansions of the real ordered field that are o-minimal. Despite the generality of application, one can show a great deal about the geometry of sets definable in o-minimal structures. There is a cell decomposition theorem, [ 6 ] Whitney and Verdier stratification theorems, and a good notion of dimension and Euler characteristic. Moreover, continuously differentiable definable functions in an o-minimal structure satisfy a generalization of the Łojasiewicz inequality , [ 7 ] a property that has been used to guarantee the convergence of some non-smooth optimization methods, such as the stochastic subgradient method (under some mild assumptions). [ 8 ] [ 9 ] [ 10 ]
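As a concrete illustration of the RCF case mentioned above, any set defined by polynomial inequalities in one real variable decomposes into finitely many points and intervals; the particular polynomials here are my own examples:

```latex
% One-variable semialgebraic sets are finite unions of points and intervals:
\{\, x \in \mathbb{R} : x^{2} < 2 \,\} = \left(-\sqrt{2},\, \sqrt{2}\right)
\qquad
\{\, x \in \mathbb{R} : x(x-1)(x-3) \ge 0 \,\} = [0,1] \cup [3,\infty)
```

In each case the defining formula has one free variable, and the resulting definable set is a finite union of intervals and points, exactly as o-minimality demands.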
https://en.wikipedia.org/wiki/O-minimal_theory
An o-ring boss seal is a technique for joining two fluid-carrying pipes , hoses , or tubing . In an o-ring boss (abbreviated ORB) system, a male -threaded part is inserted into a female-threaded part, providing the mechanical connection. This system differs from others in that a nut is tightened over an o-ring in a chamfered area, creating a fluid-tight seal. [ 1 ] This system is used frequently in hydraulics , although it has been applied to other systems including compressed air systems and vacuum pumps , [ 2 ] such as many Robinair pumps, in which the intake tee has an o-ring boss seal on the bottom. The ORB system can be confused with other connection systems, such as NPT . While threads of different connectors sometimes fit (although often very inexactly), o-ring boss connectors should never be used with any other type of connector, and vice versa, as leaks are likely. Under the high fluid pressures commonly seen in hydraulic systems, a leak or failure of the connection is quite dangerous and could lead to loss of life. This system has the advantage of being able to be tightened mechanically before being sealed. In most threaded systems, such as NPT, the seal is provided by a taper in the thread, so it is difficult to orient both ends of the hose, pipe or tube so that it is not twisted. In the o-ring boss system, this problem is eliminated: because the threads do not seal the connection, the fitting can be rotated at least a full revolution before being sealed while maintaining a proper mechanical connection. The orientation problem could also be solved with a suitable union .
https://en.wikipedia.org/wiki/O-ring_boss_seal