id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
3,012,448 | https://en.wikipedia.org/wiki/Nitrogen%20balance | In human physiology, nitrogen balance is the net difference between bodily nitrogen intake (ingestion) and loss (excretion). It can be represented as the following:
Nitrogen balance = Nitrogen intake − Nitrogen loss
Nitrogen is a fundamental chemical component of amino acids, the molecular building blocks of protein. As such, nitrogen balance may be used as an index of protein metabolism. When more nitrogen is gained than lost by an individual, they are considered to have a positive nitrogen balance and be in a state of overall protein anabolism. In contrast, a negative nitrogen balance, in which more nitrogen is lost than gained, indicates a state of overall protein catabolism.
The body obtains nitrogen from dietary protein, sources of which include meat, fish, eggs, dairy products, nuts, legumes, cereals, and grains. Nitrogen loss occurs largely through urine in the form of urea, as well as through faeces, sweat, and growth of hair and skin.
Blood urea nitrogen and urine urea nitrogen tests can be used to estimate nitrogen balance.
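A common clinical approximation estimates intake from dietary protein (protein is roughly 16% nitrogen, hence the factor 6.25) and losses from 24-hour urine urea nitrogen plus a fixed allowance for non-urea losses. The Python sketch below encodes this approximation; the function name and the 4 g/day allowance for faecal, sweat, and other non-urea losses are illustrative assumptions, not a clinical standard:

```python
# Hedged sketch: estimate nitrogen balance from protein intake and a
# 24-hour urine urea nitrogen (UUN) measurement. The 4 g/day constant
# for non-urea losses is an illustrative assumption.

def nitrogen_balance(protein_g_per_day: float, uun_g_per_day: float,
                     insensible_losses_g: float = 4.0) -> float:
    """Return estimated nitrogen balance in g N/day.

    Dietary protein is ~16% nitrogen, so intake ≈ protein / 6.25.
    Total loss is approximated as UUN plus a constant allowance for
    faecal, sweat, and other non-urea losses.
    """
    intake = protein_g_per_day / 6.25            # g N ingested
    loss = uun_g_per_day + insensible_losses_g   # g N excreted (estimate)
    return intake - loss

# Example: 90 g protein/day with 12 g/day UUN -> -1.6 g N/day (catabolic)
print(nitrogen_balance(90, 12))
```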
Physiological and Clinical Implications
Positive nitrogen balance is associated with periods of growth, hypothyroidism, tissue repair, and pregnancy.
Negative nitrogen balance is associated with burns, serious tissue injuries, fever, hyperthyroidism, wasting diseases, and periods of fasting. A negative nitrogen balance can be used as part of a clinical evaluation of malnutrition.
Nitrogen balance is a method traditionally used to measure dietary protein requirements. This approach necessitates the meticulous collection of all nitrogen inputs and outputs to ensure comprehensive accounting of nitrogen exchanges. Nitrogen balance studies typically involve controlled dietary conditions, requiring participants to consume specific diets to determine total nitrogen intake precisely. Furthermore, participants often must remain at the study location for the duration of the study to facilitate the collection of all nitrogen losses. Physical exercise is also known to influence nitrogen excretion, adding another variable that requires control during these studies. Due to the stringent conditions required for accurate results, the nitrogen balance method may pose challenges when studying dietary protein requirements across different demographics, such as children.
See also
Protein (nutrient)
Biological value
Net protein utilization
Protein efficiency ratio
Protein digestibility
Protein Digestibility Corrected Amino Acid Score
References
External links
Nitrogen
Proteins | Nitrogen balance | [
"Chemistry"
] | 467 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
3,014,017 | https://en.wikipedia.org/wiki/Zeta%20function%20regularization | In mathematics and theoretical physics, zeta function regularization is a type of regularization or summability method that assigns finite values to divergent sums or products, and in particular can be used to define determinants and traces of some self-adjoint operators. The technique is now commonly applied to problems in physics, but has its origins in attempts to give precise meanings to ill-conditioned sums appearing in number theory.
Definition
There are several different summation methods called zeta function regularization for defining the sum of a possibly divergent series a1 + a2 + ....
One method is to define its zeta regularized sum to be ζA(−1) if this is defined, where the zeta function is defined for large Re(s) by
ζA(s) = a1^(−s) + a2^(−s) + ⋯
if this sum converges, and by analytic continuation elsewhere.
In the case when an = n, the zeta function is the ordinary Riemann zeta function. This method was used by Ramanujan to "sum" the series 1 + 2 + 3 + 4 + ... to ζ(−1) = −1/12.
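This analytic continuation is easy to check numerically; for instance, mpmath's implementation of the Riemann zeta function evaluates directly at negative arguments. A quick sketch, assuming the mpmath package is installed:

```python
# Numerical check of the analytic continuation using mpmath's
# Riemann zeta implementation.
from mpmath import zeta

print(zeta(-1))   # -0.0833333... = -1/12, the regularized value of 1+2+3+...
print(zeta(-3))   # 0.00833333... = 1/120, used in the Casimir effect below
```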
Hawking showed that in flat space, in which the eigenvalues of Laplacians are known, the zeta function corresponding to the partition function can be computed explicitly. Consider a scalar field φ contained in a large box of volume V in flat spacetime at the temperature T = β−1. The partition function is defined by a path integral over all fields φ on the Euclidean space obtained by putting τ = it which are zero on the walls of the box and which are periodic in τ with period β. In this situation he computes the energy, entropy and pressure of the radiation of the field φ from the partition function. In the case of flat spaces the eigenvalues appearing in the physical quantities are generally known, while in the case of curved space they are not: in this case asymptotic methods are needed.
Another method defines the possibly divergent infinite product a1a2.... to be exp(−ζ′A(0)). Ray & Singer (1971) used this to define the determinant of a positive self-adjoint operator A (the Laplacian of a Riemannian manifold in their application) with eigenvalues a1, a2, ...., and in this case the zeta function is formally the trace of A−s. Minakshisundaram & Pleijel (1949) showed that if A is the Laplacian of a compact Riemannian manifold then the Minakshisundaram–Pleijel zeta function converges and has an analytic continuation as a meromorphic function to all complex numbers, and Seeley (1967) extended this to elliptic pseudo-differential operators A on compact Riemannian manifolds. So for such operators one can define the determinant using zeta function regularization. See "analytic torsion."
Hawking (1977) suggested using this idea to evaluate path integrals in curved spacetimes. He studied zeta function regularization in order to calculate the partition functions for thermal graviton and matter quanta in curved backgrounds, such as on the horizon of black holes and on de Sitter backgrounds, using the relation, via the inverse Mellin transformation, to the trace of the kernel of heat equations.
Example
The first example in which zeta function regularization is available appears in the Casimir effect, which takes place in flat space with the bulk contributions of the quantum field in three space dimensions. In this case we must calculate the value of the Riemann zeta function at s = −3, where the defining sum diverges explicitly. However, the function can be analytically continued to s = −3, where it has no pole, thus giving a finite value to the expression. A detailed example of this regularization at work is given in the article on the Casimir effect, where the resulting sum is very explicitly the Riemann zeta function (and where the seemingly legerdemain analytic continuation removes an additive infinity, leaving a physically significant finite number).
An example of zeta-function regularization is the calculation of the vacuum expectation value of the energy of a particle field in quantum field theory. More generally, the zeta-function approach can be used to regularize the whole energy–momentum tensor both in flat and in curved spacetime.
The unregulated value of the energy is given by a summation over the zero-point energy of all of the excitation modes of the vacuum:
⟨T00⟩ = (1/2) Σn |ωn|
Here, ⟨T00⟩ is the zeroth component of the energy–momentum tensor and the sum (which may be an integral) is understood to extend over all (positive and negative) energy modes ωn; the absolute value reminds us that the energy is taken to be positive. This sum, as written, is usually infinite (ωn is typically linear in n). The sum may be regularized by writing it as
⟨T00(s)⟩ = (1/2) Σn |ωn| · |ωn|^(−s)
where s is some parameter, taken to be a complex number. For large, real s greater than 4 (for three-dimensional space), the sum is manifestly finite, and thus may often be evaluated theoretically.
The zeta-regularization is useful as it can often be used in a way such that the various symmetries of the physical system are preserved. Zeta-function regularization is used in conformal field theory, renormalization and in fixing the critical spacetime dimension of string theory.
Relation to other regularizations
Zeta function regularization is equivalent to dimensional regularization. However, the main advantage of the zeta regularization is that it can be used whenever dimensional regularization fails, for example if there are matrices or tensors inside the calculations.
Relation to Dirichlet series
Zeta-function regularization gives an analytic structure to any sums over an arithmetic function f(n). Such sums are known as Dirichlet series. The regularized form
f̃(s) = Σn f(n) n^(−s)
converts divergences of the sum into simple poles on the complex s-plane. In numerical calculations, the zeta-function regularization is inappropriate, as it is extremely slow to converge. For numerical purposes, a more rapidly converging sum is the exponential regularization, given by
F(t) = Σn f(n) e^(−tn)
This is sometimes called the Z-transform of f, where z = exp(−t). The analytic structures of the exponential and zeta-regularizations are related. By expanding the exponential sum as a Laurent series
F(t) = aN/t^N + aN−1/t^(N−1) + ⋯
one finds that the zeta-series has the structure
f̃(s) = aN/(s−N) + ⋯
The structures of the exponential and zeta-regulators are related by means of the Mellin transform. The one may be converted to the other by making use of the integral representation of the gamma function:
Γ(s+1) = ∫0^∞ t^s e^(−t) dt
which leads to the identity
Γ(s+1) f̃(s+1) = ∫0^∞ t^s F(t) dt
relating the exponential and zeta-regulators, and converting poles in the s-plane to divergent terms in the Laurent series.
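For f(n) = 1, the exponential sum is F(t) = 1/(e^t − 1) and f̃ is the Riemann zeta function, so the identity can be verified numerically. A minimal sketch, assuming SciPy and NumPy are available:

```python
# Numerical check of the identity for f(n) = 1, where F(t) = 1/(e^t - 1)
# and the zeta-regularized sum is the Riemann zeta function:
#   Gamma(s+1) * zeta(s+1) = integral_0^inf t^s / (e^t - 1) dt   (Re s > 0)
from scipy.integrate import quad
from scipy.special import gamma, zeta
import numpy as np

s = 2.0
lhs = gamma(s + 1) * zeta(s + 1)
rhs, _ = quad(lambda t: t**s / np.expm1(t), 0, np.inf)
print(lhs, rhs)   # both ~ 2.40411...  (= 2 * zeta(3))
```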
Heat kernel regularization
The sum
f(t) = Σn an e^(−t ωn)
is sometimes called a heat kernel or a heat-kernel regularized sum; this name stems from the idea that the ωn can sometimes be understood as eigenvalues of the heat kernel. In mathematics, such a sum is known as a generalized Dirichlet series; its use for averaging is known as an Abelian mean. It is closely related to the Laplace–Stieltjes transform, in that
f(t) = ∫0^∞ e^(−st) dα(s)
where α(s) is a step function, with steps of an at s = ωn. A number of theorems for the convergence of such a series exist. For example, by the Hardy–Littlewood Tauberian theorem, if
L = lim sup (n→∞) log|a1 + ⋯ + an| / ωn
then the series for f(t) converges in the half-plane Re(t) > L and is uniformly convergent on every compact subset of the half-plane Re(t) > L. In almost all applications to physics, one has L = 0.
History
Much of the early work establishing the convergence and equivalence of series regularized with the heat kernel and zeta function regularization methods was done by G. H. Hardy and J. E. Littlewood in 1916 and is based on the application of the Cahen–Mellin integral. The effort was made in order to obtain values for various ill-defined, conditionally convergent sums appearing in number theory.
In terms of application as the regulator in physical problems, before Hawking's 1977 paper, J. Stuart Dowker and Raymond Critchley in 1976 proposed a zeta-function regularization method for quantum physical problems. Emilio Elizalde and others have also proposed a method based on zeta regularization for divergent integrals: a regulator s is introduced, and the divergent part of the integral is isolated as a function of s in the limit in which the regulator is removed (see renormalization). Also, unlike other regularizations such as dimensional regularization and analytic regularization, zeta regularization has no counterterms and gives only finite results.
See also
References
Tom M. Apostol, Modular Functions and Dirichlet Series in Number Theory, Springer-Verlag, New York. (See Chapter 8.)
A. Bytsenko, G. Cognola, E. Elizalde, V. Moretti and S. Zerbini, Analytic Aspects of Quantum Fields, World Scientific Publishing, 2003.
G. H. Hardy and J. E. Littlewood, "Contributions to the Theory of the Riemann Zeta-Function and the Theory of the Distribution of Primes", Acta Mathematica, 41 (1916), pp. 119–196. (See, for example, theorem 2.12.)
V. Moretti, "Direct ζ-function approach and renormalization of one-loop stress tensor in curved spacetimes", Phys. Rev. D 56, 7797 (1997).
D. Fermi, L. Pizzocchero, Local Zeta Regularization and the Scalar Casimir Effect: A General Approach Based on Integral Kernels, World Scientific Publishing, 2017.
Quantum field theory
String theory
Mathematical analysis
Zeta and L-functions
Summability methods | Zeta function regularization | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,936 | [
"Sequences and series",
"Quantum field theory",
"Astronomical hypotheses",
"Mathematical structures",
"Mathematical analysis",
"Summability methods",
"Quantum mechanics",
"String theory"
] |
3,014,542 | https://en.wikipedia.org/wiki/Motion%20detector | A motion detector is an electrical device that utilizes a sensor to detect nearby motion (motion detection). Such a device is often integrated as a component of a system that automatically performs a task or alerts a user of motion in an area. Motion detectors form a vital component of security, automated lighting control, home control, energy efficiency, and other useful systems. Motion detection can be achieved by either mechanical or electronic methods. When it is done by natural organisms, it is called motion perception.
Overview
An active electronic motion detector contains an optical, microwave, or acoustic sensor, as well as a transmitter. A passive detector, however, contains only a sensor and only senses a signature from the moving object via emission or reflection. Changes in the optical, microwave, or acoustic field in the device's proximity are interpreted by the electronics based on one of several technologies. Most low-cost motion detectors can detect motion at distances of about 15 feet (4.6 m). Specialized systems are more expensive but have either increased sensitivity or much longer ranges. Tomographic motion detection systems can cover much larger areas because the radio waves they sense are at frequencies which penetrate most walls and obstructions, and are detected in multiple locations.
Motion detectors have found wide use in commercial applications. One common application is activating automatic door openers in businesses and public buildings. Motion sensors are also widely used in lieu of a true occupancy sensor in activating street lights or indoor lights in walkways, such as lobbies and staircases. In such smart lighting systems, energy is conserved by only powering the lights for the duration of a timer, after which the person has presumably left the area. A motion detector may be among the sensors of a burglar alarm that is used to alert the home owner or security service when it detects the motion of a possible intruder. Such a detector may also trigger a security camera to record the possible intrusion.
Motion controllers are also used for video game consoles as game controllers. A camera can also allow the body's movements to be used for control, such as in the Kinect system.
Sensor technology
Motion can be detected by monitoring changes in:
Infrared light (passive and active sensors)
Visible light (video and camera systems)
Radio frequency energy (radar, microwave and tomographic motion detection)
Sound (microphones, other acoustic sensors)
Kinetic energy (triboelectric, seismic, and inertia-switch sensors)
Magnetism (magnetic sensors, magnetometers)
Wi-Fi Signals (WiFi Sensing)
Several types of motion detection are in wide use:
Passive infrared (PIR)
Passive infrared (PIR) sensors are sensitive to a person's skin temperature through emitted black-body radiation at mid-infrared wavelengths, in contrast to background objects at room temperature. No energy is emitted from the sensor, thus the name passive infrared. This distinguishes it from the electric eye for instance (not usually considered a motion detector), in which the crossing of a person or vehicle interrupts a visible or infrared beam. These devices can detect objects, people, or animals by picking up the infrared radiation they emit.
Mechanical
The most basic forms of mechanical motion detection utilize a switch or trigger. For example, the keys of a typewriter use a mechanical method of detecting motion, where each key is a switch that is either off or on, and each letter that appears is a result of the key's motion.
Microwave
These detect motion through the principle of Doppler radar, and are similar to a radar speed gun. A continuous wave of microwave radiation is emitted, and phase shifts in the reflected microwaves due to motion of an object toward (or away from) the receiver result in a heterodyne signal at a low audio frequency.
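For a sense of scale, the Doppler shift for a target moving radially at speed v toward a detector transmitting at carrier frequency f0 is f_d = 2·v·f0/c (the factor 2 accounts for the round trip of the reflected wave). A quick Python sketch; the 10.525 GHz carrier is an assumption, chosen as a common X-band module frequency:

```python
# Back-of-the-envelope Doppler shift for a microwave motion detector.
C = 299_792_458.0        # speed of light, m/s

def doppler_shift_hz(speed_m_s: float, carrier_hz: float = 10.525e9) -> float:
    # Reflection off a radially moving target doubles the one-way shift:
    # f_d = 2 * v * f0 / c
    return 2.0 * speed_m_s * carrier_hz / C

print(doppler_shift_hz(1.5))   # walking pace ~1.5 m/s -> ~105 Hz (low audio)
```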
Ultrasonic
An ultrasonic transducer emits an ultrasonic wave (sound at a frequency higher than a human ear can hear) and receives reflections from nearby objects. Exactly as in Doppler radar, heterodyne detection of the received field indicates motion. The detected Doppler shift is also at low audio frequencies (for walking speeds) since the ultrasonic wavelength of around a centimeter is similar to the wavelengths used in microwave motion detectors. One potential drawback of ultrasonic sensors is that the sensor can be sensitive to motion in areas where coverage is undesired, for instance, due to reflections of sound waves around corners. Such extended coverage may be desirable for lighting control, where the goal is the detection of any occupancy in an area, but for opening an automatic door, for example, a sensor selective to traffic in the path toward the door is superior.
Tomographic motion detector
These systems sense disturbances to radio waves as they pass from node to node of a mesh network. They can cover large areas completely because they sense through walls and other obstructions. RF tomographic motion detection systems may use dedicated hardware, other wireless-capable devices, or a combination of the two. Other wireless-capable devices can act as nodes on the mesh after receiving a software update.
Video camera software
With the proliferation of low-cost digital cameras able to shoot video, it is possible to use the output of such a camera to detect motion in its field of view using software. This solution is particularly attractive when the intent is to record video triggered by motion detection, as no hardware beyond the camera and computer is needed. Since the observed field may be normally illuminated, this may be considered another passive technology. However, it can also be used together with near-infrared illumination to detect motion in the dark, that is, with the illumination at a wavelength undetectable by a human eye.
More complex algorithms are necessary to detect motion when the camera itself is panning, or when a specific object's motion must be detected in a field containing other, irrelevant movement—for example, a painting surrounded by visitors in an art gallery. With a panning camera, models based on optical flow are used to distinguish between apparent background motion caused by the camera's movement and that of independently moving objects.
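For the simple fixed-camera case, a minimal frame-differencing detector along these lines can be sketched with OpenCV; the blur kernel, threshold, pixel-count trigger, and camera index below are illustrative assumptions, and error handling is omitted:

```python
# Minimal frame-differencing motion detector sketch using OpenCV (cv2).
import cv2

def preprocess(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (21, 21), 0)   # suppress sensor noise

cap = cv2.VideoCapture(0)                        # default camera
ok, frame = cap.read()
prev = preprocess(frame)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    cur = preprocess(frame)
    diff = cv2.absdiff(prev, cur)                # per-pixel change
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:            # enough pixels changed
        print("motion detected")
    prev = cur

cap.release()
```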
Gesture detector
Photodetectors and infrared lighting elements can support digital screens to detect hand motions and gestures with the aid of machine learning algorithms.
Dual-technology motion detectors
Many modern motion detectors use combinations of different technologies. While combining multiple sensing technologies into one detector can help reduce false triggering, it does so at the expense of reduced detection probabilities and increased vulnerability. For example, many dual-tech sensors combine both a PIR sensor and a microwave sensor into one unit. For motion to be detected, both sensors must trip together. This lowers the probability of a false alarm since heat and light changes may trip the (passive infrared) PIR but not the microwave, or moving tree branches may trigger the microwave but not the PIR. If an intruder is able to fool either the PIR or microwave, however, the sensor will not detect it.
Often, PIR technology is paired with another detection method to maximize accuracy and reduce energy use. PIR draws less energy than emissive microwave detection, and so many sensors are calibrated so that when the PIR sensor is tripped, it activates a microwave sensor. If the latter also picks up an intruder, then the alarm is sounded.
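The gating logic described above can be summarized in a few lines of Python; the names and structure are illustrative, not taken from any product:

```python
# Toy sketch of dual-technology trigger logic: the low-power PIR stage
# runs continuously and only powers up the microwave stage when tripped;
# an alarm requires both technologies to agree.
def dual_tech_alarm(pir_tripped: bool, read_microwave) -> bool:
    if not pir_tripped:
        return False             # microwave stage stays powered down
    return read_microwave()      # alarm only if both sensors trip

# dual_tech_alarm(True, lambda: True) -> True; any single trigger -> False
```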
See also
Twilight switch
Heat detector
Motion capture
Motion controller for video game consoles
Pickup (music technology)
Proximity sensor
Remote camera
Smoke detector
References
External links
Relational Motion Detection
www.cs.rochester.edu/~nelson/research
Motion Detection Algorithms In Image Processing
Motion Detection and Recognition Research
Presence and Absence detection explained
Motion detection sample algorithm realization video
Security technology
Home automation
Sensors
Motion (physics) | Motion detector | [
"Physics",
"Technology",
"Engineering"
] | 1,531 | [
"Home automation",
"Physical phenomena",
"Measuring instruments",
"Motion (physics)",
"Space",
"Mechanics",
"Spacetime",
"Sensors"
] |
3,015,029 | https://en.wikipedia.org/wiki/Tribometer | A tribometer is an instrument that measures tribological quantities, such as coefficient of friction, friction force, and wear volume, between two surfaces in contact. It was invented by the 18th-century Dutch scientist Pieter van Musschenbroek.
A tribotester is the general name given to a machine or device used to perform tests and simulations of wear, friction and lubrication which are the subject of the study of tribology. Often tribotesters are extremely specific in their function and are fabricated by manufacturers who desire to test and analyze the long-term performance of their products. An example is that of orthopedic implant manufacturers who have spent considerable sums of money to develop tribotesters that accurately reproduce the motions and forces that occur in human hip joints so that they can perform accelerated wear tests of their products.
Theory
A simple tribometer is described by a hanging mass and a mass resting on a horizontal surface, connected to each other via a string and pulley. The coefficient of friction, μ, when the system is stationary, is determined by increasing the hanging mass until the moment that the resting mass begins to slide. Then using the general equation for friction force:
F = μN
where N, the normal force, is equal to the weight (mass × gravity) of the resting mass (mT) and F, the loading force, is equal to the weight (mass × gravity) of the hanging mass (mH).
To determine the kinetic coefficient of friction the hanging mass is increased or decreased until the mass system moves at a constant speed.
In both cases, the coefficient of friction simplifies to the ratio of the two masses:
μ = mH / mT
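A sketch of this calculation in Python (function and variable names are mine):

```python
# Mass-ratio calculation for the simple pulley tribometer described above.
def coefficient_of_friction(hanging_mass_kg: float, resting_mass_kg: float) -> float:
    # F = mu * N with F = m_H * g and N = m_T * g, so g cancels:
    return hanging_mass_kg / resting_mass_kg

print(coefficient_of_friction(0.3, 1.0))   # mu = 0.3
```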
In most test applications using tribometers, wear is measured by comparing the mass or surfaces of test specimens before and after testing. Equipment and methods used to examine the worn surfaces include optical microscopes, scanning electron microscopes, optical interferometry and mechanical roughness testers.
Types
Tribometers are often referred to by the specific contact arrangement they simulate or by the original equipment developer. Several arrangements are:
Four ball
Pin on disc
Ball on disc
Ring on ring
Ball on three plates
Reciprocating pin (usually referred to as SRV or HFRR)
Block on ring
Bouncing ball
Fretting test machine
Twin disc
Bouncing ball
A bouncing ball tribometer consists of a ball which is impacted at an angle against a surface. During a typical test, a ball is slid at an angle along a track until it impacts a surface and then bounces off the surface. The friction produced in the contact between the ball and the surface results in a horizontal force on the surface and a rotational force on the ball. Frictional force is determined by finding the rotational speed of the ball using high speed photography or by measuring the force on the horizontal surface. Pressure in the contact is very high due to the large instantaneous force caused by the impact with the ball.
Bouncing ball tribometers have been used to determine the shear characteristics of lubricants under high pressures such as is found in ball bearings or gears.
Pin on disc
A pin on disc tribometer consists of a stationary pin that is normally loaded against a rotating disc. The pin can have any shape to simulate a specific contact, but cylindrical tips are often used to simplify the contact geometry. The coefficient of friction is determined by the ratio of the frictional force to the loading force on the pin.
The pin on disc test has proved useful in providing a simple wear and friction test for low friction coatings such as diamond-like carbon coatings on valve train components in internal combustion engines.
See also
Abrasion
Twist compression tester
Tribology
References
Tribology
Measuring instruments
Materials science | Tribometer | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 751 | [
"Tribology",
"Applied and interdisciplinary physics",
"Materials science",
"Surface science",
"Measuring instruments",
"nan",
"Mechanical engineering"
] |
3,015,758 | https://en.wikipedia.org/wiki/Maximum%20entropy%20thermodynamics | In physics, maximum entropy thermodynamics (colloquially, MaxEnt thermodynamics) views equilibrium thermodynamics and statistical mechanics as inference processes. More specifically, MaxEnt applies inference techniques rooted in Shannon information theory, Bayesian probability, and the principle of maximum entropy. These techniques are relevant to any situation requiring prediction from incomplete or insufficient data (e.g., image reconstruction, signal processing, spectral analysis, and inverse problems). MaxEnt thermodynamics began with two papers by Edwin T. Jaynes published in the 1957 Physical Review.
Maximum Shannon entropy
Central to the MaxEnt thesis is the principle of maximum entropy. It takes as given a partly specified model and some specified data related to the model. It selects a preferred probability distribution to represent the model. The given data state "testable information" about the probability distribution, for example particular expectation values, but are not in themselves sufficient to uniquely determine it. The principle states that one should prefer the distribution which maximizes the Shannon information entropy,
SI = −Σi pi ln pi
This is known as the Gibbs algorithm, having been introduced by J. Willard Gibbs in 1878, to set up statistical ensembles to predict the properties of thermodynamic systems at equilibrium. It is the cornerstone of the statistical mechanical analysis of the thermodynamic properties of equilibrium systems (see partition function).
A direct connection is thus made between the equilibrium thermodynamic entropy STh, a state function of pressure, volume, temperature, etc., and the information entropy for the predicted distribution with maximum uncertainty conditioned only on the expectation values of those variables:
STh(P, V, T, ...) = kB SI(P, V, T, ...)
kB, the Boltzmann constant, has no fundamental physical significance here, but is necessary to retain consistency with the previous historical definition of entropy by Clausius (1865) (see Boltzmann constant).
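As a small numerical illustration of the Gibbs algorithm, one can maximize the Shannon entropy of a discrete distribution subject to normalization and a fixed expectation value; the result takes the canonical exp(−βE)/Z form. A minimal sketch in Python, where the three-level system, the target mean energy, and all names are illustrative assumptions:

```python
# Maximize Shannon entropy subject to fixed mean energy (Gibbs algorithm).
import numpy as np
from scipy.optimize import minimize

E = np.array([0.0, 1.0, 2.0])      # energy levels (arbitrary choice)
E_mean = 0.6                       # the "testable information"

def neg_entropy(p):
    return np.sum(p * np.log(p))   # minimize -S_I = sum p ln p

cons = ({'type': 'eq', 'fun': lambda p: p.sum() - 1.0},
        {'type': 'eq', 'fun': lambda p: p @ E - E_mean})
res = minimize(neg_entropy, x0=np.ones(3) / 3, constraints=cons,
               bounds=[(1e-9, 1.0)] * 3)

# The optimum matches the canonical (Boltzmann) form exp(-beta*E)/Z.
print(res.x)
```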
However, the MaxEnt school argue that the MaxEnt approach is a general technique of statistical inference, with applications far beyond this. It can therefore also be used to predict a distribution for "trajectories" Γ "over a period of time" by maximising:
SI = −ΣΓ pΓ ln pΓ
This "information entropy" does not necessarily have a simple correspondence with thermodynamic entropy. But it can be used to predict features of nonequilibrium thermodynamic systems as they evolve over time.
For non-equilibrium scenarios, in an approximation that assumes local thermodynamic equilibrium, with the maximum entropy approach, the Onsager reciprocal relations and the Green–Kubo relations fall out directly. The approach also creates a theoretical framework for the study of some very special cases of far-from-equilibrium scenarios, making the derivation of the entropy production fluctuation theorem straightforward. For non-equilibrium processes, as is so for macroscopic descriptions, a general definition of entropy for microscopic statistical mechanical accounts is also lacking.
Technical note: For the reasons discussed in the article differential entropy, the simple definition of Shannon entropy ceases to be directly applicable for random variables with continuous probability distribution functions. Instead the appropriate quantity to maximize is the "relative information entropy",
Hc = −∫ p(x) ln[p(x)/m(x)] dx
Hc is the negative of the Kullback–Leibler divergence, or discrimination information, of m(x) from p(x), where m(x) is a prior invariant measure for the variable(s). The relative entropy Hc is never greater than zero, and can be thought of as (the negative of) the number of bits of uncertainty lost by fixing on p(x) rather than m(x). Unlike the Shannon entropy, the relative entropy Hc has the advantage of remaining finite and well-defined for continuous x, and invariant under 1-to-1 coordinate transformations. The two expressions coincide for discrete probability distributions, if one can make the assumption that m(xi) is uniform – i.e. the principle of equal a-priori probability, which underlies statistical thermodynamics.
Philosophical implications
Adherents to the MaxEnt viewpoint take a clear position on some of the conceptual/philosophical questions in thermodynamics. This position is sketched below.
The nature of the probabilities in statistical mechanics
Jaynes (1985, 2003, et passim) discussed the concept of probability. According to the MaxEnt viewpoint, the probabilities in statistical mechanics are determined jointly by two factors: by respectively specified particular models for the underlying state space (e.g. Liouvillian phase space); and by respectively specified particular partial descriptions of the system (the macroscopic description of the system used to constrain the MaxEnt probability assignment). The probabilities are objective in the sense that, given these inputs, a uniquely defined probability distribution will result, the same for every rational investigator, independent of the subjectivity or arbitrary opinion of particular persons. The probabilities are epistemic in the sense that they are defined in terms of specified data and derived from those data by definite and objective rules of inference, the same for every rational investigator. Here the word epistemic, which refers to objective and impersonal scientific knowledge, the same for every rational investigator, is used in the sense that contrasts it with opiniative, which refers to the subjective or arbitrary beliefs of particular persons; this contrast was used by Plato and Aristotle, and remains reliable today.
Jaynes also used the word 'subjective' in this context because others have used it in this context. He accepted that in a sense, a state of knowledge has a subjective aspect, simply because it refers to thought, which is a mental process. But he emphasized that the principle of maximum entropy refers only to thought which is rational and objective, independent of the personality of the thinker. In general, from a philosophical viewpoint, the words 'subjective' and 'objective' are not contradictory; often an entity has both subjective and objective aspects. Jaynes explicitly rejected the criticism of some writers that, just because one can say that thought has a subjective aspect, thought is automatically non-objective. He explicitly rejected subjectivity as a basis for scientific reasoning, the epistemology of science; he required that scientific reasoning have a fully and strictly objective basis. Nevertheless, critics continue to attack Jaynes, alleging that his ideas are "subjective". One writer even goes so far as to label Jaynes' approach as "ultrasubjectivist", and to mention "the panic that the term subjectivism created amongst physicists".
The probabilities represent both the degree of knowledge and lack of information in the data and the model used in the analyst's macroscopic description of the system, and also what those data say about the nature of the underlying reality.
The fitness of the probabilities depends on whether the constraints of the specified macroscopic model are a sufficiently accurate and/or complete description of the system to capture all of the experimentally reproducible behavior. This cannot be guaranteed, a priori. For this reason MaxEnt proponents also call the method predictive statistical mechanics. The predictions can fail. But if they do, this is informative, because it signals the presence of new constraints needed to capture reproducible behavior in the system, which had not been taken into account.
Is entropy "real"?
The thermodynamic entropy (at equilibrium) is a function of the state variables of the model description. It is therefore as "real" as the other variables in the model description. If the model constraints in the probability assignment are a "good" description, containing all the information needed to predict reproducible experimental results, then that includes all of the results one could predict using the formulae involving entropy from classical thermodynamics. To that extent, the MaxEnt STh is as "real" as the entropy in classical thermodynamics.
Of course, in reality there is only one real state of the system. The entropy is not a direct function of that state. It is a function of the real state only through the (subjectively chosen) macroscopic model description.
Is ergodic theory relevant?
The Gibbsian ensemble idealizes the notion of repeating an experiment again and again on different systems, not again and again on the same system. So long-term time averages and the ergodic hypothesis, despite the intense interest in them in the first part of the twentieth century, strictly speaking are not relevant to the probability assignment for the state one might find the system in.
However, this changes if there is additional knowledge that the system is being prepared in a particular way some time before the measurement. One must then consider whether this gives further information which is still relevant at the time of measurement. The question of how 'rapidly mixing' different properties of the system are then becomes very much of interest. Information about some degrees of freedom of the combined system may become unusable very quickly; information about other properties of the system may go on being relevant for a considerable time.
If nothing else, the medium and long-run time correlation properties of the system are interesting subjects for experimentation in themselves. Failure to accurately predict them is a good indicator that relevant macroscopically determinable physics may be missing from the model.
The second law
According to Liouville's theorem for Hamiltonian dynamics, the hyper-volume of a cloud of points in phase space remains constant as the system evolves. Therefore, the information entropy must also remain constant, if we condition on the original information, and then follow each of those microstates forward in time:
SI(2) = SI(1)
However, as time evolves, that initial information we had becomes less directly accessible. Instead of being easily summarizable in the macroscopic description of the system, it increasingly relates to very subtle correlations between the positions and momenta of individual molecules. (Compare to Boltzmann's H-theorem.) Equivalently, it means that the probability distribution for the whole system, in 6N-dimensional phase space, becomes increasingly irregular, spreading out into long thin fingers rather than the initial tightly defined volume of possibilities.
Classical thermodynamics is built on the assumption that entropy is a state function of the macroscopic variables—i.e., that none of the history of the system matters, so that it can all be ignored.
The extended, wispy, evolved probability distribution, which still has the initial Shannon entropy STh(1), should reproduce the expectation values of the observed macroscopic variables at time t2. However, it will no longer necessarily be a maximum entropy distribution for that new macroscopic description. On the other hand, the new thermodynamic entropy STh(2) assuredly will measure the maximum entropy distribution, by construction. Therefore, we expect:
STh(2) ≥ STh(1)
At an abstract level, this result implies that some of the information we originally had about the system has become "no longer useful" at a macroscopic level. At the level of the 6N-dimensional probability distribution, this result represents coarse graining—i.e., information loss by smoothing out very fine-scale detail.
Caveats with the argument
Some caveats should be considered with the above.
1. Like all statistical mechanical results according to the MaxEnt school, this increase in thermodynamic entropy is only a prediction. It assumes in particular that the initial macroscopic description contains all of the information relevant to predicting the later macroscopic state. This may not be the case, for example if the initial description fails to reflect some aspect of the preparation of the system which later becomes relevant. In that case the "failure" of a MaxEnt prediction tells us that there is something more which is relevant that we may have overlooked in the physics of the system.
It is also sometimes suggested that quantum measurement, especially in the decoherence interpretation, may give an apparently unexpected reduction in entropy per this argument, as it appears to involve macroscopic information becoming available which was previously inaccessible. (However, the entropy accounting of quantum measurement is tricky, because to get full decoherence one may be assuming an infinite environment, with an infinite entropy).
2. The argument so far has glossed over the question of fluctuations. It has also implicitly assumed that the uncertainty predicted at time t1 for the variables at time t2 will be much smaller than the measurement error. But if the measurements do meaningfully update our knowledge of the system, our uncertainty as to its state is reduced, giving a new SI(2) which is less than SI(1). (Note that if we allow ourselves the abilities of Laplace's demon, the consequences of this new information can also be mapped backwards, so our uncertainty about the dynamical state at time t1 is now also reduced from SI(1) to SI(2)).
We know that STh(2) > SI(2); but we can now no longer be certain that it is greater than STh(1) = SI(1). This then leaves open the possibility for fluctuations in STh. The thermodynamic entropy may go "down" as well as up. A more sophisticated analysis is given by the entropy Fluctuation Theorem, which can be established as a consequence of the time-dependent MaxEnt picture.
3. As just indicated, the MaxEnt inference runs equally well in reverse. So given a particular final state, we can ask, what can we "retrodict" to improve our knowledge about earlier states? However the Second Law argument above also runs in reverse: given macroscopic information at time t2, we should expect it too to become less useful. The two procedures are time-symmetric. But now the information will become less and less useful at earlier and earlier times. (Compare with Loschmidt's paradox.) The MaxEnt inference would predict that the most probable origin of a currently low-entropy state would be as a spontaneous fluctuation from an earlier high entropy state. But this conflicts with what we know to have happened, namely that entropy has been increasing steadily, even back in the past.
The MaxEnt proponents' response to this would be that such a systematic failing in the prediction of a MaxEnt inference is a "good" thing. It means that there is thus clear evidence that some important physical information has been missed in the specification the problem. If it is correct that the dynamics "are" time-symmetric, it appears that we need to put in by hand a prior probability that initial configurations with a low thermodynamic entropy are more likely than initial configurations with a high thermodynamic entropy. This cannot be explained by the immediate dynamics. Quite possibly, it arises as a reflection of the evident time-asymmetric evolution of the universe on a cosmological scale (see arrow of time).
Criticisms
The Maximum Entropy thermodynamics has some important opposition, in part because of the relative paucity of published results from the MaxEnt school, especially with regard to new testable predictions far-from-equilibrium.
The theory has also been criticized on the grounds of internal consistency. For instance, Radu Balescu provides a strong criticism of the MaxEnt school and of Jaynes' work. Balescu states that the theory of Jaynes and coworkers is based on a non-transitive evolution law that produces ambiguous results. Although some difficulties of the theory can be cured, the theory "lacks a solid foundation" and "has not led to any new concrete result".
Though the maximum entropy approach is based directly on informational entropy, it is applicable to physics only when there is a clear physical definition of entropy. There is no clear unique general physical definition of entropy for non-equilibrium systems, which are general physical systems considered during a process rather than thermodynamic systems in their own internal states of thermodynamic equilibrium. It follows that the maximum entropy approach will not be applicable to non-equilibrium systems until there is found a clear physical definition of entropy. This problem is related to the fact that heat may be transferred from a hotter to a colder physical system even when local thermodynamic equilibrium does not hold so that neither system has a well-defined temperature. Classical entropy is defined for a system in its own internal state of thermodynamic equilibrium, which is defined by state variables, with no non-zero fluxes, so that flux variables do not appear as state variables. But for a strongly non-equilibrium system, during a process, the state variables must include non-zero flux variables. Classical physical definitions of entropy do not cover this case, especially when the fluxes are large enough to destroy local thermodynamic equilibrium. In other words, the definition of entropy for non-equilibrium systems in general will need at least to involve specification of the process, including non-zero fluxes, beyond the classical static thermodynamic state variables. The 'entropy' that is maximized needs to be defined suitably for the problem at hand. If an inappropriate 'entropy' is maximized, a wrong result is likely. In principle, maximum entropy thermodynamics does not refer narrowly and only to classical thermodynamic entropy. It is about informational entropy applied to physics, explicitly depending on the data used to formulate the problem at hand. According to Attard, for physical problems analyzed by strongly non-equilibrium thermodynamics, several physically distinct kinds of entropy need to be considered, including what he calls second entropy. Attard writes: "Maximizing the second entropy over the microstates in the given initial macrostate gives the most likely target macrostate." The physically defined second entropy can also be considered from an informational viewpoint.
See also
Edwin Thompson Jaynes
First law of thermodynamics
Second law of thermodynamics
Principle of maximum entropy
Principle of Minimum Discrimination Information
Kullback–Leibler divergence
Quantum relative entropy
Information theory and measure theory
Entropy power inequality
References
Bibliography of cited references
Guttmann, Y.M. (1999). The Concept of Probability in Statistical Physics, Cambridge University Press, Cambridge UK.
Further reading
Shows invalidity of Dewar's derivations (a) of maximum entropy production (MaxEP) from fluctuation theorem for far-from-equilibrium systems, and (b) of a claimed link between MaxEP and self-organized criticality.
Grandy, W. T., 1987. Foundations of Statistical Mechanics. Vol 1: Equilibrium Theory; Vol. 2: Nonequilibrium Phenomena. Dordrecht: D. Reidel.
Extensive archive of further papers by E.T. Jaynes on probability and physics. Many are collected in Jaynes, E. T. (1983), Papers on Probability, Statistics and Statistical Physics, ed. R. D. Rosenkrantz, Dordrecht: Reidel.
Statistical mechanics
Philosophy of thermal and statistical physics
Non-equilibrium thermodynamics
Information theory
Thermodynamics
Thermodynamic entropy | Maximum entropy thermodynamics | [
"Physics",
"Chemistry",
"Mathematics",
"Technology",
"Engineering"
] | 3,833 | [
"Telecommunications engineering",
"Philosophy of thermal and statistical physics",
"Physical quantities",
"Applied mathematics",
"Non-equilibrium thermodynamics",
"Thermodynamic entropy",
"Computer science",
"Entropy",
"Information theory",
"Thermodynamics",
"Statistical mechanics",
"Dynamical... |
3,016,310 | https://en.wikipedia.org/wiki/Fission%20track%20dating | Fission track dating is a radiometric dating technique based on analyses of the damage trails, or tracks, left by fission fragments in certain uranium-bearing minerals and glasses. Fission-track dating is a relatively simple method of radiometric dating that has made a significant impact on understanding the thermal history of continental crust, the timing of volcanic events, and the source and age of different archeological artifacts. The method involves using the number of fission events produced from the spontaneous decay of uranium-238 in common accessory minerals to date the time of rock cooling below closure temperature. Fission tracks are sensitive to heat, and therefore the technique is useful at unraveling the thermal evolution of rocks and minerals. Most current research using fission tracks is aimed at: a) understanding the evolution of mountain belts; b) determining the source or provenance of sediments; c) studying the thermal evolution of basins; d) determining the age of poorly dated strata; and e) dating and provenance determination of archeological artifacts.
In the 1930s it was discovered that uranium (specifically U-235) would undergo fission when struck by neutrons. This caused damage tracks in solids which could be revealed by chemical etching.
Method
Unlike other isotopic dating methods, the "daughter" in fission track dating is an effect in the crystal rather than a daughter isotope. Uranium-238 undergoes spontaneous fission decay at a known rate, and it is the only isotope with a decay rate that is relevant to the significant production of natural fission tracks; other isotopes have fission decay rates too slow to be of consequence. The fragments emitted by this fission process
leave trails of damage (fossil tracks or ion tracks) in the crystal structure of the mineral that contains the uranium. The process of track production is essentially the same by which swift heavy ions produce ion tracks.
Chemical etching of polished internal surfaces of these minerals reveals spontaneous fission tracks, and the track density can be determined. Because etched tracks are relatively large (in the range 1 to 15 micrometres), counting can be done by optical microscopy, although other imaging techniques are used. The density of fossil tracks correlates with the cooling age of the sample and with uranium content, which needs to be determined independently.
To determine the uranium content, several methods have been used. One method is neutron irradiation: an external detector, typically a low-uranium mica flake (plastics such as CR-39 have also been used), is affixed to the polished grain surface, and sample and detector are simultaneously irradiated with thermal neutrons in a nuclear reactor. The irradiation induces fission of uranium-235 in the sample, which creates induced tracks in the overlying external detector; these are later revealed by chemical etching. Because the 235U:238U ratio is well known and assumed constant in nature (although it is not strictly invariable), the density of induced tracks gives the uranium content of the sample. The ratio of spontaneous to induced tracks is proportional to the age.
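For illustration, the widely used zeta-calibration form of the fission-track age equation can be sketched as follows; the constants and sample values are illustrative, and real analyses propagate counting statistics and calibration uncertainties:

```python
# Hedged sketch of the zeta-calibration age equation used with the
# external-detector method. lambda_d is the total decay constant of 238U.
import math

LAMBDA_D = 1.55125e-10   # total 238U decay constant (1/yr)

def ft_age_yr(rho_s, rho_i, rho_d, zeta, g=0.5):
    """rho_s, rho_i: spontaneous and induced track densities (tracks/cm^2);
    rho_d: dosimeter glass track density; zeta: lab calibration factor;
    g: geometry factor (0.5 for the external-detector method)."""
    return (1.0 / LAMBDA_D) * math.log(
        1.0 + LAMBDA_D * zeta * g * rho_d * (rho_s / rho_i))

# e.g. ft_age_yr(1e6, 2e6, 5e5, 350) -> roughly 44 Myr
```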
Another method of determining uranium concentration is through LA-ICPMS, a technique where the crystal is hit with a laser beam and ablated, and then the material is passed through a mass spectrometer.
Applications
Unlike many other dating techniques, fission-track dating is uniquely suited for determining low-temperature thermal events using common accessory minerals over a very wide geological range (typically 0.1 Ma to 2000 Ma). Apatite, sphene, zircon, micas and volcanic glass typically contain enough uranium to be useful in dating samples of relatively young age (Mesozoic and Cenozoic) and are the materials most useful for this technique. Additionally low-uranium epidotes and garnets may be used for very old samples (Paleozoic to Precambrian). The fission-track dating technique is widely used in understanding the thermal evolution of the upper crust, especially in mountain belts. Fission tracks are preserved in a crystal when the ambient temperature of the rock falls below the annealing temperature. This annealing temperature varies from mineral to mineral and is the basis for determining low-temperature vs. time histories. While the details of closure temperatures are complicated, they are approximately 70 to 110 °C for typical apatite, c. 230 to 250 °C for zircon, and c. 300 °C for titanite.
Because heating of a sample above the annealing temperature causes the fission damage to heal or anneal, the technique is useful for dating the most recent cooling event in the history of the sample. This resetting of the clock can be used to investigate the thermal history of basin sediments, kilometer-scale exhumation caused by tectonism and erosion, low temperature metamorphic events, and geothermal vein formation. The fission track method has also been used to date archaeological sites and artifacts. It was used to confirm the potassium-argon dates for the deposits at Olduvai Gorge.
Provenance analysis of detrital grains
A number of datable minerals occur as common detrital grains in sandstones, and if the strata have not been buried too deeply, these minerals grains retain information about the source rock. Fission track analysis of these minerals provides information about the thermal evolution of the source rocks and therefore can be used to understand provenance and the evolution of mountain belts that shed the sediment. This technique of detrital analysis is most commonly applied to zircon because it is very common and robust in the sedimentary system, and in addition it has a relatively high annealing temperature so that in many sedimentary basins the crystals are not reset by later heating.
Fission-track dating of detrital zircon is a widely applied analytical tool used to understand the tectonic evolution of source terrains that have left a long and continuous erosional record in adjacent basin strata. Early studies focused on using the cooling ages in detrital zircon from stratigraphic sequences to document the timing and rate of erosion of rocks in adjacent orogenic belts (mountain ranges). A number of recent studies have combined U/Pb and/or Helium dating (U+Th/He) on single crystals to document the specific history of individual crystals. This double-dating approach is an extremely powerful provenance tool because a nearly complete crystal history can be obtained, and therefore researchers can pinpoint specific source areas with distinct geologic histories with relative certainty. Fission-track ages on detrital zircon can be as young as 1 Ma to as old as 2000 Ma.
See also
Radiometric dating
Thermochronology
References
Further reading
Naeser, C. W., Fission-Track Dating and Geologic Annealing of Fission Tracks, in: Jäger, E. and J. C. Hunziker, Lectures in Isotope Geology, Springer-Verlag, 1979.
Garver, J.I., 2008, Fission-track dating. In Encyclopedia of Paleoclimatology and Ancient Environments, V. Gornitz, (Ed.), Encyclopedia of Earth Science Series, Kluwer Academic Press, p. 247-249.
Wagner, G. A., and Van den Haute, P., 1992, Fission-Track Dating; Kluwer Academic Publishers, 285 pp.
Enkelmann, E., Garver, J.I., and Pavlis, T.L., 2008, Rapid exhumation of ice-covered rocks of the Chugach-St. Elias Orogen, Southeast Alaska. Geology, v. 36, n.12, p. 915-918.
Garver, J.I. and Montario, M.J., 2008. Detrital fission-track ages from the Upper Cambrian Potsdam Formation, New York: implications for the low-temperature thermal history of the Grenville terrane. In: Garver, J.I., and Montario, M.J. (eds.) Proceedings from the 11th International Conference on thermochronometry, Anchorage Alaska, Sept. 2008, p. 87-89.
Bernet, M., and Garver, J.I., 2005, Chapter 8: Fission-track analysis of Detrital zircon, In P.W. Reiners, and T. A. Ehlers, (eds.), Low-Temperature thermochronology: Techniques, Interpretations, and Applications, Reviews in Mineralogy and Geochemistry Series, v. 58, p. 205-237.
Radiometric dating
Nuclear fission
Uranium | Fission track dating | [
"Physics",
"Chemistry"
] | 1,792 | [
"Nuclear fission",
"Radiometric dating",
"Radioactivity",
"Nuclear physics"
] |
16,018,002 | https://en.wikipedia.org/wiki/Trans-Spliced%20Exon%20Coupled%20RNA%20End%20Determination | Trans-Spliced Exon Coupled RNA End Determination (TEC-RED) is a transcriptomic technique that, like SAGE, allows for the digital detection of messenger RNA sequences. Unlike SAGE, detection and purification of transcripts from the 5’ end of the messenger RNA require the presence of a trans-spliced leader sequence.
Trans-splicing Background
Spliced leader sequences are short sequences of non-coding RNA, not found within a gene itself, that are attached to the 5’ end of all, or a portion of, mRNAs transcribed in an organism. They have been found in several species to be responsible for separating polycistronic transcripts into single-gene mRNAs, and in others to splice onto monocistronic transcripts. The major role of trans-splicing on monocistronic transcripts is largely unknown. It has been proposed that they may act as an independent promoter that aids in tissue-specific expression of independent protein isoforms. Spliced leaders have been seen in trypanosomatids, Euglena, flatworms, and Caenorhabditis. Some species contain only one spliced leader sequence found on all mRNAs. In C. elegans two are seen and are labeled SL1 and SL2.
TEC-RED Methods
Total RNA is purified from the specimen of interest. Poly A messenger RNA is then purified from total RNA and subsequently reverse transcribed into cDNA. The cDNA produced from the mRNA is labeled using primers homologous to the spliced leader sequences of the organism. In a nine-step PCR reaction the cDNAs are concurrently embedded with the BpmI restriction endonuclease site (though any class IIs restriction endonuclease may work) and a biotin label, which are present in the primers. These tagged cDNAs are then cleaved 14 bp downstream from the recognition site using BpmI restriction endonuclease and blunt-ended with T4 DNA polymerase. The fragments are further purified away from extraneous DNA material by using the biotin labels to bind them to a streptavidin matrix. They are then ligated to adapter DNA, in six separate reactions, containing six different restriction endonuclease recognition sites. These tags are then amplified by PCR with primers containing a mismatch changing the BpmI site to an XhoI site. The amplicons are concatenated and ligated into a plasmid vector. The clonal vectors are then sequenced and mapped to the genome.
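As an illustrative sketch (not part of the published protocol), the in silico step of recovering a 5’ tag downstream of a spliced leader from a sequenced read might look like this in Python; the SL1 sequence shown is the canonical C. elegans 22-mer, and all function and parameter names are assumptions:

```python
# Illustrative sketch: pull the 14-bp tag that follows a trans-spliced
# leader in a read. Error handling is minimal by design.
SL1 = "GGTTTAATTACCCAAGTTTGAG"   # canonical C. elegans SL1 (22 nt)

def extract_tec_red_tag(read, leader=SL1, tag_len=14):
    """Return the tag_len bases immediately 3' of the spliced leader,
    or None if the read does not carry the leader or is too short."""
    pos = read.find(leader)
    if pos == -1:
        return None                      # read is not trans-spliced
    start = pos + len(leader)
    tag = read[start:start + tag_len]
    return tag if len(tag) == tag_len else None

# extract_tec_red_tag("NN" + SL1 + "ATGGCGTCAAAGCTTTTT") -> "ATGGCGTCAAAGCT"
```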
Concatenation
Concatenation of the tags, as developed in 2004, is different from that seen in SAGE. Cleavage of the tags with XhoI and mixing of the different samples, followed by ligation, form the first concatenation step. The second step uses one of the restriction endonucleases with consensus to the adapter molecule attached to the 3’ end. The products are again ligated, and PCR is performed to purify samples for the next joining. The concatenation is continued with the second restriction endonuclease, followed by the third and finally the fourth. This results in the concatamer formed by the six endonuclease ligations containing 32 tags, arranged 5’ to 5’ around the XhoI site. In SAGE, concatenation takes place after ditags are formed and amplified by PCR. The linkers on the outside of the ditags are cleaved with the enzyme that provided their binding, and these sticky-end ditags are concatenated randomly and placed into a cloning vector.
Advantages
The advantage of TEC-RED over SAGE is that no restriction endonuclease is needed for the initial linker binding. This prevents bias associated with restriction site sequences that will be missing from some genes, as is seen in SAGE. The ability to have a snapshot of specific RNA isoforms allows the deduction of differential regulation of isoforms through alternative selection of promoters. This may also aid in the discernment of expression patterns unique to the SL1 or SL2 sequence. TEC-RED also allows characterization of the 5’ ends of RNA produced and therefore of isoforms that differ by the amino terminal splicing. The technology permits the determination and verification of all known and unknown genes that may be predicted as well as the 5’ splice isoforms or 5’ RNA ends that may be produced. Using TEC-RED in conjunction with SAGE or a modified protocol will allow discernment of the 5’ and 3’ ends of transcripts, respectively. The identification of alternative splice variants, and possibly the relative quantities, containing a trans-spliced leader sequence is therefore possible.
Variations
Two alternate techniques have been described that allow 5’ tag analysis in organisms that lack trans-spliced leader sequences. The techniques, presented by Toshiyuki et al. and Shin-ichi et al., are called CAGE and 5’ SAGE respectively. CAGE utilizes biotinylated cap-trapper technology to select full-length cDNAs, which have adapter sequences ligated onto the 5’ end. 5’ SAGE utilizes oligo-capping technology. Both use their adapter sequence as a priming site after the cDNA is created. Both methods have disadvantages, however: CAGE has been shown to produce tags with an added guanine at the first position, and oligo-capping may introduce sequence bias due to the use of RNA ligase.
See also
RNA-seq
DNA microarray
References
External links
CAGE Tags http://genome.gsc.riken.jp/absolute/
5’ SAGE results https://archive.today/20040821030224/http://5sage.gi.k.u-tokyo.ac.jp/
TEC RED Tags seen in wormbase https://web.archive.org/web/20080909025225/http://www.wormbase.org/db/searches/advanced/dumper
RNA
Molecular biology | Trans-Spliced Exon Coupled RNA End Determination | [
"Chemistry",
"Biology"
] | 1,310 | [
"Biochemistry",
"Molecular biology"
] |
16,019,737 | https://en.wikipedia.org/wiki/Tiling%20array | Tiling arrays are a subtype of microarray chips. Like traditional microarrays, they function by hybridizing labeled DNA or RNA target molecules to probes fixed onto a solid surface.
Tiling arrays differ from traditional microarrays in the nature of the probes. Instead of probing for sequences of known or predicted genes that may be dispersed throughout the genome, tiling arrays probe intensively for sequences that are known to exist in a contiguous region. This is useful for characterizing regions that are sequenced, but whose local functions are largely unknown. Tiling arrays aid in transcriptome mapping as well as in discovering sites of DNA/protein interaction (ChIP-chip, DamID), of DNA methylation (MeDIP-chip) and of sensitivity to DNase (DNase Chip) and array CGH. In addition to detecting previously unidentified genes and regulatory sequences, improved quantification of transcription products is possible. Specific probes are present in millions of copies (as opposed to only several in traditional arrays) within an array unit called a feature, with anywhere from 10,000 to more than 6,000,000 different features per array. Variable mapping resolutions are obtainable by adjusting the amount of sequence overlap between probes, or the number of base pairs between probe sequences, as well as probe length. For smaller genomes such as Arabidopsis, whole genomes can be examined. Tiling arrays are a useful tool in genome-wide association studies.
Synthesis and manufacturers
The two main ways of synthesizing tiling arrays are photolithographic manufacturing and mechanical spotting or printing.
The first method involves in situ synthesis, in which probes of approximately 25 bp are built directly on the surface of the chip. These arrays can hold up to 6 million discrete features, each of which contains millions of copies of one probe.
The other way of synthesizing tiling array chips is via mechanically printing probes onto the chip. This is done by using automated machines with pins that place the previously synthesized probes onto the surface. Due to the size restriction of the pins, these chips can hold up to nearly 400,000 features.
Three manufacturers of tiling arrays are Affymetrix, NimbleGen and Agilent. Their products vary in probe length and spacing. ArrayExplorer.com is a free web-server to compare tiling arrays.
Applications and types
ChIP-chip
ChIP-chip is one of the most popular uses of tiling arrays. Chromatin immunoprecipitation allows binding sites of proteins to be identified. A genome-wide variation of this is known as ChIP-on-chip. Proteins that bind to chromatin are cross-linked in vivo, usually via fixation with formaldehyde. The chromatin is then fragmented and exposed to antibodies specific to the protein of interest. These complexes are then precipitated. The DNA is then isolated and purified. With traditional DNA microarrays, the immunoprecipitated DNA is hybridized to the chip, which contains probes that are designed to cover representative genome regions. Overlapping probes or probes in very close proximity can be used. This gives an unbiased analysis with high resolution. Besides these advantages, tiling arrays show high reproducibility, and with overlapping probes spanning large segments of the genome, they can interrogate protein binding sites that harbor repeats. ChIP-chip experiments have been able to identify binding sites of transcription factors across the genome in yeast, Drosophila, and a few mammalian species.
Transcriptome mapping
Another popular use of tiling arrays is in finding expressed genes. Traditional methods of gene prediction used to annotate genomic sequences have had problems when applied to mapping the transcriptome, such as failing to produce an accurate structure of the genes or missing transcripts entirely. The method of sequencing cDNA to find transcribed genes also runs into problems, such as failing to detect rare or very short RNA molecules, and so does not detect genes that are active only in response to signals or specific to a time frame. Tiling arrays can solve these issues. Due to the high resolution and sensitivity, even small and rare molecules can be detected. The overlapping nature of the probes also allows detection of non-polyadenylated RNA and can produce a more precise picture of gene structure. Earlier studies on chromosomes 21 and 22 showed the power of tiling arrays for identifying transcription units. The authors used 25-mer probes spaced 35 bp apart, spanning the entire chromosomes. Labeled targets were made from polyadenylated RNA. They found many more transcripts than predicted, and 90% were outside of annotated exons. Another study with Arabidopsis used high-density oligonucleotide arrays that cover the entire genome. More than 10 times more transcripts were found than predicted by ESTs and other prediction tools. Also found were novel transcripts in the centromeric regions, where it was thought that no genes are actively expressed. Many noncoding and natural antisense RNAs have been identified using tiling arrays.
MeDIP-chip
Methyl-DNA immunoprecipitation followed by tiling array allows DNA methylation mapping and measurement across the genome. DNA is methylated on cytosine in CG di-nucleotides in many places in the genome. This modification is one of the best-understood inherited epigenetic changes and is shown to affect gene expression. Mapping these sites can add to the knowledge of expressed genes and also of epigenetic regulation on a genome-wide level. Tiling array studies have produced high-resolution methylation maps for the Arabidopsis genome, yielding the first "methylome".
DNase-chip
DNase chip is an application of tiling arrays to identify hypersensitive sites, segments of open chromatin that are more readily cleaved by DNaseI. DNaseI cleaving produces larger fragments of around 1.2 kb in size. These hypersensitive sites have been shown to accurately predict regulatory elements such as promoter regions, enhancers and silencers. Historically, the method used Southern blotting to find digested fragments. Tiling arrays have allowed researchers to apply the technique on a genome-wide scale.
Comparative genomic hybridization (CGH)
Array-based CGH is a technique often used in diagnostics to compare differences between types of DNA, such as normal cells vs. cancer cells. Two types of tiling arrays are commonly used for array CGH, whole genome and fine tiled. The whole genome approach would be useful in identifying copy number variations with high resolution. On the other hand, fine-tiled array CGH would produce ultrahigh resolution to find other abnormalities such as breakpoints.
Procedure
Several different protocols exist for preparing samples for a tiling array. One protocol for analyzing gene expression involves first isolating total RNA, which is then depleted of rRNA molecules. The RNA is copied into double-stranded DNA, which is subsequently amplified and in vitro transcribed to cRNA. The product is split into triplicates to produce dsDNA, which is then fragmented and labeled. Finally, the samples are hybridized to the tiling array chip. The signals from the chip are scanned and interpreted by computers.
Various software and algorithms are available for data analysis and vary in benefits depending on the manufacturer of the chip. For Affymetrix chips, the model-based analysis of tiling array (MAT) or hypergeometric analysis of tiling-arrays (HAT) are effective peak-seeking algorithms. For NimbleGen chips, TAMAL is more suitable for locating binding sites. Alternative algorithms include MA2C and TileScope, which are less complicated to operate. The Joint binding deconvolution algorithm is commonly used for Agilent chips. If sequence analysis of binding site or annotation of the genome is required then programs like MEME, Gibbs Motif Sampler, Cis-regulatory element annotation system and Galaxy are used.
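As a purely illustrative sketch of what peak-seeking over tiling-array signal involves, consider the following toy sliding-window scan. It is not an implementation of MAT, HAT, TAMAL, or any other tool named above; the function name and cutoff are invented for the example.

```python
import numpy as np

def enriched_windows(signal, positions, window=1000, z_cutoff=2.0):
    """Flag genomic windows whose mean probe signal stands out from the array-wide background."""
    signal = np.asarray(signal, dtype=float)
    positions = np.asarray(positions)
    mu, sigma = signal.mean(), signal.std()
    hits = []
    for start in range(int(positions.min()), int(positions.max()), window):
        mask = (positions >= start) & (positions < start + window)
        if mask.any() and (signal[mask].mean() - mu) / sigma > z_cutoff:
            hits.append((start, signal[mask].mean()))
    return hits

# Simulated array: 5,000 probes with one enriched region around 40-42 kb
rng = np.random.default_rng(0)
pos = np.sort(rng.integers(0, 100_000, size=5_000))
sig = rng.normal(0.0, 1.0, size=5_000)
sig[(pos > 40_000) & (pos < 42_000)] += 3.0
print(enriched_windows(sig, pos)[:3])
```

Real tools add probe-level normalization, models of probe affinity, and multiple-testing control, all of which this sketch omits.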
Advantages and disadvantages
Tiling arrays provide an unbiased tool to investigate protein binding, gene expression and gene structure on a genome-wide scope. They allow a new level of insight in studying the transcriptome and methylome.
Drawbacks include the cost of tiling array kits. Although prices have fallen in the last several years, the cost still makes it impractical to use genome-wide tiling arrays for mammalian and other large genomes. Another issue is the "transcriptional noise" produced by its ultra-sensitive detection capability. Furthermore, the approach provides no clearly defined start or stop to regions of interest identified by the array. Finally, arrays usually give only chromosome and position numbers, often necessitating sequencing as a separate step (although some modern arrays do provide sequence information).
References
Microarrays
Computational biology | Tiling array | [
"Chemistry",
"Materials_science",
"Biology"
] | 1,799 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Bioinformatics",
"Molecular biology techniques",
"Computational biology"
] |
5,509,703 | https://en.wikipedia.org/wiki/Species%20distribution | Species distribution, or species dispersion, is the manner in which a biological taxon is spatially arranged. The geographic limits of a particular taxon's distribution is its range, often represented as shaded areas on a map. Patterns of distribution change depending on the scale at which they are viewed, from the arrangement of individuals within a small family unit, to patterns within a population, or the distribution of the entire species as a whole (range). Species distribution is not to be confused with dispersal, which is the movement of individuals away from their region of origin or from a population center of high density.
Range
In biology, the range of a species is the geographical area within which that species can be found. Within that range, distribution is the general structure of the species population, while dispersion is the variation in its population density.
Range is often described with the following qualities:
Sometimes a distinction is made between a species' natural, endemic, indigenous, or native range, where it has historically originated and lived, and the range where a species has more recently established itself. Many terms are used to describe the new range, such as non-native, naturalized, introduced, transplanted, invasive, or colonized range. Introduced typically means that a species has been transported by humans (intentionally or accidentally) across a major geographical barrier.
For species found in different regions at different times of year, especially seasons, terms such as summer range and winter range are often employed.
For species for which only part of their range is used for breeding activity, the terms breeding range and non-breeding range are used.
For mobile animals, the term natural range is often used, as opposed to areas where it occurs as a vagrant.
Geographic or temporal qualifiers are often added, such as in British range or pre-1950 range. The typical geographic ranges could be the latitudinal range and elevational range.
Disjunct distribution occurs when two or more areas of the range of a taxon are considerably separated from each other geographically.
Factors affecting species distribution
Distribution patterns may change with the season, with dispersal by humans, in response to the availability of resources, and with other abiotic and biotic factors.
Abiotic
There are three main types of abiotic factors:
climatic factors consist of sunlight, atmosphere, humidity, temperature, and salinity;
edaphic factors are abiotic factors regarding soil, such as the coarseness of soil, local geology, soil pH, and aeration; and
social factors include land use and water availability.
An example of the effects of abiotic factors on species distribution can be seen in drier areas, where most individuals of a species will gather around water sources, forming a clumped distribution.
Researchers from the Arctic Ocean Diversity (ARCOD) project have documented rising numbers of warm-water crustaceans in the seas around Norway's Svalbard Islands. ARCOD is part of the Census of Marine Life, a huge 10-year project involving researchers in more than 80 nations that aims to chart the diversity, distribution, and abundance of life in the oceans. Marine life has been strongly affected by global climate change: as ocean temperatures rise, species are beginning to move into the cold and harsh Arctic waters. Even the snow crab has extended its range 500 km northward.
Biotic
Biotic factors such as predation, disease, and inter- and intra-specific competition for resources such as food, water, and mates can also affect how a species is distributed. For example, biotic factors in a quail's environment would include their prey (insects and seeds), competition from other quail, and their predators, such as the coyote. An advantage of a herd, community, or other clumped distribution allows a population to detect predators earlier, at a greater distance, and potentially mount an effective defense. Due to limited resources, populations may be evenly distributed to minimize competition, as is found in forests, where competition for sunlight produces an even distribution of trees.
One key factor in determining species distribution is the phenology of the organism. Plants are well documented as examples showing how phenology is an adaptive trait that can influence fitness in changing climates. Physiology can influence species distributions in an environmentally sensitive manner because physiology underlies movement such as exploration and dispersal. Individuals that are more dispersal-prone have higher metabolism, locomotor performance, corticosterone levels, and immunity.
Humans are one of the largest distributors due to the current trends in globalization and the expanse of the transportation industry. For example, large tankers often fill their ballasts with water at one port and empty them in another, causing a wider distribution of aquatic species.
Patterns on large scales
On large scales, the pattern of distribution among individuals in a population is clumped.
Bird wildlife corridors
One common example of a bird species' range is a land area bordering a water body such as an ocean, river, or lake; this is called a coastal strip. As a second example, some bird species depend on water (usually a river, swamp, or water-related forest) and live in a river corridor. A separate example of a river corridor would be one that includes the entire drainage, with the edge of the range delimited by mountains or higher elevations; the river itself would be a smaller percentage of this entire wildlife corridor, but the corridor is created because of the river.
A further example of a bird wildlife corridor is a mountain range corridor. In the United States, the Sierra Nevada range in the west and the Appalachian Mountains in the east are two examples of this habitat, used in summer and winter by separate species, for different reasons.
Bird species in these corridors are either connected to a main range for the species (a contiguous range) or occupy an isolated, disjunct geographic range. Birds that migrate out of the area either remain connected to the main range or must fly over land outside the wildlife corridor; in the latter case they are passage migrants over land on which they stop only for intermittent, hit-or-miss visits.
Patterns on small scales
While on large scales the pattern of distribution among individuals in a population is clumped, on small scales the pattern may be clumped, regular, or random.
Clumped
Clumped distribution, also called aggregated distribution, clumped dispersion or patchiness, is the most common type of dispersion found in nature. In clumped distribution, the distance between neighboring individuals is minimized. This type of distribution is found in environments that are characterized by patchy resources. Animals need certain resources to survive, and when these resources become rare during certain parts of the year animals tend to "clump" together around these crucial resources. Individuals might be clustered together in an area due to social factors such as selfish herds and family groups. Organisms that usually serve as prey form clumped distributions in areas where they can hide and detect predators easily.
Another cause of clumped distribution is the inability of offspring to independently move from their habitat. This is seen in juvenile animals that are immobile and strongly dependent upon parental care. For example, the bald eagle's nest of eaglets exhibits a clumped species distribution because all the offspring are in a small subset of a survey area before they learn to fly. Clumped distribution can be beneficial to the individuals in that group. However, in some herbivore cases, such as cows and wildebeests, the vegetation around them can suffer, especially if animals target one plant in particular.
Clumped distribution in species acts as a mechanism against predation as well as an efficient mechanism to trap or corner prey. African wild dogs, Lycaon pictus, use the technique of communal hunting to increase their success rate at catching prey. Studies have shown that larger packs of African wild dogs tend to have a greater number of successful kills. A prime example of clumped distribution due to patchy resources is the wildlife in Africa during the dry season; lions, hyenas, giraffes, elephants, gazelles, and many more animals are clumped by small water sources that are present in the severe dry season. It has also been observed that extinct and threatened species are more likely to be clumped in their distribution on a phylogeny. The reasoning behind this is that they share traits that increase vulnerability to extinction because related taxa are often located within the same broad geographical or habitat types where human-induced threats are concentrated. Using recently developed complete phylogenies for mammalian carnivores and primates it has been shown that in the majority of instances threatened species are far from randomly distributed among taxa and phylogenetic clades and display clumped distribution.
A contiguous distribution is one in which individuals are closer together than they would be if they were randomly or evenly distributed, i.e., it is clumped distribution with a single clump.
Regular or uniform
Less common than clumped distribution, uniform distribution, also known as even distribution, is evenly spaced. Uniform distributions are found in populations in which the distance between neighboring individuals is maximized. The need to maximize the space between individuals generally arises from competition for a resource such as moisture or nutrients, or as a result of direct social interactions between individuals within the population, such as territoriality. For example, penguins often exhibit uniform spacing by aggressively defending their territory among their neighbors. The burrows of great gerbils for example are also regularly distributed, which can be seen on satellite images. Plants also exhibit uniform distributions, like the creosote bushes in the southwestern region of the United States. Salvia leucophylla is a species in California that naturally grows in uniform spacing. This flower releases chemicals called terpenes which inhibit the growth of other plants around it and results in uniform distribution. This is an example of allelopathy, which is the release of chemicals from plant parts by leaching, root exudation, volatilization, residue decomposition and other processes. Allelopathy can have beneficial, harmful, or neutral effects on surrounding organisms. Some allelochemicals even have selective effects on surrounding organisms; for example, the tree species Leucaena leucocephala exudes a chemical that inhibits the growth of other plants but not those of its own species, and thus can affect the distribution of specific rival species. Allelopathy usually results in uniform distributions, and its potential to suppress weeds is being researched. Farming and agricultural practices often create uniform distribution in areas where it would not previously exist, for example, orange trees growing in rows on a plantation.
Random
Random distribution, also known as unpredictable spacing, is the least common form of distribution in nature and occurs when the members of a given species are found in environments in which the position of each individual is independent of the other individuals: they neither attract nor repel one another. Random distribution is rare in nature as biotic factors, such as the interactions with neighboring individuals, and abiotic factors, such as climate or soil conditions, generally cause organisms to be either clustered or spread. Random distribution usually occurs in habitats where environmental conditions and resources are consistent. This pattern of dispersion is characterized by the lack of any strong social interactions between species. For example; When dandelion seeds are dispersed by wind, random distribution will often occur as the seedlings land in random places determined by uncontrollable factors. Oyster larvae can also travel hundreds of kilometers powered by sea currents, which can result in their random distribution. Random distributions exhibit chance clumps (see Poisson clumping).
Statistical determination of distribution patterns
There are various ways to determine the distribution pattern of species. The Clark–Evans nearest neighbor method can be used to determine if a distribution is clumped, uniform, or random.
To utilize the Clark–Evans nearest neighbor method, researchers examine a population of a single species. The distance of an individual to its nearest neighbor is recorded for each individual in the sample. For two individuals that are each other's nearest neighbor, the distance is recorded twice, once for each individual. To receive accurate results, it is suggested that the number of distance measurements is at least 50. The average distance between nearest neighbors is compared to the expected distance in the case of random distribution to give the ratio

$$R = \frac{\bar{r}_{\text{observed}}}{\bar{r}_{\text{expected}}}, \qquad \bar{r}_{\text{expected}} = \frac{1}{2\sqrt{\rho}},$$

where $\rho$ is the density of individuals per unit area.
If this ratio R is equal to 1, then the population is randomly dispersed. If R is significantly greater than 1, the population is evenly dispersed. Lastly, if R is significantly less than 1, the population is clumped. Statistical tests (such as t-test, chi squared, etc.) can then be used to determine whether R is significantly different from 1.
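A minimal computational sketch of the Clark–Evans procedure follows; the function name is illustrative, and it assumes point coordinates on a plot of known area, using SciPy's k-d tree for the nearest-neighbor search.

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans_R(points, area):
    """Clark-Evans ratio: observed / expected mean nearest-neighbor distance.

    R near 1 suggests random spacing, R >> 1 uniform, R << 1 clumped.
    """
    pts = np.asarray(points, dtype=float)
    # k=2 because each point's nearest neighbor at distance 0 is itself
    dists, _ = cKDTree(pts).query(pts, k=2)
    r_obs = dists[:, 1].mean()
    density = len(pts) / area
    r_exp = 1.0 / (2.0 * np.sqrt(density))  # expectation under complete spatial randomness
    return r_obs / r_exp

# 200 uniformly random points on a 10 x 10 plot should give R near 1
rng = np.random.default_rng(42)
print(clark_evans_R(rng.uniform(0, 10, size=(200, 2)), area=100.0))
```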
The variance/mean ratio method focuses mainly on determining whether a species fits a randomly spaced distribution, but it can also be used as evidence for either an even or a clumped distribution. To utilize the variance/mean ratio method, data are collected from several random samples of a given population. In this analysis, it is imperative that data from at least 50 sample plots are considered. The number of individuals present in each sample is compared to the expected counts in the case of random distribution; the expected distribution can be found using the Poisson distribution. If the variance/mean ratio is equal to 1, the population is randomly distributed. If it is significantly greater than 1, the population has a clumped distribution. Finally, if the ratio is significantly less than 1, the population is evenly distributed. Typical statistical tests used to find the significance of the variance/mean ratio include Student's t-test and chi squared.
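The corresponding calculation is short; the sketch below is illustrative, with quadrat counts supplied as a plain array.

```python
import numpy as np

def variance_mean_ratio(quadrat_counts):
    """Variance/mean ratio of quadrat counts: ~1 random (Poisson), >1 clumped, <1 even."""
    counts = np.asarray(quadrat_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

print(variance_mean_ratio([3, 0, 5, 1, 0, 7, 2, 0, 0, 6]))  # ~3.1, suggesting clumping
```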
However, many researchers believe that species distribution models based on statistical analysis, without including ecological models and theories, are too incomplete for prediction. Instead of conclusions based on presence-absence data, probabilities that convey the likelihood a species will occupy a given area are preferred, because these models include an estimate of confidence in the likelihood of the species being present or absent. They are also more valuable than data collected based on simple presence or absence because models based on probability allow the formation of spatial maps that indicate how likely a species is to be found in a particular area. Similar areas can then be compared to see how likely it is that a species will occur there also; this leads to a relationship between habitat suitability and species occurrence.
Species distribution models
Species distribution can be predicted based on the pattern of biodiversity at spatial scales. A general hierarchical model can integrate disturbance, dispersal and population dynamics. Based on factors of dispersal, disturbance, resources, limiting climate, and other species distributions, predictions of species distribution can create a bio-climate range, or bio-climate envelope. The envelope can range from a local to a global scale or from density independence to dependence. The hierarchical model takes into consideration the requirements, impacts or resources as well as local extinctions in disturbance factors. Models can integrate the dispersal/migration model, the disturbance model, and the abundance model. Species distribution models (SDMs) can be used to assess climate change impacts and conservation management issues. Species distribution models include presence/absence models, dispersal/migration models, disturbance models, and abundance models. A prevalent way of creating predicted distribution maps for different species is to reclassify a land cover layer depending on whether or not the species in question would be predicted to inhabit each cover type. This simple SDM is often modified through the use of range data or ancillary information, such as elevation or water distance.
Recent studies have indicated that the grid size used can have an effect on the output of these species distribution models. The standard 50x50 km grid size can select up to 2.89 times more area than when modeled with a 1x1 km grid for the same species. This has several effects on the species conservation planning under climate change predictions (global climate models, which are frequently used in the creation of species distribution models, usually consist of 50–100 km size grids) which could lead to over-prediction of future ranges in species distribution modeling. This can result in the misidentification of protected areas intended for a species future habitat.
Species Distribution Grids Project
The Species Distribution Grids Project is an effort led out of Columbia University to create maps and databases of the whereabouts of various animal species. This work is centered on preventing deforestation and prioritizing areas based on species richness. As of April 2009, data are available for global amphibian distributions, as well as birds and mammals in the Americas. The map gallery Gridded Species Distribution contains sample maps for the Species Grids data set. These maps are not inclusive but rather contain a representative sample of the types of data available for download.
See also
Geographic range limit
Animal migration
Biogeography
Colonisation
Cosmopolitan distribution
Occupancy frequency distribution
Notes
External links
Livestock Grazing Distribution Patterns: Does Animal Age Matter?
Discrete Uniform Random Distribution
Animal migration
Biogeography
Ecology terminology
Population ecology
Population genetics | Species distribution | [
"Biology"
] | 3,465 | [
"Ecology terminology",
"Behavior",
"Biogeography",
"Animal migration",
"Ethology"
] |
5,509,769 | https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93von%20Mises%20criterion | In statistics the Cramér–von Mises criterion is a criterion used for judging the goodness of fit of a cumulative distribution function compared to a given empirical distribution function , or for comparing two empirical distributions. It is also used as a part of other algorithms, such as minimum distance estimation. It is defined as
In one-sample applications $F^*$ is the theoretical distribution and $F_n$ is the empirically observed distribution. Alternatively the two distributions can both be empirically estimated ones; this is called the two-sample case.
The criterion is named after Harald Cramér and Richard Edler von Mises who first proposed it in 1928–1930. The generalization to two samples is due to Anderson.
The Cramér–von Mises test is an alternative to the Kolmogorov–Smirnov test (1933).
Cramér–von Mises test (one sample)
Let $x_1, x_2, \ldots, x_n$ be the observed values, in increasing order. Then the statistic is

$$T = n\omega^2 = \frac{1}{12n} + \sum_{i=1}^{n}\left[\frac{2i-1}{2n} - F(x_i)\right]^{2}.$$

If this value is larger than the tabulated value, then the hypothesis that the data came from the distribution $F$ can be rejected.
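A direct implementation of the one-sample statistic is short. The sketch below is illustrative, assuming a fully specified hypothesized CDF (here the standard normal via SciPy); recent SciPy versions also expose this test directly as scipy.stats.cramervonmises.

```python
import numpy as np
from scipy.stats import norm

def cramer_von_mises_T(sample, cdf):
    """One-sample Cramer-von Mises statistic T = n * omega^2."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - cdf(x)) ** 2)

# 100 standard-normal draws tested against the standard normal CDF
rng = np.random.default_rng(0)
print(cramer_von_mises_T(rng.normal(size=100), norm.cdf))  # compare to tabulated critical values
```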
Watson test
A modified version of the Cramér–von Mises test is the Watson test, which uses the statistic $U^2$, where

$$U^{2} = T - n\left(\bar{F} - \tfrac{1}{2}\right)^{2},$$

where

$$\bar{F} = \frac{1}{n}\sum_{i=1}^{n} F(x_i).$$
Cramér–von Mises test (two samples)
Let $x_1, \ldots, x_N$ and $y_1, \ldots, y_M$ be the observed values in the first and second sample respectively, in increasing order. Let $r_1, \ldots, r_N$ be the ranks of the $x$'s in the combined sample, and let $s_1, \ldots, s_M$ be the ranks of the $y$'s in the combined sample. Anderson shows that

$$T = \frac{U}{NM(N+M)} - \frac{4MN - 1}{6(M+N)}$$

where $U$ is defined as

$$U = N\sum_{i=1}^{N}(r_i - i)^{2} + M\sum_{j=1}^{M}(s_j - j)^{2}.$$
If the value of T is larger than the tabulated values, the hypothesis that the two samples come from the same distribution can be rejected. (Some books give critical values for U, which is more convenient, as it avoids the need to compute T via the expression above. The conclusion will be the same.)
The above assumes there are no duplicates in the $x$, $y$, and combined sequences, so each $x_i$ is unique and its rank is $i$ in the sorted list $x_1, \ldots, x_N$. If there are duplicates, and $x_i$ through $x_j$ are a run of identical values in the sorted list, then one common approach is the midrank method: assign each duplicate a "rank" of $(i+j)/2$. In the above equations, in the expressions $(r_i - i)^{2}$ and $(s_j - j)^{2}$, duplicates can modify all four variables $r_i$, $i$, $s_j$, and $j$.
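A sketch of the two-sample statistic, using midranks for ties as described above, follows; the function name is illustrative.

```python
import numpy as np
from scipy.stats import rankdata

def cvm_two_sample_T(x, y):
    """Two-sample Cramer-von Mises statistic T in Anderson's form."""
    x, y = np.sort(x), np.sort(y)
    ranks = rankdata(np.concatenate([x, y]))  # default 'average' method gives midranks
    N, M = len(x), len(y)
    r, s = ranks[:N], ranks[N:]
    i, j = np.arange(1, N + 1), np.arange(1, M + 1)
    U = N * np.sum((r - i) ** 2) + M * np.sum((s - j) ** 2)
    return U / (N * M * (N + M)) - (4.0 * M * N - 1.0) / (6.0 * (M + N))

rng = np.random.default_rng(1)
print(cvm_two_sample_T(rng.normal(size=40), rng.normal(size=60)))
```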
References
Further reading
Statistical distance
Nonparametric statistics
Normality tests | Cramér–von Mises criterion | [
"Physics"
] | 461 | [
"Physical quantities",
"Statistical distance",
"Distance"
] |
5,512,894 | https://en.wikipedia.org/wiki/Kronecker%20limit%20formula | In mathematics, the classical Kronecker limit formula describes the constant term at s = 1 of a real analytic Eisenstein series (or Epstein zeta function) in terms of the Dedekind eta function. There are many generalizations of it to more complicated Eisenstein series. It is named for Leopold Kronecker.
First Kronecker limit formula
The (first) Kronecker limit formula states that

$$E(\tau,s) = \frac{\pi}{s-1} + 2\pi\left(\gamma - \log 2 - \log\left(\sqrt{y}\,|\eta(\tau)|^{2}\right)\right) + O(s-1),$$

where
E(τ,s) is the real analytic Eisenstein series, given by

$$E(\tau,s) = \sum_{(m,n)\neq(0,0)} \frac{y^{s}}{|m\tau+n|^{2s}}$$
for Re(s) > 1, and by analytic continuation for other values of the complex number s.
γ is the Euler–Mascheroni constant
τ = x + iy with y > 0.
$\eta(\tau) = q^{1/24}\prod_{n\geq 1}\left(1 - q^{n}\right)$, with $q = e^{2\pi i \tau}$, is the Dedekind eta function.
So the Eisenstein series has a pole at s = 1 of residue π, and the (first) Kronecker limit formula gives the constant term of the Laurent series at this pole.
This formula has an interpretation in terms of the spectral geometry of the elliptic curve associated to the lattice $\mathbb{Z} + \tau\mathbb{Z}$: it says that the zeta-regularized determinant of the Laplace operator associated to the flat metric on $\mathbb{C}/(\mathbb{Z} + \tau\mathbb{Z})$ is given by $4y^{2}|\eta(\tau)|^{4}$. This formula has been used in string theory for the one-loop computation in Polyakov's perturbative approach.
Second Kronecker limit formula
The second Kronecker limit formula states that

$$E_{u,v}(\tau,1) = -2\pi \log\left| f(u - v\tau;\, \tau)\, q_{v^{2}/2} \right|$$

where

u and v are real and not both integers.

$q = e^{2\pi i \tau}$ and $q_a = e^{2\pi i a\tau}$

$p = e^{2\pi i z}$ and $p_a = e^{2\pi i az}$

$$E_{u,v}(\tau,s) = \sum_{(m,n)\neq(0,0)} e^{2\pi i(mu+nv)}\, \frac{y^{s}}{|m\tau+n|^{2s}}$$

for Re(s) > 1, and is defined by analytic continuation for other values of the complex number s.

$$f(z,\tau) = q_{1/12}\left(p_{1/2} - p_{-1/2}\right) \prod_{n\geq 1} \left(1 - q^{n}p\right)\left(1 - q^{n}/p\right)$$
See also
Herglotz–Zagier function
References
Serge Lang, Elliptic Functions.
C. L. Siegel, Lectures on advanced analytic number theory, Tata institute 1961.
External links
Chapter0.pdf
Theorems in analytic number theory
Modular forms | Kronecker limit formula | [
"Mathematics"
] | 402 | [
"Theorems in mathematical analysis",
"Theorems in analytic number theory",
"Theorems in number theory",
"Modular forms",
"Number theory"
] |
14,493,178 | https://en.wikipedia.org/wiki/Quasi-star | A quasi-star (also called black hole star) is a hypothetical type of extremely large and luminous star that may have existed early in the history of the Universe. They are thought to have existed for around 7–10 million years due to their immense mass. Unlike modern stars, which are powered by nuclear fusion in their cores, a quasi-star's energy would come from material falling into a black hole at its core. They were first proposed in the 1960s and have since provided valuable insights into the early universe, galaxy formation, and the behavior of black holes. Although they have not been observed, they are considered to be a possible progenitor of supermassive black holes.
Formation and properties
A quasi-star would have resulted from the core of a large protostar collapsing into a black hole, where the outer layers of the protostar are massive enough to absorb the resulting burst of energy without being blown away or falling into the black hole, as occurs with modern supernovae. Such a star would have to be at least about 1,000 solar masses. Quasi-stars may have also formed from dark matter halos drawing in enormous amounts of gas via gravity, which can produce supermassive stars with tens of thousands of solar masses. Formation of quasi-stars could only happen early in the development of the Universe before hydrogen and helium were contaminated by heavier elements; thus, they may have been very massive Population III stars. Such stars would dwarf VY Canis Majoris, Mu Cephei and VV Cephei A, three among the largest known modern stars.
Once the black hole had formed at the protostar's core, it would continue generating a large amount of radiant energy from the infall of stellar material. This constant outburst of energy would counteract the force of gravity, creating an equilibrium similar to the one that supports modern fusion-based stars. Quasi-stars would have had a short maximum lifespan, approximately 7 million years, during which the core black hole would have grown to roughly 1,000–10,000 solar masses. These intermediate-mass black holes have been suggested as the progenitors of modern supermassive black holes such as the one in the center of the Galaxy.
Quasi-stars are predicted to have had surface temperatures higher than about 10,000 K. At these temperatures, each one would be about as luminous as a small galaxy. As a quasi-star cools over time, its outer envelope would become transparent, until further cooling to a limiting temperature of about 4,000 K. This limiting temperature would mark the end of the quasi-star's life, since there is no hydrostatic equilibrium at or below this limiting temperature. The object would then quickly dissipate, leaving behind the intermediate-mass black hole.
See also
References
Further reading
External links
Black holes
Star types
Hypothetical stars | Quasi-star | [
"Physics",
"Astronomy"
] | 549 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Astronomical classification systems",
"Stellar phenomena",
"Astronomical objects",
"Star types"
] |
14,498,136 | https://en.wikipedia.org/wiki/Biogenesis%20of%20lysosome-related%20organelles%20complex%201 | BLOC-1 or biogenesis of lysosome-related organelles complex 1 is a ubiquitously expressed multisubunit protein complex in a group of complexes that also includes BLOC-2 and BLOC-3. BLOC-1 is required for normal biogenesis of specialized organelles of the endosomal-lysosomal system, such as melanosomes and platelet dense granules. These organelles are called LROs (lysosome-related organelles) which are apparent in specific cell-types, such as melanocytes. The importance of BLOC-1 in membrane trafficking appears to extend beyond such LROs, as it has demonstrated roles in normal protein-sorting, normal membrane biogenesis, as well as vesicular trafficking. Thus, BLOC-1 is multi-purposed, with adaptable function depending on both organism and cell-type.
Mutations in all BLOC complexes lead to diseased states characterized by Hermansky-Pudlak Syndrome (HPS), a pigmentation disorder subdivided into multiple types depending on the mutation, highlighting the role of BLOC-1 in proper LRO-function. BLOC-1 mutations also are thought to be linked to schizophrenia, and BLOC-1 dysfunction in the brain has important ramifications in neurotransmission. Much effort has been given to uncovering the molecular mechanisms of BLOC-1 function to understand its role in these diseases.
Ultracentrifugation coupled with electron microscopy demonstrated that BLOC-1 has 8 subunits (pallidin, cappuccino, dysbindin, Snapin, Muted, BLOS1, BLOS2, and BLOS3) that are linked linearly to form a complex of roughly 300 Angstrom in length and 30 Angstrom in diameter. Bacterial recombination also demonstrated heterotrimeric subcomplexes containing pallidin, cappuccino, and BLOS1 as well as dysbindin, Snapin, and BLOS2 as important intermediate structures. These subcomplexes may explain different functional outcomes observed by altering different BLOC-1 subunits. Furthermore, dynamic bending of the complex by as much as 45 degrees indicates flexibility is likely linked to proper BLOC-1 function.
Within the endomembrane system, BLOC-1 acts at the early endosome, as witnessed in electron microscopy experiments, where it helps coordinate protein sorting of LAMPs (lysosome-associated membrane proteins). Multiple studies recapitulate an association with the adaptor complex AP-3, a protein involved in vesicular trafficking of cargo from the early endosome to lysosomal compartments. BLOC-1 demonstrates physical association with AP-3 and BLOC-2 upon immunoprecipitation, although not to both complexes at the same time. Indeed, BLOC-1 functions in an AP-3 dependent route to sort CD63 (LAMP3) and Tyrp1. Furthermore, another study suggests an AP-3 dependent route of BLOC-1 also facilitates trafficking of LAMP1 and Vamp7-T1, a SNARE protein. An AP-3-independent, BLOC-2-dependent route of BLOC-1 sorting of Tyrp1 is also observed. Therefore, BLOC-1 appears to have multifaceted trafficking behavior. Indeed, AP-3 knockout mice maintain the ability to deliver Tyrp1 to melanosomes, supporting the existence of multiple BLOC-1 trafficking pathways. Evidence, however, suggests BLOC-2 may directly or indirectly intersect BLOC-1 trafficking downstream of early endosomes; BLOC-1 deficiency promotes missorted Tyrp1 at the plasma membrane, while BLOC-2 deficiency promotes Tyrp1 concentration at intermediate endosomal compartments. These studies demonstrate that BLOC-1 facilitates protein transport to lysosomal compartments, such as melanosomes, via multiple routes, although the exact functional association with BLOC-2 is unclear.
The majority of studies have focused on mammalian BLOC-1, presumably because of its association with multiple disease states in humans. Still, it is clear BLOC-1 has an evolutionarily conserved importance in trafficking because its yeast homolog, which contains Vab2, has been proposed to modulate Rab5 (Vps21), which is essential for its membrane localization, by acting as a receptor on early endosomes for the Rab5 GAP Msb3. Although this study places BLOC-1 function on early endosomes, it has recently been argued that yeast do not contain an early endosome. In light of these newer findings, it appears that BLOC-1 may actually act at the TGN in yeast. Nevertheless, BLOC-1 is important for proper endomembrane function in both lower and higher order eukaryotes.
In mammalian cells, most studies have focused on the ability of BLOC-1 to sort proteins. However, recent findings indicate that BLOC-1 has more complex functions in membrane biogenesis by associating with the cytoskeleton. Recycling endosome biogenesis is mediated by BLOC-1 as a hub for cytoskeletal activity. The kinesin KIF13A and actin machinery (AnxA2 and Arp2/3) appear to interact with BLOC-1 to generate recycling endosomes/recycling endosome tubules where microtubule action may lengthen tubules and microfilament action may stabilize or excise tubules. The BLOC-1 subunit pallidin associates with synaptic cytoskeletal components in Drosophila melanogaster neurons. Thus, BLOC-1 appears to engage in both protein sorting as well as membrane biogenesis via diverse mechanisms. Further study will be required to synthesize any of these molecular interactions into possible unified mechanisms.
Studies of BLOC-1 in the nervous system have begun to link numerous molecular and cellular mechanisms to its proposed contribution to schizophrenia. Knock-down studies of the dysbindin gene DTNBP1 via siRNA demonstrated that the dysbindin subunit is integral for the signaling and recycling of the D2 receptor (DRD2) but not the D1 receptor. BLOC-1 mutations in dysbindin therefore can alter dopaminergic signaling in the brain which may confer symptoms of schizophrenia. These results appear to be relevant to the whole complex as the majority of expressed dysbindin localized to the BLOC-1 complex in the mouse brain. Furthermore, proper neurite extension appears to be regulated by BLOC-1, which may have molecular links to the ability of BLOC-1 to physically associate in vitro with SNARE proteins such as SNAP-25, SNAP-17, and syntaxin 13. This interaction with SNAREs could aid in membrane trafficking toward neurite extensions. Studies in Drosophila melanogaster indicate pallidin is non-essential for synaptic vesicle homeostasis or anatomy but is essential under conditions of increased neuronal signaling to maintain vesicular trafficking from endosomes via recycling mechanisms. The effects of a non-functional Bloc1s6 gene (encoding for pallidin) on the metabolome of the post-natal mouse hippocampus were explored using LC-MS, revealing altered levels of a variety of metabolites. Particularly intriguing effects include an increase in glutamate (and its precursor glutamine), an excitatory neurotransmitter linked to schizophrenia, as well as decreases in the neurotransmitters phenylalanine and tryptophan. Overall, modifications in the metabolome of these mice extend to nucleobase molecules and lysophospholipids as well, implicating further dysregulation effects of BLOC-1 deficiencies to plausible molecular contributions of schizophrenia.
Complex components
The identified protein subunits of BLOC-1 include:
pallidin
muted (protein)
dysbindin
cappuccino (protein)
Snapin
BLOS1
BLOS2
BLOS3
References
Cell biology | Biogenesis of lysosome-related organelles complex 1 | [
"Biology"
] | 1,652 | [
"Cell biology"
] |
14,498,600 | https://en.wikipedia.org/wiki/Chromo%E2%80%93Weibel%20instability | The Chromo–Weibel instability is a plasma instability present in homogeneous or nearly homogeneous non-abelian plasmas which possess an anisotropy in momentum space. In the linear limit it is similar to the Weibel instability in electromagnetic plasmas but due to non-linear interactions present in non-abelian plasmas the late development of this instability is characterized by a turbulent cascade of modes. This instability is relevant in the understanding of the early-time dynamics of the quark-gluon plasma as produced in heavy-ion collisions.
See also
Weibel instability
References
Quantum chromodynamics
Plasma instabilities | Chromo–Weibel instability | [
"Physics"
] | 129 | [
"Physical phenomena",
"Plasma physics",
"Plasma phenomena",
"Plasma instabilities",
"Plasma physics stubs"
] |
14,500,388 | https://en.wikipedia.org/wiki/%CE%91-L-fucosidase | The enzyme α-L-fucosidase (EC 3.2.1.51) catalyzes the following chemical reaction: an α-L-fucoside + H2O ⇌ L-fucose + an alcohol
This enzyme belongs to the family of hydrolases, specifically those glycosidases that hydrolyse O- and S-glycosyl compounds. The systematic name of this enzyme class is α-L-fucoside fucohydrolase. This enzyme is also called α-fucosidase. It participates in N-glycan degradation and glycan structure degradation.
Deficiency of this enzyme is called fucosidosis.
In CAZy, α-L-fucosidases are found in glycoside hydrolase family 29 and glycoside hydrolase family 95.
Structural studies
As of late 2007, three structures had been solved for this class of enzymes and deposited in the Protein Data Bank.
Human medical studies
A study by Endreffy, Bjørklund, and collaborators (2017) found an association between the activity of α-L-fucosidase-1 (FUCA-1) and chronic autoimmune disorders in children. This should encourage further research on FUCA-1 as a marker of chronic inflammation and autoimmunity.
See also
1,2-α-L-fucosidase
1,3-α-L-fucosidase
1,6-α-L-fucosidase
FUCA1
FUCA2
References
Further reading
External links
CAZy family GH29
CAZy family GH95
Protein families
EC 3.2.1
Enzymes of known structure | Α-L-fucosidase | [
"Biology"
] | 364 | [
"Protein families",
"Protein classification"
] |
14,500,732 | https://en.wikipedia.org/wiki/Salt-effect%20distillation | Salt-effect distillation is a method of extractive distillation in which a salt is dissolved in the mixture of liquids to be distilled. The salt acts as a separating agent by raising the relative volatility of the mixture and by breaking any azeotropes that may otherwise form. The technique is first attested in writings on alcohol attributed to Jabir ibn Hayyan (9th c. CE).
Setup
The salt is fed into the distillation column at a steady rate by adding it to the reflux stream at the top of the column. It dissolves in the liquid phase, and since it is non-volatile, flows out with the heavier bottoms stream. The bottoms are partially or completely evaporated to recover the salt for reuse.
Usage
Extractive distillation is more costly than ordinary fractional distillation due to costs associated with the recovery of the separating agent. One advantage of salt-effect distillation over other types of azeotropic distillation is the potential for reduced costs associated with energy usage. In addition, the salt ions have a greater effect on the volatility of the mixture to be distilled than other liquid-separating agents.
Commercial usage of salt-effect distillation includes adding magnesium nitrate to an aqueous solution of nitric acid to concentrate it further. Calcium chloride is added to acetone-methanol and water-isopropanol mixtures in order to facilitate separation.
References
See also
Distillation
Extractive distillation
Azeotrope
Salting out
Distillation | Salt-effect distillation | [
"Chemistry"
] | 322 | [
"Distillation",
"Separation processes"
] |
14,501,355 | https://en.wikipedia.org/wiki/Surface%20diffusion | Surface diffusion is a general process involving the motion of adatoms, molecules, and atomic clusters (adparticles) at solid material surfaces. The process can generally be thought of in terms of particles jumping between adjacent adsorption sites on a surface, as in figure 1. Just as in bulk diffusion, this motion is typically a thermally promoted process with rates increasing with increasing temperature. Many systems display diffusion behavior that deviates from the conventional model of nearest-neighbor jumps. Tunneling diffusion is a particularly interesting example of an unconventional mechanism wherein hydrogen has been shown to diffuse on clean metal surfaces via the quantum tunneling effect.
Various analytical tools may be used to elucidate surface diffusion mechanisms and rates, the most important of which are field ion microscopy and scanning tunneling microscopy. While in principle the process can occur on a variety of materials, most experiments are performed on crystalline metal surfaces. Due to experimental constraints most studies of surface diffusion are limited to well below the melting point of the substrate, and much has yet to be discovered regarding how these processes take place at higher temperatures.
Surface diffusion rates and mechanisms are affected by a variety of factors including the strength of the surface-adparticle bond, orientation of the surface lattice, attraction and repulsion between surface species and chemical potential gradients. It is an important concept in surface phase formation, epitaxial growth, heterogeneous catalysis, and other topics in surface science. As such, the principles of surface diffusion are critical for the chemical production and semiconductor industries. Real-world applications relying heavily on these phenomena include catalytic converters, integrated circuits used in electronic devices, and silver halide salts used in photographic film.
Kinetics
Surface diffusion kinetics can be thought of in terms of adatoms residing at adsorption sites on a 2D lattice, moving between adjacent (nearest-neighbor) adsorption sites by a jumping process. The jump rate is characterized by an attempt frequency and a thermodynamic factor that dictates the probability of an attempt resulting in a successful jump. The attempt frequency ν is typically taken to be simply the vibrational frequency of the adatom, while the thermodynamic factor is a Boltzmann factor dependent on temperature and Ediff, the potential energy barrier to diffusion. Equation 1 describes the relationship:

$$\Gamma = \nu \exp\left(-\frac{E_{\text{diff}}}{k_B T}\right) \qquad (1)$$
where ν and Ediff are as described above, Γ is the jump or hopping rate, T is temperature, and kB is the Boltzmann constant. Ediff must be smaller than the energy of desorption for diffusion to occur; otherwise desorption processes would dominate. Importantly, equation 1 tells us how strongly the jump rate varies with temperature. The manner in which diffusion takes place is dependent on the relationship between Ediff and kBT as given in the thermodynamic factor: when Ediff < kBT the thermodynamic factor approaches unity and Ediff ceases to be a meaningful barrier to diffusion. This case, known as mobile diffusion, is relatively uncommon and has only been observed in a few systems. For the phenomena described throughout this article, it is assumed that Ediff >> kBT and therefore Γ << ν. In the case of Fickian diffusion it is possible to extract both ν and Ediff from an Arrhenius plot of the logarithm of the diffusion coefficient, D, versus 1/T. For cases where more than one diffusion mechanism is present (see below), there may be more than one Ediff, such that the relative distribution between the different processes would change with temperature.
Random walk statistics describe the mean squared displacement of diffusing species in terms of the number of jumps N and the distance per jump a. The number of successful jumps is simply Γ multiplied by the time allowed for diffusion, t. In the most basic model only nearest-neighbor jumps are considered, and a corresponds to the spacing between nearest-neighbor adsorption sites. The root mean squared displacement goes as

$$\sqrt{\langle \Delta x^{2}\rangle} = a\sqrt{N} = a\sqrt{\Gamma t}.$$
The diffusion coefficient is given as

$$D = \frac{\Gamma a^{2}}{z}$$

where $z = 2$ for 1D diffusion, as would be the case for in-channel diffusion, $z = 4$ for 2D diffusion, and $z = 6$ for 3D diffusion.
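A small numerical sketch ties these kinetic expressions together; the parameter values below are illustrative order-of-magnitude choices (ν ≈ 10^13 Hz, Ediff ≈ 0.5 eV, a ≈ 2.5 Å), not measurements for any particular surface.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def hop_rate(nu, e_diff, T):
    """Jump rate from equation 1: Gamma = nu * exp(-E_diff / (k_B * T))."""
    return nu * np.exp(-e_diff / (K_B * T))

def diffusion_coefficient(nu, e_diff, T, a, z=4):
    """D = Gamma * a^2 / z, with z = 2, 4, 6 for 1D, 2D, 3D diffusion."""
    return hop_rate(nu, e_diff, T) * a**2 / z

for T in (300, 500, 800):  # K
    print(T, diffusion_coefficient(1e13, 0.5, T, a=2.5e-10))  # D in m^2/s
```

Note the strong temperature dependence: raising T from 300 K to 800 K increases D by roughly five orders of magnitude for these parameters.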
Regimes
There are four different general schemes in which diffusion may take place. Tracer diffusion and chemical diffusion differ in the level of adsorbate coverage at the surface, while intrinsic diffusion and mass transfer diffusion differ in the nature of the diffusion environment. Tracer diffusion and intrinsic diffusion both refer to systems where adparticles experience a relatively homogeneous environment, whereas in chemical and mass transfer diffusion adparticles are more strongly affected by their surroundings.
Tracer diffusion describes the motion of individual adparticles on a surface at relatively low coverage levels. At these low levels (< 0.01 monolayer), particle interaction is low and each particle can be considered to move independently of the others. The single atom diffusing in figure 1 is a nice example of tracer diffusion.
Chemical diffusion describes the process at higher level of coverage where the effects of attraction or repulsion between adatoms becomes important. These interactions serve to alter the mobility of adatoms. In a crude way, figure 3 serves to show how adatoms may interact at higher coverage levels. The adatoms have no "choice" but to move to the right at first, and adjacent adatoms may block adsorption sites from one another.
Intrinsic diffusion occurs on a uniform surface (e.g. lacking steps or vacancies) such as a single terrace, where no adatom traps or sources are present. This regime is often studied using field ion microscopy, wherein the terrace is a sharp sample tip on which an adparticle diffuses. Even in the case of a clean terrace the process may be influenced by non-uniformity near the edges of the terrace.
Mass transfer diffusion takes place in the case where adparticle sources and traps such as kinks, steps, and vacancies are present. Instead of being dependent only on the jump potential barrier Ediff, diffusion in this regime is now also dependent on the formation energy of mobile adparticles. The exact nature of the diffusion environment therefore plays a role in dictating the diffusion rate, since the formation energy of an adparticle is different for each type of surface feature as is described in the Terrace Ledge Kink model.
Anisotropy
Orientational anisotropy takes the form of a difference in both diffusion rates and mechanisms at the various surface orientations of a given material. For a given crystalline material each Miller Index plane may display unique diffusion phenomena. Close packed surfaces such as the fcc (111) tend to have higher diffusion rates than the correspondingly more "open" faces of the same material such as fcc (100).
Directional anisotropy refers to a difference in diffusion mechanism or rate in a particular direction on a given crystallographic plane. These differences may be a result of either anisotropy in the surface lattice (e.g. a rectangular lattice) or the presence of steps on a surface. One of the more dramatic examples of directional anisotropy is the diffusion of adatoms on channeled surfaces such as fcc (110), where diffusion along the channel is much faster than diffusion across the channel.
Mechanisms
Adatom diffusion
Diffusion of adatoms may occur by a variety of mechanisms. The manner in which they diffuse is important as it may dictate the kinetics of movement, temperature dependence, and overall mobility of surface species, among other parameters. The following is a summary of the most important of these processes:
Hopping or jumping is conceptually the most basic mechanism for diffusion of adatoms. In this model, the adatoms reside on adsorption sites on the surface lattice. Motion occurs through successive jumps to adjacent sites, the number of which depends on the nature of the surface lattice. Figures 1 and 3 both display adatoms undergoing diffusion via the hopping process. Studies have shown the presence of metastable transition states between adsorption sites wherein it may be possible for adatoms to temporarily reside.
Atomic exchange involves exchange between an adatom and an adjacent atom within the surface lattice. As shown in figure 4, after an atomic exchange event the adatom has taken the place of a surface atom and the surface atom has been displaced and has now become an adatom. This process may take place in both heterodiffusion (e.g. Pt adatoms on Ni) and self-diffusion (e.g. Pt adatoms on Pt). It is still unclear from a theoretical point of view why the atomic exchange mechanism is more predominant in some systems than in others. Current theory points towards multiple possibilities, including tensile surface stresses, surface relaxation about the adatom, and increased stability of the intermediate due to the fact that both atoms involved maintain high levels of coordination throughout the process.
Tunneling diffusion is a physical manifestation of the quantum tunneling effect involving particles tunneling across diffusion barriers. It can occur in the case of low diffusing particle mass and low Ediff, and has been observed in the case of hydrogen diffusion on tungsten and copper surfaces. The phenomenon is unique in that in the regime where the tunneling mechanism dominates, the diffusion rate is nearly temperature-independent.
Vacancy diffusion can occur as the predominant method of surface diffusion at high coverage levels approaching complete coverage. This process is akin to the manner in which pieces slide around in a "sliding puzzle". It is very difficult to directly observe vacancy diffusion due to the typically high diffusion rates and low vacancy concentration. Figure 5 shows the basic theme of this mechanism in an albeit oversimplified manner.
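The hopping model above lends itself to a simple Monte Carlo sketch. The Python snippet below (illustrative only; the lattice spacing, attempt frequency, and barrier are hypothetical values rather than data for any real system) simulates uncorrelated nearest-neighbor hops on a square lattice and compares a single-walker estimate of the diffusion coefficient, via the two-dimensional Einstein relation ⟨r²⟩ = 4Dt, against the analytic value D = Γa²/4.

```python
import math
import random

# Hypothetical parameters, for illustration only
a = 2.5e-10     # lattice spacing (m)
nu0 = 1.0e13    # attempt frequency (1/s)
E_diff = 0.5    # diffusion barrier (eV)
kB = 8.617e-5   # Boltzmann constant (eV/K)
T = 300.0       # temperature (K)

rate = nu0 * math.exp(-E_diff / (kB * T))  # total hop rate, Arrhenius form

x = y = 0  # adatom position in lattice units
n_hops = 100_000
for _ in range(n_hops):
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    x, y = x + dx, y + dy

t = n_hops / rate                 # elapsed time implied by the hop rate
r2 = (x * x + y * y) * a * a      # squared net displacement (m^2)
print(f"hop rate          : {rate:.3e} 1/s")
print(f"D from one walker : {r2 / (4 * t):.3e} m^2/s (noisy estimate)")
print(f"D analytic        : {rate * a * a / 4:.3e} m^2/s")
```

Because a single trajectory is used, the estimate fluctuates strongly from run to run; averaging the squared displacement over many independent walkers would make it converge to the analytic value.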
Recent theoretical work as well as experimental work performed since the late 1970s has brought to light a remarkable variety of surface diffusion phenomena both with regard to kinetics as well as to mechanisms. Following is a summary of some of the more notable phenomena:
Long jumps consist of adatom displacement to a non-nearest-neighbor adsorption site. They may include double, triple, and longer jumps in the same direction as a nearest-neighbor jump would travel, or they may be in entirely different directions as shown in figure 6. They have been predicted by theory to exist in many different systems, and have been shown by experiment to take place at temperatures as low as 0.1 Tm (melting temperature). In some cases data indicate long jumps dominating the diffusion process over single jumps at elevated temperatures; the phenomenon of variable jump lengths is expressed in different characteristic distributions of atomic displacement over time (see figure 7).
Rebound jumps have been shown by both experiment and simulations to take place in certain systems. Since the motion does not result in a net displacement of the adatom involved, experimental evidence for rebound jumps again comes from statistical interpretation of atomic distributions. A rebound jump is shown in figure 6. The figure is slightly misleading, however, as rebound jumps have only been shown experimentally to take place in the case of 1D diffusion on a channeled surface (in particular, the bcc (211) face of tungsten).
Cross-channel diffusion can occur in the case of channeled surfaces. Typically in-channel diffusion dominates due to the lower energy barrier for diffusion of this process. In certain cases cross-channel has been shown to occur, taking place in a manner similar to that shown in figure 8. The intermediate "dumbbell" position may lead to a variety of final adatom and surface atom displacements.
Long-range atomic exchange is a process involving an adatom inserting into the surface as in the normal atomic exchange mechanism, but instead of a nearest-neighbor atom it is an atom some distance further from the initial adatom that emerges. Shown in figure 9, this process has only been observed in molecular dynamics simulations and has yet to be confirmed experimentally. In spite of this, long-range atomic exchange, as well as a variety of other exotic diffusion mechanisms, is anticipated to contribute substantially at temperatures currently too high for direct observation.
Cluster diffusion
Cluster diffusion involves motion of atomic clusters ranging in size from dimers to islands containing hundreds of atoms. Motion of the cluster may occur via the displacement of individual atoms, sections of the cluster, or the entire cluster moving at once. All of these processes involve a change in the cluster’s center of mass.
Individual mechanisms are those that involve movement of one atom at a time.
Edge diffusion involves movement of adatoms or vacancies at edge or kink sites. As shown in figure 10, the mobile atom maintains its proximity to the cluster throughout the process.
Evaporation-condensation involves atoms “evaporating” from the cluster onto a terrace accompanied by “condensation” of terrace adatoms onto the cluster leading to a change in the cluster’s center of mass. While figure 10 appears to indicate the same atom evaporating from and condensing on the cluster, it may in fact be a different atom condensing from the 2D gas.
Leapfrog diffusion is similar to edge diffusion, but where the diffusing atom actually moves atop the cluster before settling in a different location from its starting position.
Sequential displacement refers to the process involving motion of one atom at a time, with each atom moving to a free nearest-neighbor site.
Concerted mechanisms are those that involve movement of either sections of the cluster or the entire cluster all at once.
Dislocation diffusion occurs when adjacent sub-units of a cluster move in a row-by-row fashion through displacement of a dislocation. As shown in figure 11(a) the process begins with nucleation of the dislocation followed by what is essentially sequential displacement on a concerted basis.
Glide diffusion refers to the concerted motion of an entire cluster all at once (see figure 11(b)).
Reptation is a snake-like movement (hence the name) involving sequential motion of cluster sub-units (see figure 11(c)).
Shearing is a concerted displacement of a sub-unit of atoms within a cluster (see figure 11(d)).
Size-dependence: the rate of cluster diffusion has a strong dependence on the size of the cluster, with larger cluster size generally corresponding to slower diffusion. This is not, however, a universal trend and it has been shown in some systems that the diffusion rate takes on a periodic tendency wherein some larger clusters diffuse faster than those smaller than them.
Surface diffusion and heterogeneous catalysis
Surface diffusion is a critically important concept in heterogeneous catalysis, as reaction rates are often dictated by the ability of reactants to "find" each other at a catalyst surface. With increased temperature, adsorbed molecules, molecular fragments, atoms, and clusters tend to have much greater mobility (see equation 1). However, with increased temperature the lifetime of adsorption decreases as the factor kBT becomes large enough for the adsorbed species to overcome the barrier to desorption, Q (see figure 2). Reaction thermodynamics aside, because of the interplay between increased rates of diffusion and decreased lifetime of adsorption, increased temperature may in some cases decrease the overall rate of the reaction.
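The trade-off described in this paragraph can be made concrete with a toy calculation. In the sketch below (all barriers and prefactors are hypothetical), both hopping and desorption are given Arrhenius rates; since the desorption barrier Q exceeds the diffusion barrier Ediff, raising the temperature shortens the adsorption lifetime faster than it speeds up hopping, so the mean number of hops made before desorption, a rough proxy for the chance of "finding" a reaction partner, falls as the temperature rises.

```python
import math

kB = 8.617e-5  # Boltzmann constant (eV/K)

# Hypothetical energetics, for illustration only
E_diff, nu_diff = 0.4, 1.0e13  # hop barrier (eV) and attempt frequency (1/s)
Q, nu_des = 1.2, 1.0e13        # desorption barrier (eV) and prefactor (1/s)

for T in (300.0, 400.0, 500.0, 600.0):
    hop_rate = nu_diff * math.exp(-E_diff / (kB * T))
    des_rate = nu_des * math.exp(-Q / (kB * T))
    hops_per_stay = hop_rate / des_rate  # mean hops before desorbing
    print(f"T = {T:5.0f} K: {hops_per_stay:.3e} hops per adsorption event")
```

With equal prefactors the ratio reduces to exp[(Q − Ediff)/kBT], so the balance is controlled entirely by the difference between the two barriers.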
Experimental
Surface diffusion may be studied by a variety of techniques, including both direct and indirect observations. Two experimental techniques that have proved very useful in this area of study are field ion microscopy and scanning tunneling microscopy. By visualizing the displacement of atoms or clusters over time, it is possible to extract useful information, both mechanistic and rate-related, regarding the manner in which the relevant species diffuse. In order to study surface diffusion on the atomistic scale it is unfortunately necessary to perform studies on rigorously clean surfaces and in ultra-high vacuum (UHV) conditions or in the presence of small amounts of inert gas, as is the case when using He or Ne as imaging gas in field-ion microscopy experiments.
See also
Surface engineering
Surface science
False diffusion
References
Cited works
G. Antczak, G. Ehrlich. Surface Science Reports 62 (2007), 39-61. (Review)
Materials science
Surface science | Surface diffusion | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,271 | [
"Applied and interdisciplinary physics",
"Materials science",
"Surface science",
"Condensed matter physics",
"nan"
] |
14,501,996 | https://en.wikipedia.org/wiki/Amidase | In enzymology, an amidase (EC 3.5.1.4; also acylamidase, acylase (misleading), amidohydrolase (ambiguous), deaminase (ambiguous), fatty acylamidase, N-acetylaminohydrolase (ambiguous)) is an enzyme that catalyzes the hydrolysis of an amide. In this way, the two substrates of this enzyme are an amide and H2O, whereas its two products are a monocarboxylate and NH3.
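Schematically, the catalyzed reaction can be written as below (R denotes the acyl group of the substrate; at physiological pH the product acid is ionized, giving the monocarboxylate named above):

```latex
\mathrm{R{-}CO{-}NH_2} \;+\; \mathrm{H_2O} \;\longrightarrow\; \mathrm{R{-}COOH} \;+\; \mathrm{NH_3}
```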
This enzyme belongs to the family of hydrolases, those acting on carbon-nitrogen bonds other than peptide bonds, specifically in linear amides. The systematic name of this enzyme class is acylamide amidohydrolase. Other names in common use include acylamidase, acylase, amidohydrolase, deaminase, fatty acylamidase, and N-acetylaminohydrolase. This enzyme participates in 6 metabolic pathways: urea cycle and metabolism of amino groups, phenylalanine metabolism, tryptophan metabolism, cyanoamino acid metabolism, benzoate degradation via CoA ligation, and styrene degradation.
Amidases contain a conserved stretch of approximately 130 amino acids known as the AS sequence. They are widespread, being found in both prokaryotes and eukaryotes. AS enzymes catalyse the hydrolysis of amide bonds (CO-NH2), although the family has diverged widely with regard to substrate specificity and function. Nonetheless, these enzymes maintain a core alpha/beta/alpha structure, where the topologies of the N- and C-terminal halves are similar. AS enzymes characteristically have a highly conserved C-terminal region rich in serine and glycine residues, but devoid of aspartic acid and histidine residues, therefore they differ from classical serine hydrolases. These enzymes possess a unique, highly conserved Ser-Ser-Lys catalytic triad used for amide hydrolysis, although the catalytic mechanism for acyl-enzyme intermediate formation can differ between enzymes.
Examples of AS signature-containing enzymes include:
Peptide amidase (Pam), which catalyses the hydrolysis of the C-terminal amide bond of peptides.
Fatty acid amide hydrolases, which hydrolyse fatty acid amide substrates (e.g. the cannabinoid anandamide and the sleep-inducing oleamide), thereby controlling the level and duration of signalling induced by this diverse class of lipid transmitters.
Malonamidase E2, which catalyses the hydrolysis of malonamate into malonate and ammonia, and which is involved in the transport of fixed nitrogen from bacteroids to plant cells in symbiotic nitrogen metabolism.
Subunit A of Glu-tRNA(Gln) amidotransferase, a heterotrimeric enzyme that catalyses the formation of Gln-tRNA(Gln) by the transamidation of misacylated Glu-tRNA(Gln) via amidolysis of glutamine.
Structural studies
As of late 2018, 162 structures have been solved for this family, which can be accessed at the Pfam database.
References
Further reading
Protein families
EC 3.5.1
Enzymes of known structure | Amidase | [
"Biology"
] | 687 | [
"Protein families",
"Protein classification"
] |
14,502,271 | https://en.wikipedia.org/wiki/Weakly%20measurable%20function | In mathematics—specifically, in functional analysis—a weakly measurable function taking values in a Banach space is a function whose composition with any element of the dual space is a measurable function in the usual (strong) sense. For separable spaces, the notions of weak and strong measurability agree.
Definition
If $(X, \Sigma)$ is a measurable space and $B$ is a Banach space over a field $\mathbb{K}$ (which is the real numbers $\mathbb{R}$ or complex numbers $\mathbb{C}$), then $f \colon X \to B$ is said to be weakly measurable if, for every continuous linear functional $g \colon B \to \mathbb{K}$, the function
$g \circ f \colon X \to \mathbb{K}, \qquad x \mapsto g(f(x)),$
is a measurable function with respect to $\Sigma$ and the usual Borel $\sigma$-algebra on $\mathbb{K}$.
A measurable function on a probability space is usually referred to as a random variable (or random vector if it takes values in a vector space such as the Banach space ).
Thus, as a special case of the above definition, if $(\Omega, \mathcal{F}, \mathbb{P})$ is a probability space, then a function $Z \colon \Omega \to B$ is called a ($B$-valued) weak random variable (or weak random vector) if, for every continuous linear functional $g \colon B \to \mathbb{K}$, the function
$g \circ Z \colon \Omega \to \mathbb{K}, \qquad \omega \mapsto g(Z(\omega)),$
is a $\mathbb{K}$-valued random variable (i.e. measurable function) in the usual sense, with respect to $\mathcal{F}$ and the usual Borel $\sigma$-algebra on $\mathbb{K}$.
Properties
The relationship between measurability and weak measurability is given by the following result, known as Pettis' theorem or Pettis measurability theorem.
A function $f \colon X \to B$ is said to be almost surely separably valued (or essentially separably valued) if there exists a subset $N \subseteq X$ with $\mu(N) = 0$ such that $f(X \setminus N) \subseteq B$ is separable.
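The theorem itself can be stated as follows (the formulation below is a reconstruction of the standard statement for a $\sigma$-finite measure space $(X, \Sigma, \mu)$, since the original display is not preserved above):

```latex
\textit{(Pettis)}\quad f \colon X \to B \text{ is (strongly) measurable}
\;\Longleftrightarrow\;
f \text{ is weakly measurable and almost surely separably valued.}
```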
In the case that $B$ is separable, since any subset of a separable Banach space is itself separable, one can take $N$ above to be empty, and it follows that the notions of weak and strong measurability agree when $B$ is separable.
See also
References
Functional analysis
Measure theory
Types of functions | Weakly measurable function | [
"Mathematics"
] | 366 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Mathematical relations",
"Types of functions"
] |
8,681,009 | https://en.wikipedia.org/wiki/CCL22 | C-C motif chemokine 22 is a protein that in humans is encoded by the CCL22 gene.
The protein encoded by this gene is secreted by dendritic cells and macrophages, and elicits its effects on its target cells by interacting with cell surface chemokine receptors such as CCR4. The gene for CCL22 is located on human chromosome 16 in a cluster with other chemokines called CX3CL1 and CCL17.
References
Further reading
External links
Cytokines | CCL22 | [
"Chemistry"
] | 111 | [
"Cytokines",
"Signal transduction"
] |
8,682,105 | https://en.wikipedia.org/wiki/CX3CL1 | Fractalkine, also known as chemokine (C-X3-C motif) ligand 1, is a protein that in humans is encoded by the CX3CL1 gene.
Function
Fractalkine is a large cytokine protein of 373 amino acids that contains multiple domains and is the only known member of the CX3C chemokine family. It is also commonly known under the names fractalkine (in humans) and neurotactin (in mice). The polypeptide structure of CX3CL1 differs from the typical structure of other chemokines. For example, the spacing of the characteristic N-terminal cysteines differs: there are three amino acids separating the initial pair of cysteines in CX3CL1, with none in CC chemokines and only one intervening amino acid in CXC chemokines. CX3CL1 is produced as a long protein (373 amino acids in humans) with an extended mucin-like stalk and a chemokine domain on top. The mucin-like stalk permits it to bind to the surface of certain cells. However, a soluble (90 kDa) version of this chemokine has also been observed. Soluble CX3CL1 potently chemoattracts T cells and monocytes, while the cell-bound chemokine promotes strong adhesion of leukocytes to activated endothelial cells, where it is primarily expressed. CX3CL1 elicits its adhesive and migratory functions by interacting with the chemokine receptor CX3CR1. Its gene is located on human chromosome 16 along with those of some CC chemokines known as CCL17 and CCL22.
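The cysteine-spacing convention that gives the chemokine families their names can be expressed as simple sequence patterns. The short Python sketch below (the test fragments are made-up examples, not the real CX3CL1 sequence) classifies the first cysteine pair of an N-terminal fragment as CC, CXC, or CX3C.

```python
import re

# Spacing of the first cysteine pair defines the family name:
#   CC   -> adjacent cysteines
#   CXC  -> one intervening residue
#   CX3C -> three intervening residues
MOTIFS = [
    ("CC", re.compile(r"CC")),
    ("CXC", re.compile(r"C[^C]C")),
    ("CX3C", re.compile(r"C[^C]{3}C")),
]

def classify(fragment: str) -> str:
    """Return the motif whose match starts earliest in the fragment."""
    hits = [(m.start(), name)
            for name, pat in MOTIFS
            if (m := pat.search(fragment))]
    return min(hits)[1] if hits else "no cysteine pair found"

print(classify("AAKCNITCSKM"))  # three residues between cysteines -> CX3C
print(classify("AAKCCLSKM"))   # adjacent cysteines -> CC
```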
Fractalkine is found commonly throughout the brain, particularly in neural cells, and its receptor is known to be present on microglial cells. It has also been found to be essential for microglial cell migration. CX3CL1 is also up-regulated in the hippocampus during a brief temporal window following spatial learning, the purpose of which may be to regulate glutamate-mediated neurotransmission tone. This indicates a possible role for the chemokine in the protective plasticity process of synaptic scaling.
References
External links
Further reading
Cytokines | CX3CL1 | [
"Chemistry"
] | 495 | [
"Cytokines",
"Signal transduction"
] |
8,685,313 | https://en.wikipedia.org/wiki/Straddle%20carrier | A straddle carrier or straddle truck is a freight-carrying vehicle that carries its load underneath by "straddling" it, rather than carrying it on top like a conventional truck. The advantage of the straddle carrier is its ability to load and unload without the assistance of cranes or forklifts. The lifting apparatus under the carrier is operated by the driver without any outside assistance and without leaving the driver's seat.
Lumber carriers
The straddle carrier was invented by H. B. Ross in 1913 as a road-going vehicle that could easily transport lumber around mills and yards. Lumber was stacked on special pallets known as carrier blocks; the carrier would then straddle the stack, grasp and lift the carrier block, and drive off with the load. Because a straddle carrier is open at both front and rear, it can transport lumber much longer than the carrier itself.
The Ross Carrier Company (now Northwest Caster & Equipment) was founded in Seattle to manufacture and market the carrier, and similar designs were later manufactured by Gerlinger, Hyster, Yale, Caterpillar, and other companies. These "straddles" or "timber jinkers" were a common sight in seaports around the world until the 1970s, but were phased out as larger and faster conventional trucks came into use. An example of these road-going straddle carriers can be seen in the 1950 comedy film Watch the Birdie.
Industrial straddle carriers
Similar industrial straddle carriers are used in manufacturing and construction, both for handling oversized loads such as steel and pre-cast concrete and where transportation of special loads such as nitrogen tanks is required in restricted spaces not suitable for trucks. A key advantage of industrial straddle carriers and reach stackers over most forklifts is the ability to load or unload a semi-trailer in a single operation, which can improve efficiency.
Straddle carriers are also used for handling boats onshore. These are also often called travel lifts or travelifts.
Shipping container carriers
The most common use of straddle carriers is in port terminals and intermodal yards, where they are used for stacking and moving ISO standard containers. The carrier straddles its load, picking it up and carrying it by connecting to the top lifting points using a container spreader. Some machines have the ability to stack containers up to four high. They travel at relatively low speeds with a laden container. Drivers of the carrier sit sideways at the very top, and face the middle, so they can see behind and in front of the vehicle. Straddle carriers can lift loads equivalent to two full containers.
Gallery
See also
Crane
Gantry crane
References
External links
Combilift Straddle Carrier
Liebherr straddle carrier
Mobile cranes
Cranes (machines)
Intermodal containers
Port infrastructure | Straddle carrier | [
"Engineering"
] | 590 | [
"Engineering vehicles",
"Cranes (machines)"
] |
8,687,911 | https://en.wikipedia.org/wiki/Proper%20acceleration | In relativity theory, proper acceleration is the physical acceleration (i.e., measurable acceleration as by an accelerometer) experienced by an object. It is thus acceleration relative to a free-fall, or inertial, observer who is momentarily at rest relative to the object being measured. Gravitation therefore does not cause proper acceleration, because the same gravity acts equally on the inertial observer. As a consequence, all inertial observers always have a proper acceleration of zero.
Proper acceleration contrasts with coordinate acceleration, which is dependent on choice of coordinate systems and thus upon choice of observers (see three-acceleration in special relativity).
In the standard inertial coordinates of special relativity, for unidirectional motion, proper acceleration is the rate of change of proper velocity with respect to coordinate time.
In an inertial frame in which the object is momentarily at rest, the proper acceleration 3-vector, combined with a zero time-component, yields the object's four-acceleration, which makes proper-acceleration's magnitude Lorentz-invariant. Thus the concept is useful: (i) with accelerated coordinate systems, (ii) at relativistic speeds, and (iii) in curved spacetime.
In an accelerating rocket after launch, or even in a rocket standing on the launch pad, the proper acceleration is the acceleration felt by the occupants, and which is described as g-force (which is not a force but rather an acceleration; see that article for more discussion) delivered by the vehicle only. The "acceleration of gravity" (involved in the "force of gravity") never contributes to proper acceleration in any circumstances, and thus the proper acceleration felt by observers standing on the ground is due to the mechanical force from the ground, not due to the "force" or "acceleration" of gravity. If the ground is removed and the observer allowed to free-fall, the observer will experience coordinate acceleration, but no proper acceleration, and thus no g-force. Generally, objects in a state of inertial motion, also called free-fall or a ballistic path (including objects in orbit) experience no proper acceleration (neglecting small tidal accelerations for inertial paths in gravitational fields). This state is also known as "zero gravity" ("zero-g") or "free-fall," and it produces a sensation of weightlessness.
Proper acceleration reduces to coordinate acceleration in an inertial coordinate system in flat spacetime (i.e. in the absence of gravity), provided the magnitude of the object's proper-velocity (momentum per unit mass) is much less than the speed of light c. Only in such situations is coordinate acceleration entirely felt as a g-force (i.e. a proper acceleration, also defined as one that produces measurable weight).
In situations in which gravitation is absent but the chosen coordinate system is not inertial, but is accelerated with the observer (such as the accelerated reference frame of an accelerating rocket, or a frame fixed upon objects in a centrifuge), then g-forces and corresponding proper accelerations felt by observers in these coordinate systems are caused by the mechanical forces which resist their weight in such systems. This weight, in turn, is produced by fictitious forces or "inertial forces" which appear in all such accelerated coordinate systems, in a manner somewhat like the weight produced by the "force of gravity" in systems where objects are fixed in space with regard to the gravitating body (as on the surface of the Earth).
The total (mechanical) force that is calculated to induce the proper acceleration on a mass at rest in a coordinate system that has a proper acceleration, via Newton's law , is called the proper force. As seen above, the proper force is equal to the opposing reaction force that is measured as an object's "operational weight" (i.e. its weight as measured by a device like a spring scale, in vacuum, in the object's coordinate system). Thus, the proper force on an object is always equal and opposite to its measured weight.
Examples
When holding onto a carousel that turns at constant angular velocity an observer experiences a radially inward (centripetal) proper-acceleration due to the interaction between the handhold and the observer's hand. This cancels the radially outward geometric acceleration associated with their spinning coordinate frame. This outward acceleration (from the spinning frame's perspective) will become the coordinate acceleration when they let go, causing them to fly off along a zero proper-acceleration (geodesic) path. Unaccelerated observers, of course, in their frame simply see their equal proper and coordinate accelerations vanish when they let go.
Similarly, standing on a non-rotating planet (and on earth for practical purposes) observers experience an upward proper-acceleration due to the normal force exerted by the earth on the bottom of their shoes. This cancels the downward geometric acceleration due to the choice of coordinate system (a so-called shell-frame). That downward acceleration becomes coordinate if they inadvertently step off a cliff into a zero proper-acceleration (geodesic or rain-frame) trajectory.
Geometric accelerations (due to the connection term in the coordinate system's covariant derivative below) act on every gram of our being, while proper-accelerations are usually caused by an external force. Introductory physics courses often treat gravity's downward (geometric) acceleration as due to a mass-proportional force. This, along with diligent avoidance of unaccelerated frames, allows them to treat proper and coordinate acceleration as the same thing.
Even then, if an object maintains a constant proper-acceleration from rest over an extended period in flat spacetime, observers in the rest frame will see the object's coordinate acceleration decrease as its coordinate velocity approaches lightspeed. The rate at which the object's proper-velocity goes up, nevertheless, remains constant.
Thus the distinction between proper-acceleration and coordinate acceleration allows one to track the experience of accelerated travelers from various non-Newtonian perspectives. These perspectives include those of accelerated coordinate systems (like a carousel), of high speeds (where proper and coordinate times differ), and of curved spacetime (like that associated with gravity on Earth).
Classical applications
At low speeds in the inertial coordinate systems of Newtonian physics, proper acceleration simply equals the coordinate acceleration a = d2x/dt2. As reviewed above, however, it differs from coordinate acceleration if one chooses (against Newton's advice) to describe the world from the perspective of an accelerated coordinate system like a motor vehicle accelerating from rest, or a stone being spun around in a slingshot. If one chooses to recognize that gravity is caused by the curvature of spacetime (see below), proper acceleration differs from coordinate acceleration in a gravitational field.
For example, an object subjected to physical or proper acceleration ao will be seen by observers in a coordinate system undergoing constant acceleration aframe to have coordinate acceleration:
$\vec{a} = \vec{a}_o - \vec{a}_{frame}.$
Thus if the object is accelerating with the frame, observers fixed to the frame will see no acceleration at all.
Similarly, an object undergoing physical or proper acceleration ao will be seen by observers in a frame rotating with angular velocity $\vec{\omega}$ to have coordinate acceleration:
$\vec{a} = \vec{a}_o - \vec{\omega} \times (\vec{\omega} \times \vec{r}) - 2\vec{\omega} \times \vec{v}_{rot} - \frac{d\vec{\omega}}{dt} \times \vec{r}.$
In the equation above, there are three geometric acceleration terms on the right-hand side. The first "centrifugal acceleration" term depends only on the radial position and not the velocity of our object, the second "Coriolis acceleration" term depends only on the object's velocity in the rotating frame but not its position, and the third "Euler acceleration" term depends only on position and the rate of change of the frame's angular velocity.
In each of these cases, physical or proper acceleration differs from coordinate acceleration because the latter can be affected by your choice of coordinate system as well as by physical forces acting on the object. Those components of coordinate acceleration not caused by physical forces (like direct contact or electrostatic attraction) are often attributed (as in the Newtonian example above) to forces that: (i) act on every gram of the object, (ii) cause mass-independent accelerations, and (iii) don't exist from all points of view. Such geometric (or improper) forces include Coriolis forces, Euler forces, g-forces, centrifugal forces and (as we see below) gravity forces as well.
Viewed from a flat spacetime slice
Proper-acceleration's relationships to coordinate acceleration in a specified slice of flat spacetime follow from Minkowski's flat-space metric equation $(c\,d\tau)^2 = (c\,dt)^2 - (dx)^2$. Here a single reference frame of yardsticks and synchronized clocks define map position x and map time t respectively, the traveling object's clocks define proper time τ, and the "d" preceding a coordinate means infinitesimal change. These relationships allow one to tackle various problems of "anyspeed engineering", albeit only from the vantage point of an observer whose extended map frame defines simultaneity.
Acceleration in (1+1)D
In the unidirectional case, i.e. when the object's acceleration is parallel or antiparallel to its velocity in the spacetime slice of the observer, proper acceleration α and coordinate acceleration a are related through the Lorentz factor γ by $\alpha = \gamma^3 a$. Hence the change in proper-velocity $w = dx/d\tau$ is the integral of proper acceleration over map-time t, i.e. $\Delta w = \alpha \Delta t$ for constant α. At low speeds this reduces to the well-known relation between coordinate velocity and coordinate acceleration times map-time, i.e. Δv = aΔt.
For constant unidirectional proper-acceleration, similar relationships exist between rapidity η and elapsed proper time Δτ, as well as between Lorentz factor γ and distance traveled Δx. To be specific:
$\alpha = \frac{\Delta w}{\Delta t} = c\,\frac{\Delta \eta}{\Delta \tau} = c^2\,\frac{\Delta \gamma}{\Delta x},$
where the various velocity parameters are related by
$w = \gamma v = c \sinh \eta, \qquad v = c \tanh \eta, \qquad \gamma = \cosh \eta.$
These equations describe some consequences of accelerated travel at high speed. For example, imagine a spaceship that can accelerate its passengers at "1 gee" (10 m/s2 or about 1.0 light year per year squared) halfway to their destination, and then decelerate them at "1 gee" for the remaining half so as to provide earth-like artificial gravity from point A to point B over the shortest possible time. For a map-distance of ΔxAB, the first equation above predicts a midpoint Lorentz factor (up from its unit rest value) of $\gamma_{mid} = 1 + \alpha(\Delta x_{AB}/2)/c^2$. Hence the round-trip time on traveler clocks will be $\Delta \tau = 4(c/\alpha)\cosh^{-1}(\gamma_{mid})$, during which the time elapsed on map clocks will be $\Delta t = 4(c/\alpha)\sinh[\cosh^{-1}(\gamma_{mid})]$.
This imagined spaceship could offer round trips to Proxima Centauri lasting about 7.1 traveler years (~12 years on Earth clocks), round trips to the Milky Way's central black hole of about 40 years (~54,000 years elapsed on earth clocks), and round trips to Andromeda Galaxy lasting around 57 years (over 5 million years on Earth clocks). Unfortunately, sustaining 1-gee acceleration for years is easier said than done, as illustrated by the maximum payload to launch mass ratios shown in the figure at right.
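The trip figures quoted above follow directly from the constant-proper-acceleration relations and can be checked numerically. The sketch below works in light-years and years with c = 1, approximates 1 gee as 1.0 ly/yr² (as in the text; the true value is closer to 1.03, which accounts for small differences from the quoted figures), and uses the usual rounded distances. Each round trip consists of four identical quarter-segments of map distance Δx/2, each traversed from rest under proper acceleration α.

```python
import math

alpha = 1.0  # proper acceleration in ly/yr^2 (1 gee ~ 1.03 in these units)

def round_trip(distance_ly):
    """Traveler-clock and map-clock durations of an accelerate/decelerate round trip."""
    d = distance_ly / 2.0                  # one quarter-segment, starting from rest
    gamma_mid = 1.0 + alpha * d            # midpoint Lorentz factor (c = 1)
    tau = (4.0 / alpha) * math.acosh(gamma_mid)        # traveler time, 4 segments
    t = (4.0 / alpha) * math.sqrt(gamma_mid**2 - 1.0)  # map time, 4 segments
    return tau, t

trips = [("Proxima Centauri", 4.24),
         ("galactic center", 2.6e4),
         ("Andromeda Galaxy", 2.5e6)]
for name, d in trips:
    tau, t = round_trip(d)
    print(f"{name:17s}: {tau:6.1f} traveler years, {t:12.0f} map years")
```

Running this reproduces the quoted orders of magnitude: roughly 7 traveler years (about 12 map years) for Proxima Centauri, about 41 traveler years (about 52,000 map years) for the galactic center, and about 59 traveler years (about 5 million map years) for Andromeda.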
In curved spacetime
In the language of general relativity, the components of an object's acceleration four-vector A (whose magnitude is proper acceleration) are related to elements of the four-velocity via a covariant derivative D with respect to proper time τ:
$A^\lambda = \frac{DU^\lambda}{d\tau} = \frac{dU^\lambda}{d\tau} + \Gamma^\lambda_{\mu\nu} U^\mu U^\nu.$
Here U is the object's four-velocity, and Γ represents the coordinate system's 64 connection coefficients or Christoffel symbols. Note that the Greek subscripts take on four possible values, namely 0 for the time-axis and 1–3 for spatial coordinate axes, and that repeated indices are used to indicate summation over all values of that index. Trajectories with zero proper acceleration are referred to as geodesics.
The left hand side of this set of four equations (one each for the time-like and three spacelike values of index λ) is the object's proper-acceleration 3-vector combined with a null time component as seen from the vantage point of a reference or book-keeper coordinate system in which the object is at rest. The first term on the right hand side lists the rate at which the time-like (energy/mc) and space-like (momentum/m) components of the object's four-velocity U change, per unit time τ on traveler clocks.
Let's solve for that first term on the right since at low speeds its spacelike components represent the coordinate acceleration. More generally, when that first term goes to zero the object's coordinate acceleration goes to zero. This yields
$\frac{dU^\lambda}{d\tau} = A^\lambda - \Gamma^\lambda_{\mu\nu} U^\mu U^\nu.$
Thus, as exemplified with the first two animations above, coordinate acceleration goes to zero whenever proper-acceleration is exactly canceled by the connection (or geometric acceleration) term on the far right. Caution: This term may be a sum of as many as sixteen separate velocity and position dependent terms, since the repeated indices μ and ν are by convention summed over all pairs of their four allowed values.
Force and equivalence
The above equation also offers some perspective on forces and the equivalence principle. Consider local book-keeper coordinates for the metric (e.g. a local Lorentz tetrad like that which global positioning systems provide information on) to describe time in seconds, and space in distance units along perpendicular axes. If we multiply the above equation by the traveling object's rest mass m, and divide by Lorentz factor γ = dt/dτ, the spacelike components express the rate of momentum change for that object from the perspective of the coordinates used to describe the metric.
This in turn can be broken down into parts due to proper and geometric components of acceleration and force. If we further multiply the time-like component by lightspeed c, and define coordinate velocity as $\vec{v} = d\vec{x}/dt$, we get an expression for rate of energy change as well:
$\frac{dE}{dt} = \vec{v} \cdot \frac{d\vec{p}}{dt}$ (timelike) and $\frac{d\vec{p}}{dt} = m(\vec{a}_o + \vec{a}_g)$ (spacelike).
Here ao is an acceleration due to proper forces and ag is, by default, a geometric acceleration that we see applied to the object because of our coordinate system choice. At low speeds these accelerations combine to generate a coordinate acceleration like $\vec{a} = d^2\vec{x}/dt^2$, while for unidirectional motion at any speed ao's magnitude is that of proper acceleration α as in the section above where $\alpha = \gamma^3 a$ when ag is zero. In general expressing these accelerations and forces can be complicated.
Nonetheless, if we use this breakdown to describe the connection coefficient (Γ) term above in terms of geometric forces, then the motion of objects from the point of view of any coordinate system (at least at low speeds) can be seen as locally Newtonian. This is already common practice e.g. with centrifugal force and gravity. Thus the equivalence principle extends the local usefulness of Newton's laws to accelerated coordinate systems and beyond.
Surface dwellers on a planet
For low speed observers being held at fixed radius from the center of a spherical planet or star, coordinate acceleration ashell is approximately related to proper acceleration ao by:
$a_{shell} = \sqrt{1 - \frac{r_s}{r}}\; a_o,$
where $r_s = 2GM/c^2$ is the planet or star's Schwarzschild radius. As our shell observer's radius approaches the Schwarzschild radius, the proper acceleration ao needed to keep it from falling in becomes intolerable.
On the other hand, for $r \gg r_s$, an upward proper force of only $GMm/r^2$ is needed to prevent one from accelerating downward. At the Earth's surface this becomes:
$\vec{a}_o = g\,\hat{r},$
where g is the downward 9.8 m/s2 acceleration due to gravity, and $\hat{r}$ is a unit vector in the radially outward direction from the center of the gravitating body. Thus here an outward proper force of mg is needed to keep one from accelerating downward.
Four-vector derivations
The spacetime equations of this section allow one to address all deviations between proper and coordinate acceleration in a single calculation. For example, let's calculate the Christoffel symbols for the far-coordinate Schwarzschild metric
$(c\,d\tau)^2 = \left(1 - \frac{r_s}{r}\right)(c\,dt)^2 - \frac{dr^2}{1 - r_s/r} - r^2\,d\theta^2 - r^2\sin^2\theta\,d\phi^2,$
where rs is the Schwarzschild radius 2GM/c2. The resulting array of coefficients has only a small number of nonzero entries.
From this you can obtain the shell-frame proper acceleration by setting coordinate acceleration to zero and thus requiring that proper acceleration cancel the geometric acceleration of a stationary object, i.e. $A^\lambda = \Gamma^\lambda_{\mu\nu} U^\mu U^\nu$. This does not solve the problem yet, since Schwarzschild coordinates in curved spacetime are book-keeper coordinates but not those of a local observer. The magnitude of the above proper acceleration 4-vector, namely $(GM/r^2)/\sqrt{1 - r_s/r}$, is however precisely what we want, i.e. the upward frame-invariant proper acceleration needed to counteract the downward geometric acceleration felt by dwellers on the surface of a planet.
A special case of the above Christoffel symbol set is the flat-space spherical coordinate set, obtained by setting rs or M above to zero.
From this we can obtain, for example, the centripetal proper acceleration needed to cancel the centrifugal geometric acceleration of an object moving at constant angular velocity $\omega = d\phi/dt$ at the equator, where $\theta = \pi/2$. Forming the same 4-vector sum as above for the case of $d\theta/d\tau$ and $dr/d\tau$ zero yields nothing more than the classical acceleration for rotational motion given above, i.e. $A^r = -r\,(d\phi/dt)^2$, so that $a_o = \omega^2 r$. Coriolis effects also reside in these connection coefficients, and similarly arise from coordinate-frame geometry alone.
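Bookkeeping like the above is mechanical enough to automate. The following sketch (assuming the sympy library is available; the diagonal far-coordinate Schwarzschild line element and the (+,−,−,−) signature are taken from the section above) computes all nonzero Christoffel symbols of the second kind; setting rs = 0 recovers the flat-space spherical set.

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
rs, c = sp.symbols('r_s c', positive=True)
x = [t, r, th, ph]

# Far-coordinate Schwarzschild metric, signature (+, -, -, -)
g = sp.diag((1 - rs / r) * c**2,
            -1 / (1 - rs / r),
            -r**2,
            -r**2 * sp.sin(th)**2)
g_inv = g.inv()

def christoffel(lam, mu, nu):
    """Gamma^lam_{mu nu} = (1/2) g^{lam d} (d_mu g_{d nu} + d_nu g_{d mu} - d_d g_{mu nu})."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[lam, d] * (
            sp.diff(g[d, nu], x[mu]) +
            sp.diff(g[d, mu], x[nu]) -
            sp.diff(g[mu, nu], x[d]))
        for d in range(4)))

for lam in range(4):
    for mu in range(4):
        for nu in range(mu, 4):  # symmetric in the lower indices
            gam = christoffel(lam, mu, nu)
            if gam != 0:
                print(f"Gamma^{x[lam]}_({x[mu]} {x[nu]}) = {gam}")
```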
See also
Acceleration: change in velocity
Proper velocity: momentum per mass in special relativity; composed of the spacelike components of the 4-velocity
Proper reference frame (flat spacetime): accelerated reference frame in special relativity (Minkowski space)
Fictitious force: one name for mass times geometric acceleration
Four-vector: making the connection between space and time explicit
Kinematics: for studying ways that position changes with time
Uniform acceleration: holding coordinate acceleration fixed
Footnotes
External links
Excerpts from the first edition of Spacetime Physics, and other resources posted by Edwin F. Taylor
James Hartle's gravity book page including Mathematica programs to calculate Christoffel symbols.
Andrew Hamilton's notes and programs for working with local tetrads at U. Colorado, Boulder.
Minkowski spacetime
Acceleration | Proper acceleration | [
"Physics",
"Mathematics"
] | 3,670 | [
"Wikipedia categories named after physical quantities",
"Quantity",
"Physical quantities",
"Acceleration"
] |
8,688,139 | https://en.wikipedia.org/wiki/Electronic%20circuit%20design | Electronic circuit design comprises the analysis and synthesis of electronic circuits.
Methods
To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Linear circuits, that is, circuits wherein the outputs are linearly dependent on the inputs, can be analyzed by hand using complex analysis. Simple nonlinear circuits can also be analyzed in this way. Specialized software has been created to analyze circuits that are either too complicated or too nonlinear to analyze by hand.
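As a concrete instance of such hand analysis, the short sketch below (a generic textbook RC example, not tied to any particular design tool) treats the capacitor as a complex impedance 1/(jωC) and applies the voltage-divider rule to get the frequency response of a first-order low-pass filter.

```python
import math

R = 1_000.0   # series resistance (ohms), example value
C = 100e-9    # shunt capacitance (farads), example value

def gain_db(f_hz):
    """|Vout/Vin| in dB for the series-R, shunt-C divider at frequency f."""
    w = 2 * math.pi * f_hz
    Zc = 1 / (1j * w * C)      # capacitor impedance (complex)
    h = Zc / (R + Zc)          # voltage-divider transfer function
    return 20 * math.log10(abs(h))

f_c = 1 / (2 * math.pi * R * C)  # corner frequency, about 1.59 kHz here
for f in (100.0, f_c, 10_000.0):
    print(f"f = {f:8.1f} Hz -> gain = {gain_db(f):6.2f} dB")
```

At the corner frequency the computed gain is −3 dB, the textbook result; sweeping the frequency reproduces the Bode magnitude plot.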
Circuit simulation software allows engineers to design circuits more efficiently, reducing the time cost and risk of error involved in building circuit prototypes. Some of these make use of hardware description languages such as VHDL or Verilog.
Network simulation software
More complex circuits are analyzed with circuit simulation software such as SPICE and EMTP.
Linearization around operating point
When faced with a new circuit, the software first tries to find a steady state solution, wherein all the nodes conform to Kirchhoff's current law and the voltages across, and currents through, each element of the circuit conform to the voltage/current equations governing that element.
Once the steady state solution is found, the software can analyze the response to perturbations using piecewise approximation, harmonic balance or other methods.
Piece-wise linear approximation
Software such as the PLECS interface to Simulink uses piecewise linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes. Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time.
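A toy version of this idea fits in a few lines. The sketch below (purely illustrative; it is not how PLECS itself is implemented) simulates a half-wave rectifier driving a resistive load: the ideal diode switches the network between its two linear configurations, a short when forward biased and an open circuit otherwise.

```python
import math

V_peak = 10.0           # source amplitude (V), example value
f = 50.0                # source frequency (Hz)
dt = 1e-5               # time step (s)
steps = int(0.04 / dt)  # simulate two cycles

samples = []
for n in range(steps):
    v_in = V_peak * math.sin(2 * math.pi * f * n * dt)
    # Piecewise-linear ideal diode: conducting (short) when forward biased,
    # open otherwise; within each configuration the network is linear.
    v_out = v_in if v_in > 0.0 else 0.0  # voltage across the load resistor
    samples.append(v_out)

avg = sum(samples) / len(samples)
print(f"average output = {avg:.2f} V "
      f"(ideal half-wave value V_peak/pi = {V_peak / math.pi:.2f} V)")
```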
Synthesis
Simple circuits may be designed by connecting a number of elements or functional blocks such as integrated circuits.
More complex digital circuits are typically designed with the aid of computer software. Logic circuits (and sometimes mixed-mode circuits) are often described in hardware description languages such as VHDL or Verilog, then synthesized using a logic synthesis engine.
See also
Circuit design
Integrated circuit design
References
Electronic design
Design | Electronic circuit design | [
"Engineering"
] | 428 | [
"Electronic design",
"Electronic engineering",
"Electronic circuits",
"Design"
] |
8,688,979 | https://en.wikipedia.org/wiki/Percy%20Nicholls%20Award | The Percy Nicholls Award is an American engineering prize.
It has been given annually since 1942 for "notable scientific or industrial achievement in the field of solid fuels". The prize is given jointly by the American Institute of Mining, Metallurgical, and Petroleum Engineers and American Society of Mechanical Engineers.
Recipients of this Prize
2023 - David G. Osborne
2022 - Michael A. Karmis
2021 - Not given
2019 - Not given
2018 - Not given
2017 - Not given
2016 - Not given
2015 - Yoginder Paul Chugh
2014 - Yiannis Levendis
2013 - Barbara J. Arnold
2012 - Not given
2011 - Sukumar Bandopadhyay
2010 - Ashwani K. Gupta
2009 - William Beck
2008 - George A. Richards
2007 - Peter J. Bethell
2006 - John L. Marion
2005 - Gerald H. Luttrell
2004 - Dr. Hisashi (Sho) Kobayashi
2003 - J. Brett Harvey
2002 - L. Douglas Smoot
2001 - Robert E. Murray
2000 - Klaus R. G. Hein
1999 - Peter T. Luckie
1998 - Not given
1997 - Frank F. Aplan
1996 - Adel F. Sarofim
1995 - Joseph W. Leonard, III
1994 - Robert H. Essenhigh
1993 - Robert L. Frantz
1992 - Richard W. Borio
1991 - Raja V. Ramani
1990 - Richard W. Bryers
1989 - Albert W. Duerbrouck
1988 - János M. Beér
1987 - Leonard G. Austin
1986 - Gordon H. Gronhovd
1985 - David A. Zegeer
1984 - George K. Lee
1983 - E. Minor Pace
1982 - James R. Jones
1981 - Jack A. Simon
1980 - George W. Land
1979 - William N. Poundstone
1978 - Albert F. Duzy
1977 - H. Beecher Charmbury
1976 - Richard B. Engdahl
1975 - Not given
1974 - George P. Cooper
1973 - Samuel M. Cassidy
1972 - Charles H. Sawyer
1971 - George E. Keller
1970 - Richard C. Corey
1969 - David R. Mitchell
1968 - W. T. Reid
1967 - Martin A. Elliott
1966 - C. T. Holland
1965 - L. F. Deming
1964 - Carroll F. Hardy
1963 - James R. Garvey
1962 - Charles E. Lawall
1961 - Otto de Lorenzi
1960 - Carl E. Lesher
1959 - Homer H. Lowry
1958 - Willibald Trinks
1957 - John Blizzard
1956 - Chester A. Reed
1955 - Ralph Hardgrove
1954 - John F. Barkley
1953 - Henry F. Hebley
1952 - Harry F. Yancey
1951 - Albert R. Humford
1950 - Julian E. Tobey
1949 - Lawrence A. Shipman
1948 - Ralph A. Sherman
1947 - Howard N. Eavenson
1946 - Arno C. Fieldner
1945 - Thomas A. Marsh
1944 - James B. Morrow
1943 - Henry Kreisinger
1942 - Ervin G. Bailey
See also
List of engineering awards
List of mechanical engineering awards
References
Percy Nicholls Award
Notes
Awards of the American Society of Mechanical Engineers
Awards of the American Institute of Mining, Metallurgical, and Petroleum Engineers
Combustion engineering awards
Awards established in 1942
1942 establishments in the United States | Percy Nicholls Award | [
"Chemistry",
"Technology"
] | 657 | [
"Awards of the American Institute of Mining",
" and Petroleum Engineers",
"Combustion",
"Science award stubs",
"Combustion engineering awards",
"Science and technology awards",
"American Institute of Mining",
" Metallurgical"
] |
4,130,888 | https://en.wikipedia.org/wiki/Darboux%27s%20theorem%20%28analysis%29 | In mathematics, Darboux's theorem is a theorem in real analysis, named after Jean Gaston Darboux. It states that every function that results from the differentiation of another function has the intermediate value property: the image of an interval is also an interval.
When ƒ is continuously differentiable (ƒ in C1([a,b])), this is a consequence of the intermediate value theorem. But even when ƒ′ is not continuous, Darboux's theorem places a severe restriction on what it can be.
Darboux's theorem
Let $I$ be a closed interval, and let $f \colon I \to \mathbb{R}$ be a real-valued differentiable function. Then $f'$ has the intermediate value property: If $a$ and $b$ are points in $I$ with $a < b$, then for every $y$ between $f'(a)$ and $f'(b)$, there exists an $x$ in $[a, b]$ such that $f'(x) = y$.
Proofs
Proof 1. The first proof is based on the extreme value theorem.
If $y$ equals $f'(a)$ or $f'(b)$, then setting $x$ equal to $a$ or $b$, respectively, gives the desired result. Now assume that $y$ is strictly between $f'(a)$ and $f'(b)$, and in particular that $f'(a) > y > f'(b)$. Let $\varphi \colon [a, b] \to \mathbb{R}$ such that $\varphi(t) = f(t) - yt$. If it is the case that $f'(a) < y < f'(b)$ we adjust our below proof, instead asserting that $\varphi$ has its minimum on $[a, b]$.
Since $\varphi$ is continuous on the closed interval $[a, b]$, the maximum value of $\varphi$ on $[a, b]$ is attained at some point in $[a, b]$, according to the extreme value theorem.
Because $\varphi'(a) = f'(a) - y > 0$, we know $\varphi$ cannot attain its maximum value at $a$. (If it did, then $(\varphi(t) - \varphi(a))/(t - a) \leq 0$ for all $t \in (a, b]$, which implies $\varphi'(a) \leq 0$.)
Likewise, because $\varphi'(b) = f'(b) - y < 0$, we know $\varphi$ cannot attain its maximum value at $b$.
Therefore, $\varphi$ must attain its maximum value at some point $x \in (a, b)$. Hence, by Fermat's theorem, $\varphi'(x) = 0$, i.e. $f'(x) = y$.
Proof 2. The second proof is based on combining the mean value theorem and the intermediate value theorem.
Define $c = \tfrac{1}{2}(a + b)$.
For $t \in [a, c]$ define $\alpha(t) = a$ and $\beta(t) = 2t - a$.
And for $t \in [c, b]$ define $\alpha(t) = 2t - b$ and $\beta(t) = b$.
Thus, for $t \in (a, b)$ we have $a \leq \alpha(t) < \beta(t) \leq b$.
Now, define $g(t) = \dfrac{f(\beta(t)) - f(\alpha(t))}{\beta(t) - \alpha(t)}$ with $g(a) = f'(a)$ and $g(b) = f'(b)$.
$g$ is continuous in $[a, b]$.
Furthermore, $g(t) \to f'(a)$ when $t \to a$ and $g(t) \to f'(b)$ when $t \to b$; therefore, from the Intermediate Value Theorem, if $y$ is between $f'(a)$ and $f'(b)$, then there exists $t_0 \in (a, b)$ such that $g(t_0) = y$.
Let's fix $t_0$.
From the Mean Value Theorem, there exists a point $x \in (\alpha(t_0), \beta(t_0))$ such that $f'(x) = g(t_0)$.
Hence, $f'(x) = y$.
Darboux function
A Darboux function is a real-valued function ƒ which has the "intermediate value property": for any two values a and b in the domain of ƒ, and any y between ƒ(a) and ƒ(b), there is some c between a and b with ƒ(c) = y. By the intermediate value theorem, every continuous function on a real interval is a Darboux function. Darboux's contribution was to show that there are discontinuous Darboux functions.
Every discontinuity of a Darboux function is essential, that is, at any point of discontinuity, at least one of the left hand and right hand limits does not exist.
An example of a Darboux function that is discontinuous at one point is the topologist's sine curve function: $x \mapsto \sin(1/x)$ for $x \neq 0$ and $x \mapsto 0$ for $x = 0$.
By Darboux's theorem, the derivative of any differentiable function is a Darboux function. In particular, the derivative of the function $x \mapsto x^2 \sin(1/x)$ (extended by $0$ at $x = 0$) is a Darboux function even though it is not continuous at one point.
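For the example just mentioned, the derivative can be written out explicitly (a standard computation; the value at the origin comes from the difference quotient, since $h^2 \sin(1/h)/h = h\sin(1/h) \to 0$):

```latex
f(x) =
\begin{cases}
x^{2}\sin(1/x), & x \neq 0,\\
0, & x = 0,
\end{cases}
\qquad
f'(x) =
\begin{cases}
2x\sin(1/x) - \cos(1/x), & x \neq 0,\\
0, & x = 0.
\end{cases}
```

The $\cos(1/x)$ term keeps $f'$ from having a limit at $0$, so $f'$ is discontinuous there; nevertheless, consistent with Darboux's theorem, $f'$ takes every value between $-1$ and $1$ in every neighborhood of the origin.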
An example of a Darboux function that is nowhere continuous is the Conway base 13 function.
Darboux functions are a quite general class of functions. It turns out that any real-valued function ƒ on the real line can be written as the sum of two Darboux functions. This implies in particular that the class of Darboux functions is not closed under addition.
A strongly Darboux function is one for which the image of every (non-empty) open interval is the whole real line. The Conway base 13 function is again an example.
Notes
External links
Theorems in calculus
Theory of continuous functions
Theorems in real analysis
Articles containing proofs | Darboux's theorem (analysis) | [
"Mathematics"
] | 764 | [
"Theorems in mathematical analysis",
"Theorems in calculus",
"Calculus",
"Theory of continuous functions",
"Theorems in real analysis",
"Topology",
"Articles containing proofs"
] |
4,131,413 | https://en.wikipedia.org/wiki/Gidazepam | Gidazepam, also known as hydazepam or hidazepam, is a drug which is an atypical benzodiazepine derivative, developed in the Soviet Union. It is a selectively anxiolytic benzodiazepine. It also has therapeutic value in the management of certain cardiovascular disorders.
Pharmacology
Gidazepam and several of its analogs, in contrast to other benzodiazepines, are comparatively more selective agonists of TSPO (formerly the peripheral benzodiazepine receptor) than of the benzodiazepine receptor.
Gidazepam acts as a prodrug to its active metabolite 7-bromo-2,3-dihydro-5-phenyl-1H-1,4-benzodiazepin-2-one (desalkylgidazepam or bromo-nordazepam). Its anxiolytic effects can take several hours to manifest, presumably due to its slow metabolism (half-life 87 hours). The onset and intensity of anxiolytic effects correlate with blood levels of desalkylgidazepam.
See also
Phenazepam—another benzodiazepine widely used in Russia and other CIS countries
Cinazepam
Cloxazolam
List of Russian drugs
References
Benzodiazepines
Organobromides
Lactams
Hydrazides
Russian drugs
Anxiolytics
Prodrugs
Drugs in the Soviet Union | Gidazepam | [
"Chemistry"
] | 323 | [
"Chemicals in medicine",
"Prodrugs"
] |
4,133,201 | https://en.wikipedia.org/wiki/Whitehead%27s%20theory%20of%20gravitation | In theoretical physics, Whitehead's theory of gravitation was introduced by the mathematician and philosopher Alfred North Whitehead in 1922. While never broadly accepted, at one time it was a scientifically plausible alternative to general relativity. However, after further experimental and theoretical consideration, the theory is now generally regarded as obsolete.
Principal features
Whitehead developed his theory of gravitation by considering how the world line of a particle is affected by those of nearby particles. He arrived at an expression for what he called the "potential impetus" of one particle due to another, which modified Newton's law of universal gravitation by including a time delay for the propagation of gravitational influences. Whitehead's formula for the potential impetus involves the Minkowski metric, which is used to determine which events are causally related and to calculate how gravitational influences are delayed by distance. The potential impetus calculated by means of the Minkowski metric is then used to compute a physical spacetime metric $g$, and the motion of a test particle is given by a geodesic with respect to the metric $g$. Unlike the Einstein field equations, Whitehead's theory is linear, in that the superposition of two solutions is again a solution. This implies that Einstein's and Whitehead's theories will generally make different predictions when more than two massive bodies are involved.
Following the notation of Chiang and Hamity, introduce a Minkowski spacetime with metric tensor $\eta_{\mu\nu}$, where the indices run from 0 through 3, and let the masses of a set of gravitating particles be $m_a$.
The Minkowski arc length of particle is denoted by . Consider an event with co-ordinates . A retarded event with co-ordinates on the world-line of particle is defined by the relations . The unit tangent vector at is . We also need the invariants . Then, a gravitational tensor potential is defined by
where
It is the metric $g$ that appears in the geodesic equation.
Experimental tests
Whitehead's theory reproduces the Schwarzschild metric and makes the same predictions as general relativity regarding the four classical solar system tests (gravitational red shift, light bending, perihelion shift, Shapiro time delay), and was regarded as a viable competitor of general relativity for several decades. In 1971, Will argued that Whitehead's theory predicts a periodic variation in local gravitational acceleration 200 times larger than the bound established by experiment. Misner, Thorne and Wheeler's textbook Gravitation states that Will demonstrated "Whitehead's theory predicts a time-dependence for the ebb and flow of ocean tides that is completely contradicted by everyday experience".
Fowler argued that different tidal predictions can be obtained by a more realistic model of the galaxy. Reinhardt and Rosenblum claimed that the disproof of Whitehead's theory by tidal effects was "unsubstantiated". Chiang and Hamity argued that Reinhardt and Rosenblum's approach "does not provide a unique space-time geometry for a general gravitation system", and they confirmed Will's calculations by a different method. In 1989, a modification of Whitehead's theory was proposed that eliminated the unobserved sidereal tide effects. However, the modified theory did not allow the existence of black holes.
Subrahmanyan Chandrasekhar wrote, "Whitehead's philosophical acumen has not served him well in his criticisms of Einstein."
Philosophical disputes
Clifford M. Will argued that Whitehead's theory features a prior geometry. Under Will's presentation (which was inspired by John Lighton Synge's interpretation of the theory), Whitehead's theory has the curious feature that electromagnetic waves propagate along null geodesics of the physical spacetime (as defined by the metric determined from geometrical measurements and timing experiments), while gravitational waves propagate along null geodesics of a flat background represented by the metric tensor of Minkowski spacetime. The gravitational potential can be expressed entirely in terms of waves retarded along the background metric, like the Liénard–Wiechert potential in electromagnetic theory.
A cosmological constant can be introduced by changing the background metric to a de Sitter or anti-de Sitter metric. This was first suggested by G. Temple in 1923. Temple's suggestions on how to do this were criticized by C. B. Rayner in 1955.
Will's work was disputed by Dean R. Fowler, who argued that Will's presentation of Whitehead's theory contradicts Whitehead's philosophy of nature. For Whitehead, the geometric structure of nature grows out of the relations among what he termed "actual occasions". Fowler claimed that a philosophically consistent interpretation of Whitehead's theory makes it an alternate, mathematically equivalent, presentation of general relativity. In turn, Jonathan Bain argued that Fowler's criticism of Will was in error.
See also
Classical theories of gravitation
Eddington–Finkelstein coordinates
References
Further reading
Alfred North Whitehead
Obsolete theories in physics
Theories of gravity | Whitehead's theory of gravitation | [
"Physics"
] | 1,012 | [
"Theories of gravity",
"Theoretical physics",
"Obsolete theories in physics"
] |
4,133,969 | https://en.wikipedia.org/wiki/Ethyl%20loflazepate | Ethyl loflazepate (marketed under the brand names Meilax, Ronlax and Victan) is a drug which is a benzodiazepine derivative. It possesses anxiolytic, anticonvulsant, sedative and skeletal muscle relaxant properties. In animal studies it was found to have low toxicity, although in rats evidence of pulmonary phospholipidosis occurred with pulmonary foam cells developing with long-term use of very high doses. Its elimination half-life is 51–103 hours. Its mechanism of action is similar to other benzodiazepines. Ethyl loflazepate also produces an active metabolite which is stronger than the parent compound. Ethyl loflazepate was designed to be a prodrug for descarboxyloflazepate, its active metabolite. It is the active metabolite which is responsible for most of the pharmacological effects rather than ethyl loflazepate. The main metabolites of ethyl loflazepate are descarbethoxyloflazepate, loflazepate and 3-hydroxydescarbethoxyloflazepate. Accumulation of the active metabolites of ethyl loflazepate are not affected by those with kidney failure or impairment. The symptoms of an overdose of ethyl loflazepate include sleepiness, agitation and ataxia. Hypotonia may also occur in severe cases. These symptoms occur much more frequently and severely in children. Death from therapeutic maintenance doses of ethyl loflazepate taken for 2 – 3 weeks has been reported in 3 elderly patients. The cause of death was asphyxia due to benzodiazepine toxicity. High doses of the antidepressant fluvoxamine may potentiate the adverse effects of ethyl loflazepate.
Ethyl loflazepate is commercialized in Mexico under the trade name Victan. It is officially approved for the following conditions:
Anxiety
Post-trauma anxiety
Anxiety associated with severe neuropathic pain
Generalized anxiety disorder (GAD)
Obsessive–compulsive disorder
Panic attack
Delirium tremens
See also
Benzodiazepine
References
External links
Benzodiazepines
Chloroarenes
Ethyl esters
Hypnotics
Lactams
2-Fluorophenyl compounds | Ethyl loflazepate | [
"Biology"
] | 499 | [
"Hypnotics",
"Behavior",
"Sleep"
] |
4,134,885 | https://en.wikipedia.org/wiki/Contact%20immunity | Contact immunity is the property of some vaccines, where a vaccinated individual can confer immunity upon unimmunized individuals through contact with bodily fluids or excrement. In other words, if person "A" has been vaccinated for virus X and person "B" has not, person "B" can receive immunity to virus X just by coming into contact with person "A". The term was coined by Romanian physician Ioan Cantacuzino.
The potential for contact immunity exists primarily in "live" or attenuated vaccines. Vaccination with a live, but attenuated, virus can produce immunity to more dangerous forms of the virus. These attenuated viruses produce little or no illness in most people. However, the live virus multiplies briefly, may be shed in body fluids or excrement, and can be contracted by another person. If this contact produces immunity and carries no notable risk, it benefits an additional person, and further increases the immunity of the group.
The most prominent example of contact immunity was the oral polio vaccine (OPV). This live, attenuated polio vaccine was widely used in the US between 1960 and 1990; it continues to be used in polio eradication programs in developing countries because of its low cost and ease of administration. It is popular, in part, because it is capable of contact immunity. Recently immunized children "shed" live virus in their feces for a few days after immunization. About 25 percent of people coming into contact with someone immunized with OPV gained protection from polio through this form of contact immunity. Although contact immunity is an advantage of OPV, the risk of vaccine-associated paralytic poliomyelitis—affecting 1 child per 2.4 million OPV doses administered—led the Centers for Disease Control and Prevention (CDC) to cease recommending its use in the US as of January 1, 2010, in favor of inactivated poliovirus vaccine (IPV). The CDC continues to recommend OPV over IPV for global polio eradication activities.
The main drawback of live virus–based vaccines is that a few people who are vaccinated or exposed to those who have been vaccinated may develop severe disease. Those with defective immune function are the most vulnerable. In the case of OPV, an average of eight to nine adults contracted paralytic polio from contact with a recently immunized child each year. As the risk of catching polio in the Western Hemisphere diminished, the risk of contact infection with the attenuated polio virus outweighed the advantages of OPV, leading the CDC to recommend its discontinuation.
Contact immunity differs from herd immunity, a different type of group protection, in which risk for unimmunized individuals is reduced if they are surrounded by immunized individuals who are unlikely to contract, harbor, or transmit the disease.
References
Epidemiology
Polio
Vaccination | Contact immunity | [
"Biology",
"Environmental_science"
] | 612 | [
"Epidemiology",
"Vaccination",
"Environmental social science"
] |
4,135,000 | https://en.wikipedia.org/wiki/Laurel%20wreath | A laurel wreath is a symbol of triumph, a wreath made of connected branches and leaves of the bay laurel (Laurus nobilis), an aromatic broadleaf evergreen. It was also later made from spineless butcher's broom (Ruscus hypoglossum) or cherry laurel (Prunus laurocerasus). It is worn as a chaplet around the head, or as a garland around the neck.
Wreaths and crowns in antiquity, including the laurel wreath, trace back to Ancient Greece. In Greek mythology, the god Apollo, who is patron of lyrical poetry, musical performance
and skill-based athletics, is conventionally depicted wearing a laurel wreath on his head in all three roles. Wreaths were awarded to victors in athletic competitions, including the ancient Olympics; for victors in athletics at Olympia they were made of wild olive, known as "kotinos" – and the same for winners of musical and poetic competitions. In Rome they were symbols of martial victory, crowning a successful commander during his triumph. Whereas ancient laurel wreaths are most often depicted as a horseshoe shape, modern versions are usually complete rings.
In common modern idiomatic usage, a laurel wreath or "crown" refers to a victory. The expression "resting on one's laurels" refers to someone relying entirely on long-past successes for continued fame or recognition, whereas to "look to one's laurels" means to be careful of losing rank to competition.
Background
Apollo, the patron of sport, is associated with the wearing of a laurel wreath. This association arose from the ancient Greek myth of Apollo and Daphne. Apollo mocked the god of love, Eros (Cupid), for his use of bow and arrow, since Apollo is also patron of archery. The insulted Eros then prepared two arrows—one of gold and one of lead. He shot Apollo with the gold arrow, instilling in the god a passionate love for the river nymph Daphne. He shot Daphne with the lead arrow, instilling in her a hatred of Apollo. Apollo pursued Daphne until she begged to be free of him and was turned into a laurel tree.
Apollo vowed to honor Daphne forever and used his powers of eternal youth and immortality to render the laurel tree evergreen. Apollo then crafted himself a wreath out of the laurel branches and turned Daphne into a cultural symbol for him and other poets and musicians.
Academic use
In some countries, the laurel wreath is used as a symbol of the master's degree. The wreath is given to young masters at the university graduation ceremony. The word "laureate" in 'poet laureate' refers to the laurel wreath. For example, the greatly admired medieval Florentine poet and philosopher Dante Alighieri is often represented in paintings and sculpture wearing a laurel wreath.
In Italy, the term laureato is used in academia to refer to any student who has graduated. Right after the graduation ceremony, or laurea in Italian, the student receives a laurel wreath to wear for the rest of the day. This tradition originated at the University of Padua and has spread in the last two centuries to all Italian universities.
At Connecticut College in the United States, members of the junior class carry a laurel chain, which the seniors pass through during commencement. It represents nature and the continuation of life from year to year. Immediately following commencement, the junior girls spell out their class year with the laurels, symbolizing that they have officially become seniors; the cycle repeats itself the following spring.
At Mount Holyoke College in South Hadley, Massachusetts, USA, laurel has been a fixture of commencement traditions since 1900, when graduating students carried or wore laurel wreaths. In 1902, the chain of mountain laurel was introduced; since then, tradition has been for seniors to parade around the campus, carrying and linked by the chain. The mountain laurel represents the bay laurel used by the Romans in wreaths and crowns of honor.
At Reed College in Portland, Oregon, United States, members of the senior class receive laurel wreaths upon submitting their senior thesis in May. The tradition stems from the use of laurel wreaths in athletic competitions; the seniors have "crossed the finish line", so to speak.
At St. Mark's School in Southborough, Massachusetts, students who successfully complete three years of one classical language and two of the other earn the distinction of the Classics Diploma and the honor of wearing a laurel wreath on Prize Day.
In Sweden, those receiving a doctorate or an honorary doctorate in subjects traditionally falling within the Faculty of Philosophy (meaning philosophy, languages, arts, history and social sciences, as well as the natural sciences), receive a laurel wreath during the ceremony of conferral of the degree.
In Finland, in University of Helsinki a laurel wreath is given during the ceremony of conferral for master's degree.
Architectural and decorative arts motif
The laurel wreath is a common motif in architecture, furniture, and textiles. The laurel wreath is seen carved in the stone and decorative plaster works of Robert Adam, and in Federal, Regency, Directoire, and Beaux-Arts periods of architecture. In decorative arts, especially during the Empire period, the laurel wreath is seen woven in textiles, inlaid in marquetry, and applied to furniture in the form of gilded brass mounts.
Alfa Romeo added a laurel wreath to their logo after they won the inaugural Automobile World Championship in 1925 with the P2 racing car.
As used in heraldry
Laurel wreaths are commonly used in heraldry. They may be used as a charge on the shield, around the shield, or on top of it in an annular form.
Wreaths are a form of headgear akin to circlets.
In heraldry, a twisted band of cloth holds a mantling onto a helmet. This type of charge is called a "torse". A wreath is a circlet of foliage, usually with leaves, but sometimes with flowers. Wreaths may also be made from oak leaves, flowers, holly and rosemary; and are different from chaplets. While usually annular, they may also be penannular like a brooch.
In the Society for Creative Anachronism, laurel wreaths are reserved for use in the arms of a territorial branch, which are required to include one or more.
Wreath of service
The "wreath of service" is located on all commissioner position patches in the Boy Scouts of America. This is a symbol for the service rendered to units and the continued partnership between volunteers and professional Scouter. The wreath of service represents commitment to program and unit service.
Further reading
See also
Footnotes
References
External links
Wreaths (attire)
Visual motifs
Architectural elements
Headgear in heraldry
Roman-era clothing
Plants in culture | Laurel wreath | [
"Mathematics",
"Technology",
"Engineering"
] | 1,368 | [
"Visual motifs",
"Building engineering",
"Symbols",
"Architectural elements",
"Components",
"Architecture"
] |
4,135,795 | https://en.wikipedia.org/wiki/Mitochondrial%20permeability%20transition%20pore | The mitochondrial permeability transition pore (mPTP or MPTP; also referred to as PTP, mTP or MTP) is a protein that is formed in the inner membrane of the mitochondria under certain pathological conditions such as traumatic brain injury and stroke. Its opening increases the permeability of the mitochondrial membranes to molecules of less than 1500 daltons in molecular weight. Induction of the permeability transition pore, mitochondrial membrane permeability transition (mPT or MPT), can lead to mitochondrial swelling and cell death through apoptosis or necrosis depending on the particular biological setting.
Roles in pathology
The MPTP was originally discovered by Haworth and Hunter in 1979 and has been found to be involved in neurodegeneration, hepatotoxicity from Reye-related agents, cardiac necrosis and nervous and muscular dystrophies among other deleterious events inducing cell damage and death.
MPT is one of the major causes of cell death in a variety of conditions. For example, it is key in neuronal cell death in excitotoxicity, in which overactivation of glutamate receptors causes excessive calcium entry into the cell. MPT also appears to play a key role in damage caused by ischemia, as occurs in a heart attack and stroke. However, research has shown that the MPT pore remains closed during ischemia, but opens once the tissues are reperfused with blood after the ischemic period, playing a role in reperfusion injury.
MPT is also thought to underlie the cell death induced by Reye's syndrome, since chemicals that can cause the syndrome, like salicylate and valproate, cause MPT. MPT may also play a role in mitochondrial autophagy. Cells exposed to toxic amounts of Ca2+ ionophores also undergo MPT and death by necrosis.
Structure
While the MPT modulation has been widely studied, little is known about its structure. Initial experiments by Szabó and Zoratti proposed that the MPT may comprise Voltage Dependent Anion Channel (VDAC) molecules. Nevertheless, this hypothesis was shown to be incorrect, as VDAC−/− mitochondria were still capable of undergoing MPT. A further hypothesis from Halestrap's group suggested that the MPT was formed by the inner-membrane Adenine Nucleotide Translocase (ANT), but genetic ablation of this protein still led to MPT onset. Thus, the only MPTP components identified so far are the TSPO (previously known as the peripheral benzodiazepine receptor) located in the mitochondrial outer membrane and cyclophilin-D in the mitochondrial matrix. Mice lacking the gene for cyclophilin-D develop normally, but their cells do not undergo Cyclosporin A-sensitive MPT, and they are resistant to necrotic death from ischemia or overload of Ca2+ or free radicals. However, these cells do die in response to stimuli that kill cells through apoptosis, suggesting that MPT does not control cell death by apoptosis.
MPTP blockers
Agents that transiently block MPT include the immune suppressant cyclosporin A (CsA); N-methyl-Val-4-cyclosporin A (MeValCsA), a non-immunosuppressant derivative of CsA; another non-immunosuppressive agent, NIM811; 2-aminoethoxydiphenyl borate (2-APB); bongkrekic acid; and alisporivir (also known as Debio-025). TRO40303 is a newly synthesised MPT blocker developed by the company Trophos that is currently in a Phase I clinical trial.
Factors in MPT induction
Various factors enhance the likelihood of MPTP opening. In some mitochondria, such as those in the central nervous system, high levels of Ca2+ within mitochondria can cause the MPT pore to open. This is possibly because Ca2+ binds to and activates Ca2+ binding sites on the matrix side of the MPTP.
MPT induction is also due to the dissipation of the difference in voltage across the inner mitochondrial membrane (known as transmembrane potential, or Δψ).
In neurons and astrocytes, the contribution of membrane potential to MPT induction is complex.
The presence of free radicals, another result of excessive intracellular calcium concentrations, can also cause the MPT pore to open.
Other factors that increase the likelihood that the MPTP will be induced include the presence of certain fatty acids, and inorganic phosphate. However, these factors cannot open the pore without Ca2+, though at high enough concentrations, Ca2+ alone can induce MPT.
Stress in the endoplasmic reticulum can be a factor in triggering MPT.
Conditions that cause the pore to close or remain closed include acidic conditions, high concentrations of ADP, high concentrations of ATP, and high concentrations of NADH. Divalent cations like Mg2+ also inhibit MPT, because they can compete with Ca2+ for the Ca2+ binding sites on the matrix and/or cytoplasmic side of the MPTP.
Effects
Multiple studies have found the MPT to be a key factor in the damage to neurons caused by excitotoxicity.
The induction of MPT, which increases mitochondrial membrane permeability, causes mitochondria to become further depolarized, meaning that Δψ is abolished. When Δψ is lost, protons and some molecules are able to flow across the outer mitochondrial membrane uninhibited.
Loss of Δψ interferes with the production of adenosine triphosphate (ATP), the cell's main source of energy, because mitochondria must have an electrochemical gradient to provide the driving force for ATP production.
In cell damage resulting from conditions such as neurodegenerative diseases and head injury, opening of the mitochondrial permeability transition pore can greatly reduce ATP production, and can cause ATP synthase to begin hydrolysing, rather than producing, ATP. This produces an energy deficit in the cell, just when it most needs ATP to fuel activity of ion pumps.
MPT also allows Ca2+ to leave the mitochondrion, which can place further stress on nearby mitochondria, and which can activate harmful calcium-dependent proteases such as calpain.
Reactive oxygen species (ROS) are also produced as a result of opening the MPT pore. MPT can allow antioxidant molecules such as glutathione to exit mitochondria, reducing the organelles' ability to neutralize ROS. In addition, the electron transport chain (ETC) may produce more free radicals due to loss of components of the ETC, such as cytochrome c, through the MPTP. Loss of ETC components can lead to escape of electrons from the chain, which can then reduce molecules and form free radicals.
MPT causes mitochondria to become permeable to molecules smaller than 1.5 kDa, which, once inside, draw water in by increasing the organelle's osmolar load. This event may lead mitochondria to swell and may cause the outer membrane to rupture, releasing cytochrome c. Cytochrome c can in turn cause the cell to go through apoptosis ("commit suicide") by activating pro-apoptotic factors. Other researchers contend that it is not mitochondrial membrane rupture that leads to cytochrome c release, but rather another mechanism, such as translocation of the molecule through channels in the outer membrane, which does not involve the MPTP.
Much research has found that the fate of the cell after an insult depends on the extent of MPT. If MPT occurs to only a slight extent, the cell may recover; if it occurs to a greater extent, the cell may undergo apoptosis; and if it occurs to an even larger degree, the cell is likely to undergo necrotic cell death.
Possible evolutionary purpose
Although the MPTP has been studied mainly in mitochondria from mammalian sources, mitochondria from diverse species also undergo a similar transition. While its occurrence can be easily detected, its purpose still remains elusive. Some have speculated that the regulated opening of the MPT pore may minimize cell injury by causing ROS-producing mitochondria to undergo selective lysosome-dependent mitophagy during nutrient starvation conditions. Under severe stress/pathologic conditions, MPTP opening would trigger injured cell death mainly through necrosis.
There is controversy about the question of whether the MPTP is able to exist in a harmless, "low-conductance" state. This low-conductance state would not induce MPT and would allow certain molecules and ions to cross the mitochondrial membranes. The low-conductance state may allow small ions like Ca2+ to leave mitochondria quickly, in order to aid in the cycling of Ca2+ in healthy cells. If this is the case, MPT may be a harmful side effect of abnormal activity of a usually beneficial MPTP.
MPTP has been detected in mitochondria from plants, yeasts, such as Saccharomyces cerevisiae, birds, such as guinea fowl and primitive vertebrates such as the Baltic lamprey. While the permeability transition is evident in mitochondria from these sources, its sensitivity to its classic modulators may differ when compared with mammalian mitochondria. Nevertheless, CsA-insensitive MPTP can be triggered in mammalian mitochondria given appropriate experimental conditions strongly suggesting this event may be a conserved characteristic throughout the eukaryotic domain.
See also
Crista
NMDA receptor
NMDA receptor antagonist
References
External links
Mitochondrial permeability transition pore: an enigmatic gatekeeper (2012) NHS&T, Vol 1(3):47-51
Mitochondrial Permeability Transition (PT) from Celldeath.de. Accessed January 1, 2007.
Cellular respiration
Neurotrauma
Mitochondria | Mitochondrial permeability transition pore | [
"Chemistry",
"Biology"
] | 2,123 | [
"Biochemistry",
"Mitochondria",
"Cellular respiration",
"Metabolism"
] |
4,136,136 | https://en.wikipedia.org/wiki/Electronic%20Cultural%20Atlas%20Initiative | The Electronic Cultural Atlas Initiative (ECAI) is a digital humanities initiative involving numerous academic professors and institutions around the world with the stated goal of creating a networked digital atlas by creating tools and setting standards for dynamic, digital maps.
ECAI was established in 1997 by Emeritus Prof. Lewis Lancaster of the University of California, Berkeley, and has held two meetings per year in most years from 1998 to 2009 (ongoing), one of which is often held in conjunction with the Pacific Neighbourhood Consortium. The initiative is based at UC Berkeley.
The ECAI 'clearinghouse' of distributed digital datasets was developed from 1998 by the Archaeological Computing Laboratory at the University of Sydney, and uses the ACL's TimeMap software.
See also
GIS
Wikimaps
External links
http://www.ecai.org/
Historical Geographic Information Systems Online Forum on Google
Cartography organizations
Geographic information systems organizations
Digital humanities
Historical geographic information systems
University of California, Berkeley
Research institutes in the San Francisco Bay Area
Digital humanities projects
1997 establishments in California | Electronic Cultural Atlas Initiative | [
"Technology"
] | 208 | [
"Digital humanities",
"Computing and society"
] |
4,137,502 | https://en.wikipedia.org/wiki/Bartoli%20indole%20synthesis | The Bartoli indole synthesis (also called the Bartoli reaction) is the chemical reaction of ortho-substituted nitroarenes and nitrosoarenes with vinyl Grignard reagents to form substituted indoles.
The reaction is often unsuccessful without substitution ortho to the nitro group, with bulkier ortho substituents usually resulting in higher yields for the reaction. The steric bulk of the ortho group assists in the [3,3]-sigmatropic rearrangement required for product formation. Three equivalents of the vinyl Grignard reagent are necessary for the reaction to achieve full conversion when performed on nitroarenes, and only two equivalents when performed on nitrosoarenes.
This method has become one of the shortest and most flexible routes to 7-substituted indoles. The Leimgruber-Batcho indole synthesis gives similar flexibility and regiospecificity to indole derivatives. One advantage of the Bartoli indole synthesis is the ability to produce indoles substituted on both the carbocyclic ring and the pyrrole ring, which is difficult to do with the Leimgruber-Batcho indole synthesis.
Reaction mechanism
The reaction mechanism of the Bartoli indole synthesis is illustrated below using o-nitrotoluene (1) and propenyl Grignard (2) to form 3,7-dimethylindole (13).
The mechanism begins by the addition of the Grignard reagent (2) onto the nitroarene (1) to form intermediate 3. Intermediate 3 spontaneously decomposes to form a nitrosoarene (4) and a magnesium salt (5). (Upon reaction workup, the magnesium salt will liberate a carbonyl compound (6).) Reaction of the nitrosoarene (4) with a second equivalent of the Grignard reagent (2) forms intermediate 7. The steric bulk of the ortho group causes a [3,3]-sigmatropic rearrangement forming the intermediate 8. Cyclization and tautomerization give intermediate 10, which will react with a third equivalent of the Grignard reagent (2) to give a dimagnesium indole salt (12). Reaction workup eliminates water and gives the final desired indole (13).
Therefore, three equivalents of the Grignard reagent are necessary, as one equivalent becomes carbonyl compound 6, one equivalent deprotonates 10 forming an alkene (11), and one equivalent gets incorporated into the indole ring.
The nitroso intermediate (4) has been isolated from the reaction. Additionally, reaction of the nitroso intermediate (4) with two equivalents of the Grignard reagent produces the expected indole.
The scope of the reaction includes substituted pyridines, which can be used to make 4-azaindoles and 6-azaindoles.
Variations
Dobbs modification
Adrian Dobbs greatly enhanced the scope of the Bartoli indole synthesis by using an ortho-bromine as a directing group, which is subsequently removed by AIBN and tributyltin hydride.
The synthesis of 4-methylindole (3) highlights the ability of this technique to produce highly substituted indoles.
See also
Fischer indole synthesis
References
Indole forming reactions
Carbon-heteroatom bond forming reactions
Name reactions | Bartoli indole synthesis | [
"Chemistry"
] | 731 | [
"Name reactions",
"Carbon-heteroatom bond forming reactions",
"Ring forming reactions",
"Organic reactions"
] |
4,138,124 | https://en.wikipedia.org/wiki/Southern%20Hemisphere%20Auroral%20Radar%20Experiment | The Southern Hemisphere Auroral Radar Experiment (SHARE), started in 1988, is an Antarctic research project designed to observe velocities and irregularities of electric fields in the ionosphere and magnetosphere. It is operated jointly by the University of Natal, Potchefstroom University, the British Antarctic Survey and Johns Hopkins University and operates out of the British Halley Station, the South African SANAE IV Station and the Japanese Showa Station.
Using a total of 16 antennas, each mounted on a 12 m tower and radiating on fixed frequencies in the 8–20 MHz range, SHARE transmits a radio frequency pulse into the upper atmosphere every two minutes. The three stations' ranges overlap to cover most of the Antarctic continent.
SHARE is part of the international Super Dual Auroral Radar Network (SuperDARN). It supplies valuable data to track space weather.
Meteorology research and field projects
Radio frequency propagation
Plasma physics facilities
Ground radars
Astronomical experiments in the Antarctic
1988 establishments in Antarctica | Southern Hemisphere Auroral Radar Experiment | [
"Physics"
] | 195 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Plasma physics",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves",
"Plasma physics stubs",
"Plasma physics facilities"
] |
4,138,437 | https://en.wikipedia.org/wiki/Nujol | Nujol is a brand of mineral oil made by Plough Inc. (CAS number 8012-95-1, density 0.838 g/mL at 25 °C) that is used in infrared spectroscopy. It is a heavy paraffin oil, so it is chemically inert and has a relatively uncomplicated IR spectrum, with major peaks between 2950–2800, 1465–1450, and 1380–1300 cm−1. The empirical formula of Nujol is hard to determine exactly because it is a mixture, but it is essentially the alkane formula CnH2n+2 where n is very large.
To obtain an IR spectrum of a solid, a sample is combined with Nujol in a mortar and pestle or some other device to make a mull (a very thick suspension), and is usually sandwiched between potassium chloride or sodium chloride plates before being placed in the spectrometer. For very reactive samples, the layer of Nujol can provide a protective coating, preventing sample decomposition during acquisition of the IR spectrum. When preparing the sample, it is important to keep it from being saturated with Nujol; otherwise the spectrum will be erroneous, since the Nujol peaks will dominate and mask the actual sample's peaks.
References
External links
MSDS data sheet
Nujol's historic use as an alternative medicine
CAS Number for Nujol
Hydrocarbon solvents
Infrared spectroscopy
Alkanes | Nujol | [
"Physics",
"Chemistry",
"Astronomy"
] | 291 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Astronomy stubs",
"Organic compounds",
"Alkanes",
"Infrared spectroscopy",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs",
"Organic chemistry stubs"
] |
1,532,037 | https://en.wikipedia.org/wiki/Materials%20recovery%20facility | A materials recovery facility, materials reclamation facility, materials recycling facility or multi re-use facility (MRF, pronounced "murf") is a specialized waste sorting and recycling system that receives, separates and prepares recyclable materials for marketing to end-user manufacturers. Generally, the main recyclable materials include ferrous metal, non-ferrous metal, plastics, paper, glass. Organic food waste is used to assist anaerobic digestion or composting. Inorganic inert waste is used to make building materials. Non-recyclable high-calorific-value waste is used to make RDF (Refuse Derived Fuel) and SRF (Solid Recovered Fuel).
Industry and locations
In the United States, there are over 300 materials recovery facilities. The total market size is estimated at $6.6B as of 2019.
As of 2016, the top 75 were headed by Sims Municipal Recycling out of Brooklyn, New York. Waste Management operated 95 MRF facilities total, with 26 in the top 75. ReCommunity operated 6 in the top 75. Republic Services operated 6 in the top 75. Waste Connections operated 4 in the top 75.
Business economics
In 2018, a survey in the Northeast United States found that the processing cost per ton was $82, versus a value of around $45 per ton. Composition of the ton included 28% mixed paper and 24% old corrugated containers (OCC).
Prices for OCC declined into 2019. Three paper mill companies have announced initiatives to use more recycled fiber.
Glass recycling is expensive for these facilities, but a study estimated that costs could be cut significantly by investments in improved glass processing. In Texas, Austin and Houston have facilities which have invested in glass recycling, built and operated by Balcones Recycling and FCC Environment, respectively.
Robots have spread across the industry, helping with sorting.
Process
Waste enters a MRF when it is dumped onto the tipping floor by the collection trucks. The materials are then scooped up and placed onto conveyor belts, which transport them to the pre-sorting area. Here, human workers remove some items that are not recyclable, which will either be sent to a landfill or an incinerator. Between 5 and 45% of "dirty" MRF material is recovered. Potential hazards are also removed, such as lithium batteries, propane tanks, and aerosol cans, which can create fires. Materials like plastic bags and hoses, which can entangle the recycling equipment, are also removed. From there, materials are transported via another conveyor belt to the disk screen, which separates wide and flat materials like flattened cardboard boxes from items like cans, jars, paper, and bottles. Flattened boxes ride across the disk screen to the other side, while all other materials fall below, where paper is separated from the waste stream with a blower. The stream of cardboard and paper is overseen by more human workers, who ensure no plastic, metal, or glass is present. Newer or retrofitted MRFs may use industrial robots instead of humans for pre-sorting and for quality control. However, complete removal of human labor from the sortation process is unlikely for the foreseeable future, as it would require replicating the dexterity of the human hand and nervous system in removing every type of contaminant within a material stream. The technical limitations involve advanced concepts in mechatronics and computer science: a sufficiently dexterous robot hand would need to be designed, together with a highly flexible algorithm that can generate a precise movement plan within the time constraints of the system (by one highly approximate estimate, the roughly 30,000 lines of code needed to do this on a modern processor would introduce too long a delay to be effective on a sortation line). In other words, one would need to search an encyclopedia of such robotic hand motions for every configuration of waste on every pick, and this may be computationally insurmountable, even with quantum computing, as every conditional would need to be checked on every iteration.
Metal is separated from plastics and glass first with electromagnets, which removes ferrous metals. Non-ferrous metals like aluminum are then removed with eddy current separators.
The glass and plastic streams are separated by further disk screens. The glass is crushed into cullet for ease of transportation. The plastics are then separated by polymer type, often using infrared technology (optical sorting). Infrared light reflects differently off different polymer types; once identified, a jet of air shoots the plastic into the appropriate bin. MRFs might only collect and recycle a few polymers of plastic, sending the rest to landfills or incinerators. The separated materials are baled and sent to the shipping dock of the facility.
Types
Clean
A clean MRF accepts recyclable materials that have already been separated at the source from municipal solid waste generated by either residential or commercial sources. There are a variety of clean MRFs. The most common are single stream where all recyclable material is mixed, or dual stream MRFs, where source-separated recyclables are delivered in a mixed container stream (typically glass, ferrous metal, aluminum and other non-ferrous metals, PET [No.1] and HDPE [No.2] plastics) and a mixed paper stream including corrugated cardboard boxes, newspapers, magazines, office paper and junk mail. Material is sorted to specifications, then baled, shredded, crushed, compacted, or otherwise prepared for shipment to market.
Mixed-waste processing facility (MWPF) / Dirty MRF
A mixed-waste processing system, sometimes referred to as a dirty MRF, accepts a mixed solid waste stream and then proceeds to separate out designated recyclable materials through a combination of manual and mechanical sorting. The sorted recyclable materials may undergo further processing required to meet technical specifications established by end-markets, while the balance of the mixed waste stream is sent to a disposal facility such as a landfill. Today, MWPFs are attracting renewed interest as a way to address low participation rates for source-separated recycling collection systems and to prepare fuel products and/or feedstocks for conversion technologies. MWPFs can give communities the opportunity to recycle at much higher rates than has been demonstrated by curbside or other waste collection systems. Advances in technology make today's MWPF different from, and in many respects better than, older versions.
Wet MRF
Around 2004, new mechanical biological treatment technologies were beginning to utilise wet MRFs. These combine a dirty MRF with water, which acts to densify, separate and clean the output streams. It also hydrocrushes and dissolves biodegradable organics in solution to make them suitable for anaerobic digestion.
History
In the United States, modern MRFs began in the 1970s. Peter Karter established Resource Recovery Systems, Inc. in Branford, Connecticut, the "first materials recovery facility (MRF)" in the US.
See also
Cradle-to-cradle design
Curbside collection
List of waste treatment technologies
List of waste types
Mechanical biological treatment
Resource recovery
Transfer station (waste management)
Waste characterization
Waste sorting
References
External links
"Coming soon! van der Linde's amazing recycling machine"
"Materials Recovery Facility Solutions"
The Role of MRFS in Modern Day Waste Management
Environmental engineering
Recycling
Waste treatment technology
Articles containing video clips | Materials recovery facility | [
"Chemistry",
"Engineering"
] | 1,501 | [
"Water treatment",
"Chemical engineering",
"Civil engineering",
"Environmental engineering",
"Waste treatment technology"
] |
1,532,606 | https://en.wikipedia.org/wiki/Grothendieck%E2%80%93Riemann%E2%80%93Roch%20theorem | In mathematics, specifically in algebraic geometry, the Grothendieck–Riemann–Roch theorem is a far-reaching result on coherent cohomology. It is a generalisation of the Hirzebruch–Riemann–Roch theorem, about complex manifolds, which is itself a generalisation of the classical Riemann–Roch theorem for line bundles on compact Riemann surfaces.
Riemann–Roch type theorems relate Euler characteristics of the cohomology of a vector bundle with their topological degrees, or more generally their characteristic classes in (co)homology or algebraic analogues thereof. The classical Riemann–Roch theorem does this for curves and line bundles, whereas the Hirzebruch–Riemann–Roch theorem generalises this to vector bundles over manifolds. The Grothendieck–Riemann–Roch theorem sets both theorems in a relative situation of a morphism between two manifolds (or more general schemes) and changes the theorem from a statement about a single bundle, to one applying to chain complexes of sheaves.
The theorem has been very influential, not least for the development of the Atiyah–Singer index theorem. Conversely, complex analytic analogues of the Grothendieck–Riemann–Roch theorem can be proved using the index theorem for families. Alexander Grothendieck gave a first proof in a 1957 manuscript, later published. Armand Borel and Jean-Pierre Serre wrote up and published Grothendieck's proof in 1958. Later, Grothendieck and his collaborators simplified and generalized the proof.
Formulation
Let X be a smooth quasi-projective scheme over a field. Under these assumptions, the Grothendieck group of bounded complexes of coherent sheaves is canonically isomorphic to the Grothendieck group of bounded complexes of finite-rank vector bundles. Using this isomorphism, consider the Chern character (a rational combination of Chern classes) as a functorial transformation
ch : K_0(X) → A(X, ℚ),
where A_d(X, ℚ) is the Chow group of cycles on X of dimension d modulo rational equivalence, tensored with the rational numbers. In case X is defined over the complex numbers, the latter group maps to the topological cohomology group
H^(2 dim X − 2d)(X, ℚ).
Now consider a proper morphism f : X → Y between smooth quasi-projective schemes and a bounded complex of sheaves F• on X.
The Grothendieck–Riemann–Roch theorem relates the pushforward map
f_! = Σ_i (−1)^i R^i f_* : K_0(X) → K_0(Y)
(alternating sum of higher direct images) and the pushforward
f_* : A(X, ℚ) → A(Y, ℚ)
by the formula
ch(f_! F•) td(Y) = f_*(ch(F•) td(X)).
Here td(X) is the Todd genus of (the tangent bundle of) X. Thus the theorem gives a precise measure for the lack of commutativity of taking the pushforwards in the above senses and the Chern character, and shows that the needed correction factors depend on X and Y only. In fact, since the Todd genus is functorial and multiplicative in exact sequences, we can rewrite the Grothendieck–Riemann–Roch formula as
ch(f_! F•) = f_*(ch(F•) td(T_f)),
where T_f is the relative tangent sheaf of f, defined as the element [T_X] − [f^* T_Y] in K_0(X). For example, when f is a smooth morphism, T_f is simply a vector bundle, known as the tangent bundle along the fibers of f.
Using A1-homotopy theory, the Grothendieck–Riemann–Roch theorem has been extended to the situation where f is a proper map between two smooth schemes.
Generalising and specialising
Generalisations of the theorem can be made to the non-smooth case by considering an appropriate generalisation of the combination ch(F•) td(X), and to the non-proper case by considering cohomology with compact support.
The arithmetic Riemann–Roch theorem extends the Grothendieck–Riemann–Roch theorem to arithmetic schemes.
The Hirzebruch–Riemann–Roch theorem is (essentially) the special case where Y is a point and the field is the field of complex numbers.
A version of Riemann–Roch theorem for oriented cohomology theories was proven by Ivan Panin and Alexander Smirnov. It is concerned with multiplicative operations between algebraic oriented cohomology theories (such as algebraic cobordism). The Grothendieck-Riemann-Roch is a particular case of this result, and the Chern character comes up naturally in this setting.
Examples
Vector bundles on a curve
A vector bundle E of rank n and degree d (defined as the degree of its determinant, or equivalently the degree of its first Chern class) on a smooth projective curve of genus g over a field has a formula similar to Riemann–Roch for line bundles. If we take Y = Spec k, a point, then the Grothendieck–Riemann–Roch formula can be read as
ch(f_! E) = h⁰(X, E) − h¹(X, E),  f_*(ch(E) td(X)) = d + n(1 − g);
hence,
χ(X, E) = h⁰(X, E) − h¹(X, E) = d + n(1 − g).
This formula also holds for coherent sheaves of rank n and degree d.
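The degree computation behind the right-hand side can be made explicit. The following display is a sketch, assuming only the standard low-degree expansions ch(E) = n + c_1(E) and td(X) = 1 + (1/2)c_1(T_X) on a curve, together with deg c_1(T_X) = 2 − 2g:
```latex
\begin{align*}
f_*\bigl(\operatorname{ch}(E)\,\operatorname{td}(X)\bigr)
  &= f_*\Bigl(\bigl(n + c_1(E)\bigr)\bigl(1 + \tfrac{1}{2}c_1(T_X)\bigr)\Bigr) \\
  &= f_*\Bigl(n + c_1(E) + \tfrac{n}{2}\,c_1(T_X)\Bigr)
     && \text{products of two $c_1$'s vanish on a curve} \\
  &= \deg c_1(E) + \tfrac{n}{2}\,\deg c_1(T_X)
     && \text{$f_*$ keeps the degree of the $0$-cycle part} \\
  &= d + \tfrac{n}{2}(2 - 2g) = d + n(1 - g).
\end{align*}
```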
Smooth proper maps
One of the advantages of the Grothendieck–Riemann–Roch formula is that it can be interpreted as a relative version of the Hirzebruch–Riemann–Roch formula. For example, a smooth morphism has fibers which are all equi-dimensional (and isomorphic as topological spaces when base changing to ℂ). This fact is useful in moduli theory when considering a moduli space parameterizing smooth proper spaces. For example, David Mumford used this formula to deduce relationships of the Chow ring on the moduli space of algebraic curves.
Moduli of curves
For the moduli stack of genus g curves M̄_g (and no marked points) there is a universal curve π : C̄_g → M̄_g, where C̄_g = M̄_{g,1} is the moduli stack of curves of genus g and one marked point. Then, he defines the tautological classes
K = c_1(ω_{C̄_g/M̄_g}),  κ_l = π_*(K^(l+1)),  E = π_*(ω_{C̄_g/M̄_g}),  λ_l = c_l(E),
where 1 ≤ l ≤ g and ω_{C̄_g/M̄_g} is the relative dualizing sheaf. Note the fiber of ω_{C̄_g/M̄_g} over a point [C] is the dualizing sheaf ω_C. He was able to find relations between the λ_i and κ_i, describing the λ_i in terms of a sum of the κ_i (corollary 6.2), on the Chow ring of the smooth locus using Grothendieck–Riemann–Roch. Because M̄_g is a smooth Deligne–Mumford stack, he considered a covering by a scheme M̃_g → M̄_g which presents M̄_g = [M̃_g/G] for some finite group G. He uses Grothendieck–Riemann–Roch on ω_{C̃_g/M̃_g} to get
ch(π_!(ω_{C̃_g/M̃_g})) = π_*(ch(ω_{C̃_g/M̃_g}) td^∨(Ω¹_{C̃_g/M̃_g})).
Because
R¹π_*(ω_{C̃_g/M̃_g}) ≅ O_{M̃_g},
this gives the formula
ch(E) = 1 + π_*(ch(ω_{C̃_g/M̃_g}) td^∨(Ω¹_{C̃_g/M̃_g})).
The computation of ch(E) can then be reduced even further. In even dimensions 2l,
ch(E)_{2l} = 0.
Also, in dimension 1,
λ_1 = c_1(E) = (1/12)(κ_1 + δ),
where δ is a class on the boundary. In the case g = 2 and on the smooth locus M_g there are the relations
λ_1 = (1/12)κ_1,  λ_2 = (1/2)λ_1²,
which can be deduced by analyzing the Chern character of E.
Closed embedding
Closed embeddings have a description using the Grothendieck–Riemann–Roch formula as well, showing another non-trivial case where the formula holds. For a smooth variety X of dimension n and a subvariety Z of codimension k, there is the formula
c_k(O_Z) = (−1)^(k−1) (k − 1)! [Z].
Using the short exact sequence
0 → I_Z → O_X → O_Z → 0,
there is the formula
c_k(I_Z) = (−1)^k (k − 1)! [Z]
for the ideal sheaf since [O_Z] = [O_X] − [I_Z] in the Grothendieck group.
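A sketch of where the sign and factorial come from, assuming the Grothendieck–Riemann–Roch formula for the inclusion i : Z ↪ X with normal bundle N, together with Newton's identities relating Chern classes to the Chern character:
```latex
\begin{align*}
% GRR for i : Z -> X: the Chern character of i_* O_Z starts with the
% fundamental class of Z in codimension k:
\operatorname{ch}(i_*\mathcal{O}_Z)
  &= i_*\bigl(\operatorname{td}(N)^{-1}\bigr)
   = [Z] + (\text{terms of codimension} > k). \\
% Newton's identities give
% ch_k = (-1)^{k-1} c_k/(k-1)! + (products of lower Chern classes);
% since c_1 = ... = c_{k-1} = 0 here, the product terms drop out:
c_k(\mathcal{O}_Z)
  &= (-1)^{k-1}(k-1)!\,\operatorname{ch}_k(\mathcal{O}_Z)
   = (-1)^{k-1}(k-1)!\,[Z].
\end{align*}
```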
Applications
Quasi-projectivity of moduli spaces
Grothendieck–Riemann–Roch can be used in proving that a coarse moduli space M, such as the moduli space of pointed algebraic curves M̄_{g,n}, admits an embedding into a projective space, hence is a quasi-projective variety. This can be accomplished by looking at canonically associated sheaves on M̄_{g,n} and studying the degree of associated line bundles. For instance, M̄_{g,n} has the family of curves
π : C̄_{g,n} → M̄_{g,n}
with sections
s_i : M̄_{g,n} → C̄_{g,n}
corresponding to the marked points. Since each fiber has the canonical bundle ω_C, there are the associated line bundles
Λ_{g,n} = det(Rπ_*(ω_{C̄_{g,n}/M̄_{g,n}}))
and
χ_{g,n}^(i) = s_i^*(ω_{C̄_{g,n}/M̄_{g,n}}).
It turns out that
Λ_{g,n} ⊗ (χ_{g,n}^(1) ⊗ ⋯ ⊗ χ_{g,n}^(n))
is an ample line bundle (pg. 209), hence the coarse moduli space is quasi-projective.
History
Alexander Grothendieck's version of the Riemann–Roch theorem was originally conveyed in a letter to Jean-Pierre Serre around 1956–1957. It was made public at the initial Bonn Arbeitstagung, in 1957. Serre and Armand Borel subsequently organized a seminar at Princeton University to understand it. The final published paper was in effect the Borel–Serre exposition.
The significance of Grothendieck's approach rests on several points. First, Grothendieck changed the statement itself: the theorem was, at the time, understood to be a theorem about a variety, whereas Grothendieck saw it as a theorem about a morphism between varieties. By finding the right generalization, the proof became simpler while the conclusion became more general. In short, Grothendieck applied a strong categorical approach to a hard piece of analysis. Moreover, Grothendieck introduced K-groups, as discussed above, which paved the way for algebraic K-theory.
See also
Kawasaki's Riemann–Roch formula
Notes
References
External links
The Grothendieck-Riemann-Roch Theorem
The thread "Applications of Grothendieck-Riemann-Roch?" on MathOverflow.
The thread "how does one understand GRR? (Grothendieck Riemann Roch)" on MathOverflow.
The thread "Chern class of ideal sheaf" on Stack Exchange.
Topological methods of algebraic geometry
Theorems in algebraic geometry
Bernhard Riemann | Grothendieck–Riemann–Roch theorem | [
"Mathematics"
] | 1,859 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
1,533,051 | https://en.wikipedia.org/wiki/Eruption%20column | An eruption column or eruption plume is a cloud of super-heated ash and tephra suspended in gases emitted during an explosive volcanic eruption. The volcanic materials form a vertical column or plume that may rise many kilometers into the air above the vent of the volcano. In the most explosive eruptions, the eruption column may rise over 40 km, penetrating the stratosphere. Stratospheric injection of aerosols by volcanoes is a major cause of short-term climate change.
A common occurrence in explosive eruptions is column collapse, which happens when the eruption column is or becomes too dense to be lifted high into the sky by air convection and instead falls down the slopes of the volcano to form pyroclastic flows or surges (although the latter is less dense). On some occasions, if the material is not dense enough to fall, it may create pyrocumulonimbus clouds.
Formation
Eruption columns form in explosive volcanic activity, when the high concentration of volatile materials in the rising magma causes it to be disrupted into fine volcanic ash and coarser tephra. The ash and tephra are ejected at speeds of several hundred metres per second, and can rise rapidly to heights of several kilometres, lifted by enormous convection currents.
Eruption columns may be transient, if formed by a discrete explosion, or sustained, if produced by a continuous eruption or closely spaced discrete explosions.
Structure
The solid and liquid materials in an eruption column are lifted by processes that vary as the material ascends:
At the base of the column, material is violently forced upward out of the crater by the pressure of rapidly expanding gases, mainly steam. The gases expand because the pressure of rock above it rapidly reduces as it approaches the surface. This region is called the gas thrust region and typically reaches to only one or two kilometers above the vent.
The convective thrust region covers most of the height of the column. The gas thrust region is very turbulent and surrounding air becomes mixed into it and heated. The air expands, reducing its density and rising. The rising air carries all the solid and liquid material from the eruption entrained in it upwards.
As the column rises into less dense surrounding air, it will eventually reach an altitude where the hot, rising air is of the same density as the surrounding cold air. In this neutral buoyancy region, the erupted material will then no longer rise through convection, but solely through any upward momentum which it has. This is called the umbrella region, and is usually marked by the column spreading out sideways. The eruptive material and the surrounding cold air has the same density at the base of the umbrella region, and the top is marked by the maximum height which momentum carries the material upward. Because the speeds are very low or negligible in this region it is often distorted by stratospheric winds.
Column heights
The column will stop rising once it attains an altitude where it is more dense than the surrounding air. Several factors control the height that an eruption column can reach.
Intrinsic factors include the diameter of the erupting vent, the gas content of the magma, and the velocity at which it is ejected. Extrinsic factors can be important, with winds sometimes limiting the height of the column, and the local atmospheric temperature gradient also playing a role. The atmospheric temperature in the troposphere normally decreases by about 6–7 K/km, but small changes in this gradient can have a large effect on the final column height. Theoretically, the maximum achievable column height is thought to be about 55 km. In practice, column heights ranging from about 2 to 45 km are seen.
Eruption columns with heights of over 17 km break through the tropopause and inject particulates into the stratosphere. Ashes and aerosols in the troposphere are quickly removed by precipitation, but material injected into the stratosphere is much more slowly dispersed, in the absence of weather systems. Substantial amounts of stratospheric injection can have global effects: after Mount Pinatubo erupted in 1991, global temperatures dropped by about 0.5 °C. The largest eruptions are thought to cause temperature drops of several degrees, and are potentially the cause of some of the known mass extinctions.
Eruption column heights are a useful way of measuring eruption intensity since for a given atmospheric temperature, the column height is proportional to the fourth root of the mass eruption rate. Consequently, given similar conditions, to double the column height requires an eruption ejecting 16 times as much material per second. The column height of eruptions which have not been observed can be estimated by mapping the maximum distance that pyroclasts of different sizes are carried from the vent—the higher the column the further ejected material of a particular mass (and therefore size) can be carried.
The approximate maximum height of an eruption column is given by the equation
H = k(MΔT)^(1/4)
Where:
k is a constant that depends on various properties, such as atmospheric conditions.
M is the mass eruption rate.
ΔT is the difference in temperature between the erupting magma and the surrounding atmosphere.
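A minimal numerical sketch of this scaling is given below; the constant k and the values of M and ΔT are illustrative assumptions for demonstration only, not measured data. It also checks the fourth-root consequence noted above: doubling the column height requires roughly a 16-fold increase in mass eruption rate.
```python
# Sketch of the plume-height scaling H = k * (M * dT)**(1/4).
# All numbers below are illustrative assumptions, not measured values.

def column_height_km(mass_eruption_rate, delta_t, k=0.1):
    """Approximate eruption column height (km) for a mass eruption
    rate M (kg/s) and a magma-atmosphere temperature difference dT (K)."""
    return k * (mass_eruption_rate * delta_t) ** 0.25

M = 1.0e6   # mass eruption rate, kg/s (assumed)
dT = 900.0  # temperature difference between magma and atmosphere, K (assumed)

h1 = column_height_km(M, dT)
h2 = column_height_km(16 * M, dT)  # 16 times the eruption rate

print(f"H(M)    = {h1:.1f} km")
print(f"H(16*M) = {h2:.1f} km")
print(f"ratio   = {h2 / h1:.2f}")  # 16**0.25 = 2: doubling height needs 16x the rate
```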
Hazards
Column collapse
Eruption columns may become so laden with dense material that they are too heavy to be supported by convection currents. This can suddenly happen if, for example, the rate at which magma is erupted increases to a point where insufficient air is entrained to support it, or if the magma density suddenly increases as denser magma from lower regions in a stratified magma chamber is tapped.
If it does happen, then material reaching the bottom of the convective thrust region can no longer be adequately supported by convection and will fall under gravity, forming a pyroclastic flow or surge which can travel down the slopes of a volcano at speeds of several hundred kilometres per hour. Column collapse is one of the most common and dangerous volcanic hazards in column-creating eruptions.
Aircraft
Several eruptions have seriously endangered aircraft which have encountered or passed by the eruption column. In two separate incidents in 1982, airliners flew into the upper reaches of an eruption column blasted off by Mount Galunggung, and the ash severely damaged both aircraft. Particular hazards were the ingestion of ash stopping the engines, the sandblasting of the cockpit windows rendering them largely opaque and the contamination of fuel through the ingestion of ash through pressurisation ducts. The damage to engines is a particular problem since temperatures inside a gas turbine are sufficiently high that volcanic ash is melted in the combustion chamber, and forms a glass coating on components farther downstream of it, for example on turbine blades.
In the case of British Airways Flight 9, the aircraft lost power on all four engines, and in the other, nineteen days later, three of the four engines failed on a Singapore Airlines 747. In both cases, engines were successfully restarted, but the aircraft were forced to make emergency landings in Jakarta.
Similar damage to aircraft occurred due to an eruption column over Redoubt volcano in Alaska in 1989. Following the eruption of Mount Pinatubo in 1991, aircraft were diverted to avoid the eruption column, but nonetheless, fine ash dispersing over a wide area in Southeast Asia caused damage to 16 aircraft, some as far as 1,000 km from the volcano.
Eruption columns are not usually visible on weather radar and may be obscured by ordinary clouds or night. Because of the risks posed to aviation by eruption columns, there is a network of nine Volcanic Ash Advisory Centers around the world which continuously monitor for eruption columns using data from satellites, ground reports, pilot reports and meteorological models.
See also
Cryovolcano
Enceladus – a volcanically active moon of planet Saturn
Mount Pelée
Pele (volcano)
Peléan eruption
Plinian eruption
References
Further reading
External links
USGS information
Description of Galunggung eruption column
Volcanoes
Volcanic eruptions
Explosive eruptions
Volcanic degassing
Tephra | Eruption column | [
"Chemistry"
] | 1,573 | [
"Explosive eruptions",
"Explosions"
] |
1,533,133 | https://en.wikipedia.org/wiki/Triplet%20state | In quantum mechanics, a triplet state, or spin triplet, is the quantum state of an object such as an electron, atom, or molecule, having a quantum spin S = 1. It has three allowed values of the spin's projection along a given axis mS = −1, 0, or +1, giving the name "triplet".
Spin, in the context of quantum mechanics, is not a mechanical rotation but a more abstract concept that characterizes a particle's intrinsic angular momentum. It is particularly important for systems at atomic length scales, such as individual atoms, protons, or electrons.
A triplet state occurs in cases where the spins of two unpaired electrons, each having spin s = 1/2, align to give S = 1, in contrast to the more common case of two electrons aligning oppositely to give S = 0, a spin singlet. Most molecules encountered in daily life exist in a singlet state because all of their electrons are paired, but molecular oxygen is an exception. At room temperature, O2 exists in a triplet state, which can only undergo a chemical reaction by making the forbidden transition into a singlet state. This makes it kinetically nonreactive despite being thermodynamically one of the strongest oxidants. Photochemical or thermal activation can bring it into the singlet state, which makes it kinetically as well as thermodynamically a very strong oxidant.
Two spin-1/2 particles
In a system with two spin-1/2 particles (for example, the proton and electron in the ground state of hydrogen) measured on a given axis, each particle can be either spin up or spin down, so the system has four basis states in all:
|↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩,
using the single-particle spins to label the basis states, where the first arrow and second arrow in each combination indicate the spin direction of the first particle and second particle respectively.
More rigorously, the states can be written
|s1, m1⟩ |s2, m2⟩,
where s1 and s2 are the spins of the two particles, and m1 and m2 are their projections onto the z axis. Since the basis states for a spin-1/2 particle span a 2-dimensional space, the two-particle basis states span a 4-dimensional space.
Now the total spin and its projection onto the previously defined axis can be computed using the rules for adding angular momentum in quantum mechanics using the Clebsch–Gordan coefficients. In general,
|s, m⟩ = Σ ⟨s1 m1 s2 m2 | s m⟩ |s1 m1⟩ |s2 m2⟩;
substituting in the four basis states
|1/2, +1/2⟩ |1/2, +1/2⟩, |1/2, +1/2⟩ |1/2, −1/2⟩, |1/2, −1/2⟩ |1/2, +1/2⟩, |1/2, −1/2⟩ |1/2, −1/2⟩
returns the possible values for total spin, along with their representation in this basis. There are three states with total spin angular momentum 1:
|1, 1⟩ = |↑↑⟩
|1, 0⟩ = (1/√2)(|↑↓⟩ + |↓↑⟩)
|1, −1⟩ = |↓↓⟩,
which are symmetric, and a fourth state with total spin angular momentum 0:
|0, 0⟩ = (1/√2)(|↑↓⟩ − |↓↑⟩),
which is antisymmetric. The result is that a combination of two spin-1/2 particles can carry a total spin of 1 or 0, depending on whether they occupy a triplet or singlet state.
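This decomposition can be checked numerically; the following is a small sketch using NumPy that builds the total-spin operator S² = (S1 + S2)² on the 4-dimensional product space (in units of ħ = 1) and confirms it has the eigenvalue s(s + 1) = 2 with multiplicity three (the triplet) and the eigenvalue 0 once (the singlet):
```python
import numpy as np

# Single-particle spin-1/2 operators in units of hbar = 1 (S_i = sigma_i / 2).
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Total-spin components on the 4-dimensional two-particle space:
# S = S1 (x) 1 + 1 (x) S2, built with the Kronecker product.
S = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]

# S^2 = Sx^2 + Sy^2 + Sz^2 has eigenvalues s(s + 1).
S2 = sum(Si @ Si for Si in S)

print(np.round(np.linalg.eigvalsh(S2), 10))
# [0. 2. 2. 2.] -> one singlet state (s = 0) and three triplet states (s = 1)
```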
A mathematical viewpoint
In terms of representation theory, what has happened is that the two conjugate 2-dimensional spin representations of the spin group SU(2) = Spin(3) (as it sits inside the 3-dimensional Clifford algebra) have tensored to produce a 4-dimensional representation. The 4-dimensional representation descends to the usual orthogonal group SO(3) and so its objects are tensors, corresponding to the integrality of their spin. The 4-dimensional representation decomposes into the sum of a one-dimensional trivial representation (singlet, a scalar, spin zero) and a three-dimensional representation (triplet, spin 1) that is nothing more than the standard representation of SO(3) on . Thus the "three" in triplet can be identified with the three rotation axes of physical space.
See also
Singlet state
Doublet state
Diradical
Angular momentum
Pauli matrices
Spin multiplicity
Spin quantum number
Spin-1/2
Spin tensor
Spinor
References
Quantum states
Rotational symmetry
Spectroscopy | Triplet state | [
"Physics",
"Chemistry"
] | 780 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Quantum mechanics",
"Quantum states",
"Spectroscopy",
"Symmetry",
"Rotational symmetry"
] |
1,533,196 | https://en.wikipedia.org/wiki/Photoelasticity | In materials science, photoelasticity describes changes in the optical properties of a material under mechanical deformation. It is a property of all dielectric media and is often used to experimentally determine the stress distribution in a material.
History
The photoelastic phenomenon was first discovered by the Scottish physicist David Brewster, who immediately recognized it as stress-induced birefringence. That diagnosis was confirmed in a direct refraction experiment by Augustin-Jean Fresnel. Experimental frameworks were developed at the beginning of the twentieth century with the works of E.G. Coker and L.N.G. Filon of the University of London. Their book Treatise on Photoelasticity, published in 1930 by Cambridge Press, became a standard text on the subject. Between 1930 and 1940, many other books appeared on the subject, including books in Russian, German and French. Max M. Frocht published the classic two-volume work, Photoelasticity, in the field. At the same time, much development occurred in the field – great improvements were achieved in technique, and the equipment was simplified. With refinements in the technology, photoelastic experiments were extended to determining three-dimensional states of stress. In parallel to developments in experimental technique, the first phenomenological description of photoelasticity was given in 1890 by Friedrich Pockels; however, this was shown to be inadequate almost a century later by Nelson and Lax, as Pockels's description considered only the effect of mechanical strain on the optical properties of the material.
With the advent of the digital polariscope – made possible by light-emitting diodes – continuous monitoring of structures under load became possible. This led to the development of dynamic photoelasticity, which has contributed greatly to the study of complex phenomena such as fracture of materials.
Applications
Photoelasticity has been used for a variety of stress analyses and even for routine use in design, particularly before the advent of numerical methods, such as finite elements or boundary elements. Digitization of polariscopy enables fast image acquisition and data processing, which allows its industrial applications to control quality of manufacturing process for materials such as glass and polymer. Dentistry utilizes photoelasticity to analyze strain in denture materials.
Photoelasticity can successfully be used to investigate the highly localized stress state within masonry or in proximity of a rigid line inclusion (stiffener) embedded in an elastic medium. In the former case, the problem is nonlinear due to the contacts between bricks, while in the latter case the elastic solution is singular, so that numerical methods may fail to provide correct results. These can be obtained through photoelastic techniques. Dynamic photoelasticity integrated with high-speed photography is utilized to investigate fracture behavior in materials. Another important application of the photoelasticity experiments is to study the stress field around bi-material notches. Bi-material notches exist in many engineering application like welded or adhesively bonded structures.
For example, some elements of Gothic cathedrals previously thought decorative were first proved essential for structural support by photoelastic methods.
Formal definition
For a linear dielectric material, the change in the inverse permittivity tensor Δ(ε⁻¹)_ij with respect to the deformation ∇u (the gradient of the displacement u) is described by
Δ(ε⁻¹)_ij = P_ijkl ∂u_k/∂x_l,
where P_ijkl is the fourth-rank photoelasticity tensor, u is the linear displacement from equilibrium, and ∂/∂x_l denotes differentiation with respect to the Cartesian coordinate x_l. For isotropic materials, this definition simplifies to
Δ(ε⁻¹)_ij = p_ijkl S_kl,
where p_ijkl is the symmetric part of the photoelastic tensor (the photoelastic strain tensor), and S_kl is the linear strain. The antisymmetric part of P_ijkl is known as the roto-optic tensor. From either definition, it is clear that deformations to the body may induce optical anisotropy, which can cause an otherwise optically isotropic material to exhibit birefringence. Although the symmetric photoelastic tensor is most commonly defined with respect to mechanical strain, it is also possible to express photoelasticity in terms of the mechanical stress.
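For the isotropic case, the symmetric photoelastic tensor is commonly written in Voigt (two-index) notation; the display below is a sketch of that standard form, with only two independent constants p11 and p12:
```latex
p_{\alpha\beta} =
\begin{pmatrix}
p_{11} & p_{12} & p_{12} & 0      & 0      & 0\\
p_{12} & p_{11} & p_{12} & 0      & 0      & 0\\
p_{12} & p_{12} & p_{11} & 0      & 0      & 0\\
0      & 0      & 0      & p_{44} & 0      & 0\\
0      & 0      & 0      & 0      & p_{44} & 0\\
0      & 0      & 0      & 0      & 0      & p_{44}
\end{pmatrix},
\qquad
p_{44} = \tfrac{1}{2}\left(p_{11} - p_{12}\right).
```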
Experimental principles
The experimental procedure relies on the property of birefringence, as exhibited by certain transparent materials. Birefringence is a phenomenon in which a ray of light passing through a given material experiences two refractive indices. The property of birefringence (or double refraction) is observed in many optical crystals. Upon the application of stresses, photoelastic materials exhibit the property of birefringence, and the magnitude of the refractive indices at each point in the material is directly related to the state of stresses at that point. Information such as maximum shear stress and its orientation are available by analyzing the birefringence with an instrument called a polariscope.
When a ray of light passes through a photoelastic material, its electromagnetic wave components are resolved along the two principal stress directions and each component experiences a different refractive index due to the birefringence. The difference in the refractive indices leads to a relative phase retardation between the two components. Assuming a thin specimen made of isotropic materials, where two-dimensional photoelasticity is applicable, the magnitude of the relative retardation is given by the stress-optic law:

$$\Delta = \frac{2\pi t}{\lambda}\, C\, (\sigma_1 - \sigma_2)$$

where Δ is the induced retardation, C is the stress-optic coefficient, t is the specimen thickness, λ is the vacuum wavelength, and σ1 and σ2 are the first and second principal stresses, respectively. The retardation changes the polarization of transmitted light. The polariscope combines the different polarization states of light waves before and after passing the specimen. Due to optical interference of the two waves, a fringe pattern is revealed. The fringe order N is

$$N = \frac{\Delta}{2\pi}$$

which depends on the relative retardation. By studying the fringe pattern one can determine the state of stress at various points in the material.
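As a minimal numerical sketch of the stress-optic law (illustrative values only; real stress-optic coefficients are material- and wavelength-specific, and the function name is our own):

def fringe_order(C: float, t: float, sigma1: float, sigma2: float, wavelength: float) -> float:
    """Fringe order N = C * t * (sigma1 - sigma2) / wavelength.

    C: stress-optic coefficient (1/Pa), t: specimen thickness (m),
    sigma1, sigma2: principal stresses (Pa), wavelength: vacuum wavelength (m).
    """
    return C * t * (sigma1 - sigma2) / wavelength

# Illustrative epoxy-like values: C ~ 4e-11 1/Pa, a 6 mm thick specimen and a
# 15 MPa principal stress difference in 546 nm light give N ~ 6.6 fringes.
N = fringe_order(4e-11, 6e-3, 20e6, 5e6, 546e-9)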
For materials that do not show photoelastic behavior, it is still possible to study the stress distribution. The first step is to build a model, using photoelastic materials, which has geometry similar to the real structure under investigation. The loading is then applied in the same way to ensure that the stress distribution in the model is similar to the stress in the real structure.
Isoclinics and isochromatics
Isoclinics are the loci of the points in the specimen along which the principal stresses are in the same direction.
Isochromatics are the loci of the points along which the difference in the first and second principal stress remains the same. Thus they are the lines which join the points with equal maximum shear stress magnitude.
Two-dimensional photoelasticity
Photoelasticity can describe both three-dimensional and two-dimensional states of stress. However, examining photoelasticity in three-dimensional systems is more involved than in two-dimensional or plane-stress systems, so the present section deals with photoelasticity in a plane-stress system. This condition is achieved when the thickness of the prototype is much smaller than its in-plane dimensions. Thus one is only concerned with stresses acting parallel to the plane of the model, as the other stress components are zero. The experimental setup varies from experiment to experiment; the two basic kinds of setup used are the plane polariscope and the circular polariscope.
The working principle of a two-dimensional experiment allows the measurement of retardation, which can be converted to the difference between the first and second principal stress and their orientation. To further obtain the value of each individual stress component, a technique called stress separation is required. Several theoretical and experimental methods are utilized to provide the additional information needed to solve for individual stress components.
Plane polariscope setup
The setup consists of two linear polarizers and a light source. The light source can either emit monochromatic light or white light depending upon the experiment. First the light is passed through the first polarizer which converts the light into plane polarized light. The apparatus is set up in such a way that this plane polarized light then passes through the stressed specimen. This light then follows, at each point of the specimen, the direction of principal stress at that point. The light is then made to pass through the analyzer and we finally get the fringe pattern.
The fringe pattern in a plane polariscope setup consists of both the isochromatics and the isoclinics. The isoclinics change with the orientation of the polariscope while there is no change in the isochromatics.
Circular polariscope setup
In a circular polariscope setup two quarter-wave plates are added to the experimental setup of the plane polariscope. The first quarter-wave plate is placed in between the polarizer and the specimen and the second quarter-wave plate is placed between the specimen and the analyzer. The effect of adding the quarter-wave plate after the source-side polarizer is that we get circularly polarized light passing through the sample. The analyzer-side quarter-wave plate converts the circular polarization state back to linear before the light passes through the analyzer.
The basic advantage of a circular polariscope over a plane polariscope is that in a circular polariscope setup we only get the isochromatics and not the isoclinics. This eliminates the problem of differentiating between the isoclinics and the isochromatics.
See also
Acousto-optic modulator
Electrostriction
Mechanochromism
Photoelastic modulator
Polarimetry
References
External links
University of Cambridge Page on Photoelasticity.
Laboratory for Physical Modeling of Structures and Photoelasticity (University of Trento, Italy)
Build your own polariscope
Materials science
Mechanical engineering
Mechanics
Optics | Photoelasticity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,967 | [
"Applied and interdisciplinary physics",
"Optics",
"Materials science",
"Mechanics",
" molecular",
"Mechanical engineering",
"nan",
"Atomic",
" and optical physics"
] |
1,533,644 | https://en.wikipedia.org/wiki/Reactive%20intermediate | In chemistry, a reactive intermediate or an intermediate is a short-lived, high-energy, highly reactive molecule. When generated in a chemical reaction, it will quickly convert into a more stable molecule. Only in exceptional cases can these compounds be isolated and stored, e.g. low temperatures, matrix isolation. When their existence is indicated, reactive intermediates can help explain how a chemical reaction takes place.
Most chemical reactions take more than one elementary step to complete, and a reactive intermediate is a high-energy, hence unstable, product that exists only in one of the intermediate steps. The series of steps together make a reaction mechanism. A reactive intermediate differs from a reactant or product or a simple reaction intermediate only in that it cannot usually be isolated but is sometimes observable only through fast spectroscopic methods. It is stable in the sense that an elementary reaction forms the reactive intermediate and the elementary reaction in the next step is needed to destroy it.
When a reactive intermediate is not observable, its existence must be inferred through experimentation. This usually involves changing reaction conditions such as temperature or concentration and applying the techniques of chemical kinetics, chemical thermodynamics, or spectroscopy. Reactive intermediates based on carbon are radicals, carbenes, carbocations, carbanions, arynes, and carbynes.
Common features
Reactive intermediates have several features in common:
low concentration with respect to reaction substrate and final reaction product
with the exception of carbanions, these intermediates do not obey the Lewis octet rule; hence the high reactivity
often generated on chemical decomposition of a chemical compound
it is often possible to prove the existence of this species by spectroscopic means
cage effects have to be taken into account
often stabilisation by conjugation or resonance
often difficult to distinguish from a transition state
prove existence by means of chemical trapping
Carbon
Other reactive intermediates
Carbenoid
Ion-neutral complex
Keto anions
Nitrenes
Oxocarbenium ions
Phosphinidenes
Phosphoryl nitride
Tetrahedral intermediates in carbonyl addition reactions
See also
Activated complex
Transition state
References
External links
Reaction mechanisms | Reactive intermediate | [
"Chemistry"
] | 442 | [
"Reaction mechanisms",
"Organic compounds",
"Physical organic chemistry",
"Chemical kinetics",
"Reactive intermediates"
] |
1,534,314 | https://en.wikipedia.org/wiki/Spin%20transistor | The magnetically sensitive transistor, also known as the spin transistor, spin field-effect transistor (spinFET), Datta–Das spin transistor or spintronic transistor (named for spintronics, the technology which this development spawned), originally proposed in 1990 by Supriyo Datta and Biswajit Das, is an alternative design on the common transistor invented in the 1940s. This device was considered one of the Nature milestones in spin in 2008.
Description
The spin transistor comes about as a result of research on the ability of electrons (and other fermions) to naturally exhibit one of two (and only two) states of spin, known as "spin up" and "spin down". Thus, spin transistors operate on electron spin as embodying a two-state quantum system. Unlike its namesake predecessor, which operates on an electric current, the spin transistor operates on electrons at a more fundamental level; it is essentially the application of electrons set in particular states of spin to store information.
One advantage over regular transistors is that these spin states can be detected and altered without necessarily requiring the application of an electric current. This allows for detection hardware (such as hard drive heads) that are much smaller but even more sensitive than today's devices, which rely on noisy amplifiers to detect the minute charges used on today's data storage devices. The potential result is devices that can store more data in less space and consume less power, using less costly materials. The increased sensitivity of spin transistors is also being researched in creating more sensitive automotive sensors, a move being encouraged by a push for environmentally friendlier vehicles.
A second advantage of a spin transistor is that the spin of an electron is semi-permanent and can be used as means of creating cost-effective non-volatile solid state storage that does not require the constant application of current to sustain. It is one of the technologies being explored for magnetic random access memory (MRAM).
Because of its high potential for practical use in the computer world, spin transistors are currently being researched in various firms throughout the world, such as in England and in Sweden. Recent breakthroughs have allowed the production of spin transistors, using readily available substances, that can operate at room temperature: a precursor to commercial viability.
References
Transistor types
Spintronics | Spin transistor | [
"Physics",
"Materials_science"
] | 494 | [
"Spintronics",
"Condensed matter physics"
] |
1,534,483 | https://en.wikipedia.org/wiki/Motion%20estimation | In computer vision and image processing, motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another; usually from adjacent frames in a video sequence. It is an ill-posed problem as the motion happens in three dimensions (3D) but the images are a projection of the 3D scene onto a 2D plane. The motion vectors may relate to the whole image (global motion estimation) or specific parts, such as rectangular blocks, arbitrary shaped patches or even per pixel. The motion vectors may be represented by a translational model or many other models that can approximate the motion of a real video camera, such as rotation and translation in all three dimensions and zoom.
Related terms
More often than not, the term motion estimation and the term optical flow are used interchangeably. It is also related in concept to image registration and stereo correspondence. In fact all of these terms refer to the process of finding corresponding points between two images or video frames. The points that correspond to each other in two views (images or frames) of a real scene or object are "usually" the same point in that scene or on that object. Before we do motion estimation, we must define our measurement of correspondence, i.e., the matching metric, which is a measurement of how similar two image points are. There is no right or wrong here; the choice of matching metric is usually related to what the final estimated motion is used for as well as the optimisation strategy in the estimation process.
Each motion vector is used to represent a macroblock in a picture based on the position of this macroblock (or a similar one) in another picture, called the reference picture.
The H.264/MPEG-4 AVC standard defines motion vector as:
motion vector: a two-dimensional vector used for inter prediction that provides an offset from the coordinates in the decoded picture to the coordinates in a reference picture.
Algorithms
The methods for finding motion vectors can be categorised into pixel-based methods ("direct") and feature-based methods ("indirect"). A famous debate between the two camps resulted in a pair of papers, one from each side, attempting to establish a conclusion.
Direct methods
Block-matching algorithm
Phase correlation and frequency domain methods
Pixel recursive algorithms
Optical flow
Indirect methods
Indirect methods use features, such as corner detection, and match corresponding features between frames, usually with a statistical function applied over a local or global area. The purpose of the statistical function is to remove matches that do not correspond to the actual motion.
Statistical functions that have been successfully used include RANSAC.
Additional note on the categorization
It can be argued that almost all methods require some kind of definition of the matching criteria. The difference is only whether you summarise over a local image region first and then compare the summarisation (such as feature based methods), or you compare each pixel first (such as squaring the difference) and then summarise over a local image region (block base motion and filter based motion). An emerging type of matching criteria summarises a local image region first for every pixel location (through some feature transform such as Laplacian transform), compares each summarised pixel and summarises over a local image region again. Some matching criteria have the ability to exclude points that do not actually correspond to each other albeit producing a good matching score, others do not have this ability, but they are still matching criteria.
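As a concrete illustration of a pixel-based criterion, the following is a minimal exhaustive block-matching sketch using the sum of absolute differences (SAD) as the matching metric; NumPy is assumed, and the function name is our own:

import numpy as np

def sad_block_match(ref, cur, y, x, block=16, search=8):
    """Find the motion vector (dy, dx) minimising the SAD between the block
    of `cur` whose top-left corner is (y, x) and candidate blocks of `ref`
    within a +/- `search` pixel window."""
    target = cur[y:y + block, x:x + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = int(np.abs(target - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

Real encoders replace the exhaustive search with faster patterns (three-step or diamond search, for instance) while keeping the same criterion.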
Affine motion estimation
Affine motion estimation is a technique used in computer vision and image processing to estimate the motion between two images or frames. It assumes that the motion can be modeled as an affine transformation: a linear transformation (rotation, scaling, shear) followed by a translation.
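A minimal least-squares sketch, assuming point correspondences between the two frames are already available (e.g. from feature matching); NumPy is assumed and the function name is our own:

import numpy as np

def estimate_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine matrix A so that dst ~ A @ [x, y, 1].

    src, dst: (N, 2) arrays of corresponding points; requires N >= 3
    non-collinear points for a unique solution.
    """
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                    # (N, 3) design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)   # (3, 2) least-squares solution
    return A.T                                    # (2, 3) affine matrix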
Applications
Video coding
Applying the motion vectors to an image to synthesize the transformation to the next image is called motion compensation. It is most easily applied to discrete cosine transform (DCT) based video coding standards, because the coding is performed in blocks.
As a way of exploiting temporal redundancy, motion estimation and compensation are key parts of video compression. Almost all video coding standards use block-based motion estimation and compensation such as the MPEG series including the most recent HEVC.
3D reconstruction
In simultaneous localization and mapping, a 3D model of a scene is reconstructed using images from a moving camera.
See also
Moving object detection
Graphics processing unit
Vision processing unit
Scale-invariant feature transform
References
Video processing
Motion (physics)
Motion in computer vision | Motion estimation | [
"Physics"
] | 910 | [
"Physical phenomena",
"Motion (physics)",
"Space",
"Mechanics",
"Motion in computer vision",
"Spacetime"
] |
1,536,956 | https://en.wikipedia.org/wiki/Lineworker | A lineworker (also called a lineman or powerline worker) constructs and maintains the electric transmission and distribution facilities that deliver electrical energy to industrial, commercial, and residential establishments. A lineworker installs, services, and emergency repairs electrical lines in the case of lightning, wind, ice storm, or ground disruptions. Whereas those who install and maintain electrical wiring inside buildings are electricians, lineworkers generally work at outdoor installations.
History
The occupation had begun in 1844 when the first telegraph wires were strung between Washington, D.C., and Baltimore carrying the famous message of Samuel Morse, "What hath God wrought?" The first telegraph station was built in Chicago in 1848, by 1861 a web of lines spanned the United States and in 1868 the first permanent telegraph cable was successfully laid across the Atlantic Ocean. Telegraph lines could be strung on trees, but wooden poles were quickly adopted as the preferred method. The term lineworker was used for those who set wooden poles and strung wire. The term continued in use with the invention of the telephone in the 1870s and the beginning of electrification in the 1890s.
This new electrical power work was more hazardous than telegraph or telephone work because of the risk of electrocution. Between the 1890s and the 1930s, line work was considered one of the most hazardous jobs. This led to the formation of labor organizations to represent the workers and advocate for their safety. This also led to the establishment of apprenticeship programs and the establishment of more stringent safety standards, starting in the late 1930s. The union movement in the United States was led by lineworker Henry Miller, who in 1890 was elected president of the Electrical Wiremen and Linemen's Union, No. 5221 of the American Federation of Labor.
United States
The rural electrification drive during the New Deal led to a wide expansion in the number of jobs in the electric power industry. Many powerline workers during that period traveled around the country following jobs as they became available in tower construction, substation construction, and wire stringing. They often lived in temporary camps set up near the project they were working on, or in boarding houses if the work was in a town or city, and relocating every few weeks or months. The occupation was lucrative at the time, but the hazards and the extensive travel limited its appeal.
A brief drive to electrify some railroads on the East Coast of the US led to the development of specialization of powerline workers who installed and maintained catenary overhead lines. Growth in this branch of linework declined after most railroads favored diesel over electric engines for replacement of steam engines.
The occupation evolved during the 1940s and 1950s with the expansion of residential electrification. This led to an increase in the number of powerline workers needed to maintain power distribution circuits and provide emergency repairs. Maintenance powerline workers mostly stayed in one place, although sometimes they were called to travel to assist repairs. During the 1950s, some electric lines began to be installed in tunnels, expanding the scope of the work.
Duties
Powerline workers work on electrically energized (live) and de-energized (dead) power lines. They may perform several tasks associated with power lines, including installation or replacement of distribution equipment such as capacitor banks, distribution transformers on poles, insulators and fuses. These duties include the use of ropes, knots, and lifting equipment. These tasks may have to be performed with primitive manual tools where accessibility is limited. Such conditions are common in rural or mountainous areas that are inaccessible to trucks.
High voltage transmission lines can be worked live with proper setups. The lineworker must be isolated from the ground. The lineworker wears special conductive clothing that is connected to the live power line, at which point the line and the lineworker are at the same potential, allowing the lineworker to handle the wire. The lineworker may still be electrocuted if he or she completes an electrical circuit, for example by handling both ends of a broken conductor. Such work is often done by helicopter by specially trained powerline workers. Isolated line work is only used for transmission-level voltages and sometimes for the higher distribution voltages. Live wire work is common on low voltage distribution systems within the UK and Australia as all linesmen are trained to work 'live'. Live wire work on high voltage distribution systems within the UK and Australia is carried out by specialist teams.
Training
Becoming a lineworker usually involves starting as an apprentice and a four-year training program before becoming a "Journey Lineworker". Apprentice powerline workers are trained in all types of work from operating equipment and climbing to proper techniques and safety standards. Schools throughout the United States offer a pre-apprentice lineworker training program such as Southeast Lineman Training Center and Northwest Lineman College.
Safety
Lineworkers, especially those who deal with live electrical apparatus, use personal protective equipment (PPE) as protection against inadvertent contact. This includes rubber gloves, rubber sleeves, bucket liners, and protective blankets.
When working with energized power lines, powerline workers must use protection to eliminate any contact with the energized line. The requirements for PPEs and associated permissible voltage depends on applicable regulations in the jurisdiction as well as company policy. Voltages higher than those that can be worked using gloves are worked with special sticks known as hot-line tools or hot sticks, with which power lines can be safely handled from a distance. Powerline workers must also wear special rubber insulating gear when working with live wires to protect against any accidental contact with the wire. The buckets powerline workers sometimes work from are also insulated with fiberglass.
De-energized power lines can be hazardous as they can still be energized from another source such as interconnection or interaction with another circuit even when they appear to be shut off. For example, a higher-voltage distribution level circuit may feed several lower-voltage distribution circuits through transformers. If the higher voltage circuit is de-energized, but if lower-voltage circuits connected remain energized, the higher voltage circuit will remain energized. Another problem can arise when de-energized wires become energized through electrostatic or electromagnetic induction from energized wires nearby.
All live line work PPE must be kept clean from contaminants and regularly tested for di-electric integrity. This is done by the use of high voltage electrical testing equipment.
Other general items of PPE such as helmets are usually replaced at regular intervals.
See also
Overhead cable
References
External links
Thomas M. Shoemaker and James E. Mack (2002). The Lineman's and Cableman's Handbook. Edwin B. Kurtz.
"How Linemen Handle Hot Wires And Stay Alive" , July 1949, Popular Science basics explained on lineman safety for the general public
Inter-Utility Overhead Trainers Association
http://fallenlinemen.org/
Construction trades workers
Crafts
Electric power
Skills | Lineworker | [
"Physics",
"Engineering"
] | 1,425 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
1,537,097 | https://en.wikipedia.org/wiki/Voltage-regulator%20tube | A voltage-regulator tube (VR tube) is an electronic component used as a shunt regulator to hold a voltage constant at a predetermined level.
Physically, these devices resemble vacuum tubes, but there are two main differences:
Their glass envelopes are filled with a gas mixture, and
They have a cold cathode; the cathode is not heated with a filament to emit electrons.
Electrically, these devices resemble Zener diodes, with the following major differences:
They rely on gas ionization, rather than Zener breakdown
The unregulated supply voltage must be 15–20% above the nominal output voltage to ensure that the discharge starts
The output can be higher than nominal if the current through the tube is too low.
When sufficient voltage is applied across the electrodes, the gas ionizes, forming a glow discharge around the cathode electrode. The VR tube then acts as a negative resistance device; as the current through the device increases, the amount of ionization also increases, reducing the resistance of the device to further current flow. In this way, the device conducts sufficient current to hold the voltage across its terminals to the desired value.
Because the device would conduct a nearly unlimited amount of current, there must be some external means of limiting the current. Usually, this is provided by an external resistor upstream from the VR tube. The VR tube then conducts any portion of the current that does not flow into the downstream load, maintaining an approximately constant voltage across the VR tube's electrodes. The VR tube's regulation voltage was only guaranteed when conducting an amount of current within the allowable range. In particular, if the current through the tube is too low to maintain ionization, the output voltage can rise above the nominal output—as far as the input supply voltage. If the current through the tube is too high, it can enter an arc discharge mode where the voltage will be significantly lower than nominal and the tube may be damaged.
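A back-of-the-envelope sizing of that series resistor, with purely illustrative numbers (the tube datasheet's permissible current range governs the real choice):

def series_resistor(v_in: float, v_reg: float, i_load_max: float, i_tube_min: float) -> float:
    """Series resistor for a shunt VR-tube regulator.

    The resistor must pass the maximum load current plus the tube's minimum
    sustaining current while dropping v_in - v_reg across itself.
    """
    return (v_in - v_reg) / (i_load_max + i_tube_min)

# e.g. an 0C3 (105 V) fed from a 250 V supply with a 20 mA worst-case load and
# a 5 mA minimum tube current gives roughly 5.8 kOhm; one must then check that
# the tube current stays within its rated maximum when the load current drops.
R = series_resistor(250.0, 105.0, 0.020, 0.005)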
Some voltage-regulator tubes contained small amounts of radionuclides to produce a more reliable ionization.
The Corona VR tube is a high-voltage version that is filled with hydrogen at close to atmospheric pressure, and is designed for voltages ranging from 400 V to 30 kV at tens of microamperes. It has a coaxial form; the outer cylindrical electrode is the cathode and the inner one is the anode. The voltage stability depends on the gas pressure.
A successful hydrogen voltage regulator tube, from 1925, was the Raytheon tube, which allowed radios of the time to be operated from AC power instead of batteries.
Specific models
In America, VR tubes were given RETMA tube part numbers. Lacking a heater (filament), the tube's part numbers began with "0" (zero).
In Europe, VR tubes were given part numbers under the professional system ("ZZ1xxx") and under a dedicated system.
In the USSR, glow-discharge stabilitrons were given Cyrillic designations with a serial number of development, for example "СГ21Б" and "СГ204К".
VR tubes were only available in certain voltages. Common models were:
Octal-based tubes, 5–40 mA current:
0A3 – 75 volts
0B3 – 90 volts
0C3 – 105 volts (best regulation of these four)
0D3 – 150 volts
Miniature tubes, 5–30 mA current:
0A2 – 150 volts
0B2 – 108 volts (best regulation of these three)
0C2 – 72 volts
Miniature tubes, 1–10 mA current:
85A2 – 85 volts (equivalents: 0G3, CV449, CV4048, QS83/3, QS1209)
Voltage reference 1.5–3.0 mA current:
5651 – 87 volts (the most popular voltage reference ever made)
5651A – 85.5 volts
Subminiature tubes:
Various models such as the 991 that resembled neon lamps, but were optimized for more-accurate voltage regulation
Miniature corona tubes, 5–55 μA current:
CK1022 1 kV
Wire-ended, subminiature corona tubes:
CK1037 (6437) 700 volts, 5–125 μA
CK1038 900 volts, 5–55 μA
CK1039 (6438) 1.2 kV, 5–125 μA
Design considerations
Some voltage regulator tubes have an internal jumper connected between two of the pins. This jumper could be used in series with the secondary transformer winding. Then, if the tube was removed, rather than leaving the voltage unregulated, the output would turn off.
Because the glow discharge is a "statistical" process, a certain amount of electrical noise is introduced into the regulated voltage as the level of ionization varies. In most cases, this can be easily filtered out by placing a small capacitor in parallel with the VR tube or using an RC decoupling network downstream of the VR tube. Too large a capacitance (>0.1 μF for an 0D3, for instance), however, and the circuit will form a relaxation oscillator, definitely ruining the voltage regulation and possibly causing the tube to fail catastrophically.
VR tubes can be operated in series for greater voltage ranges. They cannot be operated in parallel: because of manufacturing variations, the current would not be shared equally among several tubes in parallel. (Note the equivalent behavior with series and parallel connected Zener diodes.)
In the present day, VR tubes have been almost-entirely supplanted by solid state regulators based on Zener diodes and avalanche breakdown diodes.
VR tube information
Correctly operating VR tubes glow during normal operation. The color of the glow varies depending upon the gas mixture used to fill the tubes.
Though they lack a heater, VR tubes often do become warm during operation due to the current and voltage drop through them.
References
Electrical breakdown
Vacuum tubes
Tube | Voltage-regulator tube | [
"Physics"
] | 1,244 | [
"Physical phenomena",
"Physical quantities",
"Voltage regulation",
"Vacuum tubes",
"Vacuum",
"Electrical phenomena",
"Electrical breakdown",
"Voltage",
"Matter"
] |
1,537,176 | https://en.wikipedia.org/wiki/Intercellular%20adhesion%20molecule | In molecular biology, intercellular adhesion molecules (ICAMs) and vascular cell adhesion molecule-1 (VCAM-1) are part of the immunoglobulin superfamily. They are important in inflammation, immune responses and in intracellular signalling events. The ICAM family consists of five members, designated ICAM-1 to ICAM-5. They are known to bind to leucocyte integrins CD11/CD18 such as LFA-1 and Macrophage-1 antigen, during inflammation and in immune responses. In addition, ICAMs may exist in soluble forms in human plasma, due to activation and proteolysis mechanisms at cell surfaces.
Mammalian intercellular adhesion molecules include:
ICAM-1
ICAM2
ICAM3
ICAM4
ICAM5
References
Cell biology
Protein families | Intercellular adhesion molecule | [
"Chemistry",
"Biology"
] | 172 | [
"Cell biology",
"Biotechnology stubs",
"Protein classification",
"Biochemistry stubs",
"Biochemistry",
"Protein families"
] |
1,537,546 | https://en.wikipedia.org/wiki/Redevelopment | Redevelopment is any new construction on a site that has pre-existing uses. It represents a process of land development uses to revitalize the physical, economic and social fabric of urban space.
Description
Variations on redevelopment include:
Urban infill on vacant parcels that have no existing activity but were previously developed, especially on brownfield land, such as the redevelopment of an industrial site into a mixed-use development.
Constructing with a denser land usage, such as the redevelopment of a block of townhouses into a large apartment building.
Adaptive reuse, where older structures are converted for improved current market use, such as an industrial mill into housing lofts.
Redevelopment projects can be small or large ranging from a single building to entire new neighborhoods or "new town in town" projects.
Redevelopment also refers to state and federal statutes which give cities and counties the authority to establish redevelopment agencies and give the agencies the authority to attack problems of urban decay. The fundamental tools of a redevelopment agency include the authority to acquire real property, the power of eminent domain, to develop and sell property without bidding and the authority and responsibility of relocating persons who have interests in the property acquired by the agency. The financing/funding of such operations might come from government grants, borrowing from federal or state governments and selling bonds and from tax increment financing.
Other terms sometimes used to describe redevelopment include urban renewal (urban revitalization). While efforts described as urban revitalization often involve redevelopment, they do not always involve redevelopment as they do not always involve the demolition of any existing structures but may instead describe the rehabilitation of existing buildings or other neighborhood improvement initiatives.
A newer example of such neighborhood improvement initiatives is the funding of measures against urban blight associated with poor air quality and high carbon footprints. Assembly Bill AB811 is the State of California's answer to funding renewable energy and allows cities to craft their own sustainability action plans. These action plans need a funding structure, which can readily come through redevelopment funding.
Urban renewal
Some redevelopment projects and programs have been incredibly controversial including the Urban Renewal program in the United States in the mid-twentieth century or the urban regeneration program in Great Britain. Controversy usually results either from the use of eminent domain, from objections to the change in use or increases in density and intensity on the site or from disagreement on the appropriate use of taxpayer funds to pay for some element of the project.
Urban redevelopment in the United States has been controversial because it can displace poor and lower middle class residents, often transferring residents' land and homes to developers for free or a below-market-value price. This is done on the condition that the developer will use that land to construct new commercial and residential developments.
The residents displaced by redevelopment are often undercompensated, and some (notably month-to-month tenants and business owners) are not compensated at all. Historically, redevelopment agencies have been buying many properties in redevelopment areas for prices below fair market value, or even below the agencies' own appraisal figures because the displaced people are often unaware of their legal rights and lack the will and the funds to mount a proper legal defense in a valuation trial. Those who do so usually recover more in compensation than what is offered by the redevelopment agencies.
The controversy over misuse of eminent domain for redevelopment reached a climax in the wake of the U.S. Supreme Court's 2005 decision in Kelo v. City of New London, which ruled that the general benefits a community enjoyed from economic growth qualified private redevelopment plans as a permissible "public use" under the Takings Clause of the Fifth Amendment. The Kelo decision was widely denounced and remains the subject of severe criticism. Remedial legislation to restrict the use of eminent domain for private development has been enacted or introduced in a number of states.
Golf course redevelopment
Golf course redevelopment, also known as golf course conversion, is a real estate niche in which investors purchase failing golf courses and subdivide them into individual plots of land. They then resell the plots to builders, or build on the plots themselves and sell the finished homes to residential buyers. This process is usually done with the assistance of a real estate broker.
The main challenge of this niche is the difficulty investors face in obtaining a variance from cities.
Notable examples
North America:
Atlantic Station, Atlanta, Georgia
Atlantic Yards, Brooklyn, New York
American Tobacco Historic District, Durham, North Carolina
CFB Downsview -> Downsview Park, Toronto, Ontario
CFB Griesbach -> Griesbach, Edmonton, Alberta
CN rail yard -> Station Lands (Edmonton), MacEwan University, Edmonton downtown arena; Edmonton, Alberta
Edmonton City Centre (Blatchford Field) Airport -> Blatchford, Edmonton, Alberta
HOPE VI
Hudson Yards, New York, New York
Lincoln Center for the Performing Arts, New York, New York
Midtown Detroit, Michigan
Mission Bay, Treasure Island, Western Addition, and the part of South of Market that become Moscone Center and Yerba Buena Gardens in San Francisco, California
Pearl District, Portland, Oregon
Old Port of Montreal, Quebec
Downtown San Diego, California
Central Park, Denver, Colorado, on a former airport site
Toronto Waterfront, Toronto, Canada
West End, Boston, Massachusetts
World Trade Center site in Lower Manhattan following the September 11 attacks
Europe:
Canary Wharf, London (UK)
Edinburgh Waterfront, UK
Redevelopment of Norrmalm (Sweden)
Liverpool One, Liverpool (UK)
Greenwich Millennium Village, London (UK)
Tigné Point, Sliema (Malta)
Porta Nuova, Milan (Italy)
Asia:
Taichung's seventh Redevelopment Zone, Taichung, Taiwan
Beijing Olympic Village, Beijing, China
Sheung Wan, Hong Kong, China
Central America:
Panama in Casco Antiguo (Casco Viejo)
See also
References
Urban decay | Redevelopment | [
"Engineering"
] | 1,180 | [
"Construction",
"Redevelopment"
] |
1,538,007 | https://en.wikipedia.org/wiki/Matrix%20chain%20multiplication | Matrix chain multiplication (or the matrix chain ordering problem) is an optimization problem concerning the most efficient way to multiply a given sequence of matrices. The problem is not actually to perform the multiplications, but merely to decide the sequence of the matrix multiplications involved. The problem may be solved using dynamic programming.
There are many options because matrix multiplication is associative. In other words, no matter how the product is parenthesized, the result obtained will remain the same. For example, for four matrices A, B, C, and D, there are five possible options:
((AB)C)D = (A(BC))D = (AB)(CD) = A((BC)D) = A(B(CD)).
Although it does not affect the product, the order in which the terms are parenthesized affects the number of simple arithmetic operations needed to compute the product, that is, the computational complexity. The straightforward multiplication of a matrix that is X × Y by a matrix that is Y × Z requires XYZ ordinary multiplications and X(Y − 1)Z ordinary additions. In this context, it is typical to use the number of ordinary multiplications as a measure of the runtime complexity.
If A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix, then
computing (AB)C needs (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations, while
computing A(BC) needs (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations.
Clearly the first method is more efficient. With this information, the problem statement can be refined as "how to determine the optimal parenthesization of a product of n matrices?" The number of possible parenthesizations is given by the (n−1)-th Catalan number, which is O(4^n / n^(3/2)), so checking each possible parenthesization (brute force) would require a run-time that is exponential in the number of matrices, which is very slow and impractical for large n. A quicker solution to this problem can be achieved by breaking up the problem into a set of related subproblems.
A dynamic programming algorithm
To begin, let us assume that all we really want to know is the minimum cost, or minimum number of arithmetic operations needed to multiply out the matrices. If we are only multiplying two matrices, there is only one way to multiply them, so the minimum cost is the cost of doing this. In general, we can find the minimum cost using the following recursive algorithm:
Take the sequence of matrices and separate it into two subsequences.
Find the minimum cost of multiplying out each subsequence.
Add these costs together, and add in the cost of multiplying the two result matrices.
Do this for each possible position at which the sequence of matrices can be split, and take the minimum over all of them.
For example, if we have four matrices ABCD, we compute the cost required to find each of (A)(BCD), (AB)(CD), and (ABC)(D), making recursive calls to find the minimum cost to compute ABC, AB, CD, and BCD. We then choose the best one. Better still, this yields not only the minimum cost, but also demonstrates the best way of doing the multiplication: group it the way that yields the lowest total cost, and do the same for each factor.
However, this algorithm has exponential runtime complexity making it as inefficient as the naive approach of trying all permutations. The reason is that the algorithm does a lot of redundant work. For example, above we made a recursive call to find the best cost for computing both ABC and AB. But finding the best cost for computing ABC also requires finding the best cost for AB. As the recursion grows deeper, more and more of this type of unnecessary repetition occurs.
One simple solution is called memoization: each time we compute the minimum cost needed to multiply out a specific subsequence, we save it. If we are ever asked to compute it again, we simply give the saved answer, and do not recompute it. Since there are about n^2/2 different subsequences, where n is the number of matrices, the space required to do this is reasonable. It can be shown that this simple trick brings the runtime down to O(n^3) from O(2^n), which is more than efficient enough for real applications. This is top-down dynamic programming.
The following bottom-up approach computes, for each 2 ≤ k ≤ n, the minimum costs of all subsequences of length k using the costs of smaller subsequences already computed.
It has the same asymptotic runtime and requires no recursion.
Pseudocode:
// Matrix A[i] has dimension dims[i-1] x dims[i] for i = 1..n
MatrixChainOrder(int dims[])
{
// length[dims] = n + 1
n = dims.length - 1;
// m[i,j] = Minimum number of scalar multiplications (i.e., cost)
// needed to compute the matrix A[i]A[i+1]...A[j] = A[i..j]
// The cost is zero when multiplying one matrix
for (i = 1; i <= n; i++)
m[i, i] = 0;
for (len = 2; len <= n; len++) { // Subsequence lengths
for (i = 1; i <= n - len + 1; i++) {
j = i + len - 1;
m[i, j] = MAXINT;
for (k = i; k <= j - 1; k++) {
cost = m[i, k] + m[k+1, j] + dims[i-1]*dims[k]*dims[j];
if (cost < m[i, j]) {
m[i, j] = cost;
s[i, j] = k; // Index of the subsequence split that achieved minimal cost
}
}
}
}
}
Note: The first index for dims is 0 and the first index for m and s is 1.
A Python implementation using the memoization decorator from the standard library:
from functools import cache
def matrixChainOrder(dims: list[int]) -> int:
    @cache
    def a(i, j):
        # Minimum cost of multiplying matrices i+1 .. j, where matrix p
        # has dimensions dims[p-1] x dims[p]; a single matrix costs zero.
        return min((a(i, k) + dims[i] * dims[k] * dims[j] + a(k, j)
                    for k in range(i + 1, j)), default=0)
    return a(0, len(dims) - 1)
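The same table of split points also yields the optimal grouping itself. A self-contained sketch (helper names are our own) that mirrors the bottom-up pseudocode above and prints the parenthesization:

def matrix_chain_parens(dims: list[int]) -> str:
    """Return an optimal parenthesization such as '((A1A2)A3)'."""
    n = len(dims) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]  # m[i][j]: minimal cost of A[i..j]
    s = [[0] * (n + 1) for _ in range(n + 1)]  # s[i][j]: split achieving that cost
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k

    def build(i: int, j: int) -> str:
        if i == j:
            return f"A{i}"
        return "(" + build(i, s[i][j]) + build(s[i][j] + 1, j) + ")"

    return build(1, n)

# matrix_chain_parens([10, 30, 5, 60]) -> '((A1A2)A3)', the 4500-multiplication grouping.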
More efficient algorithms
There are algorithms that are more efficient than the O(n^3) dynamic programming algorithm, though they are more complex.
Hu & Shing
An algorithm published by T. C. Hu and M.-T. Shing achieves O(n log n) computational complexity.
They showed how the matrix chain multiplication problem can be transformed (or reduced) into the problem of triangulation of a regular polygon. The polygon is oriented such that there is a horizontal bottom side, called the base, which represents the final result. The other n sides of the polygon, in the clockwise direction, represent the matrices. The vertices on each end of a side are the dimensions of the matrix represented by that side. With n matrices in the multiplication chain there are n−1 binary operations and C_(n−1) ways of placing parentheses, where C_(n−1) is the (n−1)-th Catalan number. The algorithm exploits that there are also C_(n−1) possible triangulations of a polygon with n+1 sides.
(Figure: the possible triangulations of a regular hexagon, corresponding to the different ways that parentheses can be placed to order the multiplications for a product of 5 matrices.)
For the example below, there are four sides: A, B, C, and the final result ABC. A is a 10×30 matrix, B is a 30×5 matrix, C is a 5×60 matrix, and the final result is a 10×60 matrix. The regular polygon for this example is a 4-gon, i.e. a square.
The matrix product AB is a 10×5 matrix and BC is a 30×60 matrix. The two possible triangulations in this example correspond to the two parenthesizations (AB)C and A(BC).
The cost of a single triangle in terms of the number of multiplications needed is the product of its vertices. The total cost of a particular triangulation of the polygon is the sum of the costs of all its triangles:
(AB)C: (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 multiplications
A(BC): (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 multiplications
Hu & Shing developed an algorithm that finds an optimum solution for the minimum cost partition problem in O(n log n) time. Their proof of correctness of the algorithm relies on "Lemma 1" proved in a 1981 technical report and omitted from the published paper. The technical report's proof of the lemma is incorrect, but Shing has presented a corrected proof.
Other O(n log n) algorithms
Wang, Zhu and Tian have published a simplified O(n log m) algorithm, where n is the number of matrices in the chain and m is the number of local minimums in the dimension sequence of the given matrix chain.
Nimbark, Gohel, and Doshi have published a greedy O(n log n) algorithm, but their proof of optimality is incorrect and their algorithm fails to produce the most efficient parentheses assignment for some matrix chains.
Chin-Hu-Shing approximate solution
An algorithm created independently by Chin and Hu & Shing runs in O(n) and produces a parenthesization which is at most 15.47% worse than the optimal choice. In most cases the algorithm yields the optimal solution or a solution which is only 1-2 percent worse than the optimal one.
The algorithm starts by translating the problem to the polygon partitioning problem. To each vertex $V$ of the polygon is associated a weight $w$. Suppose we have three consecutive vertices $V_{i-1}$, $V_i$, $V_{i+1}$, and that $V_{\min}$ is the vertex with minimum weight $w_{\min}$.
We look at the quadrilateral with vertices $V_{\min}, V_{i-1}, V_i, V_{i+1}$ (in clockwise order).
We can triangulate it in two ways:
$(V_{\min}, V_{i-1}, V_i)$ and $(V_{\min}, V_i, V_{i+1})$, with cost $w_{\min} w_{i-1} w_i + w_{\min} w_i w_{i+1}$
$(V_{\min}, V_{i-1}, V_{i+1})$ and $(V_{i-1}, V_i, V_{i+1})$, with cost $w_{\min} w_{i-1} w_{i+1} + w_{i-1} w_i w_{i+1}$.
Therefore, if
$$w_{\min} w_{i-1} w_{i+1} + w_{i-1} w_i w_{i+1} < w_{\min} w_{i-1} w_i + w_{\min} w_i w_{i+1}$$
or equivalently
$$\frac{1}{w_i} + \frac{1}{w_{\min}} < \frac{1}{w_{i-1}} + \frac{1}{w_{i+1}}$$
we remove the vertex $V_i$ from the polygon and add the side $V_{i-1} V_{i+1}$ to the triangulation.
We repeat this process until no $V_i$ satisfies the condition above.
For all the remaining vertices $V_i$, we add the side $V_{\min} V_i$ to the triangulation.
This gives us a nearly optimal triangulation.
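A direct rendering of this heuristic can be written in a few lines. The sketch below is a naive O(n^2) implementation (repeated linear scans rather than the linear-time bookkeeping of the published algorithm), with a function name of our own; it returns the cost of the near-optimal triangulation:

def chin_hu_shing_cost(dims: list[int]) -> int:
    """Approximate matrix-chain cost via the polygon-partitioning heuristic.

    dims are the polygon vertex weights; matrix i is dims[i-1] x dims[i].
    """
    w = list(dims)  # current polygon, in circular order
    cost = 0
    removed = True
    while removed and len(w) > 3:
        removed = False
        w_min = min(w)
        for i in range(len(w)):
            prev, nxt = w[i - 1], w[(i + 1) % len(w)]
            if w[i] != w_min and 1 / w[i] + 1 / w_min < 1 / prev + 1 / nxt:
                cost += prev * w[i] * nxt  # triangle (V_{i-1}, V_i, V_{i+1})
                del w[i]
                removed = True
                break
    # Fan the remaining polygon out from the minimum-weight vertex.
    m = w.index(min(w))
    rest = w[m + 1:] + w[:m]
    for a, b in zip(rest, rest[1:]):
        cost += w[m] * a * b
    return cost

# chin_hu_shing_cost([10, 30, 5, 60]) -> 4500, which happens to be optimal here.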
Generalizations
The matrix chain multiplication problem generalizes to solving a more abstract problem: given a linear sequence of objects, an associative binary operation on those objects, and a way to compute the cost of performing that operation on any two given objects (as well as all partial results), compute the minimum cost way to group the objects to apply the operation over the sequence. A practical instance of this comes from the ordering of join operations in databases.
Another somewhat contrived special case of this is string concatenation of a list of strings. In C, for example, the cost of concatenating two strings of length m and n using strcat is O(m + n), since we need O(m) time to find the end of the first string and O(n) time to copy the second string onto the end of it. Using this cost function, we can write a dynamic programming algorithm to find the fastest way to concatenate a sequence of strings. However, this optimization is rather useless because we can straightforwardly concatenate the strings in time proportional to the sum of their lengths. A similar problem exists for singly linked lists.
Another generalization is to solve the problem when parallel processors are available. In this case, instead of adding the costs of computing each factor of a matrix product, we take the maximum because we can do them simultaneously. This can drastically affect both the minimum cost and the final optimal grouping; more "balanced" groupings that keep all the processors busy are favored. There are even more sophisticated approaches.
See also
Associahedron
Tamari lattice
References
Optimization algorithms and methods
Matrices
Dynamic programming
Articles with example Python (programming language) code | Matrix chain multiplication | [
"Mathematics"
] | 2,680 | [
"Matrices (mathematics)",
"Mathematical objects"
] |
1,539,042 | https://en.wikipedia.org/wiki/Syntactic%20foam | Syntactic foams are composite materials synthesized by filling a metal, polymer, cementitious or ceramic matrix with hollow spheres called microballoons or cenospheres or non-hollow spheres (e.g. perlite) as aggregates. In this context, "syntactic" means "put together." The presence of hollow particles results in lower density, higher specific strength (strength divided by density), lower coefficient of thermal expansion, and, in some cases, radar or sonar transparency.
History
The term was originally coined by the Bakelite Company, in 1955, for their lightweight composites made of hollow phenolic microspheres bonded to a matrix of phenolic, epoxy, or polyester.
These materials were developed in the early 1960s as improved buoyancy materials for marine applications. Other characteristics led these materials to aerospace and ground transportation vehicle applications.
Research on syntactic foams has recently been advanced by Nikhil Gupta.
Characteristics
Tailorability is one of the biggest advantages of these materials. The matrix material can be selected from almost any metal, polymer, or ceramic. Microballoons are available in a variety of sizes and materials, including glass microspheres, cenospheres, carbon, and polymers. The most widely used and studied foams are glass microspheres (in epoxy or polymers), and cenospheres or ceramics (in aluminium). One can change the volume fraction of microballoons or use microballoons of different effective density, the latter depending on the average ratio between the inner and outer radii of the microballoons.
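A minimal sketch of how the effective density follows from the two quantities just mentioned, the microballoon radius ratio and the filler volume fraction, using purely illustrative property values:

def microballoon_density(rho_wall: float, radius_ratio: float) -> float:
    """Effective density of a hollow sphere: the wall material occupies
    a fraction 1 - radius_ratio**3 of the sphere's volume."""
    return rho_wall * (1.0 - radius_ratio ** 3)

def foam_density(rho_matrix: float, rho_balloon: float, phi: float) -> float:
    """Rule-of-mixtures density for a microballoon volume fraction phi."""
    return (1.0 - phi) * rho_matrix + phi * rho_balloon

# e.g. glass walls (~2500 kg/m^3) with r_i/r_o = 0.95 give ~357 kg/m^3 balloons;
# at 50% volume fraction in an epoxy matrix (~1200 kg/m^3) the syntactic foam
# comes out near 780 kg/m^3.
rho_foam = foam_density(1200.0, microballoon_density(2500.0, 0.95), 0.5)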
A manufacturing method for low density syntactic foams is based on the principle of buoyancy.
Strength
The compressive properties of syntactic foams, in most cases, strongly depend on the properties of the filler particle material. In general, the compressive strength of the material is proportional to its density. Cementitious syntactic foams are reported to achieve compressive strength values greater than while maintaining densities lower than .
The matrix material has more influence on the tensile properties. Tensile strength may be highly improved by a chemical surface treatment of the particles, such as silanization, which allows the formation of strong bonds between glass particles and epoxy matrix. Addition of fibrous materials can also increase the tensile strength.
Applications
Current applications for syntactic foam include buoyancy modules for marine riser tensioners, remotely operated underwater vehicles (ROVs), autonomous underwater vehicles (AUVs), deep-sea exploration, boat hulls, and helicopter and airplane components.
Cementitious syntactic foams have also been investigated as a potential lightweight structural composite material. These materials include glass microspheres dispersed in a cement paste matrix to achieve a closed cell foam structure, instead of a metallic or a polymeric matrix. Cementitious syntactic foams have also been tested for their mechanical performance under high strain rate loading conditions to evaluate their energy dissipation capacity in crash cushions, blast walls, etc. Under these loading conditions, the glass microspheres of the cementitious syntactic foams did not show progressive crushing. Ultimately, unlike the polymeric and metallic syntactic foams, they did not emerge as suitable materials for energy dissipation applications. Structural applications of syntactic foams include use as the intermediate layer (that is, the core) of sandwich panels.
Though the cementitious syntactic foams demonstrate superior specific strength values in comparison to most conventional cementitious materials, it is challenging to manufacture them. Generally, the hollow inclusions tend to buoy and segregate in the low shear strength and high-density fresh cement paste. Therefore, maintaining a uniform microstructure across the material must be achieved through a strict control of the composite rheology. In addition, certain glass types of microspheres may lead to an alkali silica reaction. Therefore, the adverse effects of this reaction must be considered and addressed to ensure the long-term durability of these composites.
Other applications include;
Deep-sea buoyancy foams. A method of creating submarine hulls by 3D printing was developed in 2018.
Thermoforming plug assist
Radar transparent materials
Acoustically attenuating materials
Cores for sandwich composites
Blast mitigating materials
Sporting goods such as bowling balls, tennis rackets, and soccer balls.
References
External links
Composite materials
Foams
Materials science | Syntactic foam | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 918 | [
"Applied and interdisciplinary physics",
"Foams",
"Composite materials",
"Materials science",
"Materials",
"nan",
"Matter"
] |
1,539,324 | https://en.wikipedia.org/wiki/FIPS%20140 | The 140 series of Federal Information Processing Standards (FIPS) are U.S. government computer security standards that specify requirements for cryptographic modules.
FIPS 140-2 and FIPS 140-3 are both accepted as current and active. FIPS 140-3 was approved on March 22, 2019 as the successor to FIPS 140-2 and became effective on September 22, 2019. FIPS 140-3 testing began on September 22, 2020, and a small number of validation certificates have been issued. FIPS 140-2 testing is still available until September 21, 2021 (later changed for applications already in progress to April 1, 2022), creating an overlapping transition period of one year. FIPS 140-2 test reports that remain in the CMVP queue will still be granted validations after that date, but all FIPS 140-2 validations will be moved to the Historical List on September 21, 2026 regardless of their actual final validation date.
Purpose of FIPS 140
The National Institute of Standards and Technology (NIST) issues the 140 Publication Series to coordinate the requirements and standards for cryptographic modules which include both hardware and software components for use by departments and agencies of the United States federal government. FIPS 140 does not purport to provide sufficient conditions to guarantee that a module conforming to its requirements is secure, still less that a system built using such modules is secure. The requirements cover not only the cryptographic modules themselves but also their documentation and (at the highest security level) some aspects of the comments contained in the source code.
User agencies desiring to implement cryptographic modules should confirm that the module they are using is covered by an existing validation certificate. FIPS 140-1 and FIPS 140-2 validation certificates specify the exact module name, hardware, software, firmware, and/or applet version numbers. For Levels 2 and higher, the operating platform upon which the validation is applicable is also listed. Vendors do not always maintain their baseline validations.
The Cryptographic Module Validation Program (CMVP) is operated jointly by the United States Government's National Institute of Standards and Technology (NIST) Computer Security Division and the Communications Security Establishment (CSE) of the Government of Canada. The use of validated cryptographic modules is required by the United States Government for all unclassified uses of cryptography. The Government of Canada also recommends the use of FIPS 140 validated cryptographic modules in unclassified applications of its departments.
Security levels
FIPS 140-2 defines four levels of security, simply named "Level 1" to "Level 4". It does not specify in detail what level of security is required by any particular application.
FIPS 140-2 Level 1, the lowest, imposes very limited requirements; loosely, all components must be "production-grade" and various egregious kinds of insecurity must be absent.
FIPS 140-2 Level 2 adds requirements for physical tamper-evidence and role-based authentication.
FIPS 140-2 Level 3 adds requirements for physical tamper-resistance (making it difficult for attackers to gain access to sensitive information contained in the module) and identity-based authentication, and for a physical or logical separation between the interfaces by which "critical security parameters" enter and leave the module, and its other interfaces.
FIPS 140-2 Level 4 makes the physical security requirements more stringent, and requires robustness against environmental attacks.
In addition to the specified levels, Section 4.1.1 of the specification describes additional attacks that may require mitigation, such as differential power analysis. If a product contains countermeasures against these attacks, they must be documented and tested, but protections are not required to achieve a given level. Thus, a criticism of FIPS 140-2 is that the standard gives a false sense of security at Levels 2 and above because the standard implies that modules will be tamper-evident and/or tamper-resistant, yet modules are permitted to have side channel vulnerabilities that allow simple extraction of keys.
Scope of requirements
FIPS 140 imposes requirements in eleven different areas:
Cryptographic module specification (what must be documented)
Cryptographic module ports and interfaces (what information flows in and out, and how it must be segregated)
Roles, services and authentication (who can do what with the module, and how this is checked)
Finite state model (documentation of the high-level states the module can be in, and how transitions occur)
Physical security (tamper evidence and resistance, and robustness against extreme environmental conditions)
Operational environment (what sort of operating system the module uses and is used by)
Cryptographic key management (generation, entry, output, storage and destruction of keys)
EMI/EMC
Self-tests (what must be tested and when, and what must be done if a test fails)
Design assurance (what documentation must be provided to demonstrate that the module has been well designed and implemented)
Mitigation of other attacks (if a module is designed to mitigate against, say, TEMPEST attacks then its documentation must say how)
Brief history
FIPS 140-1, issued on 11 January 1994 and withdrawn on May 25, 2002, was developed by a government and industry working group, composed of vendors and users of cryptographic equipment. The group identified the four "security levels" and eleven "requirement areas" listed above, and specified requirements for each area at each level.
FIPS 140-2, issued on 25 May 2001, takes account of changes in available technology and official standards since 1994, and of comments received from the vendor, tester, and user communities. It was the main input document to the international standard ISO/IEC 19790:2006 Security requirements for cryptographic modules issued on 1 March 2006. NIST issued Special Publication 800-29 outlining the significant changes from FIPS 140-1 to FIPS 140-2.
FIPS 140-3, issued on 22 March 2019 and announced in May 2019, is currently in the overlapping transition period to supersede FIPS 140-2 and aligns the NIST guidance around two international standards documents: ISO/IEC 19790:2012(E) Information technology — Security techniques — Security requirements for cryptographic modules and ISO/IEC 24759:2017(E) Information technology — Security techniques — Test requirements for cryptographic modules. In the first draft version of the FIPS 140-3 standard, NIST introduced a new software security section, one additional level of assurance (Level 5) and new Simple Power Analysis (SPA) and Differential Power Analysis (DPA) requirements. The draft issued on 11 Sep 2009, however, reverted to four security levels and limited the security levels of software to Levels 1 and 2.
Criticism
Due to the way in which the validation process is set up, a software vendor is required to re-validate their FIPS-validated module for every change, no matter how small, to the software; this re-validation is required even for obvious bug or security fixes. Since validation is an expensive process, this gives software vendors an incentive to postpone changes to their software and can result in software that does not receive security updates until the next validation. The result may be that validated software is less safe than a non-validated equivalent.
This criticism has been countered more recently by some industry experts who instead put the responsibility on the vendor to narrow their validation boundary. As most of the re-validation efforts are triggered by bugs and security fixes outside the core cryptographic operations, a properly scoped validation is not subject to the common re-validation as described.
See also
Common Criteria
FIPS 140-2
FIPS 140-3
ISO/IEC 19790
:Category: Computer security standards
:Category: Cryptography standards
References
External links
Computer security standards
Cryptography standards
Standards of the United States | FIPS 140 | [
"Technology",
"Engineering"
] | 1,596 | [
"Computer security standards",
"Computer standards",
"Cybersecurity engineering"
] |
1,539,804 | https://en.wikipedia.org/wiki/Sheet%20resistance | Sheet resistance is the resistance of a square piece of a thin material with contacts made to two opposite sides of the square. It is usually a measurement of electrical resistance of thin films that are uniform in thickness. It is commonly used to characterize materials made by semiconductor doping, metal deposition, resistive paste printing, and glass coating. Examples of these processes are: doped semiconductor regions (e.g., silicon or polysilicon), and the resistors that are screen printed onto the substrates of thick-film hybrid microcircuits.
The utility of sheet resistance as opposed to resistance or resistivity is that it is directly measured using a four-terminal sensing measurement (also known as a four-point probe measurement) or indirectly by using a non-contact eddy-current-based testing device. Sheet resistance is invariable under scaling of the film contact and therefore can be used to compare the electrical properties of devices that are significantly different in size.
Calculations
Sheet resistance is applicable to two-dimensional systems in which thin films are considered two-dimensional entities. When the term sheet resistance is used, it is implied that the current is along the plane of the sheet, not perpendicular to it.
In a regular three-dimensional conductor, the resistance can be written as
$$R = \rho \frac{L}{A} = \rho \frac{L}{W t},$$
where
$\rho$ is the material resistivity,
$L$ is the length,
$A$ is the cross-sectional area, which can be split into:
width $W$,
thickness $t$.
Upon combining the resistivity with the thickness, the resistance can then be written as
$$R = \frac{\rho}{t}\,\frac{L}{W} = R_s \frac{L}{W},$$
where $R_s$ is the sheet resistance. If the film thickness is known, the bulk resistivity $\rho$ (in Ω·m) can be calculated by multiplying the sheet resistance by the film thickness in m:
$$\rho = R_s \cdot t.$$
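These relations translate directly into a short routine. The sketch below is a minimal illustration using arbitrary example values; the 21 Ω/sq strip mirrors the worked example given in the Units section.

```python
# Illustrative sheet-resistance arithmetic (example values are arbitrary).

def film_resistance(r_sheet_ohm_sq: float, length_m: float, width_m: float) -> float:
    """R = R_s * (L / W): sheet resistance times the number of squares."""
    return r_sheet_ohm_sq * length_m / width_m

def bulk_resistivity(r_sheet_ohm_sq: float, thickness_m: float) -> float:
    """rho = R_s * t, in ohm-metres for a film of known thickness."""
    return r_sheet_ohm_sq * thickness_m

# A 3 mm x 1 mm strip (three unit squares in series) of a 21 ohm/sq film:
print(film_resistance(21.0, 3e-3, 1e-3))   # 63.0 ohms
# Bulk resistivity of that film if it is 100 nm thick:
print(bulk_resistivity(21.0, 100e-9))      # 2.1e-06 ohm*m
```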
Units
Sheet resistance is a special case of resistivity for a uniform sheet thickness. Commonly, resistivity (also known as bulk resistivity, specific electrical resistivity, or volume resistivity) is in units of Ω·m, which is more completely stated in units of Ω·m²/m (Ω·area/length). When divided by the sheet thickness (m), the units are Ω·m·(m/m)/m = Ω. The term "(m/m)" cancels, but represents a special "square" situation yielding an answer in ohms. An alternative, common unit is "ohms square" (denoted "Ω◻") or "ohms per square" (denoted "Ω/sq" or "Ω/◻"), which is dimensionally equal to an ohm, but is exclusively used for sheet resistance. This is an advantage, because sheet resistance of 1 Ω could be taken out of context and misinterpreted as bulk resistance of 1 ohm, whereas sheet resistance of 1 Ω/sq cannot thus be misinterpreted.
The reason for the name "ohms per square" is that a square sheet with sheet resistance 10 ohm/square has an actual resistance of 10 ohm, regardless of the size of the square. (For a square, L = W, so R = R_s.) The unit can be thought of as, loosely, "ohms · aspect ratio". Example: A 3-unit long by 1-unit wide (aspect ratio = 3) sheet made of material having a sheet resistance of 21 Ω/sq would measure 63 Ω (since it is composed of three 1-unit by 1-unit squares), if the 1-unit edges were attached to an ohmmeter that made contact entirely over each edge.
For semiconductors
For semiconductors doped through diffusion or surface peaked ion implantation we define the sheet resistance using the average resistivity of the material:
$$R_s = \frac{\bar{\rho}}{x_j} = \left(\int_0^{x_j} \sigma(x)\,dx\right)^{-1},$$
which in materials with majority-carrier properties can be approximated by (neglecting intrinsic charge carriers):
$$R_s = \left(\int_0^{x_j} q\,\mu\,N(x)\,dx\right)^{-1},$$
where $x_j$ is the junction depth, $\mu$ is the majority-carrier mobility, $q$ is the carrier charge, and $N(x)$ is the net impurity concentration in terms of depth. Knowing the background carrier concentration and the surface impurity concentration, the sheet resistance–junction depth product $R_s x_j$ can be found using Irvin's curves, which are numerical solutions to the above equation.
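For a uniformly doped layer (mobility and net doping constant over the junction depth) the integral above collapses to $R_s = 1/(q\,\mu\,N\,x_j)$. The sketch below evaluates this special case; the parameter values are illustrative assumptions, not reference data.

```python
# Sheet resistance of a uniformly doped layer: R_s = 1 / (q * mu * N * x_j).
Q = 1.602e-19  # elementary charge, C

def uniform_layer_sheet_resistance(mobility_cm2_per_Vs: float,
                                   doping_per_cm3: float,
                                   junction_depth_cm: float) -> float:
    """Sheet resistance in ohm/sq for a constant doping profile."""
    return 1.0 / (Q * mobility_cm2_per_Vs * doping_per_cm3 * junction_depth_cm)

# Assumed example: mu = 400 cm^2/(V*s), N = 1e18 cm^-3, x_j = 0.5 um = 0.5e-4 cm
print(uniform_layer_sheet_resistance(400.0, 1e18, 0.5e-4))  # ~312 ohm/sq
```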
Measurement
A four-point probe is used to avoid contact resistance, which can often have the same magnitude as the sheet resistance. Typically a constant current is applied to two probes, and the potential on the other two probes is measured with a high-impedance voltmeter. A geometry factor needs to be applied according to the shape of the four-point array. Two common arrays are square and in-line. For more details see Van der Pauw method.
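For the ideal case of a collinear, equally spaced four-point probe on a sheet that is both very thin and laterally much larger than the probe spacing, the geometry factor is π/ln 2 ≈ 4.532. The sketch below applies this ideal factor; real measurements near sample edges need the geometric corrections mentioned above.

```python
import math

def sheet_resistance_inline_4pp(voltage_V: float, current_A: float) -> float:
    """Ideal in-line four-point probe on a large, thin sheet:
    R_s = (pi / ln 2) * V / I, in ohm/sq."""
    return (math.pi / math.log(2.0)) * voltage_V / current_A

# Example: forcing 1 mA through the outer pins and reading 2.5 mV on the
# inner pins gives roughly 11.3 ohm/sq.
print(sheet_resistance_inline_4pp(2.5e-3, 1e-3))
```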
Measurement may also be made by applying high-conductivity bus bars to opposite edges of a square (or rectangular) sample. Resistance across a square area will be measured in Ω/sq (often written as Ω/◻). For a rectangle, an appropriate geometric factor is added. Bus bars must make ohmic contact.
Inductive measurement is used as well. This method measures the shielding effect created by eddy currents. In one version of this technique a conductive sheet under test is placed between two coils. This non-contact sheet resistance measurement method also makes it possible to characterize encapsulated thin films or films with rough surfaces.
A very crude two-point probe method is to measure resistance with the probes close together and the resistance with the probes far apart. The difference between these two resistances will be of the order of magnitude of the sheet resistance.
Typical applications
Sheet resistance measurements are very common to characterize the uniformity of conductive or semiconductive coatings and materials, e.g. for quality assurance. Typical applications include the inline process control of metal, TCO, conductive nanomaterials, or other coatings on architectural glass, wafers, flat panel displays, polymer foils, OLED, ceramics, etc. The contacting four-point probe is often applied for single-point measurements of hard or coarse materials. Non-contact eddy current systems are applied for sensitive or encapsulated coatings, for inline measurements and for high-resolution mapping.
See also
ESD materials
References
Measuring Sheet Resistance
Semiconductors
Electrical resistance and conductance | Sheet resistance | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,250 | [
"Matter",
"Physical quantities",
"Semiconductors",
"Quantity",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Wikipedia categories named after physical quantities",
"Solid state engineering",
"Electrical resistance and conductance"
] |
9,342,843 | https://en.wikipedia.org/wiki/Fujiki%20class%20C | In algebraic geometry, a complex manifold is called Fujiki class $\mathcal{C}$ if it is bimeromorphic to a compact Kähler manifold. This notion was defined by Akira Fujiki.
Properties
Let M be a compact manifold of Fujiki class $\mathcal{C}$, and $X \subset M$ its complex subvariety. Then X is also in Fujiki class $\mathcal{C}$ (Lemma 4.6). Moreover, the Douady space of X (that is, the moduli of deformations of a subvariety $X \subset M$, M fixed) is compact and in Fujiki class $\mathcal{C}$.
Fujiki class $\mathcal{C}$ manifolds are examples of compact complex manifolds which are not necessarily Kähler, but for which the $\partial\bar{\partial}$-lemma holds.
Conjectures
J.-P. Demailly and M. Pǎun have shown that a manifold is in Fujiki class $\mathcal{C}$ if and only if it supports a Kähler current. They also conjectured that a manifold M is in Fujiki class $\mathcal{C}$ if it admits a nef current which is big, that is, satisfies
$$\int_M \omega^{\dim_{\mathbb{C}} M} > 0.$$
For a cohomology class $[\omega]$ which is rational, this statement is known: by the Grauert–Riemenschneider conjecture, a holomorphic line bundle L with first Chern class $c_1(L) = [\omega]$ nef and big has maximal Kodaira dimension, hence the corresponding rational map to projective space is generically finite onto its image, which is algebraic, and therefore Kähler.
Fujiki and Ueno asked whether the property $\mathcal{C}$ is stable under deformations. This conjecture was disproven in 1992 by Y.-S. Poon and Claude LeBrun.
References
Algebraic geometry
Complex manifolds | Fujiki class C | [
"Mathematics"
] | 322 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
11,826,062 | https://en.wikipedia.org/wiki/Auxiliary%20function | In mathematics, auxiliary functions are an important construction in transcendental number theory. They are functions that appear in most proofs in this area of mathematics and that have specific, desirable properties, such as taking the value zero for many arguments, or having a zero of high order at some point.
Definition
Auxiliary functions are not a rigorously defined kind of function; rather, they are functions which are either explicitly constructed or at least shown to exist and which provide a contradiction to some assumed hypothesis, or otherwise prove the result in question. Creating a function during the course of a proof in order to prove the result is not a technique exclusive to transcendence theory, but the term "auxiliary function" usually refers to the functions created in this area.
Explicit functions
Liouville's transcendence criterion
Because of the naming convention mentioned above, auxiliary functions can be dated back to their source simply by looking at the earliest results in transcendence theory. One of these first results was Liouville's proof that transcendental numbers exist when he showed that the so called Liouville numbers were transcendental. He did this by discovering a transcendence criterion which these numbers satisfied. To derive this criterion he started with a general algebraic number α and found some property that this number would necessarily satisfy. The auxiliary function he used in the course of proving this criterion was simply the minimal polynomial of α, which is the irreducible polynomial f with integer coefficients such that f(α) = 0. This function can be used to estimate how well the algebraic number α can be estimated by rational numbers p/q. Specifically if α has degree d at least two then he showed that
$$\left|f\!\left(\frac{p}{q}\right)\right| \geq \frac{1}{q^d},$$
and also, using the mean value theorem, that there is some constant depending on α, say c(α), such that
$$\left|\alpha - \frac{p}{q}\right| \geq c(\alpha)\left|f\!\left(\frac{p}{q}\right)\right|.$$
Combining these results gives a property that the algebraic number must satisfy; therefore any number not satisfying this criterion must be transcendental.
The auxiliary function in Liouville's work is very simple, merely a polynomial that vanishes at a given algebraic number. This kind of property is usually the one that auxiliary functions satisfy. They either vanish or become very small at particular points, which is usually combined with the assumption that they do not vanish or can't be too small to derive a result.
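The first inequality holds because $q^d f(p/q)$ is a nonzero integer, and it can be checked numerically for a concrete algebraic number. The sketch below uses the illustrative choice α = ∛2, whose minimal polynomial is f(x) = x³ − 2 (degree d = 3).

```python
from fractions import Fraction

# Check |f(p/q)| >= 1/q^d for alpha = 2**(1/3), f(x) = x^3 - 2, d = 3.
alpha = 2 ** (1.0 / 3.0)

for q in (7, 50, 1000):
    p = round(alpha * q)                   # nearest numerator for this denominator
    f_val = abs(Fraction(p, q) ** 3 - 2)   # |f(p/q)| as an exact rational
    print(q, float(f_val), f_val >= Fraction(1, q ** 3))
```

Each line prints True: however well p/q approximates ∛2, the value |f(p/q)| never drops below 1/q³.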
Fourier's proof of the irrationality of e
Another simple, early occurrence is in Fourier's proof of the irrationality of e, though the notation used usually disguises this fact. Fourier's proof used the power series of the exponential function:
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}.$$
By truncating this power series after, say, N + 1 terms we get a polynomial with rational coefficients of degree N which is in some sense "close" to the function $e^x$. Specifically if we look at the auxiliary function defined by the remainder:
$$R(x) = e^x - \sum_{n=0}^{N} \frac{x^n}{n!},$$
then this function—an exponential polynomial—should take small values for x close to zero. If e is a rational number then by letting x = 1 in the above formula we see that R(1) is also a rational number. However, Fourier proved that R(1) could not be rational by eliminating every possible denominator. Thus e cannot be rational.
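The key quantitative point, that N!·R(1) is a positive number strictly between 0 and 1 for every N, which rules out every possible denominator for e, can be checked numerically. The short sketch below only illustrates the size estimates; it is not the proof itself.

```python
import math
from fractions import Fraction

# R(1) = e - sum_{n=0}^{N} 1/n!  is the remainder of the series at x = 1.
# If e were rational with denominator q, then for N >= q the number N! * R(1)
# would have to be a positive integer; the values below stay strictly in (0, 1).
for N in (1, 2, 5, 10):
    partial = sum(Fraction(1, math.factorial(n)) for n in range(N + 1))
    scaled = math.factorial(N) * (math.e - float(partial))  # N! * R(1)
    print(N, scaled)
# Prints roughly 0.718, 0.437, 0.194, 0.099 - always between 0 and 1.
```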
Hermite's proof of the irrationality of e^r
Hermite extended the work of Fourier by approximating the function $e^x$ not with a polynomial but with a rational function, that is a quotient of two polynomials. In particular he chose polynomials A(x) and B(x) such that the auxiliary function R defined by
$$R(x) = B(x)\,e^x - A(x)$$
could be made as small as he wanted around x = 0. But if $e^r$ were rational then R(r) would have to be rational with a particular denominator, yet Hermite could make R(r) too small to have such a denominator, hence a contradiction.
Hermite's proof of the transcendence of e
To prove that e was in fact transcendental, Hermite took his work one step further by approximating not just the function $e^x$, but also the functions $e^{kx}$ for integers k = 1,...,m, where he assumed e was algebraic with degree m. By approximating $e^{kx}$ by rational functions with integer coefficients and with the same denominator, say $A_k(x) / B(x)$, he could define auxiliary functions $R_k(x)$ by
$$R_k(x) = B(x)\,e^{kx} - A_k(x).$$
For his contradiction Hermite supposed that e satisfied the polynomial equation with integer coefficients $a_0 + a_1 e + \cdots + a_m e^m = 0$. Multiplying this expression through by B(1) he noticed that it implied
$$R = -\left(a_1 R_1(1) + \cdots + a_m R_m(1)\right) = a_0 B(1) + a_1 A_1(1) + \cdots + a_m A_m(1).$$
The right hand side is an integer and so, by estimating the auxiliary functions and proving that 0 < |R| < 1 he derived the necessary contradiction.
Auxiliary functions from the pigeonhole principle
The auxiliary functions sketched above can all be explicitly calculated and worked with. A breakthrough by Axel Thue and Carl Ludwig Siegel in the twentieth century was the realisation that these functions don't necessarily need to be explicitly known – it can be enough to know they exist and have certain properties. Using the Pigeonhole Principle Thue, and later Siegel, managed to prove the existence of auxiliary functions which, for example, took the value zero at many different points, or took high order zeros at a smaller collection of points. Moreover they proved it was possible to construct such functions without making the functions too large. Their auxiliary functions were not explicit functions, then, but by knowing that a certain function with certain properties existed, they used its properties to simplify the transcendence proofs of the nineteenth century and give several new results.
This method was picked up on and used by several other mathematicians, including Alexander Gelfond and Theodor Schneider who used it independently to prove the Gelfond–Schneider theorem. Alan Baker also used the method in the 1960s for his work on linear forms in logarithms and ultimately Baker's theorem. Another example of the use of this method from the 1960s is outlined below.
Auxiliary polynomial theorem
Let β equal the cube root of b/a in the equation ax^3 + by^3 = c and assume m is an integer that satisfies m + 1 > 2n/3 ≥ m ≥ 3 where n is a positive integer.
Then there exists
such that
The auxiliary polynomial theorem states
A theorem of Lang
In the 1960s Serge Lang proved a result using this non-explicit form of auxiliary functions. The theorem implies both the Hermite–Lindemann and Gelfond–Schneider theorems. The theorem deals with a number field K and meromorphic functions $f_1,\dots,f_N$ of order at most ρ, at least two of which are algebraically independent, and such that if we differentiate any of these functions then the result is a polynomial in all of the functions. Under these hypotheses the theorem states that if there are m distinct complex numbers $\omega_1,\dots,\omega_m$ such that $f_i(\omega_j)$ is in K for all combinations of i and j, then m is bounded by
$$m \leq 20\,\rho\,[K:\mathbb{Q}].$$
To prove the result Lang took two algebraically independent functions from f1,...,fN, say f and g, and then created an auxiliary function which was simply a polynomial F in f and g. This auxiliary function could not be explicitly stated since f and g are not explicitly known. But using Siegel's lemma Lang showed how to make F in such a way that it vanished to a high order at the m complex numbers
$\omega_1,\dots,\omega_m$. Because of this high order vanishing it can be shown that a high-order derivative of F takes a value of small size at one of the $\omega_i$, "size" here referring to an algebraic property of a number. Using the maximum modulus principle Lang also found a separate way to estimate the absolute values of derivatives of F, and using standard results comparing the size of a number and its absolute value he showed that these estimates were contradicted unless the claimed bound on m holds.
Interpolation determinants
After the myriad of successes gleaned from using existent but not explicit auxiliary functions, in the 1990s Michel Laurent introduced the idea of interpolation determinants. These are alternants – determinants of matrices of the form
$$\big(\varphi_i(\zeta_j)\big)_{i,j},$$
where $\varphi_i$ are a set of functions interpolated at a set of points $\zeta_j$. Since a determinant is just a polynomial in the entries of a matrix, these auxiliary functions succumb to study by analytic means. A problem with the method was the need to choose a basis before the matrix could be worked with. A development by Jean-Benoît Bost removed this problem with the use of Arakelov theory, and research in this area is ongoing. The example below gives an idea of the flavour of this approach.
A proof of the Hermite–Lindemann theorem
One of the simpler applications of this method is a proof of the real version of the Hermite–Lindemann theorem. That is, if α is a non-zero, real algebraic number, then $e^\alpha$ is transcendental. First we let k be some natural number and n be a large multiple of k. The interpolation determinant considered is the determinant Δ of the $n^4 \times n^4$ matrix whose rows are indexed by $1 \le i_1 \le n^4/k$ and $1 \le i_2 \le k$, while the columns are indexed by $1 \le j_1 \le n^3$ and $1 \le j_2 \le n$. So the functions in our matrix are monomials in x and $e^x$ and their derivatives, and we are interpolating at the k points $0, \alpha, 2\alpha, \dots, (k-1)\alpha$. Assuming that $e^\alpha$ is algebraic we can form the number field $\mathbb{Q}(\alpha, e^\alpha)$ of degree m over $\mathbb{Q}$, and then multiply Δ by a suitable denominator as well as all its images under the embeddings of the field $\mathbb{Q}(\alpha, e^\alpha)$ into $\mathbb{C}$. For algebraic reasons this product is necessarily an integer, and using arguments relating to Wronskians it can be shown that it is non-zero, so its absolute value is an integer Ω ≥ 1.
Using a version of the mean value theorem for matrices it is possible to get an analytic upper bound on Ω as well. The number m is fixed by the degree of the field $\mathbb{Q}(\alpha, e^\alpha)$, but k is the number of points we are interpolating at, and so we can increase it at will. And once k > 2(m + 1)/3 the analytic bound forces Ω → 0, eventually contradicting the established condition Ω ≥ 1. Thus $e^\alpha$ cannot be algebraic after all.
Notes
References
Number theory
Diophantine approximation | Auxiliary function | [
"Mathematics"
] | 2,187 | [
"Discrete mathematics",
"Mathematical relations",
"Diophantine approximation",
"Approximations",
"Number theory"
] |
11,827,482 | https://en.wikipedia.org/wiki/Wellcome%20Genome%20Campus | The Wellcome Genome Campus is a scientific research campus built in the grounds of Hinxton Hall, Hinxton in Cambridgeshire, England.
Campus
The Campus is home to several institutes and organisations in genomics and computational biology. The Campus is part of the Wellcome Trust, a global charitable foundation that exists to improve health, and houses the Wellcome Sanger Institute, the European Bioinformatics Institute (EBI), the bioinformatics outstation of the European Molecular Biology Laboratory (EMBL), and a number of biotech companies whose UK offices are located in the BioData Innovation Centre, which acts as an incubator for businesses of all sizes.
In 2020, the South Cambridgeshire District Council granted outline planning permission for an expansion of the Campus. The expansion will increase the overall Campus grounds from 125 acres to 440 acres. The first buildings are expected to be completed in 2026.
Activities
At the Campus, genome and biodata research takes place. The Campus provides bioinformatics services and delivers training in genomics and biodata to scientists and clinicians.
History
Opening of the Campus in 1994
At the time of its official opening by the Princess Royal in 1994, the Wellcome Genome Campus was already home to the Wellcome Sanger Institute (then called the Sanger Centre), the Medical Research Council's Human Genome Mapping Project Resource Centre, and the European Molecular Biology Laboratory's European Bioinformatics Institute (EMBL-EBI).
Wellcome funded the establishment of the Sanger Centre in 1993 and chose Hinxton as the home for its new genome research institute. Shortly after, EMBL-EBI located on the same site, and the two institutes formed a natural fit, consolidating expertise, facilities and knowledge in one place and enabling both to contribute a major role in the Human Genome Project – a global collaboration to sequence the first ‘reference’ human genome.
One third of the human genome was sequenced for the first time at the Wellcome Trust Sanger Institute, and the data was stored and shared through EMBL-EBI. This was the largest single contribution of any centre to the Human Genome Project, making the Campus and its collaborations uniquely important in the history of genomics.
Since the announcement of the completion of the draft human genome in 2000, and final completion in 2003, rapid progress in sequencing technology has enabled new areas of science to be opened up for exploration. At its opening in 1994, the Campus housed approximately 400 employees. This has grown to over 2,600 people employed at the Wellcome Genome Campus today, making the Campus a densely concentrated and globally significant cluster for biodata and genomics expertise.
Before 1993
The first recorded owner of the estate, in 1506, was the college of Michaelhouse in Cambridge but it wasn’t until the early eighteenth century that the first building – a modest hunting and fishing lodge – was erected by Captain Joseph Richardson of Horseheath. It became a gentleman’s retreat with well-stocked trout ponds and fields full of partridge.
The current Hall was built by John Bromwell Jones in 1748 and remains today as the central three-storey block on the Campus. Opposite the house were stables, a kitchen garden and an orchard, all of which still exist, albeit in altered form.
By 1800 ownership of the Hall and estate had passed to the Green family, who remained until 1920, when the Hall was sold to the Robinsons. During the Second World War, the Hall was used for billeting American soldiers, stationed at the local airbase at Duxford.
In 1953 the Hall and grounds were sold to Tube Investments Plc for use as research laboratories, which closed in the late 1980s. The site remained under their ownership until it was sold to Genome Research Limited in 1992.
Sanger Institute's History
The Wellcome Trust established the Sanger Centre in 1992 to undertake the most ambitious project ever attempted in biology, sequencing the human genome. The new facility developed laboratory infrastructure, robotics, team working and computational approaches on a scale unprecedented in life sciences.
In 2000, the first draft of the human genome was announced with the Sanger Centre championing open access to the data and making the largest contribution to the global collaborative endeavour. Genomes began to convert biology into big data science. The subsequently renamed Wellcome Trust Sanger Institute established long term research programmes to explore and apply genome sequences.
References
1993 establishments in England
Biotechnology in the United Kingdom
DNA sequencing
Genomics organizations
Hinxton
Research institutes established in 1993
Research institutes in Cambridgeshire
Science parks in the United Kingdom
Buildings and structures in South Cambridgeshire District
Genome Campus | Wellcome Genome Campus | [
"Chemistry",
"Biology"
] | 942 | [
"Molecular biology techniques",
"DNA sequencing",
"Biotechnology in the United Kingdom",
"Biotechnology by country"
] |
11,827,758 | https://en.wikipedia.org/wiki/PEST%20sequence | A PEST sequence is a peptide sequence that is rich in proline (P), glutamic acid (E), serine (S) and threonine (T). It is associated with proteins that have a short intracellular half-life, so might act as a signal peptide for protein degradation. This may be mediated via the proteasome or calpain.
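As a rough illustration of what "rich in P, E, S and T" means in practice, the sketch below scans a protein sequence for stretches with a high fraction of these residues between positively charged flanks. This is a simplified toy heuristic, not the published PEST-FIND algorithm; the window length, threshold, and example sequence are arbitrary assumptions.

```python
# Toy scan for PEST-like stretches: segments rich in P, E, S, T (plus D),
# bounded by positively charged residues (K, R, H). Simplified heuristic only.
PEST_RESIDUES = set("PESTD")
POSITIVE = set("KRH")

def pest_like_windows(seq: str, min_len: int = 12, min_frac: float = 0.6):
    """Yield (start, end, fraction) for stretches between positive residues
    that are at least min_len long and at least min_frac PEST residues."""
    segment, start = [], 0
    for i, aa in enumerate(seq + "K"):          # sentinel flushes the last segment
        if aa in POSITIVE:
            if len(segment) >= min_len:
                frac = sum(r in PEST_RESIDUES for r in segment) / len(segment)
                if frac >= min_frac:
                    yield (start, i, frac)
            segment, start = [], i + 1
        else:
            segment.append(aa)

example = "MKTAYIAKQRPESTSEESSTPEPTSDDEKLLVV"   # made-up sequence
for hit in pest_like_windows(example):
    print(hit)                                  # (10, 28, 1.0)
```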
References
Peptide sequences
Proteins
Post-translational modification | PEST sequence | [
"Chemistry"
] | 91 | [
"Biomolecules by chemical classification",
"Molecular and cellular biology stubs",
"Gene expression",
"Biochemical reactions",
"Biochemistry stubs",
"Post-translational modification",
"Molecular biology",
"Proteins"
] |
11,830,303 | https://en.wikipedia.org/wiki/Dissipative%20soliton | Dissipative solitons (DSs) are stable solitary localized structures that arise in nonlinear spatially extended dissipative systems due to mechanisms of self-organization. They can be considered as an extension of the classical soliton concept in conservative systems. An alternative terminology includes autosolitons, spots and pulses.
Apart from aspects similar to the behavior of classical particles like the formation of bound states, DSs exhibit interesting behavior – e.g. scattering, creation and annihilation – all without the constraints of energy or momentum
conservation. The excitation of internal degrees of freedom may result in a dynamically stabilized intrinsic speed, or periodic oscillations of the shape.
Historical development
Origin of the soliton concept
DSs have been experimentally observed for a long time.
Helmholtz measured the propagation velocity of nerve pulses in
1850. In 1902, Lehmann found the formation of localized anode
spots in long gas-discharge tubes. Nevertheless, the term
"soliton" was originally developed in a different context. The
starting point was the experimental detection of "solitary
water waves" by Russell in 1834.
These observations initiated the theoretical work of
Rayleigh and Boussinesq around
1870, which finally led to the approximate description of such
waves by Korteweg and de Vries in 1895; that description is known today as the (conservative)
KdV equation.
On this background the term "soliton" was
coined by Zabusky and Kruskal in 1965. These
authors investigated certain well localised solitary solutions
of the KdV equation and named these objects solitons. Among
other things they demonstrated that in 1-dimensional space
solitons exist, e.g. in the form of two unidirectionally
propagating pulses with different size and speed and exhibiting the
remarkable property that number, shape and size are the same
before and after collision.
Gardner et al. introduced the inverse scattering technique
for solving the KdV equation and proved that this equation is
completely integrable. In 1972 Zakharov and
Shabat found another integrable equation and
finally it turned out that the inverse scattering technique can
be applied successfully to a whole class of equations (e.g. the
nonlinear Schrödinger and
sine-Gordon equations). From 1965
up to about 1975, a common agreement was reached: to reserve the term soliton to
pulse-like solitary solutions of conservative nonlinear partial
differential equations that can be solved by using the inverse
scattering technique.
Weakly and strongly dissipative systems
With increasing knowledge of classical solitons, possible
technical applicability came into perspective, with the most
promising one at present being the transmission of optical
solitons via glass fibers for the purpose of
data transmission. In contrast to conservative systems, solitons in fibers dissipate energy and
this cannot be neglected on an intermediate and long time
scale. Nevertheless, the concept of a classical soliton can
still be used in the sense that on a short time scale
dissipation of energy can be neglected. On an intermediate time
scale one has to take small energy losses into account as a
perturbation, and on a long scale the amplitude of the soliton
will decay and finally vanish.
There are however various types of systems which are capable of
producing solitary structures and in which dissipation plays an
essential role for their formation and stabilization. Although
research on certain types of these DSs has been carried out for
a long time (for example, see the research on nerve pulses culminating
in the work of Hodgkin and Huxley in 1952), since
1990 the amount of research has significantly increased.
Possible reasons are improved experimental devices and
analytical techniques, as well as the availability of more
powerful computers for numerical computations. Nowadays, it is
common to use the term dissipative solitons for solitary structures in
strongly dissipative systems.
Experimental observations
Today, DSs can be found in many different
experimental set-ups. Examples include
Gas-discharge systems: plasmas confined in a discharge space which often has a lateral extension large compared to the main discharge length. DSs arise as current filaments between the electrodes and were found in DC systems with a high-ohmic barrier, AC systems with a dielectric barrier, and as anode spots, as well as in an obstructed discharge with metallic electrodes.
Semiconductor systems: these are similar to gas-discharges; however, instead of a gas, semiconductor material is sandwiched between two planar or spherical electrodes. Set-ups include Si and GaAs pin diodes, n-GaAs, and Si p+−n+−p−n−, and ZnS:Mn structures.
Nonlinear optical systems: a light beam of high intensity interacts with a nonlinear medium. Typically the medium reacts on rather slow time scales compared to the beam propagation time. Often, the output is fed back into the input system via single-mirror feedback or a feedback loop. DSs may arise as bright spots in a two-dimensional plane orthogonal to the beam propagation direction; one may, however, also exploit other effects like polarization. DSs have been observed for saturable absorbers, degenerate optical parametric oscillators (DOPOs), liquid crystal light valves (LCLVs), alkali vapor systems, photorefractive media, and semiconductor microresonators.
If the vectorial properties of DSs are considered, a vector dissipative soliton can also be observed in a fiber laser passively mode-locked with a saturable absorber.
In addition, multiwavelength dissipative solitons have been obtained in an all-normal-dispersion fiber laser passively mode-locked with a SESAM. It is confirmed that, depending on the cavity birefringence, stable single-, dual- and triple-wavelength dissipative solitons can form in the laser. Their generation mechanism can be traced back to the nature of the dissipative soliton.
Chemical systems: realized either as one- and two-dimensional reactors or via catalytic surfaces, DSs appear as pulses (often as propagating pulses) of increased concentration or temperature. Typical reactions are the Belousov–Zhabotinsky reaction, the ferrocyanide-iodate-sulphite reaction as well as the oxidation of hydrogen, CO, or iron. Nerve pulses or migraine aura waves also belong to this class of systems.
Vibrated media: vertically shaken granular media, colloidal suspensions, and Newtonian fluids produce harmonically or sub-harmonically oscillating heaps of material, which are usually called oscillons.
Hydrodynamic systems: the most prominent realization of DSs are domains of convection rolls on a conducting background state in binary liquids. Another example is a film dragging in a rotating cylindric pipe filled with oil.
Electrical networks: large one- or two-dimensional arrays of coupled cells with a nonlinear current–voltage characteristic. DSs are characterized by a locally increased current through the cells.
Remarkably enough, phenomenologically the dynamics of the DSs in many of the above systems are similar in spite of the microscopic differences. Typical observations are (intrinsic) propagation, scattering, formation of bound states and clusters, drift in gradients, interpenetration, generation, and annihilation, as well as higher instabilities.
Theoretical description
Most systems showing DSs are described by nonlinear
partial differential equations. Discrete difference equations and
cellular automata are also used. Up to now,
modeling from first principles followed by a quantitative
comparison of experiment and theory has been performed only
rarely and sometimes also poses severe problems because of large
discrepancies between microscopic and macroscopic time and
space scales. Often simplified prototype models are
investigated which reflect the essential physical processes in
a larger class of experimental systems. Among these are
Reaction–diffusion systems, used for chemical systems, gas-discharges and semiconductors. The evolution of the state vector q(x, t) describing the concentration of the different reactants is determined by diffusion as well as local reactions:
$$\partial_t q = \underline{D}\,\Delta q + R(q).$$
A frequently encountered example is the two-component Fitzhugh–Nagumo-type activator–inhibitor system
$$\partial_t u = d_u^2\,\Delta u + \lambda u - u^3 - \kappa_3 v + \kappa_1, \qquad \tau\,\partial_t v = d_v^2\,\Delta v + u - v$$
(a numerical sketch of this system is given after this list).
Stationary DSs are generated by production of material in the center of the DSs, diffusive transport into the tails and depletion of material in the tails. A propagating pulse arises from production in the leading and depletion in the trailing end. Among other effects, one finds periodic oscillations of DSs ("breathing"), bound states, and collisions, merging, generation and annihilation.
Ginzburg–Landau type systems for a complex scalar q(x, t) used to describe nonlinear optical systems, plasmas, Bose-Einstein condensation, liquid crystals and granular media. A frequently found example is the cubic-quintic subcritical Ginzburg–Landau equation
$$\partial_t q = (d_r + i d_i)\,\Delta q + l_r q + (c_r + i c_i)|q|^2 q + (q_r + i q_i)|q|^4 q.$$
To understand the mechanisms leading to the formation of DSs, one may consider the energy ρ = |q|² for which one may derive the continuity equation
$$\partial_t\,\rho = \nabla\cdot\mathbf{j} + S.$$
One can thereby show that energy is generally produced in the flanks of the DSs and transported to the center and potentially to the tails where it is depleted. Dynamical phenomena include propagating DSs in 1d, propagating clusters in 2d, bound states and vortex solitons, as well as "exploding DSs".
The Swift–Hohenberg equation is used in nonlinear optics and in the granular media dynamics of flames or electroconvection. Swift–Hohenberg can be considered as an extension of the Ginzburg–Landau equation. It can be written as
$$\partial_t q = (d_r + i d_i)\,\Delta q - (s_r + i s_i)\,\Delta^2 q + l_r q + (c_r + i c_i)|q|^2 q + (q_r + i q_i)|q|^4 q.$$
For $d_r > 0$ one essentially has the same mechanisms as in the Ginzburg–Landau equation. For $d_r < 0$, in the real Swift–Hohenberg equation one finds bistability between homogeneous states and Turing patterns. DSs are stationary localized Turing domains on the homogeneous background. This also holds for the complex Swift–Hohenberg equations; however, propagating DSs as well as interaction phenomena are also possible, and observations include merging and interpenetration.
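As noted under the reaction–diffusion item above, the activator–inhibitor mechanism can be made concrete with a minimal one-dimensional integration. In the sketch below the equations are the two-component FitzHugh–Nagumo-type system given earlier; the parameter values, grid, time step, and initial condition are illustrative assumptions chosen to demonstrate the scheme, not values from the literature.

```python
import numpy as np

# Explicit Euler integration of the 1-d FitzHugh-Nagumo-type system:
#   du/dt       = Du * u_xx + lam*u - u**3 - k3*v + k1   (Du plays the role of d_u^2)
#   tau * dv/dt = Dv * v_xx + u - v
L, N, dt, steps = 100.0, 512, 0.01, 20000
dx = L / N
Du, Dv, lam, k3, k1, tau = 0.1, 1.0, 1.0, 1.0, -0.1, 10.0   # assumed values

x = np.linspace(0.0, L, N, endpoint=False)
u = -0.6 + 1.5 * np.exp(-((x - L / 2) / 2.0) ** 2)  # localized seed on a background
v = np.full(N, -0.6)

def lap(f):
    """Discrete Laplacian with periodic boundaries."""
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2

for _ in range(steps):
    u, v = (u + dt * (Du * lap(u) + lam * u - u**3 - k3 * v + k1),
            v + (dt / tau) * (Dv * lap(v) + u - v))

# Depending on the parameters, the localized seed either decays or
# survives as a self-stabilized structure.
print("activator max/min:", float(u.max()), float(u.min()))
```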
Particle properties and universality
DSs in many different systems show universal particle-like
properties. To understand and describe the latter, one may try
to derive "particle equations" for slowly varying order
parameters like position, velocity or amplitude of the DSs by
adiabatically eliminating all fast variables in the field
description. This technique is known from linear systems,
however mathematical problems arise from the nonlinear models
due to a coupling of fast and slow modes.
Similar to low-dimensional dynamic systems, for supercritical
bifurcations of stationary DSs one finds characteristic normal
forms essentially depending on the symmetries of the system.
E.g., for a transition from a symmetric stationary to an
intrinsically propagating DS one finds the Pitchfork normal
form
$$\dot{v} = (\sigma - \sigma_0)\,v - |v|^2\,v$$
for the velocity v of the DS; here σ represents the bifurcation parameter and σ0 the bifurcation point. For a bifurcation to a "breathing" DS, one finds the Hopf normal form
$$\dot{A} = (\sigma - \sigma_0)\,A - |A|^2\,A$$
for the amplitude A of the oscillation. It is also possible to treat "weak interaction"
as long as the overlap of the DSs is not too large. In this way, a
comparison between experiment and theory is facilitated.
Note that the above problems do not arise for classical
solitons as inverse scattering theory yields complete
analytical solutions.
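The drift bifurcation encoded in the pitchfork normal form above can be illustrated numerically: below σ0 the resting state v = 0 is stable, while above it the velocity settles to |v| = √(σ − σ0). The sketch below integrates the scalar normal form; all numeric values are arbitrary assumptions.

```python
import math

# Integrate the pitchfork normal form  dv/dt = (sigma - sigma0) * v - v**3
# for a scalar velocity v, starting from a small perturbation of v = 0.
def terminal_speed(sigma, sigma0=1.0, v0=1e-3, dt=1e-3, steps=200_000):
    v = v0
    for _ in range(steps):
        v += dt * ((sigma - sigma0) * v - v**3)
    return v

for sigma in (0.5, 1.5):
    predicted = math.sqrt(max(sigma - 1.0, 0.0))
    print(f"sigma={sigma}: v -> {terminal_speed(sigma):.4f} "
          f"(predicted {predicted:.4f})")
# Below sigma0 the velocity decays to 0; above it, v -> sqrt(sigma - sigma0).
```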
See also
Clapotis
Compacton, a soliton with compact support
Fiber laser
Freak waves may be a related phenomenon
Graphene
Nonlinear Schrödinger equation
Nonlinear system
Oscillon
Peakon, a soliton with a non-differentiable peak
Q-ball, a non-topological soliton
Sine-Gordon equation
Solitary waves in discrete media
Soliton (optics)
Soliton (topological)
Soliton model of nerve impulse propagation
Topological quantum number
Vector soliton
References
Inline
Books and overview articles
N. Akhmediev and A. Ankiewicz, Dissipative Solitons, Lecture Notes in Physics, Springer, Berlin (2005)
N. Akhmediev and A. Ankiewicz, Dissipative Solitons: From Optics to Biology and Medicine, Lecture Notes in Physics, Springer, Berlin (2008)
H.-G. Purwins et al., Advances in Physics 59 (2010): 485
A. W. Liehr: Dissipative Solitons in Reaction Diffusion Systems. Mechanism, Dynamics, Interaction. Volume 70 of Springer Series in Synergetics, Springer, Berlin Heidelberg 2013,
Solitons
Self-organization
Systems theory | Dissipative soliton | [
"Mathematics"
] | 2,623 | [
"Self-organization",
"Dynamical systems"
] |
11,830,372 | https://en.wikipedia.org/wiki/Menger%20curvature | In mathematics, the Menger curvature of a triple of points in n-dimensional Euclidean space Rn is the reciprocal of the radius of the circle that passes through the three points. It is named after the Austrian-American mathematician Karl Menger.
Definition
Let x, y and z be three points in R^n; for simplicity, assume for the moment that all three points are distinct and do not lie on a single straight line. Let Π ⊆ R^n be the Euclidean plane spanned by x, y and z and let C ⊆ Π be the unique Euclidean circle in Π that passes through x, y and z (the circumcircle of x, y and z). Let R be the radius of C. Then the Menger curvature c(x, y, z) of x, y and z is defined by
$$c(x, y, z) = \frac{1}{R}.$$
If the three points are collinear, R can be informally considered to be +∞, and it makes rigorous sense to define c(x, y, z) = 0. If any of the points x, y and z are coincident, again define c(x, y, z) = 0.
Using the well-known formula relating the side lengths of a triangle to its area, it follows that
$$c(x, y, z) = \frac{4A}{|x - y|\,|y - z|\,|z - x|},$$
where A denotes the area of the triangle spanned by x, y and z.
Another way of computing Menger curvature is the identity
$$c(x, y, z) = \frac{2\sin\angle xyz}{|x - z|},$$
where $\angle xyz$ is the angle made at the y-corner of the triangle spanned by x, y, z.
Menger curvature may also be defined on a general metric space. If X is a metric space and x, y, and z are distinct points, let f be an isometry from $\{x, y, z\}$ into $\mathbb{R}^2$. Define the Menger curvature of these points to be
$$c_X(x, y, z) = c\big(f(x), f(y), f(z)\big).$$
Note that f need not be defined on all of X, just on {x,y,z}, and the value $c_X(x,y,z)$ is independent of the choice of f.
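The circumradius formula above translates directly into a numerical routine. The sketch below computes c(x, y, z) from the triangle area (via the Gram determinant, which works in any dimension) and checks the collinear convention c = 0; the sample points are arbitrary.

```python
import numpy as np

def menger_curvature(x, y, z):
    """c(x, y, z) = 4A / (|x-y| |y-z| |z-x|); returns 0 for degenerate triples."""
    x, y, z = (np.asarray(p, dtype=float) for p in (x, y, z))
    a = np.linalg.norm(x - y)
    b = np.linalg.norm(y - z)
    c = np.linalg.norm(z - x)
    if a == 0.0 or b == 0.0 or c == 0.0:
        return 0.0
    u, v = y - x, z - x
    # Twice the triangle area, via the Gram determinant.
    twice_area = np.sqrt(max(np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2, 0.0))
    return 2.0 * twice_area / (a * b * c)

# Three points on the unit circle have curvature 1 (reciprocal circumradius):
t = np.array([0.0, 1.0, 2.5])
pts = np.c_[np.cos(t), np.sin(t)]
print(menger_curvature(*pts))                    # ~1.0
print(menger_curvature([0, 0], [1, 0], [2, 0]))  # 0.0 (collinear)
```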
Integral Curvature Rectifiability
Menger curvature can be used to give quantitative conditions for when sets in $\mathbb{R}^n$ may be rectifiable. For a Borel measure $\mu$ on a Euclidean space define
$$c^2(\mu) = \iiint c(x, y, z)^2\,d\mu(x)\,d\mu(y)\,d\mu(z).$$
A Borel set $E$ is rectifiable if $c^2(H^1|_E) < \infty$, where $H^1|_E$ denotes one-dimensional Hausdorff measure restricted to the set $E$.
The basic intuition behind the result is that Menger curvature measures how straight a given triple of points are (the smaller $c(x, y, z)$ is, the closer x, y, and z are to being collinear), and this integral quantity being finite is saying that the set E is flat on most small scales. In particular, if the power in the integral is larger, our set is smoother than just being rectifiable.
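To make this intuition concrete, one can Monte-Carlo estimate the average squared curvature over triples for samples on a straight segment versus an arc: the line gives essentially zero, while every triple on a unit circle has c = 1. The sketch below is a crude numerical illustration, not a rigorous computation of $c^2(\mu)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_c2(points, trials=20000):
    """Monte Carlo estimate of the average of c(x,y,z)**2 over random triples."""
    total, n = 0.0, len(points)
    for _ in range(trials):
        i, j, k = rng.choice(n, size=3, replace=False)
        x, y, z = points[i], points[j], points[k]
        a = np.linalg.norm(x - y)
        b = np.linalg.norm(y - z)
        c = np.linalg.norm(z - x)
        u, v = y - x, z - x
        twice_area = np.sqrt(max(np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2, 0.0))
        total += (2.0 * twice_area / (a * b * c)) ** 2
    return total / trials

t = np.linspace(0.0, 1.0, 400)
line = np.c_[t, np.zeros_like(t)]   # samples on a straight segment
arc = np.c_[np.cos(t), np.sin(t)]   # samples on an arc of the unit circle
print("line:", mean_c2(line))       # ~0: all triples are collinear
print("arc: ", mean_c2(arc))        # ~1: every triple on a unit circle has c = 1
```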
Let , be a homeomorphism and . Then if .
If where , and , then is rectifiable in the sense that there are countably many curves such that . The result is not true for , and for .:
In the opposite direction, there is a result of Peter Jones:
If , , and is rectifiable. Then there is a positive Radon measure supported on satisfying for all and such that (in particular, this measure is the Frostman measure associated to E). Moreover, if for some constant C and all and r>0, then . This last result follows from the Analyst's Traveling Salesman Theorem.
Analogous results hold in general metric spaces:
See also
Menger-Melnikov curvature of a measure
External links
References
Curvature (mathematics)
Multi-dimensional geometry | Menger curvature | [
"Physics"
] | 694 | [
"Geometric measurement",
"Physical quantities",
"Curvature (mathematics)"
] |
11,830,463 | https://en.wikipedia.org/wiki/LIGO%20Scientific%20Collaboration | The LIGO Scientific Collaboration (LSC) is a scientific collaboration of international physics institutes and research groups dedicated to the search for gravitational waves.
History
The LSC was established in 1997, under the leadership of Barry Barish. Its mission is to ensure equal scientific opportunity for individual participants and institutions by organizing research, publications, and all other scientific activities, and it includes scientists from both LIGO Laboratory and collaborating institutions. Barish appointed Rainer Weiss as the first spokesperson.
LSC members have access to the US-based Advanced LIGO detectors in Hanford, Washington and in Livingston, Louisiana, as well as the GEO 600 detector in Sarstedt, Germany. Under an agreement with the European Gravitational Observatory (EGO), LSC members also have access to data from the Virgo detector in Pisa, Italy. While the LSC and the Virgo Collaboration are separate organizations, they cooperate closely and are referred to collectively as "LVC". The KAGRA observatory's collaboration has joined the LIGO-Virgo collective, and the LIGO-Virgo-KAGRA collective is called "LVK".
The LSC Spokesperson is Patrick Brady of the University of Wisconsin–Milwaukee. The Executive Director of the LIGO Laboratory is David Reitze of the University of Florida.
On 11 February 2016, the LIGO and Virgo collaborations announced that they had succeeded in making the first direct gravitational wave observation on 14 September 2015.
In 2016, Barish received the Enrico Fermi Prize "for his fundamental contributions to the formation of the LIGO and LIGO-Virgo scientific collaborations and for his role in addressing challenging technological and scientific aspects whose solution led to the first detection of gravitational waves".
Collaboration members
As of November 2015, membership of the LIGO Scientific Collaboration encompassed physics institutes and research groups from around the world.
Notes
References
External links
LIGO Magazine
Gravitational-wave astronomy
Astronomy in the United States
Organizations based in California
Organizations based in Massachusetts
Organizations established in 1997
Albert Einstein Medal recipients | LIGO Scientific Collaboration | [
"Physics",
"Astronomy"
] | 392 | [
"Astronomical sub-disciplines",
"Gravitational-wave astronomy",
"Astrophysics"
] |
11,830,903 | https://en.wikipedia.org/wiki/Cleveland%20Bridge%20%26%20Engineering%20Company | Cleveland Bridge & Engineering Company was a British bridge works and structural steel contractor based in Darlington. It was operational for 144 years.
From the founding of the company in 1877, it had a presence in Darlington. While initially focused on fabrication, the company became one of the major bridgebuilders in the world, having constructed structures across all five inhabited continents. It built numerous landmarks around the world, including the Victoria Falls Bridge in Zimbabwe; the Tees Transporter Bridge; the Forth Road and Humber suspension bridges in the UK; Hong Kong's Tsing Ma Bridge, and London's Wembley Stadium Arch. Cleveland Bridge's Dubai subsidiary, which was established in 1978, fabricated and erected steel structures for, amongst other projects, the Burj Al Arab and Emirates Towers.
During 1967, the company was acquired by The Cementation Company, which was itself bought by Trafalgar House soon thereafter. During 1990, it was merged with Redpath Dorman Long, another subsidiary owned by Trafalgar, to create Cleveland Structural Engineering. After a management buyout in 2000, the company operated as an independent concern, with considerable financial backing from Saudi Arabia's Al Rushaid Group. However, the company soon found itself in multiple legal disputes due to alleged quality issues and other concerns on its work on major projects such as The Shard and New Wembley Stadium; these proved to be not only costly in financial terms but also damaging to its reputation. During the early 2020s, the fiscal situation of the company declined considerably and backers proved to be unwilling to expend additional resources. Thus, in July 2021, the Darlington portion of the company went into administration in July 2021, owing £21m. After unsuccessful efforts to attract a buyer, the company was closed in September 2021.
History
Cleveland Bridge & Engineering Company was founded in 1877 in Darlington with a capital of £10,000. Seven years later, the assets were sold to Charles Frederick Dixon, who listed the company on the stock exchange in 1893. By 1913, it had 600 employees.
During 1967, the company was acquired by The Cementation Company. Three years later, Trafalgar House purchased Cementation; it also acquired Redpath Dorman Long from Dorman Long Group in 1982, after which the two subsidiaries were merged in 1990 to create Cleveland Structural Engineering. That business was renamed Kvaerner Cleveland Bridge following acquisition of Trafalgar House by Kværner in 1996.
During 1999, it was reported that Kværner intended to sell the business amid a wider restructuring away from heavy manufacturing activities; at the time, the company employed roughly 600 staff following a series of job losses. Despite appeals for financial assistance being made to the British government, it refused to intervene in the matter. One year later, the company became independent through a management buyout that involved a payment of $12.3 million. In addition to the UK-based operations, the same management team also acquired the company's Dubai subsidiary that had been established in 1978. Saudi Arabia's Al Rushaid Group provided finance to the firm which rose to an 88.5% stake by September 2002.
Throughout the 2000s and 2010s, the company's headcount varied considerably, often rising soon after the awarding of key contracts to the business. During this era, it undertook various activities, including its involvement in various road and railway-based schemes and several major construction projects, such as The Shard and Wembley Stadium.
Final years
In July 2021, Cleveland Bridge sought further funding from Al Rushaid Group and warned 220 staff of potential redundancies. That same month, the firm was reported to be on the brink of administration as a result of contract delays and negative economic consequences that were partially attributable to COVID-19.
Al Rushaid Group did not provide the requested resources; instead, FRP was appointed as the company's administrator and the business was put up for sale. Consequently, 51 workers were made redundant in August 2021. Around 25 staff continued to assist FRP, and 128 staff were furloughed under the Coronavirus Jobs Retention Scheme pending restart of production.
FRP was ultimately unable to secure a buyer for the business. Accordingly, on 10 September 2021, it announced the company would permanently close with the loss of a further 133 jobs. They stated £12m would be required to fund the business to the end of 2021. The company assets were sold off in November 2021.
Controversies
2016 death and HSE fine
In 2022, Cleveland Bridge & Engineering was fined £1.5 million by the Health and Safety Executive, with a further cost judgement of £29,000 against them. An inadequately secured crane access panel gave way in a 2016 fatal fall. The fine related to four breaches of the Health and Safety at Work etc. Act 1974 leading to the death. FRP Advisory stated it was unlikely the fine or costs could be paid.
The Shard
In 2013, Cleveland Bridge was ordered to pay Severfield-Rowen plc £824,478 compensation for delays to their subcontracted work on The Shard. The judge accepted there was a very high incidence of poor workmanship in the steelwork Cleveland Bridge delivered. Cleveland Bridge's own internal correspondence highlighted an extraordinary work overload in 2010, and Judge Akenhead concluded it had taken on more work than it had capacity.
Wembley Stadium
In 2002, the company won a £60 million steelwork contract for the bowl of New Wembley Stadium. Partway through construction, relationships between main contractor Multiplex and Cleveland Bridge broke down. Multiplex stripped Cleveland Bridge of their erection role, handing it to roof steelwork contractor Hollandia. Two hundred of Cleveland Bridge's on-site erection staff and subcontractors transferred to Hollandia and were sacked after going on strike. The situation escalated when Cleveland Bridge unilaterally repudiated its remaining stadium fabrication contract.
Both sides blamed each other for extra costs; delays; poor workmanship; missing or incorrect steelwork; damaged, missing or incorrect paintwork; chaotic record-keeping; and the near site stock yards. Litigation ensued and Cleveland Bridge was ultimately ordered to pay Multiplex £6,154,246.79 in respect of net earlier overpayments; breach of contract, and interest. Cleveland Bridge was also ordered to pay 20% of Multiplex's legal costs. It was claimed, in evidence, that some Wembley steelwork had been fabricated in China for Cleveland Bridge and that it had been diverted to the Beijing National Stadium.
Mr Justice Jackson's 2008 judgement in the Technology and Construction Court was highly critical of both parties' unwillingness to settle earlier in such an expensive case, where the core evidence extended to over 500 lever arch files and photocopying costs alone were £1 million. He highlighted the large number of items at dispute where the sums involved were substantially exceeded by the legal costs involved in resolving them.
Notable bridges
See also
References
External links
A to Z of bridges built by Cleveland Bridge
Bridge companies
Construction and civil engineering companies of England
Companies based in County Durham
Construction and civil engineering companies established in 1877
Manufacturing companies established in 1877
1877 establishments in England
Borough of Darlington
Structural steel
British companies established in 1877
2021 disestablishments in England
British companies disestablished in 2021 | Cleveland Bridge & Engineering Company | [
"Engineering"
] | 1,471 | [
"Structural engineering",
"Structural steel"
] |
11,831,990 | https://en.wikipedia.org/wiki/Bloch%27s%20theorem%20%28complex%20analysis%29 | In complex analysis, a branch of mathematics, Bloch's theorem describes the behaviour of holomorphic functions defined on the unit disk. It gives a lower bound on the size of a disk in which an inverse to a holomorphic function exists. It is named after André Bloch.
Statement
Let f be a holomorphic function in the unit disk |z| ≤ 1 for which
$$|f'(0)| = 1.$$
Bloch's theorem states that there is a disk S ⊂ D on which f is biholomorphic and f(S) contains a disk with radius 1/72.
Landau's theorem
If f is a holomorphic function in the unit disk with the property |f′(0)| = 1, then let Lf be the radius of the largest disk contained in the image of f.
Landau's theorem states that there is a constant L defined as the infimum of $L_f$ over all such functions f, and that L is at least Bloch's constant: L ≥ B.
This theorem is named after Edmund Landau.
Valiron's theorem
Bloch's theorem was inspired by the following theorem of Georges Valiron:
Theorem. If f is a non-constant entire function then there exist disks D of arbitrarily large radius and analytic functions φ in D such that f(φ(z)) = z for z in D.
Bloch's theorem corresponds to Valiron's theorem via the so-called Bloch's principle.
Proof
Landau's theorem
We first prove the case when f(0) = 0, f′(0) = 1, and |f′(z)| ≤ 2 in the unit disk.
By Cauchy's integral formula, we have a bound
$$|f''(z)| = \left|\frac{1}{2\pi i}\oint_\gamma \frac{f'(w)}{(w-z)^2}\,dw\right| \leq \frac{2}{r},$$
where γ is the counterclockwise circle of radius r around z, and 0 < r < 1 − |z|.
By Taylor's theorem, for each z in the unit disk, there exists 0 ≤ t ≤ 1 such that $f(z) = z + z^2 f''(tz)/2$.
Thus, if |z| = 1/3 and |w| < 1/6, we have
$$|f(z) - z| = \frac{|z|^2\,|f''(tz)|}{2} \leq \frac{1}{9}\cdot\frac{1}{2}\cdot\frac{2}{1-\frac{1}{3}} = \frac{1}{6} < |z - w|.$$
By Rouché's theorem, the range of f contains the disk of radius 1/6 around 0.
Let D(z0, r) denote the open disk of radius r around z0. For an analytic function g : D(z0, r) → C such that g′(z0) ≠ 0, the case above applied to (g(z0 + rz) − g(z0)) / (rg′(z0)) implies that the range of g contains D(g(z0), |g′(z0)|r / 6).
For the general case, let f be an analytic function in the unit disk such that |f′(0)| = 1, and z0 = 0.
If |f′(z)| ≤ 2|f′(z0)| for |z − z0| < 1/4, then by the first case, the range of f contains a disk of radius |f′(z0)| / 24 = 1/24.
Otherwise, there exists z1 such that |z1 − z0| < 1/4 and |f′(z1)| > 2|f′(z0)|.
If |f′(z)| ≤ 2|f′(z1)| for |z − z1| < 1/8, then by the first case, the range of f contains a disk of radius |f′(z1)| / 48 > |f′(z0)| / 24 = 1/24.
Otherwise, there exists z2 such that |z2 − z1| < 1/8 and |f′(z2)| > 2|f′(z1)|.
Repeating this argument, we either find a disk of radius at least 1/24 in the range of f, proving the theorem, or find an infinite sequence (zn) such that |zn − zn−1| < 1/2n+1 and |f′(zn)| > 2|f′(zn−1)|.
In the latter case the sequence is in D(0, 1/2), so f′ is unbounded in D(0, 1/2), a contradiction.
Bloch's theorem
In the proof of Landau's Theorem above, Rouché's theorem implies that not only can we find a disk D of radius at least 1/24 in the range of f, but there is also a small disk D0 inside the unit disk such that for every w ∈ D there is a unique z ∈ D0 with f(z) = w. Thus, f is a bijective analytic function from D0 ∩ f−1(D) to D, so its inverse φ is also analytic by the inverse function theorem.
Bloch's and Landau's constants
For each holomorphic function f on the unit disk with |f′(0)| = 1, let Bf be the supremum of all radii b such that there is a disk S ⊂ D on which f is biholomorphic and f(S) contains a disk of radius b. The number B, defined as the infimum of Bf over all such f, is called Bloch's constant. The lower bound 1/72 in Bloch's theorem is not the best possible: Bloch's theorem tells us B ≥ 1/72, but the exact value of B is still unknown.
The best known bounds for B at present are
0.4332 ≈ √3/4 + 2×10⁻⁴ ≤ B ≤ √((√3 − 1)/2) · Γ(1/3)Γ(11/12)/Γ(1/4) ≈ 0.4719,
where Γ is the Gamma function. The lower bound was proved by Chen and Gauthier, and the upper bound dates back to Ahlfors and Grunsky.
The similarly defined optimal constant L in Landau's theorem is called Landau's constant. Its exact value is also unknown, but it is known that
1/2 < L ≤ Γ(1/3)Γ(5/6)/Γ(1/6) ≈ 0.5433.
In their paper, Ahlfors and Grunsky conjectured that their upper bounds are actually the true values of B and L.
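These closed forms are easy to evaluate numerically. The following minimal Python sketch (added for illustration; it assumes the Chen–Gauthier and Ahlfors–Grunsky expressions exactly as quoted above) reproduces the usual decimal approximations:

import math

# Numeric check of the bounds quoted above (a sketch, assuming the
# closed forms as reconstructed in this article).
g = math.gamma

lower_B = math.sqrt(3) / 4 + 2e-4                                         # Chen–Gauthier
upper_B = math.sqrt((math.sqrt(3) - 1) / 2) * g(1/3) * g(11/12) / g(1/4)  # Ahlfors–Grunsky
upper_L = g(1/3) * g(5/6) / g(1/6)                                        # conjectured value of L

print(f"B lower bound ≈ {lower_B:.4f}")   # ≈ 0.4332
print(f"B upper bound ≈ {upper_B:.4f}")   # ≈ 0.4719
print(f"L upper bound ≈ {upper_L:.4f}")   # ≈ 0.5433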
For injective holomorphic functions on the unit disk, a constant A can similarly be defined. It is known that 0.5705 ≤ A ≤ 0.7853.
See also
Table of selected mathematical constants
References
External links
Unsolved problems in mathematics
Theorems in complex analysis | Bloch's theorem (complex analysis) | [
"Mathematics"
] | 1,251 | [
"Mathematical problems",
"Theorems in mathematical analysis",
"Unsolved problems in mathematics",
"Theorems in complex analysis"
] |
11,832,350 | https://en.wikipedia.org/wiki/Growth%20factor%20receptor | A growth factor receptor is a receptor that binds to a growth factor. Growth factor receptors are the first stop in cells where the signaling cascade for cell differentiation and proliferation begins. Growth factors, the ligands that bind to these receptors, provide the initial activating step, telling the cell to grow and/or divide.
These receptors may use the JAK/STAT, MAP kinase, and PI3 kinase pathways.
The majority of growth factor receptors are receptor tyrosine kinases (RTKs). Three receptor types dominate research: the epidermal growth factor receptor, the neurotrophin receptor, and the insulin receptor. All growth factor receptors are membrane-bound and composed of three general protein domains: extracellular, transmembrane, and cytoplasmic. The extracellular domain is where a ligand binds, usually with very high specificity. In RTKs, the binding of a ligand to the extracellular ligand-binding site leads to the autophosphorylation of tyrosine residues in the intracellular domain. These phosphorylations allow other intracellular proteins with phosphotyrosine-binding domains to bind, which results in a series of physiological responses within the cell.
Medical relevance
Much current research focuses on growth factor receptors in order to develop targeted cancer treatments. Epidermal growth factor receptors are heavily involved in oncogene activity. Once growth factors bind to their receptor, a signal transduction pathway proceeds within the cell to regulate its growth. In cancerous cells, however, the pathway may fail to turn on or turn off correctly. Furthermore, in certain cancers, receptors (such as RTKs) are often observed to be overexpressed, which corresponds to the uncontrolled proliferation and differentiation of cells. For this same reason, receptor tyrosine kinases are often a target for cancer therapy.
References
Receptors
Single-pass transmembrane proteins | Growth factor receptor | [
"Chemistry"
] | 406 | [
"Receptors",
"Signal transduction"
] |
11,832,700 | https://en.wikipedia.org/wiki/List%20of%20abbreviations%20in%20oil%20and%20gas%20exploration%20and%20production | The oil and gas industry uses many acronyms and abbreviations. This list is meant for indicative purposes only and should not be relied upon for anything but general information.
#
1C – Proved contingent resources
1oo2 – One out of two voting (instrumentation)
1P – Proven reserves
2C – Proved and probable contingent resources
2D – two-dimensional (geophysics)
2oo2 – Two out of two voting (instrumentation)
2oo2D – Two out of two voting with additional diagnostic detection capabilities (instrumentation)
2oo3 – Two out of three voting (instrumentation)
2ooN – two out of N voting, to reach a specified alarm limit when N ≥ 3 (instrumentation)
2P – proved and probable reserves
3C – three components seismic acquisition (x, y, and z)
3C – Proved, probable and possible contingent resources
3D – three-dimensional (geophysics)
3P – proved, probable and possible reserves
4D – multiple 3Ds acquired over time (the 4th D) over the same area with the same parameters (geophysics)
8rd – eight round (describes the number of threads per inch of pipe thread)
Symbol
°API – degrees API (American Petroleum Institute) density of oil
A
A – Appraisal (well)
AADE – American Association of Drilling Engineers
AAPG – American Association of Petroleum Geologists
AAPL – American Association of Professional Landmen
AAODC – American Association of Oilwell Drilling Contractors (obsolete; superseded by IADC)
AAV – Annulus access valve
ABAN – abandonment (also as AB, ABD and ABND)
ABSA – Alberta Boilers Safety Association
ABT – Annulus bore test
ACC – Air-cooled heat condenser
ACHE – Air-cooled heat exchanger
ACOU – Acoustic
ACP – Alkali-cosolvent-polymer
ACQU – Acquisition log
ACV – Automatic control valve
ADE – Advanced decision-making environment
ADEP – Awaiting development with exploration potential, referring to an asset
ADROC – advanced rock properties report
ADT – Applied drilling technology, ADT log
ADM – Advanced diagnostics module (fieldbus)
AER – Auto excitation regulator
AEMO – Australian Energy Market Operator
AFE – Authorization for expenditure, a process of submitting a business proposal to investors
AFP – active fire protection
AGA – American Gas Association
AGRU – acid gas removal unit
AGT – (1) agitator, used in drilling
AGT – (2) authorised gas tester (certified by OPITO)
AGT – (3) Azerbaijan – Georgia – Turkey (a region rich in oil related activity)
AHBDF – along hole (depth) below Derrick floor
AHD – along hole depth
AHU – air handling unit
AICD – autonomous inflow control device
AIChemE – American Institute of Chemical Engineers
AIM – asset integrity management
AIPSM – asset integrity and process safety management
AIR – assurance interface and risk
AIRG – airgun
AIRRE – airgun report
AISC – American Institute of Steel Construction
AISI – American Iron and Steel Institute
AIT – analyzer indicator transmitter
AIT – array induction tool
AL – appraisal license (United Kingdom), a type of onshore licence issued before 1996
ALAP – as low as possible (used along with density of mud)
ALARP – as low as reasonably practicable
ALC – vertical seismic profile acoustic log calibration report
ALLMS – anchor leg load monitoring system
ALQ – additional living quarters
ALR – acoustic log report
ALT – altered
AM – asset management
aMDEA – activated methyldiethanolamine
AMS – auxiliary measurement service log; auxiliary measurement sonde (temperature)
AMSL – above mean sea level
AMI – area of mutual interest
AMV – annulus master valve
ANACO – analysis of core logs report
ANARE – analysis report
AOF – absolute open flow
AOFP – absolute open-flow potential
AOI – area of interest
AOL – arrive on location
AOR – additional oil recovery
AP – alkali-polymer
APD – application for permit to drill
API – American Petroleum Institute: organization which sets unit standards in the oil and gas industry
°API – degrees API (gravity of oil)
APPRE – appraisal report
APS – active pipe support
APWD – annular pressure while drilling (tool)
ARACL – array acoustic log
ARESV – analysis of reservoir
ARI – azimuthal resistivity image
ARRC – array acoustic report
ART – actuator running tool
AS – array sonic processing log
ASD – acoustic sand detection
ASI – ASI log
ASME – American Society of Mechanical Engineers
ASOG – activity-specific operating guidelines
ASP – array sonic processing report
ASP – alkali-surfactant-polymer
ASTM – American Society for Testing and Materials
ASCSSV – annulus surface controlled sub-surface valve
ASV – anti-surge valve
ASV – annular safety valve
ASV – accommodation and support vessel
ATD – application to drill
ATU – auto top-up unit
AUV – autonomous underwater vehicle
AV – annular velocity or apparent viscosity
AVGMS – annulus vent gas monitoring system
AVO – amplitude versus offset (geophysics)
AWB/V – annulus wing block/valve (XT)
AWO – approval for well operation
ATM – at the moment
B
B or b – prefix denoting a number in billions
BA – bottom assembly (of a riser)
bbl – barrel
bbl/MMscf – barrels per million standard cubic feet
BBG – buy back gas
BBSM – behaviour-based safety management
BCPD – barrels condensate per day
Bcf – billion cubic feet (of natural gas)
Bcf/d – billion cubic feet per day (of natural gas)
Bcfe – billion cubic feet (of natural gas equivalent)
BD – bursting disc
BDF – below derrick floor
BDL – bit data log
BDV – blowdown valve
BGL – borehole geometry log
BGL – below ground level (used as a datum for depths in a well)
BGS – British Geological Survey
BGT – borehole geometry tool
BGWP – base of ground-water protection
BH – bloodhound
BHA – bottom hole assembly (toolstring on coiled tubing or drill pipe)
BHC – BHC gamma ray log
BHCA – BHC acoustic log
BHCS – BHC sonic log
BHCT – bottomhole circulating temperature
BHKA – bottomhole kickoff assembly
BHL – borehole log
BHP – bottom hole pressure
BHPRP – borehole pressure report
BHSRE – bottom hole sampling report
BHSS – borehole seismic survey
BHT – bottomhole temperature
BHTV – borehole television report
BINXQ – bond index quicklook log
BIOR – biostratigraphic range log
BIORE – biostratigraphy study report
BSLM – bend stiffener latch mechanism
BSW – base sediment and water
BIVDL – BI/DK/WF/casing collar locator/gamma ray log
BLD – bailed (refers to the practice of removing debris from the hole with a cylindrical container on a wireline)
BLI – bottom of logging interval
BLP – bridge-linked platform
BO – back-off log
BO – barrel of oil
boe – barrels of oil equivalent
boed – barrels of oil equivalent per day
BOEM – Bureau of Ocean Energy Management
boepd – barrels of oil equivalent per day
BOB – back on bottom
BOD – biological oxygen demand
BOL – bill of lading
BOM – bill of materials
BOP – blowout preventer
BOP – bottom of pipe
BOPD – barrels of oil per day
BOPE – blowout prevention equipment
BOREH – borehole seismic analysis
BOSIET – basic offshore safety induction and emergency training
BOTHL – bottom hole locator log
BOTTO – bottom hole pressure/temperature report
BP – bridge plug
BPD – barrels per day
BPH – barrels per hour
BPFL – borehole profile log
BPLUG – baker plug
BPM – barrels per minute
BPV – back pressure valve (goes on the end of coiled tubing and drill pipe tool strings to prevent fluid flow in the wrong direction)
BQL – B/QL log
BRPLG – bridge plug log
BRT – below rotary table (used as a datum for depths in a well)
BS – bend stiffener
BS – bumper sub
BS – booster station
BSEE – US: Bureau of Safety and Environmental Enforcement (formerly the MMS)
BSG – black start generator
BSR – blind shear rams (blowout preventer)
BSML – below sea mean level
BS&W – basic sediments and water
BT – buoyancy tank
BTEX – benzene, toluene, ethyl-benzene and xylene
BTHL – bottom hole log
BTO/C – break to open/close (valve torque)
BTU – British thermal units
BTU – Board of Trade Unit (1 kWh) (historical)
BU – bottom up
BUL – bottom-up lag
BUR – build-up rate
BVO – ball valve operator
bwd – barrels of water per day (often used in reference to oil production)
bwipd – barrels of water injected per day
bwpd – barrels of water per day
C
C&E – well completion and equipment cost
C&S – cased and suspended
C1 – methane
C2 – ethane
C3 – propane
C4 – butane
C6 – hexanes
C7+ – heavy hydrocarbon components
CA – core analysis log
CAAF – contract authorization approval form
CalGEM – California Geologic Energy Management Division (oil & gas regulatory body)
CALI – caliper log
CALOG – circumferential acoustic log
CALVE – calibrated velocity log data
CAODC – Canadian Association of Oilwell Drilling Contractors
CAPP – Canadian Association of Petroleum Producers
CAR – Company Appointed Representative
CART – cam-actuated running tool (housing running tool)
CART – cap replacement tool
CAS – casing log
CAT – connector actuating tool
CB – casing bowl
CB – core barrel
CBF – casing bowl flange
CBIL – CBIL log
CBL – cement bond log (measurement of casing cement integrity)
CBM – choke bridge module – XT choke
CBM – conventional buoy mooring
CBM – coal-bed methane
CCHT – core chart log
CCL – casing collar locator (in perforation or completion operations, the tool provides depths by correlation of the casing string's magnetic anomaly with known casing features)
CCLBD – construction / commissioning logic block diagram
CCLP – casing collar locator perforation
CCLTP – casing collar locator through tubing plug
CD – core description
CDATA – core data
CDIS – CDI synthetic seismic log
CDU – control distribution unit
CDU – crude distillation unit
CDP – common depth point (geophysics)
CDP – comprehensive drilling plan
CDRCL – compensated dual resistivity cal. log
CDF – core contaminated by drilling fluid
CDFT – critical device function test
CE – CE log
CEC – cation-exchange capacity
CECAN – CEC analysis
CEME – cement evaluation
CEOR – chemical-enhanced oil recovery
CER – central electrical/equipment room
CERE – cement remedial log
CET – cement evaluation tool
CF – completion fluid
CF – casing flange
CFD – computational fluid dynamics
CFGPD – cubic feet of gas per day
CFU – compact flotation unit
CGEL – CG EL log
CGL – core gamma log
CGPA – Canadian Gas Processors Association
CGPH – core graph log
CGR – condensate gas ratio
CGTL – compact gas to liquids (production equipment small enough to fit on a ship)
CHCNC – CHCNC gamma ray casing collar locator
CHDTP – calliper HDT playback log
CHECK – checkshot and acoustic calibration report
CHESM – contractor, health, environment and safety management
CHF – casing head flange
CHK – choke (a restriction in a flowline or a system, usually referring to a production choke during a test or the choke in the well control system)
CHKSR – checkshot survey report
CHKSS – checkshot survey log
CHOPS – cold heavy oil production with sand
CHP – casing hanger pressure (pressure in an annulus as measured at the casing hanger)
CHOTO – commissioning, handover and takeover
CHROM – chromatolog
CHRT – casing hanger running tool
CIBP – cast iron bridge plug
CICR – cast iron cement retainer
CIDL – chemical injection downhole lower
CIDU – chemical injection downhole upper
CIL – chemical injection line
CILD – conduction log
CIMV – chemical injection metering valve
CIRC – circulation
CITHP – closed-in tubing head pressure (tubing head pressure when the well is shut in)
CIV – chemical injection valve
CK – choke (a restriction in a flowline or a system, usually referring to a production choke during a test or the choke in the well control system)
CL – core log
CLG – core log and graph
CM – choke module
CMC – crown mounted compensators
CMC – critical micelle concentration
CMP – common midpoint (geophysics)
CMR – combinable magnetic resonance (NMR log tool)
CMT – cement
CNA – clay, no analysis
CND – compensated neutron density
CNFDP – CNFD true vertical-depth playback log
CNGR – compensated neutron gamma-ray log
CNL – compensated neutron log
CNLFD – CNL/FDC log
CNS – Central North Sea
CNCF – field-normalised compensated neutron porosity
CNR – Canadian natural resources
CO – change out (ex. from rod equipment to casing equipment)
COA – conditions of approval
COC – certificate of conformance
COD – chemical oxygen demand
COL – collar log
COMAN – compositional analysis
COML – compaction log
COMP – composite log
COMPR – completion program report
COMPU – computest report
COMRE – completion record log
COND – condensate production
CONDE – condensate analysis report
CONDR – continuous directional log
CORAN – core analysis report
CORE – core report
CORG – corgun log
CORIB – CORIBAND log
CORLG – correlation log
COROR – core orientation report
COW – Control of Work
COXY – carbon/oxygen log
CP – cathodic protection
CP – crown plug
cP – centipoise (viscosity unit of measurement)
CPI separator – corrugated plate interceptor
CPI – computer-processed interpretation
CPI – corrugated plate interceptor
CPICB – computer-processed interpretation coriband log
CPIRE – computer-processed interpretation report
CPP – central processing platform
CRA – corrosion-resistant alloy
CRET – cement retainer setting log
CRI – cuttings reinjection
CRINE – Cost Reduction Initiative for the New Era
CRP – control riser platform
CRP – common/central reference point (subsea survey)
CRT – clamp replacement tool
CRT – casing running tool
CSE – confined space entry
CsF – caesium formate
CSC – car seal closed
CSG – coal seam gas
csg – casing
CSHN – cased-hole neutron log
CSI – combinable seismic imager (VSP) log (Schlumberger)
CSMT – core sampler tester log
CSO – complete seal-off
CSO – car seal open
CSPG – Canadian Society of Petroleum Geologists
CSR – corporate social responsibility
CST – chronological sample taker log (Schlumberger)
CSTAK – core sample taken log
CSTR – continuously-stirred tank reactor
CSTRE – CST report
CSU – commissioning and start-up
CSU – construction safety unit
CSUG – Canadian Society for Unconventional Gas
CT – coiled tubing
CTD – coiled tubing drilling
CTCO – coiled tubing clean-out
CTLF – coiled tubing lift frame
CTLF – compensated tension lift frame
CTOD – crack tip opening displacement
CTP – commissioning test procedure
CTR – Critical Transport Rate
CTRAC – cement tracer log
CUI – corrosion under insulation
CUL – cross-unit lateral
CUT – cutter log
CUTTD – cuttings description report
CWOP – complete well on paper
CWOR – completion work over riser
CWR – cooling water return
CWS – cooling water supply
X/O – cross-over
CYBD – Cyberbond log
CYBLK – Cyberlook log
CYDIP – Cyberdip log
CYDN – Cyberdon log
CYPRO – Cyberproducts log
CVD – Cost versus Depth
CVX – Chevron
D
D – development
D – Darcy, unit of permeability
D&A – dry and abandoned
D&C – drilling and completions
D&I – direction and inclination (MWD borehole deviation survey)
DAC – dipole acoustic log
DARCI – Darci log
DAS – data acquisition system
DAT – wellhead housing drill-ahead tool
DAZD – dip and azimuth display
DBB – double block and bleed
DBP – drillable bridge plug
DBR – damaged beyond repair
DCA – decline curve analysis
DC – drill centre
DC – drill collar/collars
DCAL – dual caliper log
DCC – distance cross course
DCS – distributed control system
DD – directional driller or directional drilling
DDC – daily drilling cost
DDC – de-watering and drying contract
DDBHC – DDBHC waveform log
DDET – depth determination log
DDM – derrick drilling machine (a.k.a. top drive)
DDNL – dual det. neutron life log
DDPT – drill data plot log
DDPU – double drum pulling unit
DDR – daily drilling report
DEA – diethanolamine
DECC – Department of Energy and Climate Change (UK)
DECT – decay time
DECT – down-hole electric cutting tool
DEFSU – definitive survey report
DEH – direct electrical heating
DELTA – delta-T log
DEN – density log
DEPAN – deposit analysis report
DEPC – depth control log
DEPT – depth
DESFL – deep induction SFL log
DEV – development well, Lahee classification
DEVLG – deviation log
DEXP – D-exponent log
DF – derrick floor
DFI – design, fabrication and installation résumé
DFIT – diagnostic fracture injection test
DFPH – barrels of fluid per hour
DFR – drilling factual report
DG/DG# – diesel generator ('#' means the identification letter or number of the equipment, i.e. DG3 or DG#3 means diesel generator no. 3)
DGA – diglycoamine
DGDS – dual-gradient drilling systems
DGP – dynamic geohistory plot (3D technique)
DH – drilling history
DHC – depositional history curve
DHSV – downhole safety valve
DHPG – downhole pressure gauge
DHPTT – downhole pressure/temperature transducer
DIBHC – DIS BHC log
DIEGR – dielectric gamma ray log
DIF – drill in fluids
DIL – dual-induction log
DILB – dual-induction BHC log
DILL – dual-induction laterolog
DILLS – dual-induction log-LSS
DILSL – dual-induction log-SLS
DIM – directional inertia mechanism
DINT – dip interpretation
DIP – dipmeter log
DIPAR – dipole acoustic report
DIPBH – dipmeter borehole log
DIPFT – dipmeter fast log
DIPLP – dip lithology pressure log
DIPRE – dipmeter report
DIPRM – dip removal log
DIPSA – dipmeter soda log
DIPSK – dipmeter stick log
DIRS – directional survey log
DIRSU – directional survey report
DIS – DIS-SLS log
DISFL – DISFL DBHC gamma ray log
DISO – dual induction sonic log
DL – development license (United Kingdom), a type of onshore license issued before 1996
DLIST – dip-list log
DLL – dual laterolog (deep and shallow resistivity)
DLS – dog-leg severity (directional drilling)
DM – dry mate
DMA – dead-man anchor
DMAS – dead-man auto-shear DMAS
DMRP – density – magnetic resonance porosity (wireline tool)
DMT – down-hole monitoring tool
DNHO – down-hole logging
DNV – Det Norske Veritas
DOA – delegation of authority
DOE – Department of Energy, United States
DOGGR – Division of Oil, Gas, and Geothermal Resources (former name of California's regulatory entity for oil, gas, and geothermal production)
DOPH – drilled-out plugged hole
DOWRE – downhole report
DP – drill pipe
DP – dynamic positioning
DPDV – dynamically positioned drilling vessel
DPL – dual propagation log
DPLD – differential pressure levitated device (or vehicle)
DPRES – dual propagation resistivity log
DPT – deeper pool test, Lahee classification
DQLC – dipmeter quality control log
DR – dummy-run log
DR – drilling report
DRI – drift log
DRL – drilling
DRLCT – drilling chart
DRLOG – drilling log
DRLPR – drilling proposal/progress report
DRO – discovered resources opportunities
DRPG – drilling program report
DRPRS – drilling pressure
DRREP – drilling report
DRYRE – drying report
DS – deviation survey, (also directional system)
DSA – Double Studded Adapter
DSCAN – DSC analysis report
DSI – dipole shear imager
DSL – digital spectralog (western atlas)
DSPT – cross-plots log
DST – drill-stem test
DSTG – DSTG log
DSTL – drill-stem test log
DSTND – dual-space thermal neutron density log
DSTPB – drill-stem test true vertical depth playback log
DSTR – drill-stem test report
DSTRE – drill-stem test report
DSTSM – drill-stem test summary report
DSTW – drill-stem test job report/works
DSU – drill spacing unit
DSV – diving support vessel or drilling supervisor
DTI – Department of Trade and Industry (UK) (obsolete; superseded by BERR, which was then superseded by DECC)
DTPB – CNT true vertical-depth playback log
DTT – depth to time
DUC – drilled but uncompleted wells
DVD – Depth versus Day
DVT – differential valve tool (for cementing multiple stages)
DWOP – drilling well on paper (a theoretical exercise conducted involving the service-provider managers)
DWQL – dual-water quicklook log
DWSS – dig-well seismic surface log
DXC – DXC pressure pilot report
E
E – exploration
E&A – exploration and appraisal
E&I – electrical and instrumentation
E&P – exploration and production, another name for the upstream sector
EA – exploration asset
EAGE – European Association of Geoscientists and Engineers
ECA – Easington Catchment Area
ECD – equivalent circulating density
EDG/EDGE – emergency diesel generator
ECMS – electrical control and monitoring system
ECMWF – European Centre for Medium-Range Weather Forecasts
ECP – external casing packer
ECRD – electrically-controlled release device (for abandoning stuck wireline tool from cable)
ECT – external cantilevered turret
EDG – Emergency Diesel Generator
EDP – exploration drilling program report
EDP – emergency disconnect package
EDP – emergency depressurisation
EDPHOT – emergency drill pipe hang-off tool
EDR – exploration drilling report
EDR – electronic drilling recorder
EDS – emergency disconnection sequence
EEAR – emergency electrical auto restart
EEHA – electrical equipment for hazardous areas (IECEx)
EFL – electrical flying lead
EFR – engineering factual report
EHT – electric heat trace
EGBE – ethylene glycol monobutyl ether (2-butoxyethanol)
EGMBE – ethylene glycol monobutyl ether
EHU – electro-hydraulic unit
EIA – environmental impact assessment
EI – Energy Institute
ELEC TECH – electronics technician
ELT – economic limit test
EL – electric log
EM – EMOP log
EMCS – energy management and control systems
EMD – equivalent mud density
EMG – equivalent mud gradient
EMOP – EMOP well site processing log
EMP – electromagnetic propagation log
EMR – electronic memory read-out
EMS – environment measurement sonde (wireline multi-caliper)
EMW – equivalent mud weight
EN PI – enhanced productivity index log
ENG – engineering log
ENGF – engineer factual report
ENGPD – engineering porosity data
Eni – Ente Nazionale Idrocarburi S.p.A. (Italy)
ENJ – enerjet log
ENMCS – electrical network monitoring and control system
EODU – electrical and optical distribution unit
EOFL – end of field life
EOR – enhanced oil recovery
EOT – end of tubing
EOT – electric overhead travelling
ELV – extra-low voltage
EOW – end-of-well report
EPCM/I – engineering procurement construction and management/installation
EPCU – electrical power conditioning unit
EPIDORIS – exploration and production integrated drilling operations and reservoir information system
EPL – EPL log
EPLG – epilog
EPLPC – EPL-PCD-SGR log
EPS – early production system
EPT – electromagnetic propagation
EPU – electrical power unit
EPTNG – EPT-NGT log
EPV – early production vessel
ERD – extended reach (drilling)
ERT – emergency response training
ESD – emergency shutdown
ESD – equivalent static density
ESDV – emergency shutdown valve
ESHIA – environmental, social and health impact assessment
ESIA – environmental and social impact assessment
ESP – electric submersible pump
ETAP – Eastern Trough Area Project
ETD – external turret disconnectable
ETECH – electronics technician
ETTD – electromagnetic thickness test
ETU – electrical test unit
EUE – external-upset-end (tubing connection)
EUR – estimated ultimate recovery
EVARE – evaluation report
EWMP – earthworks/electrical works/excavation works management plan
EWR – end-of-well report
EXL (or XL) – exploration licence (United Kingdom), a type of onshore licence issued between the first onshore licensing round (1986) and the sixth (1992)
EXP – exposed
EZSV – easy sliding valve (drillable packer plug)
F
F&G – fire and gas
FAC – factual report
FAC – first aid case
FACHV – four-arm calliper log
FANAL – formation analysis sheet log
FANG – friction angle
FAR – field auxiliary room
FAT – factory acceptance testing
FB – full bore
FBE – fusion-bonded epoxy
FBHP – flowing bottom-hole pressure
FBHT – flowing bottom-hole temperature
FC – float collar
FC – fail closed (valve or damper)
FCGT – flood clean gauge test
FCM – flow control module
FCP – final circulating pressure
FCV – flow control valve
FCVE – F-curve log
FDC – formation density log
FDF – forced-draft fan
FDP – field development plan
FDS – functional design specification
FDT – fractional dead time
FEED – front-end engineering design
FEL – from east line
FER – field equipment room
FER – formation evaluation report
FEWD – formation evaluation while drilling
FFAC – formation factor log
FFM – full field model
FG – fiberglass
FGHT – flood gauge hydrotest
FRP – fiberglass reinforced plastics
FGEOL – final geological report
FH – full-hole tool joint
FI – final inspection
FID – final investment decision
FID – flame ionisation detection
FIH – finish in hole (tripping pipe)
FIL – FIL log
– free issue (materials)
FINST – final stratigraphic report
FINTP – formation interpretation
FIP – flow-induced pulsation
FIT – fairing intervention tool
FIT – fluid identification test
FIT – formation integrity test
FIT – formation interval tester
FIT – flow indicator transmitter
FIV – flow-induced vibration
FIV – formation isolation valve
FJC – field joint coating
FL – F log
FL – fail locked (valve or damper)
FL – fluid level
FLAP – fluid level above pump
FLB – field logistics base
FLDF – flying lead deployment frame
FLIV – flowline injection valve
FLIV – flowline isolation valve
FLET – flowline end termination
aFLET – actuated flowline end termination
FLNG – floating liquefied natural gas
FLOG – FLOG PHIX RHGX log
FLOPR – flow profile report
FLOT – flying lead orientation tool
FLOW – flow and buildup test report
FLRA – field-level risk assessment
FLS – fluid sample
FLT – fault (geology)
FLT – flying lead termination
FLTC – fail locked tending to close
FLTO – fail locked tending to open
FMD – flooded member detection
FMEA – failure modes, & effects analysis
FMECA – failure modes, effects, and criticality analysis
FMI – formation micro imaging log (azimuthal microresistivity)
FMP – formation microscan report
FMP – Field Management Plan
FMS – formation multi-scan log; formation micro-scan log
FMS – flush-mounted slips
FMT – flow management tool
FMTAN – FMT analysis report
FNL – from north line
FO – fail open (valve or damper)
FOBOT – fibre optic breakout tray
FOET – further offshore emergency training
FOF – face of flange
FOH – finish out of hole (tripping pipe)
FOSA – field operating services agreement
FOSV – full-opening safety valve
FPDM – fracture potential and domain modelling/mapping
FPH – feet per hour
FPIT – free-point indicator tool
FPL – flow analysis log
FPLP – freshman petroleum learning program (Penn State)
FPLAN – field plan log
FPS – field production system
FPO – floating production and offloading – vessel with no or very limited (process only) on-board produced fluid storage capacity.
FPSO – floating production storage and offloading vessel
FPU – floating processing unit
FRA – fracture log
FRARE – fracture report
FRES – final reserve report
FS – fail safe
FSB – flowline support base
FSI – flawless start-up initiative
FSL – from south line
FSLT – flexible sealine lifting tool
FSO – floating storage offloading vessel
FSR – facility status report
FSU – floating storage unit
FT – formation tester log
FTHP – Flowing Tubing Head Pressure
FTL – field team leader
FTM – fire-team member
FTP – first tranche petroleum
FTP – field terminal platform
FTR – function test report
FTRE – formation testing report
FULDI – full diameter study report
FV – funnel viscosity
FV – float valve
FVF – formation volume factor
FWHP – flowing well-head pressure
FWKO – free water knock-out
FWL – free water level
FWL – from West line
FWR – final well report
FWV – flow wing valve (also known as production wing valve on a christmas tree)
FR – flow rate
G
G/C – gas condensate
GC – gathering center
G&P – gathering and processing
G&T – gathering and transportation
GALT – gross air leak test
GAS – gas log
GASAN – gas analysis report
GBS – gravity-based structure
GBT – gravity base tank
GC – Gauge Cutter
GCB – generator circuit breaker
GCLOG – graphic core log
GCT – GCT log
GDAT – geodetic datum
GDE – gross depositional environment
GDIP – geodip log
GDT – gas down to
GE – condensate gas equivalent
GE – ground elevation (also GR, or GRE)
GEOCH – geochemical evaluation
GEODY – GEO DYS log
GEOEV – geochemical evaluation report
GEOFO – geological and formation evaluation report
GEOL – geological surveillance log
GEOP – geophone data log
GEOPN – geological well prognosis report
GEOPR – geological operations progress report
GEORE – geological report
GGRG – gauge ring
GIIP – gas initially in place
GIH – go in hole
GIP – gas in place
GIS – geographic information system
GL – gas lift
GL – ground level
GLE – ground level elevation (generally in metres above mean sea level)
GLM – gas lift mandrel (alternative name for side pocket mandrel)
GLR – gas-liquid ratio
GLT – GLT log
GLV – gas lift valve
GLW –
GM – gas migration
GOC – gas oil contact
GOM – Gulf of Mexico
GOP – geological operations report
GOR – gas oil ratio
GOSP – gas/oil separation plant
GPIT – general-purpose inclinometry tool (borehole survey)
GPLT – geol plot log
GPTG – gallons per thousand gallons
GPM – gallons per Mcf
GPSL – geo pressure log
GR – ground level
GR – gamma ray
GR – gauge ring (measure hole size)
GRAD – gradiometer log
GRE – ground elevation
GRLOG – grapholog
GRN – gamma ray neutron log
GRP – glass-reinforced plastic
GRV – gross rock volume
GRSVY – gradient survey log
GS – gas supplier
GS – gel strength
GST – GST log
GTC/G – gas turbine compressor/generator
GTL – gas to liquids
GTW – gas to wire
GUN – gun set log
GWC – gas-water contact
GWR – guided wave radar
GWREP – geo well report
H
HAT – highest astronomical tide
HAZ – heat-affected zone
HAZID – hazard identification (meeting)
HAZOP – hazard and operability study (meeting)
HBE – high-build epoxy
HBP – held by production
HC – hydrocarbons
HCAL – HRCC caliper (in logs) (in inches)
HCCS – horizontal clamp connection system
HCM – horizontal connection module (to connect the christmas tree to the manifold)
HCS – high-capacity square mesh screens
HD – head
HDA – helideck assistant
HDD – horizontal directional drilling
HDPE – high-density polyethylene
HDT – high-resolution dipmeter log
HDU – horizontal drive unit
HEXT – hex diplog
HFE – human factors engineering
HFL – hydraulic flying lead
HGO – heavy gas oil
HGS – high (specific-)gravity solids
HH – horse head (on pumping unit)
HHP – hydraulic horsepower
HI – hydrogen index
HiPAP – high-precision acoustic positioning
HIPPS – high-integrity pressure protection system
HIRA – hazard identification and risk assessment
HISC – hydrogen-induced stress cracking
HKLD – hook load
HL – hook load
HLCV – heavy-lift crane vessel
HLO – heavy load-out (facility)
HLO – helicopter landing officer
Hmax – maximum wave height
HNGS – flasked hostile natural gamma-ray spectrometry tool
HO – hole opener
HOB – hang on bridle (cable assembly)
HMR – heating medium return
HMS – heating medium supply
HP – hydrostatic pressure
HPAM – partially hydrolyzed polyacrylamide
HPGAG – high-pressure gauge
HPHT – high-pressure high-temperature
HPPS – HP pressure log
HPU – hydraulic power unit
HPWBM – high-performance water-based mud
HRCC – HCAL of caliper (in inches)
HRLA – high-resolution laterolog array (resistivity logging tool)
HRF – hyperbaric rescue facility/vessel
HRSG – heat recovery steam generator
Hs – significant wave height
HSE – health, safety and environment or Health & Safety Executive (United Kingdom)
HSV – hyperbaric support vessel
HTHP – high-temperature high pressure
HTM – helideck team member
HVDC – high voltage direct current
HWDP – heavy-weight drill pipe (sometimes spelled hevi-wate)
HUD – hold-up depth
HUN – hold-up nipple
HUET – helicopter underwater escape training
HVAC – heating, ventilation and air-conditioning
HYPJ – hyperjet
HYROP – hydrophone log
I
I:P – injector to producer ratio
IADC – International Association of Drilling Contractors
IAT – internal active turret
IBC – intermediate bulk container
IC – instrument cable
ICoTA – Intervention and Coiled Tubing Association
ICC – isolation confirmation (or control) certificate
ICD – inflow control device
IECEx – international electrotechnical commission system for certification to standards relating to equipment for use in explosive atmospheres (EEHA)
ICP – initial circulating pressure
ICP – intermediate casing point
ICP – inductively coupled plasma
ICSS – integrated controls and safety system
ICSU – integrated commissioning and start-up
ICV – interval control valve
ICV – integrated cement volume (of borehole)
ICW – incomplete work
ID – inner or internal diameter (of a tubular component such as a casing)
IDC – intangible drilling costs
IDEL – IDEL log
IEB – induction electro BHC log
IEL – induction electrical log
IF – internal flush tool joint
iFLS – intelligent fast load shedding
IFP – French Institute of Petroleum (Institut Français du Petrole)
IFT – interfacial tension
IGPE – immersion grade phenolic epoxy
IGV – inlet guide vane
IH – gamma ray log
IHEC – isolation of hazardous energy certificate
IHUC – installation, hook-up and commissioning
IHV – integrated hole volume (of borehole)
IIC – infield installation contractor
IJL – injection log
IL – induction log
ILI – inline inspection (intelligent pigging)
ILOGS – image logs
ILT – inline tee
IMAG – image analysis report
IMCA – International Marine Contractors Association
IMPP – injection-molded polypropylene coating system
IMR – inspection, maintenance, and repair
INCR – incline report
INCRE – incline report
INDRS – IND RES sonic log
INDT – INDT log
INDWE – individual well record report
INJEC – injection falloff log
INS – insufficient sample
INS – integrated navigation system
INSUR – inrun survey report
INVES – investigative program report
IOC – international oil company
IOM – installation, operation and maintenance manual
IOS – internal olefin sulfonate
IOS – isomerized olefin sulfonate
IP – ingress protection
IP – Institute of Petroleum, now Energy Institute
IP – intermediate pressure
IPAA – Independent Petroleum Association of America
IPC – installed production capacity
IPLS – IPLS log
IPR – inflow performance relationship
IPT – internal passive turret
IR – interpretation report
IRC – inspection release certificate
IRDV – intelligent remote dual valve
IRTJ – IRTJ gamma ray slimhole log
ISD – instrument-securing device
ISF – ISF sonic log
ISFBG – ISF BHC GR log
ISFCD – ISF conductivity log
ISFGR – ISF GR casing collar locator log
ISFL – ISF-LSS log
ISFP – ISF sonic true vertical depth playback log
ISFPB – ISF true vertical depth playback log
ISFSL – ISF SLS MSFL log
ISIP – initial shut-in pressure
ISSOW – integrated safe system of work
ISV – infield support vessel
ITD – internal turret disconnectable
ITO – inquiry to order
ITR – inspection test record
ITS – influx to surface
ITT – internal testing tool (for BOP test)
IUG – instrument utility gas
IWCF – International Well Control Federation
IWOCS – installation/workover control system
IWTT – interwell tracer test
J
J&A – junked and abandoned
JB – junk basket
JHA – job hazard analysis
JIB – joint-interest billing
JLT – J-lay tower
JSA – job safety analysis
JT – Joule-Thomson (effect/valve/separator)
JTS – joints
JU – jack-up drilling rig
JV – joint venture
JVP – joint venture partners/participants
K
KB – kelly bushing
KBE – kelly bushing elevation (in meters above sea level, or meters above ground level)
KBG – kelly bushing height above ground level
KBUG – kelly bushing underground (drilling up in coal mines, West Virginia, Baker & Taylor drilling)
KCI – potassium chloride
KD – kelly down
KMW – kill mud weight
KOEBD – gas converted to oil-equivalent at 6 million cubic feet = 1 thousand barrels
KOH – potassium hydroxide
KOP – kick-off point (directional drilling)
KOP – kick-off plug
KP – kilometre post
KRP – kill rate pressure
KT – kill truck
KLPD – kiloliters per day
L
LACT – lease automatic custody transfer
LAH – lookahead
LAOT – linear activation override tool
LARS – launch and recovery system
LAS – Log ASCII standard
LAT – lowest astronomical tide
LBL – long baseline (acoustics)
LC – locked closed
LCM – lost circulation material
LCNLG – LDT CNL gamma ray log
LCR – local control room
LCV – level control valve
L/D – lay down (such as tubing or rods)
LD – lay down (such as tubing or rods)
LDAR – leak detection and repair
LDHI – low-dosage hydrate inhibitor
LDL – litho density log
LDS – leak detection system (pipeline monitoring)
LDTEP – LDT EPT gamma ray log
LEAKL – leak detection log
LEPRE – litho-elastic property report
LER – lands eligible for remining or land equivalent ratio
LER – local equipment room
LGO – light gas oil
LGR – liquid gas ratio
LGS – low (specific-)gravity solids
LHT – left hand turn
LIC – license
LIB – lead impression block
LINCO – liner and completion progress report
LIOG – lithography log
LIT – lead impression tool
LIT – level indicator transmitter
LITDE – litho density quicklook log
LITHR – lithological description report
LITRE – lithostratigraphy report
LITST – lithostratigraphic log
LKO – lowest known oil
LL – laterolog
LMAP – location map
LMRP – lower marine riser package
LMTD – log mean temperature difference
LMV – lower master valve (on a christmas tree)
LNG – liquefied natural gas
LO – locked open
LOA – letter of authorisation/agreement/authority
LOD – lines of defence
LOE – lease operating expenses
LOGGN – logging whilst drilling
LOGGS – Lincolnshire Offshore Gas Gathering System
LOGRS – log restoration report
LOGSM – log sample
LOK – low permeability
LOKG – low permeability gas
LOKO – low permeability oil
LOLER – Lifting Operations and Lifting Equipment Regulations
LOPA – layers of protection analysis (IEC 61511)
LOT – leak-off test
LOT – linear override tool
LOT – lock open tool
LOTO – lock out/tag out
LP – low pressure
LPG – liquefied petroleum gas
LPH – litres per hour
LPWHH – low pressure well head housing
LQ – living quarters
LRA – lower riser assembly
LRG – liquefied refinery gas
LRP – lower riser package
LSBGR – long spacing BHC GR log
LSD – land surface datum
LSP – life support package
LSSON – long spacing sonic log
LT – linear time or lag time
LTA – land treatment area
L&T – load and test
LTC – long thread and coupled
LT&C – long thread and coupled
LTHCP – lower tubing hanger crown plug
– lost time incident (frequency rate)
LTP – liner shaker, tensile bolting cloth, perforated panel backing
LTX – low temperature extraction unit
LUMI – luminescence log
LUN – livening up notice
LVEL – linear velocity log
LVOT – linear valve override tool
LWD – logging while drilling
LWOL – last well on lease
LWOP – logging well on paper
M
M or m – prefix designating a number in thousands (not to be confused with SI prefix M for mega- or m for milli)
m – metre
MAASP – maximum acceptable [or allowable] annular surface pressure
MAC – multipole acoustic log
MACL – multiarm caliper log
MAE – major accident event
MAGST – magnetostratigraphic report
MAL – Master Acronym List
MAOP – maximum allowable operating pressure
MAP – metrol acoustic processor
MARA – maralog
MAST – sonic tool (for recording waveform)
MAWP – maximum allowable working pressure
MBC – marine breakaway coupling
MBC – membrane brine concentrator
Mbd – thousand barrels per day
MBES – multibeam echosounder
Mbod – thousand barrels of oil per day
Mboe – thousand barrels of oil equivalent
Mboed – thousand barrels of oil equivalent per day
MBP – mixed-bed polisher
Mbpd – thousand barrels of oil per day
MBR – minimum bend radius
MBRO – multi-bore restriction orifices
MBT – methylene blue test
MBWH – multi-bowl wellhead
MCC – motor control centre
MCD – mechanical completion dossier
Mcf – thousand cubic feet of natural gas
Mcfe – thousand cubic feet of natural gas equivalent
MCHE – main cryogenic heat exchanger
MCM – manifold choke module
MCP – monocolumn platform
MCS – manifold and connection system
MCS – master control station
MCSS – multi-cycle sliding sleeve
mD – millidarcy, measure of permeability, with units of area
MD – measured depth
MDO – marine diesel oil
MDR – master document register
MDRT – measured depth referenced to rotary table zero datum
MD – measurements/drilling log
MDEA – methyl diethanolamine (aMDEA)
MDL – methane drainage licence (United Kingdom), a type of onshore licence allowing natural gas to be collected "in the course of operations for making and keeping safe mines whether or not disused"
MDSS – measured depth referenced to mean sea level zero datum – "subsea" level
MDT – modular formation dynamic tester, a tool used to get formation pressure in the hole (not borehole pressure which the PWD does). MDT could be run on Wireline or on the Drill Pipe
MDR – mud damage removal (acid bullheading)
MEA – monoethanolamine
MEG – monoethylene glycol
MEIC – Mechanical Electrical Instrumentation Commission
MeOH – methanol (CH3OH)
MEPRL – mechanical properties log
MER – Maximum Efficiency Rating
MERCR – mercury injection study report
MERG – merge FDC/CNL/gamma ray/dual laterolog/micro SFL log
MEST – micro-electrical scanning tool
MF – marsh funnel (mud viscosity)
MFCT – multifinger caliper tool
MGL – magnelog
MGS – Mud Gas Separator
MGU – Motor Gauge Unit
MGPS – marine growth prevention system
MHWN – mean high water neaps
MHWS – mean high water springs
MLE – Motor Lead Extension
MLH – mud liner hanger
MIFR – mini frac log
MINL – minilog
MIPAL – micropalaeo log
MIRU – move in and rig up
MIST – minimum industry safety training
MIT – mechanical integrity test
MIYP – maximum internal yield pressure
mKB – meters below kelly bushing
ML – mud line (depth reference)
ML – microlog, or mud log
MLL – microlaterolog
MLF – marine loading facility
MLWN – mean low water neaps
MLWS – mean low water springs
mm – millimetre (SI unit)
MM – prefix designating a number in millions (thousand-thousand)
MMbod – million barrels of oil per day
MMboe – million barrels of oil equivalent
MMboed – million barrels of oil equivalent per day
MMbpd – million barrels per day
MMcf – million cubic feet (of natural gas)
MMcfe – million cubic feet (of natural gas equivalent)
MMcfge – million cubic feet (of natural gas equivalent)
MMS – Minerals Management Service (United States)
MMscfd – million standard cubic feet per day
MMTPA – millions of metric tonnes per annum
MMstb – million stock barrels
MNP – merge and playback log
MODU – mobile offshore drilling unit (either of jack-up drill rig or semi-submersible rig or drill ship)
MOF – marine offloading facility
MOPO – matrix of permitted operations
MOPU – mobile offshore production unit (to describe jack-up production rig, or semi-submersible production rig, or floating production, or storage ship)
MOT – materials/marine offloading terminal
MOV – motor operated valve
MPA – micropalaeo analysis report
MPD – managed pressure drilling
MPFM – multi-phase flow meter
MPK – merged playback log
MPP – multiphase pump
MPQT – manufacturing procedure qualification test
MPS – manufacturing procedure specification
MPSP – maximum predicted surface pressure
MPSV – multi-purpose support vessel
MPV – multi-purpose vessel
MQC – multi-quick connection plate
MR – marine riser
MR – mixed refrigerant
MR – morning report
MRBP – magna range bridge plug
MRC – maximum reservoir contact
MRCV – multi-reverse circulating valve
MRIT – magnetic resonance imaging tool
MRIRE – magnetic resonance image report
MRP – material requirement planning
MRR – material receipt report
MRT – marine riser tensioners
MRT – mechanical run test
MRX – magnetic resonance expert (wireline NMR tool)
MSCT – mechanical sidewall coring tool
MSDS – material safety data sheet
MSFL – micro SFL log; micro-spherically focussed log (resistivity)
MSI – mechanical and structural inspection
MSIP – modular sonic imaging platform (sonic scanner)
MSIPC – multi-stage inflatable packer collar
MSL – mean sea level
MSL – micro spherical log
MSS – magnetic single shot
MST – MST EXP resistivity log
MSV – multipurpose support vessel
MTBF – mean time between failures
MT – motor temperature; DMT parameter for ESP motor
MTO – material take-off
MTT – multi-isotope trace tool
M/U – make up
MUD – mud log
MUDT – mud temperature log
MuSol – mutual solvent
MVB – master valve block on christmas tree
MVC – minimum volume commitment
MW – mud weight
MWD – measurement while drilling
MWDRE – measurement while drilling report
MWP – maximum working pressure
MWS – marine warranty survey
N
NACE – National Association of Corrosion Engineers
NAPE – Nigerian Association of Petroleum Explorationists
NAM – North American
NAPF – non-aqueous phase fluid
NAPL – non-aqueous phase liquid
NASA – non-active side arm (term used in North Sea oil for kill wing valve on a christmas tree)
NAVIG – navigational log
NB – nominal bore
NCC – normally clean condensate
ND – nipple down
NDE – non-destructive examination
NEFE – non-emulsifying iron inhibitor (usually used with hydrochloric acid)
NEUT – neutron log
NFG – 'no fucking good', used for marking damaged equipment
NFI – no further investment
NFW – new field wildcat, Lahee classification
NG – natural gas
NGDC – national geoscience data centre (United Kingdom)
NGL – natural gas liquids
NGR – natural gamma ray
NGRC – national geological records centre (United Kingdom)
NGS – NGS log
NGSS – NGS spectro log
NGT – natural gamma ray tool
NGTLD – NGT LDT QL log
NGLQT – NGT QL log
NGTR – NGT ratio log
NHDA – National Hydrocarbons Data Archive (United Kingdom)
NHPV – net hydrocarbon pore volume
NL-NG – No loss-no gain
NMDC – non-magnetic drill collar
NMHC – non-methane hydrocarbons
NMR – nuclear magnetic resonance log
NMVOC – non-methane volatile organic compounds
NNF – normally no flow
NNS – northern North Sea
NOISL – noise log
NOC – National Oil Company
NORM – naturally-occurring radioactive material
NP – non-producing well
NPD – Norwegian Petroleum Directorate
NPS – nominal pipe size (sometimes NS)
NPSH(R) – net-positive suction head (required)
NPT – non-productive time (used mainly during drilling or well intervention operations: time lost to equipment malfunction or lack of personnel competency, which is costly)
NPV – net present value
NRB – not required back
NRPs – non-rotating protectors
NRI – net revenue interest
NRV – non-return valve
NPW – new pool wildcat, Lahee classification
NS – North Sea; can also refer to the North Slope Borough, Alaska, the North Slope, which includes Prudhoe Bay Oil Field (the largest US oil field), Kuparuk Oil Field, Milne Point, Lisburne, and Point McIntyre among others
NTHF – non-toxic high flash
NTP – Normal temperature and pressure
NTU – nephelometric turbidity unit
NUBOP – nipple(d/ing) up blow-out preventer
NUI – normally unattended installation
NUMAR – nuclear and magnetic resonance – image log
O
O&G – oil and gas
O&M – operations and maintenance
O/S – overshot, fishing tool
OBCS – ocean bottom cable system
OBDTL – OBDT log
OBEVA – OBDT evaluation report
OBM – oil-based mud
OCD – Oil Conservation Division
OBO – operated by others
OCIMF – Oil Companies International Marine Forum
OCI – oil corrosion inhibitor (vessels)
OCL – quality control log
OCM – offshore construction manager
OCS – offshore construction supervisor
OCTG – oil country tubular goods (oil well casing, tubing, and drill pipe)
OD – outer diameter (of a tubular component such as casing)
ODT – oil down to
OFE – oil field equipment
OFST – offset vertical seismic profile
OEM – original equipment manufacturer
OFIC – offshore interim completion certificate
OGA – Oil and Gas Authority (UK oil and gas regulatory authority)
OH – open hole
OH – open hole log
OHC – open hole completion
OHD – open hazardous drain
OHUT – offshore hook-up team
OI – oxygen index
OIM – offshore installation manager
OLAF – offshore footless loading arm
OMC – Offshore Material Coordinator
OMRL – oriented micro-resistivity log
ONAN – oil natural air natural cooled transformer
ONNR – Office of Natural Resources Revenue (formerly MMS)
OOE – offshore operation engineer (senior technical authority on an offshore oil platform)
OOIP – original oil in place
OOT/S – out of tolerance/straightness
O/P – overpull
OPITO – offshore petroleum industry training organization
OPEC – Organization of Petroleum Exporting Countries
OPL – operations log
OPRES – overpressure log
OPS – operations report
ORICO – oriented core data report
ORM – operability reliability maintainability
ORRI – overriding royalty interest
ORF – onshore receiving facility
OS&D – over, short, and damage report
OS – online survey
OSA – offshore safety advisor
OSV – offshore supply vessel
OT – a well on test
OT – off tree
OTDR – optical time domain reflectometry
OTIP – operational testing implementation plan
OTL – operations team leader
OTP – operational test procedure
OTR – order to remit
OTSG – one-time through steam generator
OWC – oil-water contact
OUT – outpost, Lahee classification
OUT – oil up to
OVCH – oversize charts
OVID – offshore vessel inspection database
P
P – producing well
P&A – plug(ged) and abandon(ed) (well)
PA – producing asset
PA – polyamide
PA – producing asset with exploration potential
PACO – process, automation, control and optimisation
PACU – packaged air conditioning unit
PADPRT – pressure assisted drillpipe running tool
PAGA – public address general alarm
PAL – palaeo chart
PALYN – palynological analysis report
PAR – pre-assembled rack
PAU – pre-assembled unit
PBDMS – playback DMSLS log
PBHL – proposed bottom hole location
PBR – polished bore receptacle (component of a completion string)
PBD – Pason billing system
PBTD – plug back total depth
PBU – pressure build-up (applies to integrity testing on valves)
PCA – production concession agreement
PCB – polychlorinated biphenyl
PCCC – pressure containing anti-corrosion caps
PCCL – perforation casing collar locator log
PCDC – pressure-cased directional (geometry i.e. borehole survey) MWD tool
PCE – pressure control equipment
PCDM – power and control distribution module
PCKR – packer
PCMS – polymer coupon monitoring system
PCN – process control network
PCO – pre-commission preparations (pipeline)
PCOLL – perforation and collar
PCP – progressing cavity pump
PCP – possible condensate production
PCPT – piezo-cone penetration test
PCS – process control system
PDC – perforation depth control
PDC – polycrystalline diamond compact (a type of drilling bit)
PDG/PDHG – permanent downhole gauge
PDGB – permanent drilling guide base
PDKL – PDK log
PDKR – PDK 100 report
PDM – positive displacement motor
PDMS – permanent downhole monitoring system
PDP – proved developed producing (reserves)
PDP – positive displacement pump
PDPM – power distribution protection module
PDNP – proved developed not producing
PDR – physical data room
PDT – differential pressure transmitter
PE – petroleum engineer
PE – professional engineer
PE – production engineer
PE – polyethylene
PE – product emulsion
PE – production enhancement
PEA – palaeo environment study report
PED – pressure equipment directive
PEDL – petroleum exploration and development licence (United Kingdom)
PEFS – process engineering flow scheme
PENL – penetration log
PEP – PEP log
PERC – powered emergency release coupling
PERDC – perforation depth control
PERFO – perforation log
PERM – permeability
PERML – permeability log
PESGB – Petroleum Exploration Society of Great Britain
PETA – petrographical analysis report
PETD – petrographic data log
PETLG – petrophysical evaluation log
PETPM – petrography permeametry report
PETRP – petrophysical evaluation report
PEX – platform express toolstring (resistivity, porosity, imaging)
PFC – perforation formation correlation
PFD – process flow diagram
PFD – probability of failure on demand
PFE – plate/frame heat exchanger
PFHE – plate fin/frame heat exchanger
PFPG – perforation plug log
PFREC – perforation record log
PG – pressure gauge (report)
PGC – Potential Gas Committee
PGB – permanent guide base
PGOR – produced gas oil ratio
PGP – possible gas production
PH – phasor log
PHASE – phasor processing log
PHB – pre-hydrated bentonite
PHC – passive heave compensator
PHOL – photon log
PHPU – platform hydraulic power unit
PHPA – partially hydrolyzed polyacrylamide
PHYFM – physical formation log
PI – productivity index
PI – permit issued
PI – pressure indicator
P&ID – piping and instrumentation diagram
PINTL – production interpretation
PIP – pump intake pressure
PIP – pipe in pipe
PIT – pump intake temperature
PJSM – pre-job safety meeting
PL – production license
PLEM – pipeline end manifold
PLES – pipeline end structure
PLET – pipeline end termination
PLG – plug log
PLR – pig launcher/receiver
PLS – position location system
PLSV – pipelay support vessel
PLT – production logging tool
PLTQ – production logging tool quick-look log
PLTRE – production logging tool report
PLQ – permanent living quarters
PMI – positive material identification
PMM – permanent magnet motor
PMOC – project management of change
PMR – precooled mixed refrigerant
PMV – production master valve
PNP – proved not producing
POB – personnel on board
POBM – pseudo-oil-based mud
POD – plan of development
POF – permanent operations facility
POH – pull out of hole
POOH – pull out of hole
PON – petroleum operations notice (United Kingdom)
POP – pump-out plug
POP – possible oil production
POP – place on production
POR – density porosity log
PORRT – pack off run retrieval tool
POSFR – post-fracture report
POSTW – post-well appraisal report
POSWE – post-well summary report
PP – DXC pressure plot log
PP – pump pressure
PPA – Pounds of Proppant added
PP&A – permanent plug and abandon (also P&A)
ppb – pounds per barrel
PPC – powered positioning caliper (Schlumberger dual-axis wireline caliper tool)
ppcf – pounds per cubic foot
PPD – pour point depressant
PPE – preferred pressure end
PPE – personal protective equipment
PPFG – pore pressure/fracture gradient
ppg – pounds per gallon
PPI – post production inspection/intervention
PPI – post pipelay installation
PPL – pre-perforated liner
– pounds (per square inch) per thousand feet (of depth) – a unit of fluid density/pressure
PPS – production packer setting
PPU – pipeline process and umbilical
PQR – procedure qualification record
PR2 – testing regime to API 6A annex F
PRA – production reporting and allocation
PREC – perforation record
PRESS – pressure report
PRL – polished rod liner
PRV – pressure relief valve
PROD – production log
PROTE – production test report
PROX – proximity log
PRSRE – pressure gauge report
PSANA – pressure analysis
PSA – production service agreement
PSA – production sharing agreement
PSC – production sharing contract
PSD – planned shutdown
PSD – pressure safety device
PSD – process shutdown
PSD – pump setting depth
PSE – pressure safety element (rupture disc)
PSIA – pounds per square inch atmospheric
PSIG – pounds per square inch gauge
PSL – product specification level
PSLOG – pressure log
PSM – process safety management
PSP – pseudostatic spontaneous potential
PSP – positive sealing plug
PSPL – PSP leak detection log
PSSR – pre-startup safety review
PSSR – pressure systems safety regulations (UK)
PSQ – plug squeeze log
PST – PST log
PSV – pipe/platform supply vessel
PSV – pressure safety valve
PSVAL – pressure evaluation log
PTA/S – pipeline termination assembly/structure
PTO – permit to operate
PTRO – test rack opening pressure (for a gas lift valve)
PTSET – production test setter
PTTC – Petroleum Technology Transfer Council, United States
PTW – permit to work
PU – pick-up (tubing, rods, power swivel, etc.)
PUD – proved undeveloped reserves
PUN – puncher log
PUR – plant upset report
PUQ – production utilities quarters (platform)
PUWER – Provision and Use of Work Equipment Regulations 1998
PV – plastic viscosity
PVDF – polyvinylidene fluoride
PVSV – pressure vacuum safety valve
PVT – pressure volume temperature
PVTRE – pressure volume temperature report
PW – produced water
PWD – pressure while drilling
PWB – production wing block (XT)
PWHT – post-weld heat treat
PWRI – produced water reinjection
PWV – production wing valve (also known as a flow wing valve on a christmas tree)
Q
QA – quality assurance
QC – quality control
QCR – quality control report
QL – quick-look log
QJ – quad joint
R
R/B – rack back
R&M – repair and maintenance
RAC – ratio curves
RACI – responsible / accountable / consulted / informed
RAT – riser assembly tower
RAM – reliability, availability, and maintainability
RAWS – raw stacks VSP log
RBI – risk-based inspection
RBP – retrievable bridge plug
RBS – riser base spool
RCA – root cause analysis
RCRA – Resource Conservation and Recovery Act
RCKST – rig checkshot
RCD – rotating control device
RCI – reservoir characterization instrument (for downhole fluid measurements e.g. spectrometry, density)
RCL – retainer correlation log
RCM – reliability-centred maintenance
RCR – remote component replacement (tool)
RCU – remote control unit
RDMO – rig down move out
RDS – ROV-deployed sonar
RDRT – rig down rotary tools
RDT – reservoir description tool
RDVI – remote digital video inspection
RDWL – rig down wireline
RE – reservoir engineer
REOR – reorientation log
RE-PE – re-perforation report
RESAN – reservoir analysis
RESDV – riser emergency shutdown valve
RESEV – reservoir evaluation
RESFL – reservoir fluid
RESI – resistivity log
RESL – reservoir log
RESOI – residual oil
REZ – renewable energy zone (United Kingdom)
RF – recovery factor
RFCC – ready for commissioning certificate
RFLNG – ready for liquefied natural gas
RFM – riser feeding machine
RFMTS – repeat formation tester
RFO – ready for operations (pipelines/cables)
RFR – refer to attached (e.g., letter, document)
RFSU – ready for start-up
RFT – repeat formation tester
RFTRE – repeat formation tester report
RFTS – repeat formation tester sample
RHA – riser heel anchor
RHD – rectangular heavy duty – usually screens used for shaking
RHT – right hand turn
RIGMO – rig move
RIH – run in hole
RIMS – riser integrity monitoring system
RITT – riser insertion tube (tool)
RKB – rotary kelly bushing (a datum for measuring depth in an oil well)
RLOF – rock load-out facility
RMLC – request for mineral land clearance
RMP – reservoir management plan
RMS – ratcheting mule shoe
RMS – riser monitoring system
RNT – RNT log
ROB – received on board (used for fuel/water received in bunkering operations)
ROCT – rotary coring tool
ROP – rate of penetration
ROP – rate of perforation
ROT – remote-operated tool
ROV/WROV – remotely-operated vehicle/work class remotely-operated vehicle, used for subsea construction and maintenance
ROZ – recoverable oil zone
ROWS – remote operator workstation
RPCM – ring pair corrosion monitoring
RPM – revolutions per minute (rotations per minute)
RRC – Railroad Commission of Texas (governs oil and gas production in Texas)
RROCK – routine rock properties report
RRR – reserve replacement ratio
RSES – responsible for safety and environment on site
RSPP – RSP Permian, a publicly traded oil and gas producer focused on horizontal drilling of multiple stacked pay zones in the oil-rich Permian Basin
RSS – rig site survey
RSS – rotary steerable systems
RST – reservoir saturation tool (Schlumberger) log
RTMS – riser tension monitoring system
RTE – rotary table elevation
RTO – real-time operation
RTP/RTS – return to production/service
RTTS – retrievable test-treat-squeeze (packer)
RU – rig up
RURT – rig up rotary tools
RV – relief valve
RVI – remote video inspection
RWD – reaming while drilling
S
SABA – supplied air-breathing apparatus
SAFE – safety analysis function evaluation
SAGD – steam-assisted gravity drainage
SALM – single anchor loading mooring
SAM – subsea accumulator module
SAML – sample log
SAMTK – sample-taker log
SANDA – sandstone analysis log
SAPP – sodium acid pyrophosphate
SAS – safety and automation system
SAT – SAT log
SAT – site acceptance test
SB – SIT-BO log
SBF – synthetic base fluid
SBM – synthetic base mud
SBT – segmented bond tool
SC – seismic calibration
SCADA – supervisory control and data acquisition
SCAL – special core analysis
SCAP – scallops log
SCBA – self-contained breathing apparatus
SCUBA – self-contained underwater breathing apparatus
SCC – system completion certificate
SCD – system control diagram
SCDES – sidewall core description
scf – standard cubic feet (of gas)
scf/STB – standard cubic feet (of gas) / stock tank barrel (of fluid)
SCHLL – Schlumberger log
SCM (SCMMB) – subsea control module (mounting base)
SCO – synthetic crude oil
SCO – sand clean-out
SCR – slow circulation rate
SCR – steel catenary riser
SCRS – slow circulation rates
SCSG – type of pump
SCSSV – surface-controlled subsurface safety valve
SDON – shut down overnight
SPCU – subsea control unit
SCVF – surface casing vent flow (a well integrity test)
SD – sonic density
SDFD – shut down for day
SDFN – shut down for night
SDIC – sonic dual induction
SDL – supplier document list
SDM/U – subsea distribution module/unit
SDPBH – SDP bottom hole pressure report
SDSS – super duplex stainless steel
SDT – step draw-down test (sometimes SDDT)
SDU/M – subsea distribution unit/module
SEA – strategic environmental assessment (United Kingdom)
SECGU – section gauge log
SEDHI – sedimentary history
SEDIM – sedimentology
SEDL – sedimentology log
SEDRE – sedimentology report
SEG – Society of Exploration Geophysicists
SEM – subsea electronics module
Semi (or semi-sub) – semi-submersible drilling rig
SEP – surface emissive power
SEPAR – separator sampling report
SEQSU – sequential survey
SF – self-flowing
SFERAE – global association for the use of knowledge on fractured rock in a state of stress, in the field of energy, culture and environment
SFL – steel flying lead
SG – static gradient, specific gravity
SGR – shale gouge ratio
SGS – steel gravity structure
SGSI – Shell Global Solutions International
SGUN – squeeze gun
SHA – sensor harness assembly
SHC – system handover certificate
SHDT – stratigraphic high resolution dipmeter tool
SHINC – Sundays and holidays included
SHO – stab and hinge over
SHOCK – shock log
SHOWL – show log
SHT – shallow hole test
SI – shut in well
SI – structural integrity
SI – scale inhibitor
SIBHP – shut-in bottom-hole pressure
SIBHT – shut-in bottom-hole temperature
SID – specific instruction document / standard instruction for drillers
SIT – system integrity test
SI/TA – shut in/temporarily abandoned
SIA – social impact assessment
SIC – subsea installation contractor
SICP – shut-in casing pressure
SIDPP – shut-in drill pipe pressure
SIDSM – sidewall sample
SIF – safety instrumented functions (test)
SIGTTO – Society of International Gas Tanker and Terminal Operators
SIL – safety integrity level
SIMCON – simultaneous construction
SIMOPS – simultaneous operations
SIP – shut-in pressure
SIPCOM – simultaneous production and commissioning
SIPES – Society of Independent Professional Earth Scientists, United States
SIPROD – simultaneous production and drilling
SIS – safety-instrumented system
SIT – system integration test; FR SIT – field representation system integration test
SIT – (casing) shoe integrity test
SITHP – shut-in tubing hanger/head pressure
SITT – single TT log
SIWHP – shut-in well head pressure
SKPLT – stick plot log
SL – seismic lines
SLS – SLS GR log
SLT – SLT GR log
SM or S/M – safety meeting
SMA – small amount
SMLS – seamless pipe
SMO – suction module
SMPC – subsea multiphase pump, which can increase flowrate and pressure of the untreated wellstream
SN – seat nipple
SNAM – Societá Nazionale Metanodotti now Snam S.p.A. (Italy)
SNP – sidewall neutron porosity
SNS – southern North Sea
S/O – slack off
SOBM – synthetic oil-based mud
SOLAS – safety of life at sea
SONCB – sonic calibration log
SONRE – sonic calibration report
SONWR – sonic waveform report
SONWV – sonic waveform log
SOP – safe operating procedure
SOP – shear-out plug
SOP – standard operating procedure
SOR – senior operations representative
SOW – scope of work
SOW – slip-on wellhead
SP – set point
SP – shot point (geophysics)
SP – spontaneous potential (well log)
SPAMM – subsea pressurization and monitoring manifold
SPCAN – special core analysis
SPCU – subsea power and control unit
SPE – Society of Petroleum Engineers
SPEAN – spectral analysis
SPEL – spectralog
spf – shots per foot (perforation density)
SPFM – single-phase flow meter
SPH – SPH log
SPHL – self-propelled hyperbaric lifeboats
SPM – side pocket mandrel
SPM – strokes per minute (of a positive-displacement pump)
spm – shots per meter (perforation density)
SPMT – self-propelled modular transporter
SPOP – spontaneous potential log
SPP – stand pipe pressure
SPR – slow pumping rate
SPROF – seismic profile
SPS – subsea production systems
SPT – shallower pool test, Lahee classification
SPUD – spud date (started drilling well)
SPWLA – Society of Petrophysicists and Well Log Analysts
SQL – seismic quicklook log
SQZ – squeeze job
SR – shear rate
SRD – seismic reference datum, an imaginary horizontal surface at which TWT is assumed to be zero
SREC – seismic record log
SRJ – semi-rigid jumper
SRK – Soave-Redlich-Kwong
SRO – surface read-out
SRP – sucker rod pump
SRB – sulfate-reducing bacteria
SRT – site receival test
SS – subsea, as in a datum of depth, e.g. TVDSS (true vertical depth subsea)
SSCC – sulphide stress corrosion cracking
SSCP – subsea cryogenic pipeline
SSCS – subsea control system
SSD – sub-sea level depth (in metres or feet, positive value in downwards direction with respect to the geoid)
SSD – sliding sleeve door
SSFP – subsea flowline and pipeline
SSG – sidewall sample gun
SSH – steam superheater
SSIC – safety system inhibit certificate
SSIV – subsea isolation valve
SSTV – subsea test valve
SSM – subsea manifolds
SSMAR – synthetic seismic marine log
SSPLR – subsea pig launcher/receiver
SSSL – Supplementary Seismic Survey Licence (United Kingdom), a type of onshore licence
SSSV – sub-surface safety valve
SSTT – subsea test tree
SSU – subsea umbilicals
SSV – surface safety valve
SSWI – subsea well intervention
STAB – stabiliser
STAGR – static gradient survey report
STB – stock tank barrel
STC – STC log
STD – stand (2–3 joints of tubing)
STFL – steel tube fly lead
STG – steam turbine generator
STGL – stratigraphic log
STHE – shell-and-tube heat exchanger
STIMU – stimulation report
STKPT – stuck point
STL – STL gamma ray log
STL – submerged turret loading
STRAT – stratigraphy, stratigraphic
STRRE – stratigraphy report
STOIIP – stock tank oil initially in place
STOOIP – stock tank oil originally in place
STOP – safety training observation program
STP – submerged turret production
STP – standard temperature and pressure
STSH – string shot
STTR – single top tension riser
ST&C – short thread and coupled
STC – short thread and coupled
STU – steel tube umbilical
STV – select tester valve
SUML – summarised log
SUMRE – summary report
SUMST – geological summary sheet
SURF – subsea/umbilicals/risers/flowlines
SURFR – surface sampling report
SURRE – survey report
SURU – start-up ramp-up
SURVL – survey chart log
SUTA/B – subsea umbilical termination (assembly/box)
SUTA – subsea umbilical termination assembly
SUTU – subsea umbilical termination unit
SW – salt water
SWC – side wall core
SWD – salt water disposal well
SWE – senior well engineer
SWHE – spiral-wound heat exchanger
SWOT – strengths, weaknesses, opportunities, and threats
SWT – surface well testing
SV – sleeve valve, or standing valve
SVLN – safety valve landing nipple
SWLP – seawater lift pump
SYNRE – synthetic seismic report
SYSEI – synthetic seismogram log
T
T – well flowing to tank
T/T – tangent to tangent
TA – temporarily abandoned well
TA – top assembly
TAC – tubing anchor (or tubing–annulus communication)
TAGOGR – thermally assisted gas/oil gravity drainage
TAN – total acid number
TAPLI – tape listing
TAPVE – tape verification
TAR – true amplitude recovery
TB – tubing puncher log
TBE – technical bid evaluation
TBG – tubing
TBT – through bore tree / toolbox talk
TC – type curve
TCA – total corrosion allowance
TCC – tungsten carbide coating
TCCC – transfer of care, custody and control
TCF – temporary construction facilities
TCF – trillion cubic feet (of gas)
TCI – tungsten carbide insert (a type of rollercone drillbit)
TCP – tubing conveyed perforating (gun)
TCPD – tubing-conveyed perforating depth
TCU – thermal combustion unit
TD – target depth
TD – total depth (depth of the end of the well; also a verb, to reach the final depth, used as an acronym in this case)
TDD – total depth (driller)
TDC – top dead center
TDC – total drilling cost
TDL – total depth (logger)
TDM – touch-down monitoring
TDP – touch-down point
TDS – top drive system
TDS – total dissolved solids
TDT – thermal decay time log
TDTCP – TDT CPI log
TDT GR – TDT gamma ray casing collar locator log
TEA – triethanolamine
TEFC – totally enclosed fan-cooled
TEG – triethylene glycol
TEG – thermal electric generator
TELER – teledrift report
TEMP – temperature log
TETT – too early to tell
TFE – TotalFinaElf (obsolete; now Total S.A.), a major French multinational oil company
TFL – through flow line
TFM – TaskForceMajella research project
TFM – tubular feeding machine
TGB – temporary guide base
TGT / TG – tank gross test
TGOR – total gas oil ratio (GOR uncorrected for gas lift gas present in the production fluid)
TH – tubing hanger
THCP – tubing hanger crown plug
Thr/Th# – thruster ('#'- means identification letter/number of the equipment, e.g. thr3 or thr#3 means "thruster no. 3")
THD – tubing head
THERM – thermometer log
THF – tubing hanger flange
THF – tetrahydrofuran (organic solvent)
THP – tubing hanger pressure (pressure in the production tubing as measured at the tubing hanger)
THRT – tubing hanger running tool
THS – tubing head spool
TIE – tie-in log
TIH – trip into hole
TIS – tie-in spool
TIT – tubing integrity test
TIW – Texas Iron Works (pressure valve)
TIEBK – tieback report
TLI – top of logging interval
TLOG – technical log
TLP – tension-leg platform
TMCM – transverse mercator central meridian
TMD – total measured depth in a wellbore
TNDT – thermal neutron decay time
TNDTG – thermal neutron decay time/gamma ray log
TOC – top of cement
TOC – total organic carbon
TOF – top of fish
TOFD – time of first data sample (on seismic trace)
TOFS – time of first surface sample (on seismic trace)
TOH – trip out of hole
TOOH – trip out of hole
TOL – top of liner
TOL – top of lead cement
TOP – top of pipe
TORAN – torque and drag analysis
TOT – top of tail cement
TOVALOP – tanker owners' voluntary agreement concerning liability for oil pollution
TPC – temporary plant configuration
TPERF – tool performance
TQM – total quality management
TR – temporary refuge
TRCFR – total recordable case frequency rate
TRT – tree running tool
TRA – top riser assembly
TRA – tracer log
TRACL – tractor log
TRAN – transition zone
TRD – total report data
TREAT – treatment report
TREP – test report
TRIP – trip condition log
TRS – tubing running services
TRSV – tubing-retrievable safety valve
TRSCSSV – tubing-retrievable surface-controlled sub-surface valve
TRSCSSSV – tubing-retrievable surface-controlled sub-surface safety valve
TSA – thermally-sprayed aluminium
TSA – terminal storage agreement
TSI – temporarily shut in
TSJ – tapered stress joint
TSOV – tight shut-off valve
TSS – total suspended solids
TSTR – tensile strength
TT – torque tool
TT – transit time log
TTOC – theoretical top of cement
TTVBP – through-tubing vented bridge plug
TTRD – through-tubing rotary drilling
TUC – topside umbilical connection
TUC – turret utility container
TUM – tracked umbilical machine
TUPA – topside umbilical panel assembly
TUTA – topside umbilical termination box/unit/assembly (TUTU)
TVBDF – true vertical depth below derrick floor
TV/BIP – ratio of total volume (ore and overburden) to bitumen in place
TVD – true vertical depth
TVDPB – true vertical depth playback log
TVDRT – true vertical depth (referenced to) rotary table zero datum
TVDKB – true vertical depth (referenced to) top kelly bushing zero datum
TVDSS – true vertical depth (referenced to) mean sea level zero datum
TVELD – time and velocity to depth
TVRF – true vertical depth versus repeat formation tester
TWT – two-way time (seismic)
TWTTL – two-way travel time log
U
UBHO – universal bottom hole orientation (sub)
UBI – ultrasonic borehole imager
UBIRE – ultrasonic borehole imager report
UCH – umbilical connection housing
UCIT – ultrasonic casing imaging tool (high resolution casing and corrosion imaging tool)
UCL – unit control logic
UCR – unsafe condition report
UCS – unconfined compressive strength
UCSU – upstream commissioning and start-up
UFJ – upper flex joint
UFR – umbilical flow lines and risers
UGF – universal guide frame
UIC – underground injection control
UKCS – United Kingdom continental shelf
UKOOA – United Kingdom Offshore Operators Association
UKOOG – United Kingdom Onshore Operators Group
ULCGR – uncompressed LDC CNL gamma ray log
UMCA – umbilical midline connection assembly
UMV – upper master valve (from a christmas tree)
UPB – unmanned production buoy
UPL – upper pressure limit
UPR – upper pipe ram
UPT – upper pressure threshold
URA – upper riser assembly
URT – universal running tool
USBL – ultra-short baseline systems
USIT – ultrasonic imaging tool (cement bond logging, casing wear logging)
USGS – United States Geological Survey
UTA/B – umbilical termination assembly/box
UTAJ – umbilical termination assembly jumper
UTHCP – upper tubing hanger crown plug
UTM – universal transverse mercator
UWI – unique well identifier
UWILD – underwater inspection in lieu of dry-docking
UZV – shutdown valve
V
VBR – variable bore ram
VCCS – vertical clamp connection system
VDENL – variation density log
VDL – variable density log
VDU – vacuum distillation unit, used in processing bitumen
VELL – velocity log
VERAN – verticality analysis
VERIF – verification list
VERLI – verification listing
VERTK – vertical thickness
VFC – volt-free contact
VGMS – vent gas monitoring system (flexible riser annulus vent system)
VIR – value-investment ratio
VISME – viscosity measurement
VIV – vortex-induced vibration
VLP – vertical lift performance
VLS – vertical lay system
VLTCS – very-low-temperature carbon steel
VO – variation order
VOCs – volatile organic compounds
VOR – variation order request
VPR – vertical pipe racker
VRS – vapor recovery system
VRR – voidage replacement ratio
VS – vertical section
VSD – variable-speed drive
VSI – versatile seismic imager (Schlumberger VSP tool)
VSP – vertical seismic profile
VSPRO – vertical seismic profile
VTDLL – vertical thickness dual laterolog
VTFDC – vertical thickness FDC CNL log
VTISF – vertical thickness ISF log
VWL – velocity well log
VXT – vertical christmas tree
W
W – watt
WABAN – well abandonment report
WAC – weak acid cation
WAG – water alternating gas (describes an injection well which alternates between water and gas injection)
WALKS – walkaway seismic profile
WAS – well access system
WATAN – water analysis
WAV3 – amplitude (in seismics)
WAV4 – two-way travel time (in seismics)
WAV5 – compensate amplitudes
WAVF – waveform log
WBCO – wellbore clean-out
WBE – well barrier element
WBM – water-based drilling mud
WBS – well bore schematic
WBS – work breakdown structure
WC – watercut
WC – wildcat (well)
W/C – water cushion
WCC – work control certificate
WCT – wet christmas tree
WE – well engineer
WEG – wireline entry guide
WELDA – well data report
WELP – well log plot
WEQL – well equipment layout
WESTR – well status record
WESUR – well summary report
WF – water flood(ing)
WFAC – waveform acoustic log
WGEO – well geophone report
WGFM – wet gas flow meter
WGR – water gas ratio
WGUNT – water gun test
Wh – white
WH – well history
WHIG – whitehouse gauge
WHM – wellhead maintenance
WHP – wellhead pressure
WHRU – waste heat recovery unit
WHSIP – wellhead shut-in pressure
WI – water injection
WI – working interest
WI – work instructions
WIH – working in hole
WIKA – definition needed
WIMS – well integrity management system
WIR – water intake risers
WIT – water investigation tool
WITS – Wellsite Information Transfer Specification
WITSML – wellsite information transfer standard markup language
WIPSP – WIP stock packer
WLC – wireline composite log
WLL – wireline logging
WLSUM – well summary
WLTS – well log tracking system
WLTS – well log transaction system
WM – wet mate
WHMIS – workplace hazardous material information systems
WLM – wireline measurement
WO – well in work over
WO/O – waiting on orders
WOA – well operations authorization
WOB – weight on bit
WOC – wait on cement
WOC – water/oil contact (or oil/water contact)
WOE – well operations engineer (a key person of well services)
WOM – wait/waiting on material
WOR – water-oil ratio
WORKO – workover
WOS – west of Shetland, oil province on the UKCS
WOW – wait/waiting on weather
WP – well proposal or working pressure
WPC – water pollution control
WPLAN – well course plan
WPQ/S/T – weld procedure qualification/specification/test
WPP – wellhead protection platform
WPR – well prognosis report
WQ – a textural parameter used for CBVWE computations (Halliburton)
WQCA – Water Quality Control Act
WQCB – Water Quality Control Board
WR – wireline retrievable (as in a WR plug)
WR – wet resistivity
WRS – well report sepia
WRSCSSV – wireline-retrievable surface-controlled sub-surface valve
WSCL – well site core log
WSE – well seismic edit
WSERE – well seismic edit report
WSG – wellsite geologist
WSHT – well shoot
WSL – well site log
WSO – water shut-off
WSOG – well-specific operation guidelines
WSP – well seismic profile
WSR – well shoot report
WSS – well services supervisor (leader of well services at the wellsite)
WSS – working spreadsheet (for logging)
WSSAM – well site sample
WSSOF – WSS offset profile
WSSUR – well seismic survey plot
WSSVP – WSS VSP raw shots
WSSVS – WSS VSP stacks
WST – well seismic tool (checkshot)
WSTL – well site test log
WSU – well service unit
wt – wall thickness
WT – well test
WTI – West Texas Intermediate benchmark crude
WTR – water
WUT – water up to
WV – wing valve (from a christmas tree)
WVS – well velocity survey
WWS – wire-wrapped (sand) screens
X
XC – cross-connection, cross correlation
XL or EXL – exploration licence (United Kingdom), a type of onshore licence issued between the First Onshore Licensing Round (1986) and the sixth (1992)
Xln – crystalline (minerals)
XLPE – cross-linked polyethylene
XMAC – cross-multipole array acoustic log
XMAC-E – XMAC elite (next generation of XMAC)
XMRI – extended-range micro-imager (Halliburton)
XMT/XT/HXT – christmas tree
XO – cross-over
XOM – Exxon Mobil
XOV – cross-over valve
XPERM – matrix permeability in the x-direction
XPHLOC – crossplot selection for XPHI
XPOR – crossplot porosity
XPT – formation pressure test log (Schlumberger)
XV – on/off valve (process control)
XYC – XY caliper log (Halliburton)
Y
yd – yard
yl – holdup factor
YP – yield point
yr – year
Z
Z – depth, in the geosciences referring to the depth dimension in any x, y, z data
ZDENP – density log
ZDL – compensated Z-densilog
ZLD – zero liquid discharge
ZOI – zone of influence
See also
Oilfield terminology
References
External links
Network International Glossary July-11
Oil Field Acronyms and Abbreviations July-11
Oil Gas Technical Terms Glossary July-11
Schlumberger Oilfield Glossary July-11
Oil Drum Acronyms July-11
Oiltrashgear Oilfield Acronyms & Terminology November-15
OCIMF Acronyms Oct-11
SPWLA Petrophysical Curve Names and Mnemonics Oct-11
American Royalty Council Glossary Nov-11
Technip Glossary Apr-13
Petroleum industry
Abbreviations
Abbreviations
Drilling technology
Energy-related lists
Lists of abbreviations
Lists of acronyms
Oil exploration | List of abbreviations in oil and gas exploration and production | [
"Chemistry",
"Engineering"
] | 18,825 | [
"Oil platforms",
"Structural engineering",
"Petroleum technology",
"Petroleum industry",
"Petroleum",
"Natural gas technology",
"Oil wells",
"Chemical process engineering"
] |
47,436 | https://en.wikipedia.org/wiki/Atlas%20%28topology%29 | In mathematics, particularly topology, an atlas is a concept used to describe a manifold. An atlas consists of individual charts that, roughly speaking, describe individual regions of the manifold. In general, the notion of atlas underlies the formal definition of a manifold and related structures such as vector bundles and other fiber bundles.
Charts
The definition of an atlas depends on the notion of a chart. A chart for a topological space M is a homeomorphism $\varphi$ from an open subset U of M to an open subset of a Euclidean space. The chart is traditionally recorded as the ordered pair $(U, \varphi)$.
When a coordinate system is chosen in the Euclidean space, this defines coordinates on $U$: the coordinates of a point $P$ of $U$ are defined as the coordinates of $\varphi(P)$. The pair formed by a chart and such a coordinate system is called a local coordinate system, coordinate chart, coordinate patch, coordinate map, or local frame.
Formal definition of atlas
An atlas for a topological space $M$ is an indexed family $\{(U_{\alpha}, \varphi_{\alpha}) : \alpha \in I\}$ of charts on $M$ which covers $M$ (that is, $\bigcup_{\alpha \in I} U_{\alpha} = M$). If for some fixed $n$, the image of each chart is an open subset of $n$-dimensional Euclidean space, then $M$ is said to be an $n$-dimensional manifold.
The plural of atlas is atlases, although some authors use atlantes.
An atlas $\{(U_{i}, \varphi_{i}) : i \in I\}$ on an $n$-dimensional manifold $M$ is called an adequate atlas if the following conditions hold:
The image of each chart is either $\mathbb{R}^{n}$ or $\mathbb{R}^{n}_{+}$, where $\mathbb{R}^{n}_{+}$ is the closed half-space,
$(U_{i})_{i \in I}$ is a locally finite open cover of $M$, and
$M = \bigcup_{i \in I} \varphi_{i}^{-1}(B_{1})$, where $B_{1}$ is the open ball of radius 1 centered at the origin.
Every second-countable manifold admits an adequate atlas. Moreover, if $\mathcal{O} = \{O_{j}\}_{j \in J}$ is an open covering of the second-countable manifold $M$, then there is an adequate atlas $\{(U_{i}, \varphi_{i}) : i \in I\}$ on $M$, such that $(U_{i})_{i \in I}$ is a refinement of $\mathcal{O}$.
Transition maps
A transition map provides a way of comparing two charts of an atlas. To make this comparison, we consider the composition of one chart with the inverse of the other. This composition is not well-defined unless we restrict both charts to the intersection of their domains of definition. (For example, if we have a chart of Europe and a chart of Russia, then we can compare these two charts on their overlap, namely the European part of Russia.)
To be more precise, suppose that $(U_{\alpha}, \varphi_{\alpha})$ and $(U_{\beta}, \varphi_{\beta})$ are two charts for a manifold M such that $U_{\alpha} \cap U_{\beta}$ is non-empty.
The transition map $\tau_{\alpha,\beta} : \varphi_{\alpha}(U_{\alpha} \cap U_{\beta}) \to \varphi_{\beta}(U_{\alpha} \cap U_{\beta})$ is the map defined by
$$\tau_{\alpha,\beta} = \varphi_{\beta} \circ \varphi_{\alpha}^{-1}.$$
Note that since $\varphi_{\alpha}$ and $\varphi_{\beta}$ are both homeomorphisms, the transition map $\tau_{\alpha,\beta}$ is also a homeomorphism.
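As a concrete worked example (added here for illustration; it is not part of the original article), consider the unit circle $S^1$ with its two standard stereographic charts. Computing the transition map on the overlap shows explicitly that it is a homeomorphism:

```latex
% Charts on S^1 = \{(x,y) : x^2 + y^2 = 1\}:
%   U_N = S^1 \setminus \{(0,1)\},   U_S = S^1 \setminus \{(0,-1)\}
\varphi_N(x,y) = \frac{x}{1-y}, \qquad \varphi_S(x,y) = \frac{x}{1+y}
% Inverse of the first chart:
\varphi_N^{-1}(t) = \left( \frac{2t}{1+t^2},\ \frac{t^2-1}{1+t^2} \right)
% Transition map on \varphi_N(U_N \cap U_S) = \mathbb{R} \setminus \{0\}:
\tau_{N,S}(t) = \varphi_S\!\left(\varphi_N^{-1}(t)\right)
  = \frac{2t/(1+t^2)}{\,2t^2/(1+t^2)\,} = \frac{1}{t}
```

Since $t \mapsto 1/t$ and its inverse are continuous (indeed smooth) away from the origin, these two charts form an atlas for $S^1$, and in fact a smooth atlas in the sense of the next section.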
More structure
One often desires more structure on a manifold than simply the topological structure. For example, if one would like an unambiguous notion of differentiation of functions on a manifold, then it is necessary to construct an atlas whose transition functions are differentiable. Such a manifold is called differentiable. Given a differentiable manifold, one can unambiguously define the notion of tangent vectors and then directional derivatives.
If each transition function is a smooth map, then the atlas is called a smooth atlas, and the manifold itself is called smooth. Alternatively, one could require that the transition maps have only k continuous derivatives, in which case the atlas is said to be $C^{k}$.
Very generally, if each transition function belongs to a pseudogroup $\mathcal{G}$ of homeomorphisms of Euclidean space, then the atlas is called a $\mathcal{G}$-atlas. If the transition maps between charts of an atlas preserve a local trivialization, then the atlas defines the structure of a fibre bundle.
See also
Smooth atlas
Smooth frame
References
, Chapter 5 "Local coordinate description of fibre bundles".
External links
Atlas by Rowland, Todd
Manifolds | Atlas (topology) | [
"Mathematics"
] | 714 | [
"Topological spaces",
"Manifolds",
"Topology",
"Space (mathematics)"
] |
47,481 | https://en.wikipedia.org/wiki/Aquifer | An aquifer is an underground layer of water-bearing material, consisting of permeable or fractured rock, or of unconsolidated materials (gravel, sand, or silt). Aquifers vary greatly in their characteristics. The study of water flow in aquifers and the characterization of aquifers is called hydrogeology. Related terms include aquitard, which is a bed of low permeability along an aquifer, and aquiclude (or aquifuge), which is a solid, impermeable area underlying or overlying an aquifer; where such an impermeable layer overlies the aquifer, pressurization can cause it to become a confined aquifer. The classification of aquifers is as follows: saturated versus unsaturated; aquifers versus aquitards; confined versus unconfined; isotropic versus anisotropic; porous, karst, or fractured; transboundary aquifer.
Groundwater from aquifers can be sustainably harvested by humans through the use of qanats leading to a well. This groundwater is a major source of fresh water for many regions, however can present a number of challenges such as overdrafting (extracting groundwater beyond the equilibrium yield of the aquifer), groundwater-related subsidence of land, and the salinization or pollution of the groundwater.
Properties
Depth
Aquifers occur from near-surface to deeper than . Those closer to the surface are not only more likely to be used for water supply and irrigation, but are also more likely to be replenished by local rainfall. Although aquifers are sometimes characterized as "underground rivers or lakes," they are actually porous rock saturated with water.
Many desert areas have limestone hills or mountains within them or close to them that can be exploited as groundwater resources. Part of the Atlas Mountains in North Africa, the Lebanon and Anti-Lebanon ranges between Syria and Lebanon, the Jebel Akhdar in Oman, parts of the Sierra Nevada and neighboring ranges in the United States' Southwest, have shallow aquifers that are exploited for their water. Overexploitation can lead to the exceeding of the practical sustained yield; i.e., more water is taken out than can be replenished.
Along the coastlines of certain countries, such as Libya and Israel, increased water usage associated with population growth has caused a lowering of the water table and the subsequent contamination of the groundwater with saltwater from the sea.
In 2013 large freshwater aquifers were discovered under continental shelves off Australia, China, North America and South Africa. They contain an estimated half a million cubic kilometers of "low salinity" water that could be economically processed into potable water. The reserves formed when ocean levels were lower and rainwater made its way into the ground in land areas that were not submerged until the ice age ended 20,000 years ago. The volume is estimated to be 100 times the amount of water extracted from other aquifers since 1900.
Groundwater recharge
Classification
An aquitard is a zone within the Earth that restricts the flow of groundwater from one aquifer to another. An aquitard can sometimes, if completely impermeable, be called an aquiclude or aquifuge. Aquitards are composed of layers of either clay or non-porous rock with low hydraulic conductivity.
Saturated versus unsaturated
Groundwater can be found at nearly every point in the Earth's shallow subsurface to some degree, although aquifers do not necessarily contain fresh water. The Earth's crust can be divided into two regions: the saturated zone or phreatic zone (e.g., aquifers, aquitards, etc.), where all available spaces are filled with water, and the unsaturated zone (also called the vadose zone), where there are still pockets of air that contain some water, but can be filled with more water.
Saturated means the pressure head of the water is greater than atmospheric pressure (it has a gauge pressure > 0). The definition of the water table is the surface where the pressure head is equal to atmospheric pressure (where gauge pressure = 0).
Unsaturated conditions occur above the water table where the pressure head is negative (absolute pressure can never be negative, but gauge pressure can) and the water that incompletely fills the pores of the aquifer material is under suction. The water content in the unsaturated zone is held in place by surface adhesive forces and it rises above the water table (the zero-gauge-pressure isobar) by capillary action to saturate a small zone above the phreatic surface (the capillary fringe) at less than atmospheric pressure. This is termed tension saturation and is not the same as saturation on a water-content basis. Water content in a capillary fringe decreases with increasing distance from the phreatic surface. The capillary head depends on soil pore size. In sandy soils with larger pores, the head will be less than in clay soils with very small pores. The normal capillary rise in a clayey soil is less than but can range between .
The capillary rise of water in a small-diameter tube involves the same physical process. The water table is the level to which water will rise in a large-diameter pipe (e.g., a well) that goes down into the aquifer and is open to the atmosphere.
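To see how strongly pore size controls capillary rise, here is a small illustrative sketch (added here, not part of the original article; the effective pore radii are hypothetical round numbers) applying Jurin's law for a water-filled tube:

```python
# Capillary rise h = 2*gamma*cos(theta) / (rho * g * r)  (Jurin's law)
import math

GAMMA = 0.0728   # surface tension of water, N/m (approx. 20 degrees C)
RHO = 1000.0     # density of water, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2
THETA = 0.0      # contact angle, radians (perfectly wetting, an idealization)

def capillary_rise(pore_radius_m: float) -> float:
    """Height of capillary rise in metres for a given effective pore radius."""
    return 2 * GAMMA * math.cos(THETA) / (RHO * G * pore_radius_m)

# Hypothetical effective pore radii: coarse sand vs. clay
for soil, radius in [("sand (r = 0.1 mm)", 1e-4), ("clay (r = 1 um)", 1e-6)]:
    print(f"{soil}: capillary rise = {capillary_rise(radius):.2f} m")
```

The roughly hundredfold difference in rise between the two pore sizes mirrors the text's point that the capillary fringe is thin in sandy soils and much thicker in clays.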
Aquifers versus aquitards
Aquifers are typically saturated regions of the subsurface that produce an economically feasible quantity of water to a well or spring (e.g., sand and gravel or fractured bedrock often make good aquifer materials).
An aquitard is a zone within the Earth that restricts the flow of groundwater from one aquifer to another. A completely impermeable aquitard is called an aquiclude or aquifuge. Aquitards contain layers of either clay or non-porous rock with low hydraulic conductivity.
In mountainous areas (or near rivers in mountainous areas), the main aquifers are typically unconsolidated alluvium, composed of mostly horizontal layers of materials deposited by water processes (rivers and streams), which in cross-section (looking at a two-dimensional slice of the aquifer) appear to be layers of alternating coarse and fine materials. Coarse materials, because of the high energy needed to move them, tend to be found nearer the source (mountain fronts or rivers), whereas the fine-grained material will make it farther from the source (to the flatter parts of the basin or overbank areas—sometimes called the pressure area). Since there are less fine-grained deposits near the source, this is a place where aquifers are often unconfined (sometimes called the forebay area), or in hydraulic communication with the land surface.
Confined versus unconfined
An unconfined aquifer has no impermeable barrier immediately above it, such that the water level can rise in response to recharge. A confined aquifer has an overlying impermeable barrier that prevents the water level in the aquifer from rising any higher. An aquifer in the same geologic unit may be confined in one area and unconfined in another. Unconfined aquifers are sometimes also called water table or phreatic aquifers, because their upper boundary is the water table or phreatic surface (see Biscayne Aquifer). Typically (but not always) the shallowest aquifer at a given location is unconfined, meaning it does not have a confining layer (an aquitard or aquiclude) between it and the surface. The term "perched" refers to ground water accumulating above a low-permeability unit or strata, such as a clay layer. This term is generally used to refer to a small local area of ground water that occurs at an elevation higher than a regionally extensive aquifer. The difference between perched and unconfined aquifers is their size (perched is smaller). Confined aquifers are aquifers that are overlain by a confining layer, often made up of clay. The confining layer might offer some protection from surface contamination.
If the distinction between confined and unconfined is not clear geologically (i.e., if it is not known if a clear confining layer exists, or if the geology is more complex, e.g., a fractured bedrock aquifer), the value of storativity returned from an aquifer test can be used to determine it (although aquifer tests in unconfined aquifers should be interpreted differently than confined ones). Confined aquifers have very low storativity values (much less than 0.01, and as little as ), which means that the aquifer is storing water using the mechanisms of aquifer matrix expansion and the compressibility of water, which typically are both quite small quantities. Unconfined aquifers have storativities (typically called specific yield) greater than 0.01 (1% of bulk volume); they release water from storage by the mechanism of actually draining the pores of the aquifer, releasing relatively large amounts of water (up to the drainable porosity of the aquifer material, or the minimum volumetric water content).
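As an added back-of-the-envelope sketch (not from the original article; all numbers are hypothetical but of plausible order), the storativity values above translate into very different volumes of water released for the same head decline:

```python
# Volume released from storage: V = S * A * dh
# S is dimensionless storativity (confined) or specific yield (unconfined).

AREA = 1.0e6          # aquifer plan area, m^2 (1 km^2, hypothetical)
HEAD_DROP = 1.0       # decline in head / water table, m

S_CONFINED = 1e-4     # illustrative confined storativity (<< 0.01)
SY_UNCONFINED = 0.2   # illustrative specific yield (> 0.01)

v_confined = S_CONFINED * AREA * HEAD_DROP
v_unconfined = SY_UNCONFINED * AREA * HEAD_DROP

print(f"confined:   {v_confined:,.0f} m^3 released per 1 m head drop")
print(f"unconfined: {v_unconfined:,.0f} m^3 released per 1 m head drop")
# ~100 m^3 vs ~200,000 m^3: draining the pores releases far more water
# than matrix expansion and water compressibility do.
```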
Isotropic versus anisotropic
In isotropic aquifers or aquifer layers the hydraulic conductivity (K) is equal for flow in all directions, while in anisotropic conditions it differs, notably in horizontal (Kh) and vertical (Kv) sense.
Semi-confined aquifers with one or more aquitards work as an anisotropic system, even when the separate layers are isotropic, because the compound Kh and Kv values are different (see hydraulic transmissivity and hydraulic resistance).
When calculating flow to drains or flow to wells in an aquifer, the anisotropy must be taken into account, otherwise the resulting design of the drainage system may be faulty.
Porous, karst, or fractured
To properly manage an aquifer its properties must be understood. Many properties must be known to predict how an aquifer will respond to rainfall, drought, pumping, and contamination. Considerations include where and how much water enters the groundwater from rainfall and snowmelt, how fast and in what direction the groundwater travels, and how much water leaves the ground as springs. Computer models can be used to test how accurately the understanding of the aquifer properties matches the actual aquifer performance. Environmental regulations require sites with potential sources of contamination to demonstrate that the hydrology has been characterized.
Porous
Porous aquifers typically occur in sand and sandstone. Porous aquifer properties depend on the depositional sedimentary environment and later natural cementation of the sand grains. The environment where a sand body was deposited controls the orientation of the sand grains, the horizontal and vertical variations, and the distribution of shale layers. Even thin shale layers are important barriers to groundwater flow. All these factors affect the porosity and permeability of sandy aquifers.
Sandy deposits formed in shallow marine environments and in windblown sand dune environments have moderate to high permeability while sandy deposits formed in river environments have low to moderate permeability. Rainfall and snowmelt enter the groundwater where the aquifer is near the surface. Groundwater flow directions can be determined from potentiometric surface maps of water levels in wells and springs. Aquifer tests and well tests can be used with Darcy's law flow equations to determine the ability of a porous aquifer to convey water.
Analyzing this type of information over an area gives an indication of how much water can be pumped without overdrafting and how contamination will travel. In porous aquifers groundwater flows as slow seepage in pores between sand grains. A groundwater flow rate of 1 foot per day (0.3 m/d) is considered to be a high rate for porous aquifers.
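A minimal sketch of the Darcy's-law calculation mentioned above (an added illustration, not from the original article; the hydraulic conductivity, gradient, and porosity are hypothetical values of an order plausible for sand):

```python
# Darcy's law: q = -K * dh/dl   (specific discharge, a.k.a. Darcy flux)
# Average linear (seepage) velocity: v = q / n_e, with n_e = effective porosity.

K = 1e-5        # hydraulic conductivity, m/s (order of magnitude for sand)
DH_DL = -0.005  # hydraulic gradient (head drops 5 m over 1,000 m)
N_E = 0.25      # effective porosity (dimensionless)

q = -K * DH_DL              # m/s, positive in the direction of decreasing head
v = q / N_E                 # m/s, actual pore-water velocity
PER_DAY = 86400.0

print(f"Darcy flux:       {q*PER_DAY:.3f} m/day")
print(f"Seepage velocity: {v*PER_DAY:.3f} m/day")
# Of the order of hundredths of a metre per day here; the ~0.3 m/day cited
# in the text as a *high* rate would need a larger K or a steeper gradient.
```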
Porosity is important, but, alone, it does not determine a rock's ability to act as an aquifer. Areas of the Deccan Traps (a basaltic lava) in west central India are good examples of rock formations with high porosity but low permeability, which makes them poor aquifers. Similarly, the micro-porous (Upper Cretaceous) Chalk Group of south east England, although having a reasonably high porosity, has a low grain-to-grain permeability, with its good water-yielding characteristics mostly due to micro-fracturing and fissuring.
Karst
Karst aquifers typically develop in limestone. Surface water containing natural carbonic acid moves down into small fissures in limestone. This carbonic acid gradually dissolves limestone thereby enlarging the fissures. The enlarged fissures allow a larger quantity of water to enter which leads to a progressive enlargement of openings. Abundant small openings store a large quantity of water. The larger openings form a conduit system that drains the aquifer to springs.
Characterization of karst aquifers requires field exploration to locate sinkholes, swallets, sinking streams, and springs in addition to studying geologic maps. Conventional hydrogeologic methods such as aquifer tests and potentiometric mapping are insufficient to characterize the complexity of karst aquifers. These conventional investigation methods need to be supplemented with dye traces, measurement of spring discharges, and analysis of water chemistry. U.S. Geological Survey dye tracing has determined that conventional groundwater models that assume a uniform distribution of porosity are not applicable for karst aquifers.
Linear alignments of surface features such as straight stream segments and sinkholes develop along fracture traces. Locating a well on a fracture trace or at the intersection of fracture traces increases the likelihood of encountering good water production. Voids in karst aquifers can be large enough to cause destructive collapse or subsidence of the ground surface that can initiate a catastrophic release of contaminants. Groundwater flow rates in karst aquifers are much more rapid than in porous aquifers. For example, in the Barton Springs Edwards aquifer, dye traces measured karst groundwater flow rates from 0.5 to 7 miles per day (0.8 to 11.3 km/d). These rapid flow rates make karst aquifers much more sensitive to groundwater contamination than porous aquifers.
In the extreme case, groundwater may exist in underground rivers (e.g., caves underlying karst topography).
Fractured
If a rock unit of low porosity is highly fractured, it can also make a good aquifer (via fissure flow), provided the rock has a hydraulic conductivity sufficient to facilitate movement of water.
Human use of groundwater
Challenges for using groundwater include: overdrafting (extracting groundwater beyond the equilibrium yield of the aquifer), groundwater-related subsidence of land, groundwater becoming saline, groundwater pollution.
By country or continent
Africa
Aquifer depletion is a problem in some areas, especially in northern Africa, where one example is the Great Manmade River project of Libya. However, new methods of groundwater management such as artificial recharge and injection of surface waters during seasonal wet periods has extended the life of many freshwater aquifers, especially in the United States.
Australia
The Great Artesian Basin situated in Australia is arguably the largest groundwater aquifer in the world (over ). It plays a large part in water supplies for Queensland, and some remote parts of South Australia.
Canada
Discontinuous sand bodies at the base of the McMurray Formation in the Athabasca Oil Sands region of northeastern Alberta, Canada, are commonly referred to as the Basal Water Sand (BWS) aquifers. Saturated with water, they are confined beneath impermeable bitumen-saturated sands that are exploited to recover bitumen for synthetic crude oil production. Where they are deep-lying and recharge occurs from underlying Devonian formations they are saline, and where they are shallow and recharged by surface water they are non-saline. The BWS typically pose problems for the recovery of bitumen, whether by open-pit mining or by in situ methods such as steam-assisted gravity drainage (SAGD), and in some areas they are targets for waste-water injection.
South America
The Guarani Aquifer, located beneath the surface of Argentina, Brazil, Paraguay, and Uruguay, is one of the world's largest aquifer systems and is an important source of fresh water. Named after the Guarani people, it covers , with a volume of about , a thickness of between and a maximum depth of about .
United States
The Ogallala Aquifer of the central United States is one of the world's great aquifers, but in places it is being rapidly depleted by growing municipal use, and continuing agricultural use. This huge aquifer, which underlies portions of eight states, contains primarily fossil water from the time of the last glaciation. Annual recharge, in the more arid parts of the aquifer, is estimated to total only about 10 percent of annual withdrawals. According to a 2013 report by the United States Geological Survey (USGS), the depletion between 2001 and 2008, inclusive, is about 32 percent of the cumulative depletion during the entire 20th century.
In the United States, the biggest users of water from aquifers include agricultural irrigation and oil and coal extraction. "Cumulative total groundwater depletion in the United States accelerated in the late 1940s and continued at an almost steady linear rate through the end of the century. In addition to widely recognized environmental consequences, groundwater depletion also adversely impacts the long-term sustainability of groundwater supplies to help meet the Nation’s water needs."
An example of a significant and sustainable carbonate aquifer is the Edwards Aquifer in central Texas. This carbonate aquifer has historically been providing high quality water for nearly 2 million people, and even today, is full because of tremendous recharge from a number of area streams, rivers and lakes. The primary risk to this resource is human development over the recharge areas.
See also
References
External links
IGRAC International Groundwater Resources Assessment Centre
The Groundwater Project - Online platform for groundwater knowledge
Hydraulic engineering
Hydrology
Hydrogeology
Water and the environment
Bodies of water
Water supply | Aquifer | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,901 | [
"Hydrology",
"Water supply",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Aquifers",
"Environmental engineering",
"Hydraulic engineering",
"Hydrogeology"
] |
47,501 | https://en.wikipedia.org/wiki/Brightness%20temperature | Brightness temperature or radiance temperature is a measure of the intensity of electromagnetic energy coming from a source. In particular, it is the temperature at which a black body would have to be in order to duplicate the observed intensity of a grey body object at a frequency $\nu$.
This concept is used in radio astronomy, planetary science, materials science and climatology.
The brightness temperature provides "a more physically recognizable way to describe intensity".
When the electromagnetic radiation observed is thermal radiation emitted by an object simply by virtue of its temperature, then the actual temperature of the object will always be equal to or higher than the brightness temperature. Since the emissivity is limited by 1, the brightness temperature is a lower bound of the object’s actual temperature.
For radiation emitted by a non-thermal source such as a pulsar, synchrotron, maser, or a laser, the brightness temperature may be far higher than the actual temperature of the source. In this case, the brightness temperature is simply a measure of the intensity of the radiation as it would be measured at the origin of that radiation.
In some applications, the brightness temperature of a surface is determined by an optical measurement, for example using a pyrometer, with the intention of determining the real temperature. As detailed below, the real temperature of a surface can in some cases be calculated by dividing the brightness temperature by the emissivity of the surface. Since the emissivity is a value between 0 and 1, the real temperature will be greater than or equal to the brightness temperature. At high frequencies (short wavelengths) and low temperatures, the conversion must proceed through Planck's law.
The brightness temperature is not a temperature as ordinarily understood. It characterizes radiation, and depending on the mechanism of radiation can differ considerably from the physical temperature of a radiating body (though it is theoretically possible to construct a device that a source of radiation with a given brightness temperature would heat to an actual temperature equal to that brightness temperature).
Nonthermal sources can have very high brightness temperatures. In pulsars the brightness temperature can reach $10^{30}$ K. For the radiation of a helium–neon laser with a power of 1 mW, a frequency spread Δf = 1 GHz, an output aperture of 1 mm, and a beam dispersion half-angle of 0.56 mrad, the brightness temperature would be .
For a black body, Planck's law gives:
$$I_{\nu} = \frac{2 h \nu^{3}}{c^{2}} \, \frac{1}{e^{h\nu/kT} - 1}$$
where $I_{\nu}$ (the intensity or brightness) is the amount of energy emitted per unit surface area per unit time per unit solid angle and in the frequency range between $\nu$ and $\nu + d\nu$; $T$ is the temperature of the black body; $h$ is the Planck constant; $\nu$ is frequency; $c$ is the speed of light; and $k$ is the Boltzmann constant.
For a grey body the spectral radiance is a portion of the black body radiance, determined by the emissivity $\epsilon$:
$$I_{\nu} = \epsilon \, \frac{2 h \nu^{3}}{c^{2}} \, \frac{1}{e^{h\nu/kT} - 1}.$$
That makes the reciprocal of the brightness temperature:
$$T_{b}^{-1} = \frac{k}{h\nu} \, \ln\!\left[ 1 + \frac{e^{h\nu/kT} - 1}{\epsilon} \right].$$
At low frequency and high temperatures, when $h\nu \ll kT$, we can use the Rayleigh–Jeans law:
$$I_{\nu} = \frac{2 \nu^{2} k T}{c^{2}}$$
so that the brightness temperature can be simply written as:
$$T_{b} = \epsilon T.$$
In general, the brightness temperature is a function of $\nu$, and only in the case of blackbody radiation is it the same at all frequencies. The brightness temperature can be used to calculate the spectral index of a body, in the case of non-thermal radiation.
Calculating by frequency
The brightness temperature of a source with known spectral radiance $I_{\nu}$ can be expressed as:
$$T_{b} = \frac{h\nu}{k} \, \ln^{-1}\!\left( 1 + \frac{2 h \nu^{3}}{I_{\nu} c^{2}} \right)$$
When $h\nu \ll k T_{b}$ we can use the Rayleigh–Jeans law:
$$T_{b} = \frac{I_{\nu} c^{2}}{2 k \nu^{2}}$$
For narrowband radiation with very low relative spectral linewidth $\Delta\nu$ and known radiance $I$ we can calculate the brightness temperature as:
$$T_{b} = \frac{I c^{2}}{2 k \nu^{2} \, \Delta\nu}$$
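A short numerical sketch of the frequency-domain formulas above (an added example, not from the original article; the radiance value and observing frequency are hypothetical), inverting Planck's law and checking the result against the Rayleigh–Jeans approximation:

```python
# Brightness temperature from spectral radiance I_nu at frequency nu:
#   exact:  T_b = (h*nu/k) / ln(1 + 2*h*nu**3 / (I_nu * c**2))
#   RJ:     T_b ~= I_nu * c**2 / (2 * k * nu**2)   valid for h*nu << k*T_b
import math

H = 6.62607015e-34   # Planck constant, J s
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def tb_planck(i_nu: float, nu: float) -> float:
    return (H * nu / K_B) / math.log(1.0 + 2.0 * H * nu**3 / (i_nu * C**2))

def tb_rayleigh_jeans(i_nu: float, nu: float) -> float:
    return i_nu * C**2 / (2.0 * K_B * nu**2)

nu = 1.4e9                       # 1.4 GHz, a common radio-astronomy band
i_nu = 1.0e-20                   # hypothetical radiance, W m^-2 Hz^-1 sr^-1
print(f"exact: {tb_planck(i_nu, nu):.2f} K, RJ: {tb_rayleigh_jeans(i_nu, nu):.2f} K")
# At radio frequencies h*nu/k is only ~0.07 K here, so the two results
# agree to within a small fraction of a kelvin (both ~16.6 K).
```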
Calculating by wavelength
Spectral radiance of black-body radiation is expressed by wavelength as:
$$I_{\lambda} = \frac{2 h c^{2}}{\lambda^{5}} \, \frac{1}{e^{hc/\lambda kT} - 1}$$
So, the brightness temperature can be calculated as:
$$T_{b} = \frac{h c}{k \lambda} \, \ln^{-1}\!\left( 1 + \frac{2 h c^{2}}{I_{\lambda} \lambda^{5}} \right)$$
For long-wave radiation, when $hc/\lambda \ll kT_{b}$, the brightness temperature is:
$$T_{b} = \frac{I_{\lambda} \lambda^{4}}{2 k c}$$
For almost monochromatic radiation, the brightness temperature can also be expressed in terms of the radiance $I$ and the coherence length $L_{c}$.
In oceanography
In oceanography, the microwave brightness temperature, as measured by satellites looking at the ocean surface, depends on salinity as well as on the temperature and roughness (e.g. from wind-driven waves) of the water.
References
Temperature
Radio astronomy
Planetary science | Brightness temperature | [
"Physics",
"Chemistry",
"Astronomy"
] | 849 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Radio astronomy",
"Thermodynamics",
"Planetary science",
"Wikipedia categories named after physical quantities",
"Astronomical sub-disciplines"
] |
47,521 | https://en.wikipedia.org/wiki/Condensation | Condensation is the change of the state of matter from the gas phase into the liquid phase, and is the reverse of vaporization. The word most often refers to the water cycle. It can also be defined as the change in the state of water vapor to liquid water when in contact with a liquid or solid surface or cloud condensation nuclei within the atmosphere. When the transition happens from the gaseous phase into the solid phase directly, the change is called deposition.
Initiation
Condensation is initiated by the formation of atomic/molecular clusters of that species within its gaseous volume—like rain drop or snow flake formation within clouds—or at the contact between such gaseous phase and a liquid or solid surface. In clouds, this can be catalyzed by water-nucleating proteins, produced by atmospheric microbes, which are capable of binding gaseous or liquid water molecules.
Reversibility scenarios
A few distinct reversibility scenarios emerge here with respect to the nature of the surface.
absorption into the surface of a liquid (either of the same substance or one of its solvents)—is reversible as evaporation.
adsorption (as dew droplets) onto solid surface at pressures and temperatures higher than the species' triple point—also reversible as evaporation.
adsorption onto solid surface (as supplemental layers of solid) at pressures and temperatures lower than the species' triple point—is reversible as sublimation.
Most common scenarios
Condensation commonly occurs when a vapor is cooled and/or compressed to its saturation limit when the molecular density in the gas phase reaches its maximal threshold. Vapor cooling and compressing equipment that collects condensed liquids is called a "condenser".
Measurement
Psychrometry measures the rates of condensation of, and evaporation into, the moisture of air at various atmospheric pressures and temperatures. Liquid water is the product of water-vapor condensation—condensation is the process of that phase conversion.
Applications of condensation
Condensation is a crucial component of distillation, an important laboratory and industrial chemistry application.
Because condensation is a naturally occurring phenomenon, it can often be used to generate water in large quantities for human use. Many structures are made solely for the purpose of collecting water from condensation, such as air wells and fog fences. Such systems can often be used to retain soil moisture in areas where active desertification is occurring—so much so that some organizations educate people living in affected areas about water condensers to help them deal effectively with the situation.
It is also a crucial process in forming particle tracks in a cloud chamber. In this case, ions produced by an incident particle act as nucleation centers for the condensation of the vapor producing the visible "cloud" trails.
Commercial applications of condensation, by consumers as well as industry, include power generation, water desalination, thermal management, refrigeration, and air conditioning.
Biological adaptation
Numerous living beings use water made accessible by condensation. A few examples of these are the Australian thorny devil, the darkling beetles of the Namibian coast, and the coast redwoods of the West Coast of the United States.
Condensation in building construction
Condensation in building construction is an unwanted phenomenon as it may cause dampness, mold health issues, wood rot, corrosion, weakening of mortar and masonry walls, and energy penalties due to increased heat transfer. To alleviate these issues, the indoor air humidity needs to be lowered, or the air ventilation in the building needs to be improved. This can be done in a number of ways, for example opening windows, turning on extractor fans, using dehumidifiers, drying clothes outside and covering pots and pans whilst cooking. Air conditioning or ventilation systems can be installed that help remove moisture from the air, and move air throughout a building. The amount of water vapor that can be stored in the air can be increased simply by increasing the temperature. However, this can be a double-edged sword, as most condensation in the home occurs when warm, moisture-laden air comes into contact with a cool surface. As the air is cooled, it can no longer hold as much water vapor, which leads to water condensing on the cool surface. This is very apparent when central heating is used in combination with single-glazed windows in winter.
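To illustrate the cool-surface mechanism described above, here is a small sketch (an added example, not from the original article) that estimates the dew point with the Magnus approximation; if a window pane is colder than this temperature, condensation can be expected on it. The coefficients and the room conditions used are one common parameterization and a hypothetical scenario, respectively:

```python
# Dew point via the Magnus approximation (coefficients for water over
# liquid, roughly valid between 0 and 60 degrees C).
import math

A, B = 17.62, 243.12  # Magnus coefficients (one common parameterization)

def dew_point_c(temp_c: float, rel_humidity: float) -> float:
    """Dew point in degrees C for air temperature and relative humidity (0-1)."""
    gamma = math.log(rel_humidity) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

# Hypothetical living-room air: 21 degrees C at 60% relative humidity
t_dew = dew_point_c(21.0, 0.60)
print(f"Dew point: {t_dew:.1f} C")   # ~12.9 C for these conditions
# A single-glazed pane colder than this will mist up, which is why
# condensation appears on cold windows in winter.
```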
Interstructure condensation may be caused by thermal bridges, insufficient or lacking insulation, damp proofing or insulated glazing.
See also
Air well (condenser)
Bose–Einstein condensate
Cloud physics
Condenser (heat transfer)
DNA condensation
Dropwise condensation
Groasis Waterboxx
Kelvin equation
Liquefaction of gases
Phase diagram
Phase transition
Retrograde condensation
Surface condenser
References
Sources
Phase transitions | Condensation | [
"Physics",
"Chemistry"
] | 983 | [
"Physical phenomena",
"Phase transitions",
"Critical phenomena",
"Phases of matter",
"Statistical mechanics",
"Matter"
] |
47,526 | https://en.wikipedia.org/wiki/Convection | Convection is single or multiphase fluid flow that occurs spontaneously through the combined effects of material property heterogeneity and body forces on a fluid, most commonly density and gravity (see buoyancy). When the cause of the convection is unspecified, convection due to the effects of thermal expansion and buoyancy can be assumed. Convection may also take place in soft solids or mixtures where particles can flow.
Convective flow may be transient (such as when a multiphase mixture of oil and water separates) or steady state (see convection cell). The convection may be due to gravitational, electromagnetic or fictitious body forces. Heat transfer by natural convection plays a role in the structure of Earth's atmosphere, its oceans, and its mantle. Discrete convective cells in the atmosphere can be identified by clouds, with stronger convection resulting in thunderstorms. Natural convection also plays a role in stellar physics. Convection is often categorised or described by the main effect causing the convective flow; for example, thermal convection.
Convection cannot take place in most solids because neither bulk current flows nor significant diffusion of matter can take place.
Granular convection is a similar phenomenon in granular material instead of fluids.
Advection is fluid motion created by velocity instead of thermal gradients.
Convective heat transfer is the intentional use of convection as a method for heat transfer. Convection is a process in which heat is carried from place to place by the bulk movement of a fluid (liquid or gas).
History
In the 1830s, in The Bridgewater Treatises, the term convection is attested in a scientific sense. In treatise VIII by William Prout, in the book on chemistry, it says:
[...] This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation. If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction. Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection, [in footnote: [Latin] Convectio, a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms.
Later, in the same treatise VIII, in the book on meteorology, the concept of convection is also applied to "the process by which heat is communicated through water".
Terminology
Today, the word convection has different but related usages in different scientific or engineering contexts or applications.
In fluid mechanics, convection has a broader sense: it refers to the motion of fluid driven by density (or other property) difference.
In thermodynamics, convection often refers to heat transfer by convection, where the prefixed variant natural convection is used to distinguish the fluid-mechanics concept of convection (covered in this article) from convective heat transfer.
Some phenomena which result in an effect superficially similar to that of a convective cell may also be (inaccurately) referred to as a form of convection; for example, thermo-capillary convection and granular convection.
Mechanisms
Convection may happen in fluids at all scales larger than a few atoms. There are a variety of circumstances in which the forces required for convection arise, leading to different types of convection, described below. In broad terms, convection arises because of body forces acting within the fluid, such as gravity.
Natural convection
Natural convection is a flow whose motion is caused by some parts of a fluid being heavier than other parts. In most cases this leads to natural circulation: the ability of a fluid in a system to circulate continuously under gravity, with transfer of heat energy.
The driving force for natural convection is gravity. In a column of fluid, pressure increases with depth from the weight of the overlying fluid. The pressure at the bottom of a submerged object then exceeds that at the top, resulting in a net upward buoyancy force equal to the weight of the displaced fluid. Objects of higher density than that of the displaced fluid then sink. For example, regions of warmer low-density air rise, while those of colder high-density air sink. This creates a circulating flow: convection.
Gravity drives natural convection. Without gravity, convection does not occur, so there is no convection in free-fall (inertial) environments, such as that of the orbiting International Space Station. Natural convection can occur when there are hot and cold regions of either air or water, because both water and air become less dense as they are heated. But, for example, in the world's oceans it also occurs due to salt water being heavier than fresh water, so a layer of salt water on top of a layer of fresher water will also cause convection.
Natural convection has attracted a great deal of attention from researchers because of its presence both in nature and engineering applications. In nature, convection cells formed from air rising above sunlight-warmed land or water are a major feature of all weather systems. Convection is also seen in the rising plume of hot air from fire, plate tectonics, oceanic currents (thermohaline circulation) and sea-wind formation (where upward convection is also modified by Coriolis forces). In engineering applications, convection is commonly visualized in the formation of microstructures during the cooling of molten metals, and in fluid flows around shrouded heat-dissipation fins and solar ponds. A very common industrial application of natural convection is free air cooling without the aid of fans: this can happen on small scales (computer chips) to large scale process equipment.
Natural convection will be more likely and more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection or a larger distance through the convecting medium. Natural convection will be less likely and less rapid with more rapid diffusion (thereby diffusing away the thermal gradient that is causing the convection) or a more viscous (sticky) fluid.
The onset of natural convection can be determined by the Rayleigh number (Ra).
Differences in buoyancy within a fluid can arise for reasons other than temperature variations, in which case the fluid motion is called gravitational convection (see below). However, all types of buoyant convection, including natural convection, do not occur in microgravity environments. All require the presence of an environment which experiences g-force (proper acceleration).
The difference of density in the fluid is the key driving mechanism. If the differences of density are caused by heat, this force is called "thermal head" or "thermal driving head." A fluid system designed for natural circulation will have a heat source and a heat sink. Each of these is in contact with some of the fluid in the system, but not all of it. The heat source is positioned lower than the heat sink.
Most fluids expand when heated, becoming less dense, and contract when cooled, becoming denser. At the heat source of a system of natural circulation, the heated fluid becomes lighter than the fluid surrounding it, and thus rises. At the heat sink, the nearby fluid becomes denser as it cools, and is drawn downward by gravity. Together, these effects create a flow of fluid from the heat source to the heat sink and back again.
Gravitational or buoyant convection
Gravitational convection is a type of natural convection induced by buoyancy variations resulting from material properties other than temperature. Typically this is caused by a variable composition of the fluid. If the varying property is a concentration gradient, it is known as solutal convection. For example, gravitational convection can be seen in the diffusion of a source of dry salt downward into wet soil due to the buoyancy of fresh water in saline.
Variable salinity in water and variable water content in air masses are frequent causes of convection in the oceans and atmosphere which do not involve heat, or else involve additional compositional density factors other than the density changes from thermal expansion (see thermohaline circulation). Similarly, variable composition within the Earth's interior which has not yet achieved maximal stability and minimal energy (in other words, with densest parts deepest) continues to cause a fraction of the convection of fluid rock and molten metal within the Earth's interior (see below).
Gravitational convection, like natural thermal convection, also requires a g-force environment in order to occur.
Solid-state convection in ice
Ice convection on Pluto is believed to occur in a soft mixture of nitrogen ice and carbon monoxide ice. It has also been proposed for Europa, and other bodies in the outer Solar System.
Thermomagnetic convection
Thermomagnetic convection can occur when an external magnetic field is imposed on a ferrofluid with varying magnetic susceptibility. In the presence of a temperature gradient this results in a nonuniform magnetic body force, which leads to fluid movement. A ferrofluid is a liquid which becomes strongly magnetized in the presence of a magnetic field.
Combustion
In a zero-gravity environment, there can be no buoyancy forces, and thus no convection possible, so flames in many circumstances without gravity smother in their own waste gases. Thermal expansion and chemical reactions resulting in expansion and contraction of gases allow for ventilation of the flame, as waste gases are displaced by cool, fresh, oxygen-rich gas, which moves in to take up the low-pressure zones created when flame-exhaust water condenses.
Examples and applications
Systems of natural circulation include tornadoes and other weather systems, ocean currents, and household ventilation. Some solar water heaters use natural circulation. The Gulf Stream circulates as a result of the evaporation of water. In this process, the water increases in salinity and density. In the North Atlantic Ocean, the water becomes so dense that it begins to sink down.
Convection occurs on a large scale in atmospheres, oceans, planetary mantles, and it provides the mechanism of heat transfer for a large fraction of the outermost interiors of the Sun and all stars. Fluid movement during convection may be invisibly slow, or it may be obvious and rapid, as in a hurricane. On astronomical scales, convection of gas and dust is thought to occur in the accretion disks of black holes, at speeds which may closely approach that of light.
Demonstration experiments
Thermal convection in liquids can be demonstrated by placing a heat source (for example, a Bunsen burner) at the side of a container with a liquid. Adding a dye to the water (such as food colouring) will enable visualisation of the flow.
Another common experiment to demonstrate thermal convection in liquids involves submerging open containers of hot and cold liquid coloured with dye into a large container of the same liquid without dye at an intermediate temperature (for example, a jar of hot tap water coloured red, a jar of water chilled in a fridge coloured blue, lowered into a clear tank of water at room temperature).
A third approach is to use two identical jars, one filled with hot water dyed one colour, and cold water of another colour. One jar is then temporarily sealed (for example, with a piece of card), inverted and placed on top of the other. When the card is removed, if the jar containing the warmer liquid is placed on top no convection will occur. If the jar containing colder liquid is placed on top, a convection current will form spontaneously.
Convection in gases can be demonstrated using a candle in a sealed space with an inlet and exhaust port. The heat from the candle will cause a strong convection current which can be demonstrated with a flow indicator, such as smoke from another candle, being released near the inlet and exhaust areas respectively.
Double diffusive convection
Convection cells
A convection cell, also known as a Bénard cell, is a characteristic fluid flow pattern in many convection systems. A rising body of fluid typically loses heat because it encounters a colder surface. In liquid, this occurs because it exchanges heat with colder liquid through direct exchange. In the example of the Earth's atmosphere, this occurs because it radiates heat. Because of this heat loss the fluid becomes denser than the fluid underneath it, which is still rising. Since it cannot descend through the rising fluid, it moves to one side. At some distance, its downward force overcomes the rising force beneath it, and the fluid begins to descend. As it descends, it warms again and the cycle repeats itself. Additionally, convection cells can arise due to density variations resulting from differences in the composition of electrolytes.
Atmospheric convection
Atmospheric circulation
Atmospheric circulation is the large-scale movement of air, and is a means by which thermal energy is distributed on the surface of the Earth, together with the much slower (lagged) ocean circulation system. The large-scale structure of the atmospheric circulation varies from year to year, but the basic climatological structure remains fairly constant.
Latitudinal circulation occurs because incident solar radiation per unit area is highest at the heat equator, and decreases as the latitude increases, reaching minima at the poles. It consists of two primary convection cells, the Hadley cell and the polar vortex, with the Hadley cell experiencing stronger convection due to the release of latent heat energy by condensation of water vapor at higher altitudes during cloud formation.
Longitudinal circulation, on the other hand, comes about because the ocean has a higher specific heat capacity than land (and also higher thermal conductivity, allowing the heat to penetrate further beneath the surface) and thereby absorbs and releases more heat, while its temperature changes less than that of land. This brings the sea breeze, air cooled by the water, ashore in the day, and carries the land breeze, air cooled by contact with the ground, out to sea during the night. Longitudinal circulation consists of two cells, the Walker circulation and El Niño / Southern Oscillation.
Weather
Some more localized phenomena than global atmospheric movement are also due to convection, including wind and some of the hydrologic cycle. For example, a foehn wind is a down-slope wind which occurs on the downwind side of a mountain range. It results from the adiabatic warming of air which has dropped most of its moisture on windward slopes. Because of the different adiabatic lapse rates of moist and dry air, the air on the leeward slopes becomes warmer than at the same height on the windward slopes.
A thermal column (or thermal) is a vertical section of rising air in the lower altitudes of the Earth's atmosphere. Thermals are created by the uneven heating of the Earth's surface from solar radiation. The Sun warms the ground, which in turn warms the air directly above it. The warmer air expands, becoming less dense than the surrounding air mass, and creating a thermal low. The mass of lighter air rises, and as it does, it cools by expansion at lower air pressures. It stops rising when it has cooled to the same temperature as the surrounding air. Associated with a thermal is a downward flow surrounding the thermal column. The downward moving exterior is caused by colder air being displaced at the top of the thermal. Another convection-driven weather effect is the sea breeze.
Warm air has a lower density than cool air, so warm air rises within cooler air, similar to hot air balloons. Clouds form as relatively warmer air carrying moisture rises within cooler air. As the moist air rises, it cools, causing some of the water vapor in the rising packet of air to condense. When the moisture condenses, it releases energy known as latent heat of condensation which allows the rising packet of air to cool less than its surrounding air, continuing the cloud's ascension. If enough instability is present in the atmosphere, this process will continue long enough for cumulonimbus clouds to form, which support lightning and thunder. Generally, thunderstorms require three conditions to form: moisture, an unstable airmass, and a lifting force (heat).
All thunderstorms, regardless of type, go through three stages: the developing stage, the mature stage, and the dissipation stage. The average thunderstorm has a diameter of about 24 km (15 mi). Depending on the conditions present in the atmosphere, these three stages take an average of 30 minutes to go through.
Oceanic circulation
Solar radiation affects the oceans: warm water from the Equator tends to circulate toward the poles, while cold polar water heads towards the Equator. The surface currents are initially dictated by surface wind conditions. The trade winds blow westward in the tropics, and the westerlies blow eastward at mid-latitudes. This wind pattern applies a stress to the subtropical ocean surface with negative curl across the Northern Hemisphere, and the reverse across the Southern Hemisphere. The resulting Sverdrup transport is equatorward. Because of conservation of potential vorticity caused by the poleward-moving winds on the subtropical ridge's western periphery and the increased relative vorticity of poleward moving water, transport is balanced by a narrow, accelerating poleward current, which flows along the western boundary of the ocean basin, outweighing the effects of friction with the cold western boundary current which originates from high latitudes. The overall process, known as western intensification, causes currents on the western boundary of an ocean basin to be stronger than those on the eastern boundary.
As it travels poleward, warm water transported by strong warm water current undergoes evaporative cooling. The cooling is wind driven: wind moving over water cools the water and also causes evaporation, leaving a saltier brine. In this process, the water becomes saltier and denser and decreases in temperature. Once sea ice forms, salts are left out of the ice, a process known as brine exclusion. These two processes produce water that is denser and colder. The water across the northern Atlantic Ocean becomes so dense that it begins to sink down through less salty and less dense water. (This open ocean convection is not unlike that of a lava lamp.) This downdraft of heavy, cold and dense water becomes a part of the North Atlantic Deep Water, a south-going stream.
Mantle convection
Mantle convection is the slow creeping motion of Earth's rocky mantle caused by convection currents carrying heat from the interior of the Earth to the surface. It is one of three driving forces that cause tectonic plates to move around the Earth's surface.
The Earth's surface is divided into a number of tectonic plates that are continuously being created and consumed at their opposite plate boundaries. Creation (accretion) occurs as mantle is added to the growing edges of a plate. This hot added material cools down by conduction and convection of heat. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction at an ocean trench. This subducted material sinks to some depth in the Earth's interior where it is prohibited from sinking further. The subducted oceanic crust triggers volcanism.
Convection within Earth's mantle is the driving force for plate tectonics. Mantle convection is the result of a thermal gradient: the lower mantle is hotter than the upper mantle, and is therefore less dense. This sets up two primary types of instabilities. In the first type, plumes rise from the lower mantle, and corresponding unstable regions of lithosphere drip back into the mantle. In the second type, subducting oceanic plates (which largely constitute the upper thermal boundary layer of the mantle) plunge back into the mantle and move downwards towards the core-mantle boundary. Mantle convection occurs at rates of centimeters per year, and it takes on the order of hundreds of millions of years to complete a cycle of convection.
Neutrino flux measurements from the Earth's core (see KamLAND) show the source of about two-thirds of the heat in the inner core is the radioactive decay of 40K, uranium and thorium. This has allowed plate tectonics on Earth to continue far longer than it would have if it were simply driven by heat left over from Earth's formation; or with heat produced from gravitational potential energy, as a result of physical rearrangement of denser portions of the Earth's interior toward the center of the planet (that is, a type of prolonged falling and settling).
Stack effect
The Stack effect or chimney effect is the movement of air into and out of buildings, chimneys, flue gas stacks, or other containers due to buoyancy. Buoyancy occurs due to a difference in indoor-to-outdoor air density resulting from temperature and moisture differences. The greater the thermal difference and the height of the structure, the greater the buoyancy force, and thus the stack effect. The stack effect helps drive natural ventilation and infiltration. Some cooling towers operate on this principle; similarly the solar updraft tower is a proposed device to generate electricity based on the stack effect.
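For a rough sense of the magnitudes involved, the sketch below estimates the stack-effect pressure difference from the indoor–outdoor density difference of air treated as an ideal gas. The building height and temperatures are assumed values for illustration, not from any particular structure.

```python
g  = 9.81      # gravitational acceleration, m/s^2
h  = 20.0      # assumed building height, m
Ti = 293.15    # assumed indoor temperature, K (20 degC)
To = 263.15    # assumed outdoor temperature, K (-10 degC)
P  = 101325.0  # ambient pressure, Pa
R  = 287.05    # specific gas constant of dry air, J/(kg K)

# Ideal-gas air densities indoors and outdoors.
rho_in  = P / (R * Ti)
rho_out = P / (R * To)

# Buoyancy pressure difference driving flow up the building or stack.
dP = (rho_out - rho_in) * g * h
print(f"stack pressure ~ {dP:.1f} Pa")  # ~27 Pa for these assumptions
```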
Stellar physics
The convection zone of a star is the range of radii in which energy is transported outward from the core region primarily by convection rather than radiation. This occurs at radii which are sufficiently opaque that convection is more efficient than radiation at transporting energy.
Granules on the photosphere of the Sun are the visible tops of convection cells in the photosphere, caused by convection of plasma in the photosphere. The rising part of the granules is located in the center where the plasma is hotter. The outer edge of the granules is darker due to the cooler descending plasma. A typical granule has a diameter on the order of 1,000 kilometers and each lasts 8 to 20 minutes before dissipating. Below the photosphere is a layer of much larger "supergranules" up to 30,000 kilometers in diameter, with lifespans of up to 24 hours.
Water convection at freezing temperatures
Water is a fluid that does not obey the Boussinesq approximation. This is because its density varies nonlinearly with temperature, which causes its thermal expansion coefficient to be inconsistent near freezing temperatures. The density of water reaches a maximum at 4 °C and decreases as the temperature deviates. This phenomenon is investigated by experiment and numerical methods. Water is initially stagnant at 10 °C within a square cavity. It is differentially heated between the two vertical walls, where the left and right walls are held at 10 °C and 0 °C, respectively. The density anomaly manifests in its flow pattern. As the water is cooled at the right wall, the density increases, which accelerates the flow downward. As the flow develops and the water cools further, the decrease in density causes a recirculation current at the bottom right corner of the cavity.
Another case of this phenomenon is the event of super-cooling, where the water is cooled to below freezing temperatures but does not immediately begin to freeze. Under the same conditions as before, the flow is developed. Afterward, the temperature of the right wall is decreased to −10 °C. This causes the water at that wall to become supercooled, create a counter-clockwise flow, and initially overpower the warm current. This plume is caused by a delay in the nucleation of the ice. Once ice begins to form, the flow returns to a similar pattern as before and the solidification propagates gradually until the flow is redeveloped.
Nuclear reactors
In a nuclear reactor, natural circulation can be a design criterion. It is achieved by reducing turbulence and friction in the fluid flow (that is, minimizing head loss), and by providing a way to remove any inoperative pumps from the fluid path. Also, the reactor (as the heat source) must be physically lower than the steam generators or turbines (the heat sink). In this way, natural circulation will ensure that the fluid will continue to flow as long as the reactor is hotter than the heat sink, even when power cannot be supplied to the pumps. Notable examples are the S5G
and S8G United States Naval reactors, which were designed to operate at a significant fraction of full power under natural circulation, quieting those propulsion plants. The S6G reactor cannot operate at power under natural circulation, but can use it to maintain emergency cooling while shut down.
By the nature of natural circulation, fluids do not typically move very fast, but this is not necessarily bad, as high flow rates are not essential to safe and effective reactor operation. In modern design nuclear reactors, flow reversal is almost impossible. All nuclear reactors, even ones designed to primarily use natural circulation as the main method of fluid circulation, have pumps that can circulate the fluid in the case that natural circulation is not sufficient.
Mathematical models of convection
A number of dimensionless terms have been derived to describe and predict convection, including the Archimedes number, Grashof number, Richardson number, and the Rayleigh number.
In cases of mixed convection (natural and forced occurring together) one would often like to know how much of the convection is due to external constraints, such as the fluid velocity in the pump, and how much is due to natural convection occurring in the system.
The relative magnitudes of the Grashof number and the square of the Reynolds number determine which form of convection dominates. If $\mathrm{Gr}/\mathrm{Re}^2 \gg 1$, forced convection may be neglected, whereas if $\mathrm{Gr}/\mathrm{Re}^2 \ll 1$, natural convection may be neglected. If the ratio, known as the Richardson number, is approximately one, then both forced and natural convection need to be taken into account.
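The following sketch applies this criterion numerically. The Grashof and Reynolds numbers used as inputs, and the cutoff of one order of magnitude, are illustrative assumptions rather than fixed conventions.

```python
def richardson(gr: float, re: float) -> float:
    """Richardson number Ri = Gr / Re^2 for mixed convection."""
    return gr / re**2

def regime(ri: float, tol: float = 10.0) -> str:
    # tol sets how far from Ri ~ 1 we must be before neglecting one effect.
    if ri > tol:
        return "natural convection dominates; forced convection negligible"
    if ri < 1.0 / tol:
        return "forced convection dominates; natural convection negligible"
    return "mixed convection; both effects must be taken into account"

gr, re = 2.0e9, 1.0e4  # assumed Grashof and Reynolds numbers
ri = richardson(gr, re)
print(f"Ri = {ri:.1f}: {regime(ri)}")  # Ri = 20.0 -> natural dominates
```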
Onset
The onset of natural convection is determined by the Rayleigh number (Ra). This dimensionless number is given by

$$\mathrm{Ra} = \frac{\Delta\rho\, g\, L^3}{D\, \mu},$$

where
$\Delta\rho$ is the difference in density between the two parcels of material that are mixing,
$g$ is the local gravitational acceleration,
$L$ is the characteristic length-scale of convection: the depth of the boiling pot, for example,
$D$ is the diffusivity of the characteristic that is causing the convection, and
$\mu$ is the dynamic viscosity.
Natural convection will be more likely and/or more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection, and/or a larger distance through the convecting medium. Convection will be less likely and/or less rapid with more rapid diffusion (thereby diffusing away the gradient that is causing the convection) and/or a more viscous (sticky) fluid.
For thermal convection due to heating from below, as described in the boiling pot above, the equation is modified for thermal expansion and thermal diffusivity. Density variations due to thermal expansion are given by:

$$\Delta\rho = \rho_0\, \beta\, \Delta T,$$

where
$\rho_0$ is the reference density, typically picked to be the average density of the medium,
$\beta$ is the coefficient of thermal expansion, and
$\Delta T$ is the temperature difference across the medium.
The general diffusivity, $D$, is redefined as a thermal diffusivity, $\alpha$.
Inserting these substitutions produces a Rayleigh number that can be used to predict thermal convection:

$$\mathrm{Ra} = \frac{\rho_0\, g\, \beta\, \Delta T\, L^3}{\alpha\, \mu} = \frac{g\, \beta\, \Delta T\, L^3}{\alpha\, \nu},$$

with $\nu = \mu/\rho_0$ the kinematic viscosity.
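As a worked illustration (assumed pot depth and temperature difference, with rough room-temperature properties of water), the sketch below evaluates this thermal Rayleigh number and compares it with the critical value Ra ≈ 1708 for the onset of convection between rigid horizontal plates.

```python
g     = 9.81     # gravitational acceleration, m/s^2
beta  = 2.1e-4   # thermal expansion coefficient of water, 1/K
dT    = 10.0     # assumed temperature difference across the pot, K
L     = 0.10     # assumed depth of the pot, m
alpha = 1.4e-7   # thermal diffusivity of water, m^2/s
nu    = 1.0e-6   # kinematic viscosity of water, m^2/s

Ra = g * beta * dT * L**3 / (alpha * nu)
print(f"Ra = {Ra:.3g}")  # ~1.5e8, far above the critical value
print("convection expected" if Ra > 1708 else "conduction only")
```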
Turbulence
The tendency of a particular naturally convective system towards turbulence relies on the Grashof number (Gr),

$$\mathrm{Gr} = \frac{g\, \beta\, \Delta T\, L^3}{\nu^2}.$$
In very sticky, viscous fluids (large ν), fluid motion is restricted, and natural convection will be non-turbulent.
Following the treatment of the previous subsection, the typical fluid velocity is of the order of $\sqrt{g\,\beta\,\Delta T\,L}$, up to a numerical factor depending on the geometry of the system. Therefore, the Grashof number can be thought of as a Reynolds number with the velocity of natural convection replacing the velocity in the Reynolds number's formula. However, in practice, when referring to the Reynolds number, it is understood that one is considering forced convection, and the velocity is taken as the velocity dictated by external constraints (see below).
Behavior
The Grashof number can be formulated for natural convection occurring due to a concentration gradient, sometimes termed thermo-solutal convection. In this case, a concentration of hot fluid diffuses into a cold fluid, in much the same way that ink poured into a container of water diffuses to dye the entire space. Then:

$$\mathrm{Gr} = \frac{g\, \beta^{*}\, \Delta C\, L^3}{\nu^2},$$

where $\beta^{*}$ is the solutal expansion coefficient and $\Delta C$ is the concentration difference.
Natural convection is highly dependent on the geometry of the hot surface; various correlations exist in order to determine the heat transfer coefficient.
A general correlation that applies for a variety of geometries is
The value of f4(Pr) is calculated using the following formula
Nu is the Nusselt number; the values of Nu0 and the characteristic length used to calculate Ra depend on the geometry.
Natural convection from a vertical plate
One example of natural convection is heat transfer from an isothermal vertical plate immersed in a fluid, causing the fluid to move parallel to the plate. This will occur in any system wherein the density of the moving fluid varies with position. These phenomena will only be of significance when the moving fluid is minimally affected by forced convection.
When the flow of fluid is a result of heating, the following correlations can be used, assuming the fluid is an ideal diatomic gas, is adjacent to a vertical plate at constant temperature, and the flow of the fluid is completely laminar.
Nu_m = 0.478 (Gr^0.25)
Mean Nusselt number = Nu_m = h_m L / k
where
h_m = mean coefficient applicable between the lower edge of the plate and any point in a distance L (W/m²·K)
L = height of the vertical surface (m)
k = thermal conductivity (W/m·K)
Grashof number = Gr = g L³ (t_s − t_∞) / (v² T)
where
g = gravitational acceleration (m/s²)
L = distance above the lower edge (m)
t_s = temperature of the wall (K)
t_∞ = fluid temperature outside the thermal boundary layer (K)
v = kinematic viscosity of the fluid (m²/s)
T = absolute temperature (K)
When the flow is turbulent different correlations involving the Rayleigh Number (a function of both the Grashof number and the Prandtl number) must be used.
Note that the above equation differs from the usual expression for the Grashof number because the value of the thermal expansion coefficient β has been replaced by its approximation 1/T, which applies for ideal gases only (a reasonable approximation for air at ambient pressure).
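The sketch below evaluates this laminar vertical-plate correlation for air. The plate height, wall and ambient temperatures, and air properties are assumed illustrative values.

```python
g    = 9.81     # gravitational acceleration, m/s^2
L    = 0.5      # assumed plate height, m
ts   = 330.0    # assumed wall temperature, K
tinf = 300.0    # assumed ambient air temperature, K
T    = 0.5 * (ts + tinf)  # film temperature for evaluating properties, K
nu   = 1.6e-5   # kinematic viscosity of air, m^2/s (rough value)
k    = 0.026    # thermal conductivity of air, W/(m K) (rough value)

# Ideal-gas Grashof number with beta ~ 1/T, as in the text above.
Gr = g * L**3 * (ts - tinf) / (nu**2 * T)
Nu = 0.478 * Gr**0.25        # laminar correlation quoted above
h  = Nu * k / L              # mean heat transfer coefficient
print(f"Gr = {Gr:.3g}, Nu_m = {Nu:.1f}, h_m = {h:.2f} W/(m^2 K)")
```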
Pattern formation
Convection, especially Rayleigh–Bénard convection, where the convecting fluid is contained by two rigid horizontal plates, is a convenient example of a pattern-forming system.
When heat is fed into the system from one direction (usually below), at sufficiently small heat flux it merely diffuses (conducts) from below upward, without causing fluid flow. As the heat flow is increased, above a critical value of the Rayleigh number, the system undergoes a bifurcation from the stable conducting state to the convecting state, where bulk motion of the fluid due to heat begins. If fluid parameters other than density do not depend significantly on temperature, the flow profile is symmetric, with the same volume of fluid rising as falling. This is known as Boussinesq convection.
As the temperature difference between the top and bottom of the fluid becomes higher, significant differences in fluid parameters other than density may develop in the fluid due to temperature. An example of such a parameter is viscosity, which may begin to significantly vary horizontally across layers of fluid. This breaks the symmetry of the system, and generally changes the pattern of up- and down-moving fluid from stripes to hexagons, as seen at right. Such hexagons are one example of a convection cell.
As the Rayleigh number is increased even further above the value where convection cells first appear, the system may undergo other bifurcations, and other more complex patterns, such as spirals, may begin to appear.
See also
References
External links
Fluid mechanics
Physical phenomena | Convection | [
"Physics",
"Chemistry",
"Engineering"
] | 6,605 | [
"Transport phenomena",
"Physical phenomena",
"Convection",
"Civil engineering",
"Thermodynamics",
"Fluid mechanics"
] |
47,641 | https://en.wikipedia.org/wiki/Standard%20Model | The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
Although the Standard Model is believed to be theoretically self-consistent and has demonstrated some success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain why there is more matter than anti-matter, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses.
The development of the Standard Model was driven by theoretical and experimental particle physicists alike. The Standard Model is a paradigm of a quantum field theory for theorists, exhibiting a wide range of phenomena, including spontaneous symmetry breaking, anomalies, and non-perturbative behavior. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations.
Historical background
In 1928, Paul Dirac introduced the Dirac equation, which implied the existence of antimatter.
In 1954, Yang Chen-Ning and Robert Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to nonabelian groups to provide an explanation for strong interactions. In 1957, Chien-Shiung Wu demonstrated parity was not conserved in the weak interaction.
In 1961, Sheldon Glashow combined the electromagnetic and weak interactions. In 1964, Murray Gell-Mann and George Zweig introduced quarks and that same year Oscar W. Greenberg implicitly introduced color charge of quarks. In 1967 Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak interaction, giving it its modern form.
In 1970, Sheldon Glashow, John Iliopoulos, and Luciano Maiani introduced the GIM mechanism, predicting the charm quark. In 1973, Gross and Wilczek, and independently Politzer, discovered that non-Abelian gauge theories, like the color theory of the strong force, have asymptotic freedom. In 1976, Martin Perl discovered the tau lepton at SLAC. In 1977, a team led by Leon Lederman at Fermilab discovered the bottom quark.
The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions, i.e. the quarks and leptons.
After the neutral weak currents caused by Z boson exchange were discovered at CERN in 1973, the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The W± and Z0 bosons were discovered experimentally in 1983, and the ratio of their masses was found to be as the Standard Model predicted.
The theory of the strong interaction (i.e. quantum chromodynamics, QCD), to which many contributed, acquired its modern form in 1973–74 when asymptotic freedom was proposed (a development that made QCD the main focus of theoretical research) and experiments confirmed that the hadrons were composed of fractionally charged quarks.
The term "Standard Model" was introduced by Abraham Pais and Sam Treiman in 1975, with reference to the electroweak theory with four quarks. Steven Weinberg, has since claimed priority, explaining that he chose the term Standard Model out of a sense of modesty and used it in 1973 during a talk in Aix-en-Provence in France.
Particle content
The Standard Model includes members of several classes of elementary particles, which in turn can be distinguished by other characteristics, such as color charge.
All particles can be summarized as follows:
Fermions
The Standard Model includes 12 elementary particles of spin 1/2, known as fermions. Fermions respect the Pauli exclusion principle, meaning that two identical fermions cannot simultaneously occupy the same quantum state in the same atom. Each fermion has a corresponding antiparticle: a particle that has corresponding properties with the exception of opposite charges. Fermions are classified based on how they interact, which is determined by the charges they carry, into two groups: quarks and leptons. Within each group, pairs of particles that exhibit similar physical behaviors are then grouped into generations (see the table). Each member of a generation has a greater mass than the corresponding particle of generations prior. Thus, there are three generations of quarks and leptons. As first-generation particles do not decay, they comprise all of ordinary (baryonic) matter. Specifically, all atoms consist of electrons orbiting around the atomic nucleus, ultimately constituted of up and down quarks. On the other hand, second- and third-generation charged particles decay with very short half-lives and can only be observed in high-energy environments. Neutrinos of all generations also do not decay, and pervade the universe, but rarely interact with baryonic matter.
There are six quarks: up, down, charm, strange, top, and bottom. Quarks carry color charge, and hence interact via the strong interaction. The color confinement phenomenon results in quarks being strongly bound together such that they form color-neutral composite particles called hadrons; quarks cannot individually exist and must always bind with other quarks. Hadrons can contain either a quark-antiquark pair (mesons) or three quarks (baryons). The lightest baryons are the nucleons: the proton and neutron. Quarks also carry electric charge and weak isospin, and thus interact with other fermions through electromagnetism and weak interaction. The six leptons consist of the electron, electron neutrino, muon, muon neutrino, tau, and tau neutrino. The leptons do not carry color charge, and do not respond to strong interaction. The charged leptons carry an electric charge of −1 e, while the three neutrinos carry zero electric charge. Thus, the neutrinos' motions are influenced by only the weak interaction and gravity, making them difficult to observe.
Gauge bosons
The Standard Model includes 4 kinds of gauge bosons of spin 1, with bosons being quantum particles containing an integer spin. The gauge bosons are defined as force carriers, as they are responsible for mediating the fundamental interactions. The Standard Model explains the four fundamental forces as arising from the interactions, with fermions exchanging virtual force carrier particles, thus mediating the forces. At a macroscopic scale, this manifests as a force. As a result, they do not follow the Pauli exclusion principle that constrains fermions; bosons do not have a theoretical limit on their spatial density. The types of gauge bosons are described below.
Electromagnetism: Photons mediate the electromagnetic force, responsible for interactions between electrically charged particles. The photon is massless and is described by the theory of quantum electrodynamics (QED).
Strong Interactions: Gluons mediate the strong interactions, which binds quarks to each other by influencing the color charge, with the interactions being described in the theory of quantum chromodynamics (QCD). They have no mass, and there are eight distinct gluons, with each being denoted through a color-anticolor charge combination (e.g. red–antigreen). As gluons have an effective color charge, they can also interact amongst themselves.
Weak Interactions: The W+, W−, and Z gauge bosons mediate the weak interactions between all fermions, being responsible for radioactivity. They contain mass, with the Z having more mass than the W±. The weak interactions involving the W± act only on left-handed particles and right-handed antiparticles. The W± carries an electric charge of +1 and −1 and couples to the electromagnetic interaction. The electrically neutral Z boson interacts with both left-handed particles and right-handed antiparticles. These three gauge bosons along with the photons are grouped together, as collectively mediating the electroweak interaction.
Gravity: It is currently unexplained in the Standard Model, as the hypothetical mediating particle graviton has been proposed, but not observed. This is due to the incompatibility of quantum mechanics and Einstein's theory of general relativity, regarded as being the best explanation for gravity. In general relativity, gravity is explained as being the geometric curving of spacetime.
The Feynman diagram calculations, which are a graphical representation of the perturbation theory approximation, invoke "force mediating particles", and when applied to analyze high-energy scattering experiments are in reasonable agreement with the data. However, perturbation theory (and with it the concept of a "force-mediating particle") fails in other situations. These include low-energy quantum chromodynamics, bound states, and solitons. The interactions between all the particles described by the Standard Model are summarized by the diagrams on the right of this section.
Higgs boson
The Higgs particle is a massive scalar elementary particle theorized by Peter Higgs (and others) in 1964, when he showed that Goldstone's 1962 theorem (generic continuous symmetry, which is spontaneously broken) provides a third polarisation of a massive vector field. Hence, Goldstone's original scalar doublet, the massive spin-zero particle, was proposed as the Higgs boson, and is a key building block in the Standard Model. It has no intrinsic spin, and for that reason is classified as a boson with spin-0.
The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. Elementary-particle masses and the differences between electromagnetism (mediated by the photon) and the weak force (mediated by the W and Z bosons) are critical to many aspects of the structure of microscopic (and hence macroscopic) matter. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. As the Higgs boson is massive, it must interact with itself.
Because the Higgs boson is a very massive particle and also decays almost immediately when created, only a very high-energy particle accelerator can observe and record it. Experiments to confirm and determine the nature of the Higgs boson using the Large Hadron Collider (LHC) at CERN began in early 2010 and were performed at Fermilab's Tevatron until its closure in late 2011. Mathematical consistency of the Standard Model requires that any mechanism capable of generating the masses of elementary particles must become visible at energies above 1.4 TeV; therefore, the LHC (designed to collide two proton beams) was built to answer the question of whether the Higgs boson actually exists.
On 4 July 2012, two of the experiments at the LHC (ATLAS and CMS) both reported independently that they had found a new particle with a mass of about 125 GeV/c² (about 133 proton masses, on the order of 10⁻²⁵ kg), which is "consistent with the Higgs boson". On 13 March 2013, it was confirmed to be the searched-for Higgs boson.
Theoretical aspects
Construction of the Standard Model Lagrangian
Technically, quantum field theory provides the mathematical framework for the Standard Model, in which a Lagrangian controls the dynamics and kinematics of the theory. Each kind of particle is described in terms of a dynamical field that pervades space-time.
The construction of the Standard Model proceeds following the modern method of constructing most field theories: by first postulating a set of symmetries of the system, and then by writing down the most general renormalizable Lagrangian from its particle (field) content that observes these symmetries.
The global Poincaré symmetry is postulated for all relativistic quantum field theories. It consists of the familiar translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity. The local SU(3) × SU(2) × U(1) gauge symmetry is an internal symmetry that essentially defines the Standard Model. Roughly, the three factors of the gauge symmetry give rise to the three fundamental interactions. The fields fall into different representations of the various symmetry groups of the Standard Model (see table). Upon writing the most general Lagrangian, one finds that the dynamics depends on 19 parameters, whose numerical values are established by experiment. The parameters are summarized in the table (made visible by clicking "show") above.
Quantum chromodynamics sector
The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons, which is a Yang–Mills gauge theory with SU(3) symmetry, generated by $T^a = \lambda^a/2$. Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by

$$\mathcal{L}_{\mathrm{QCD}} = \bar{\psi}\, i\gamma^\mu D_\mu \psi - \tfrac{1}{4}\, G^a_{\mu\nu} G^{a\,\mu\nu},$$

where $\psi$ is a three component column vector of Dirac spinors, each element of which refers to a quark field with a specific color charge (i.e. red, blue, and green) and summation over flavor (i.e. up, down, strange, etc.) is implied.
The gauge covariant derivative of QCD is defined by $D_\mu \equiv \partial_\mu - i g_s \tfrac{\lambda^a}{2} G^a_\mu$, where
$\gamma^\mu$ are the Dirac matrices,
$G^a_\mu$ is the 8-component ($a = 1, \dots, 8$) SU(3) gauge field,
$\lambda^a$ are the 3 × 3 Gell-Mann matrices, generators of the SU(3) color group,
$G^a_{\mu\nu}$ represents the gluon field strength tensor, and
$g_s$ is the strong coupling constant.
The QCD Lagrangian is invariant under local SU(3) gauge transformations; i.e., transformations of the form $\psi \rightarrow \psi' = U\psi$, where $U = e^{i\theta^a(x)\,\lambda^a/2}$ is a 3 × 3 unitary matrix with determinant 1, making it a member of the group SU(3), and $\theta^a(x)$ is an arbitrary function of spacetime.
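To sketch why this works (a standard textbook step, stated here under the conventions above): gauge invariance requires the covariant derivative of the quark field to transform exactly like the field itself, which in turn fixes the transformation law of the gluon field. Writing $A_\mu \equiv \tfrac{\lambda^a}{2} G^a_\mu$,

$$D_\mu \psi \;\to\; U\,(D_\mu \psi) \quad\Longrightarrow\quad A_\mu \;\to\; U A_\mu U^\dagger - \frac{i}{g_s}\,(\partial_\mu U)\, U^\dagger,$$

so the inhomogeneous term cancels the derivative acting on $U$, leaving the Lagrangian unchanged.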
Electroweak sector
The electroweak sector is a Yang–Mills gauge theory with the symmetry group $\mathrm{U}(1)_Y \times \mathrm{SU}(2)_L$,

$$\mathcal{L}_{\mathrm{EW}} = \sum_j \Big[\, \bar{Q}_{Lj}\, i\gamma^\mu D_\mu Q_{Lj} + \bar{u}_{Rj}\, i\gamma^\mu D_\mu u_{Rj} + \bar{d}_{Rj}\, i\gamma^\mu D_\mu d_{Rj} + \bar{\ell}_{Lj}\, i\gamma^\mu D_\mu \ell_{Lj} + \bar{e}_{Rj}\, i\gamma^\mu D_\mu e_{Rj} \,\Big] - \tfrac{1}{4} W^a_{\mu\nu} W^{a\,\mu\nu} - \tfrac{1}{4} B_{\mu\nu} B^{\mu\nu},$$

where the subscript $j$ sums over the three generations of fermions; $Q_{Lj}$, $u_{Rj}$, and $d_{Rj}$ are the left-handed doublet, right-handed singlet up type, and right-handed singlet down type quark fields; and $\ell_{Lj}$ and $e_{Rj}$ are the left-handed doublet and right-handed singlet lepton fields.
The electroweak gauge covariant derivative is defined as $D_\mu \equiv \partial_\mu - i g' \tfrac{1}{2} Y_W B_\mu - i g \tfrac{1}{2} \vec{\tau}_L \cdot \vec{W}_\mu$, where
$B_\mu$ is the U(1) gauge field,
$Y_W$ is the weak hypercharge – the generator of the U(1) group,
$\vec{W}_\mu$ is the 3-component SU(2) gauge field,
$\vec{\tau}_L$ are the Pauli matrices – infinitesimal generators of the SU(2) group – with subscript L to indicate that they only act on left-chiral fermions,
$g'$ and $g$ are the U(1) and SU(2) coupling constants respectively,
$W^{a\,\mu\nu}$ ($a = 1, 2, 3$) and $B^{\mu\nu}$ are the field strength tensors for the weak isospin and weak hypercharge fields.
Notice that the addition of fermion mass terms into the electroweak Lagrangian is forbidden, since terms of the form $m\bar{\psi}\psi$ do not respect gauge invariance. Neither is it possible to add explicit mass terms for the U(1) and SU(2) gauge fields. The Higgs mechanism is responsible for the generation of the gauge boson masses, and the fermion masses result from Yukawa-type interactions with the Higgs field.
Higgs sector
In the Standard Model, the Higgs field is an SU(2) doublet of complex scalar fields with four degrees of freedom:

$$\varphi = \begin{pmatrix} \varphi^+ \\ \varphi^0 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} \varphi_1 + i\varphi_2 \\ \varphi_3 + i\varphi_4 \end{pmatrix},$$

where the superscripts + and 0 indicate the electric charge of the components. The weak hypercharge of both components is 1. Before symmetry breaking, the Higgs Lagrangian is

$$\mathcal{L}_H = \left( D_\mu \varphi \right)^\dagger \left( D^\mu \varphi \right) - V(\varphi),$$
where $D_\mu$ is the electroweak gauge covariant derivative defined above and $V(\varphi)$ is the potential of the Higgs field. The square of the covariant derivative leads to three and four point interactions between the electroweak gauge fields $W^a_\mu$ and $B_\mu$ and the scalar field $\varphi$. The scalar potential is given by

$$V(\varphi) = -\mu^2\, \varphi^\dagger \varphi + \lambda \left( \varphi^\dagger \varphi \right)^2,$$

where $\mu^2 > 0$, so that $\varphi$ acquires a non-zero vacuum expectation value, which generates masses for the electroweak gauge fields (the Higgs mechanism), and $\lambda > 0$, so that the potential is bounded from below. The quartic term describes self-interactions of the scalar field $\varphi$.
The minimum of the potential is degenerate with an infinite number of equivalent ground state solutions, which occurs when $\varphi^\dagger \varphi = \tfrac{\mu^2}{2\lambda}$. It is possible to perform a gauge transformation on $\varphi$ such that the ground state is transformed to a basis where $\varphi_1 = \varphi_2 = \varphi_4 = 0$ and $\varphi_3 = \tfrac{\mu}{\sqrt{\lambda}}$. This breaks the symmetry of the ground state. The expectation value of $\varphi$ now becomes

$$\langle \varphi \rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v \end{pmatrix}, \qquad v = \frac{\mu}{\sqrt{\lambda}},$$

where $v$ has units of mass and sets the scale of electroweak physics. This is the only dimensional parameter of the Standard Model and has a measured value of ~246 GeV.
After symmetry breaking, the masses of the W and Z are given by $M_W = \tfrac{1}{2} g v$ and $M_Z = \tfrac{1}{2} \sqrt{g^2 + g'^2}\; v$, which can be viewed as predictions of the theory. The photon remains massless. The mass of the Higgs boson is $M_H = \sqrt{2\mu^2} = \sqrt{2\lambda}\, v$. Since $\mu$ and $\lambda$ are free parameters, the Higgs's mass could not be predicted beforehand and had to be determined experimentally.
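As a numerical sanity check (a sketch with assumed inputs: $g$ and $g'$ below are rough coupling values at the Z mass scale, and $v$ is the measured electroweak scale):

```python
import math

v     = 246.22  # electroweak vacuum expectation value, GeV
g_su2 = 0.652   # assumed SU(2) coupling at the Z scale
g_u1  = 0.357   # assumed U(1) hypercharge coupling at the Z scale

m_w = 0.5 * g_su2 * v                          # ~80 GeV
m_z = 0.5 * math.hypot(g_su2, g_u1) * v        # ~91 GeV
sin2_theta_w = g_u1**2 / (g_su2**2 + g_u1**2)  # ~0.23

print(f"M_W ~ {m_w:.1f} GeV, M_Z ~ {m_z:.1f} GeV, "
      f"sin^2(theta_W) ~ {sin2_theta_w:.3f}")
```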
Yukawa sector
The Yukawa interaction terms are:

$$\mathcal{L}_{\mathrm{Yukawa}} = (Y_u)_{mn}\, (\bar{Q}_L)_m\, \tilde{\varphi}\, (u_R)_n + (Y_d)_{mn}\, (\bar{Q}_L)_m\, \varphi\, (d_R)_n + (Y_e)_{mn}\, (\bar{\ell}_L)_m\, \varphi\, (e_R)_n + \mathrm{h.c.},$$

where $Y_u$, $Y_d$, and $Y_e$ are 3 × 3 matrices of Yukawa couplings, with the $mn$ term giving the coupling of the generations $m$ and $n$, and h.c. means Hermitian conjugate of preceding terms. The fields $Q_L$ and $\ell_L$ are left-handed quark and lepton doublets. Likewise, $u_R$, $d_R$, and $e_R$ are right-handed up-type quark, down-type quark, and lepton singlets. Finally $\varphi$ is the Higgs doublet and $\tilde{\varphi} = i\tau_2 \varphi^*$ is its charge conjugate state.
The Yukawa terms are invariant under the SU(2) × U(1) gauge symmetry of the Standard Model and generate masses for all fermions after spontaneous symmetry breaking.
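To make the mass generation explicit (a sketch using the conventions above, for a single generation with coupling $y_f$): substituting the vacuum expectation value $\langle\varphi\rangle$ into a Yukawa term produces a Dirac mass term,

$$\mathcal{L}_{\mathrm{Yukawa}} \;\supset\; \frac{y_f\, v}{\sqrt{2}}\, \bar{f}_L f_R + \mathrm{h.c.} \quad\Longrightarrow\quad m_f = \frac{y_f\, v}{\sqrt{2}},$$

so each fermion mass is set by its Yukawa coupling times the electroweak scale.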
Fundamental interactions
The Standard Model describes three of the four fundamental interactions in nature; only gravity remains unexplained. In the Standard Model, such an interaction is described as an exchange of bosons between the objects affected, such as a photon for the electromagnetic force and a gluon for the strong interaction. Those particles are called force carriers or messenger particles.
Gravity
Despite being perhaps the most familiar fundamental interaction, gravity is not described by the Standard Model, due to contradictions that arise when combining general relativity, the modern theory of gravity, and quantum mechanics. However, gravity is so weak at microscopic scales, that it is essentially unmeasurable. The graviton is postulated to be the mediating particle, but has not yet been proved to exist.
Electromagnetism
Electromagnetism is the only long-range force in the Standard Model. It is mediated by photons and couples to electric charge. Electromagnetism is responsible for a wide range of phenomena including atomic electron shell structure, chemical bonds, electric circuits and electronics. Electromagnetic interactions in the Standard Model are described by quantum electrodynamics.
Weak nuclear force
The weak interaction is responsible for various forms of particle decay, such as beta decay. It is weak and short-range, due to the fact that the weak mediating particles, W and Z bosons, have mass. W bosons have electric charge and mediate interactions that change the particle type (referred to as flavor) and charge. Interactions mediated by W bosons are charged current interactions. Z bosons are neutral and mediate neutral current interactions, which do not change particle flavor. Thus Z bosons are similar to the photon, aside from them being massive and interacting with the neutrino. The weak interaction is also the only interaction to violate parity and CP. Parity violation is maximal for charged current interactions, since the W boson interacts exclusively with left-handed fermions and right-handed antifermions.
In the Standard Model, the weak force is understood in terms of the electroweak theory, which states that the weak and electromagnetic interactions become united into a single electroweak interaction at high energies.
Strong nuclear force
The strong nuclear force is responsible for hadronic and nuclear binding. It is mediated by gluons, which couple to color charge. Since gluons themselves have color charge, the strong force exhibits confinement and asymptotic freedom. Confinement means that only color-neutral particles can exist in isolation, therefore quarks can only exist in hadrons and never in isolation, at low energies. Asymptotic freedom means that the strong force becomes weaker, as the energy scale increases. The strong force overpowers the electrostatic repulsion of protons and quarks in nuclei and hadrons respectively, at their respective scales.
While quarks are bound in hadrons by the fundamental strong interaction, which is mediated by gluons, nucleons are bound by an emergent phenomenon termed the residual strong force or nuclear force. This interaction is mediated by mesons, such as the pion. The color charges inside the nucleon cancel out, meaning most of the gluon and quark fields cancel out outside of the nucleon. However, some residue is "leaked", which appears as the exchange of virtual mesons, that causes the attractive force between nucleons. The (fundamental) strong interaction is described by quantum chromodynamics, which is a component of the Standard Model.
Tests and predictions
The Standard Model predicted the existence of the W and Z bosons, gluon, top quark and charm quark, and predicted many of their properties before these particles were observed. The predictions were experimentally confirmed with good precision.
The Standard Model also predicted the existence of the Higgs boson, which was found in 2012 at the Large Hadron Collider, the final fundamental particle predicted by the Standard Model to be experimentally confirmed.
Challenges
Self-consistency of the Standard Model (currently formulated as a non-abelian gauge theory quantized through path-integrals) has not been mathematically proved. While regularized versions useful for approximate computations (for example lattice gauge theory) exist, it is not known whether they converge (in the sense of S-matrix elements) in the limit that the regulator is removed. A key question related to the consistency is the Yang–Mills existence and mass gap problem.
Experiments indicate that neutrinos have mass, which the classic Standard Model did not allow. To accommodate this finding, the classic Standard Model can be modified to include neutrino mass, although it is not obvious exactly how this should be done.
If one insists on using only Standard Model particles, this can be achieved by adding a non-renormalizable interaction of leptons with the Higgs boson. On a fundamental level, such an interaction emerges in the seesaw mechanism where heavy right-handed neutrinos are added to the theory.
This is natural in the left-right symmetric extension of the Standard Model and in certain grand unified theories. As long as new physics appears below or around 10¹⁴ GeV, the neutrino masses can be of the right order of magnitude.
Theoretical and experimental research has attempted to extend the Standard Model into a unified field theory or a theory of everything, a complete theory explaining all physical phenomena including constants. Inadequacies of the Standard Model that motivate such research include:
The model does not explain gravitation, although physical confirmation of a theoretical particle known as a graviton would account for it to a degree. Though it addresses strong and electroweak interactions, the Standard Model does not consistently explain the canonical theory of gravitation, general relativity, in terms of quantum field theory. The reason for this is, among other things, that quantum field theories of gravity generally break down before reaching the Planck scale. As a consequence, we have no reliable theory for the very early universe.
Some physicists consider it to be ad hoc and inelegant, requiring 19 numerical constants whose values are unrelated and arbitrary. Although the Standard Model, as it now stands, can explain why neutrinos have masses, the specifics of neutrino mass are still unclear. It is believed that explaining neutrino mass will require an additional 7 or 8 constants, which are also arbitrary parameters.
The Higgs mechanism gives rise to the hierarchy problem if some new physics (coupled to the Higgs) is present at high energy scales. In these cases, in order for the weak scale to be much smaller than the Planck scale, severe fine tuning of the parameters is required; there are, however, other scenarios that include quantum gravity in which such fine tuning can be avoided. There are also issues of quantum triviality, which suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar particles.
The model is inconsistent with the emerging Lambda-CDM model of cosmology. Contentions include the absence of an explanation in the Standard Model of particle physics for the observed amount of cold dark matter (CDM) and its contributions to dark energy, which are many orders of magnitude too large. It is also difficult to accommodate the observed predominance of matter over antimatter (matter/antimatter asymmetry). The isotropy and homogeneity of the visible universe over large distances seems to require a mechanism like cosmic inflation, which would also constitute an extension of the Standard Model.
Currently, no proposed theory of everything has been widely accepted or verified.
See also
Yang–Mills theory
Fundamental interaction:
Quantum electrodynamics
Strong interaction: Color charge, Quantum chromodynamics, Quark model
Weak interaction: Electroweak interaction, Fermi's interaction, Weak hypercharge, Weak isospin
Gauge theory: Introduction to gauge theory
Generation
Higgs mechanism: Higgs boson, Alternatives to the Standard Higgs Model
Lagrangian
Open questions: CP violation, Neutrino masses, QCD matter, Quantum triviality
Quantum field theory
Standard Model: Mathematical formulation of, Physics beyond the Standard Model
Electron electric dipole moment
Notes
References
Further reading
Introductory textbooks
Advanced textbooks
Highlights the gauge theory aspects of the Standard Model.
Highlights dynamical and phenomenological aspects of the Standard Model.
Highlights group-theoretical aspects of the Standard Model.
Journal articles
External links
"The Standard Model explained in Detail by CERN's John Ellis" omega tau podcast.
The Standard Model on the CERN website explains how the basic building blocks of matter interact, governed by four fundamental forces.
Particle Physics: Standard Model, Leonard Susskind lectures (2010).
Concepts in physics
Particle physics | Standard Model | [
"Physics"
] | 5,764 | [
"Standard Model",
"Particle physics",
"nan"
] |
47,651 | https://en.wikipedia.org/wiki/Reproducibility | Reproducibility, closely related to replicability and repeatability, is a major principle underpinning the scientific method. For the findings of a study to be reproducible means that results obtained by an experiment or an observational study or in a statistical analysis of a data set should be achieved again with a high degree of reliability when the study is replicated. There are different kinds of replication but typically replication studies involve different researchers using the same methodology. Only after one or several such successful replications should a result be recognized as scientific knowledge.
With a narrower scope, reproducibility has been defined in computational sciences as having the following quality: the results should be documented by making all data and code available in such a way that the computations can be executed again with identical results.
In recent decades, there has been a rising concern that many published scientific results fail the test of reproducibility, evoking a reproducibility or replication crisis.
History
The first to stress the importance of reproducibility in science was the Anglo-Irish chemist Robert Boyle, in England in the 17th century. Boyle's air pump was designed to generate and study vacuum, which at the time was a very controversial concept. Indeed, distinguished philosophers such as René Descartes and Thomas Hobbes denied the very possibility of vacuum existence. Historians of science Steven Shapin and Simon Schaffer, in their 1985 book Leviathan and the Air-Pump, describe the debate between Boyle and Hobbes, ostensibly over the nature of vacuum, as fundamentally an argument about how useful knowledge should be gained. Boyle, a pioneer of the experimental method, maintained that the foundations of knowledge should be constituted by experimentally produced facts, which can be made believable to a scientific community by their reproducibility. By repeating the same experiment over and over again, Boyle argued, the certainty of fact will emerge.
The air pump, which in the 17th century was a complicated and expensive apparatus to build, also led to one of the first documented disputes over the reproducibility of a particular scientific phenomenon. In the 1660s, the Dutch scientist Christiaan Huygens built his own air pump in Amsterdam, the first one outside the direct management of Boyle and his assistant at the time Robert Hooke. Huygens reported an effect he termed "anomalous suspension", in which water appeared to levitate in a glass jar inside his air pump (in fact suspended over an air bubble), but Boyle and Hooke could not replicate this phenomenon in their own pumps. As Shapin and Schaffer describe, "it became clear that unless the phenomenon could be produced in England with one of the two pumps available, then no one in England would accept the claims Huygens had made, or his competence in working the pump". Huygens was finally invited to England in 1663, and under his personal guidance Hooke was able to replicate anomalous suspension of water. Following this Huygens was elected a Foreign Member of the Royal Society. However, Shapin and Schaffer also note that "the accomplishment of replication was dependent on contingent acts of judgment. One cannot write down a formula saying when replication was or was not achieved".
The philosopher of science Karl Popper noted briefly in his famous 1934 book The Logic of Scientific Discovery that "non-reproducible single occurrences are of no significance to science". The statistician Ronald Fisher wrote in his 1935 book The Design of Experiments, which set the foundations for the modern scientific practice of hypothesis testing and statistical significance, that "we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us statistically significant results". Such assertions express a common dogma in modern science that reproducibility is a necessary condition (although not necessarily sufficient) for establishing a scientific fact, and in practice for establishing scientific authority in any field of knowledge. However, as noted above by Shapin and Schaffer, this dogma is not well formulated quantitatively (unlike statistical significance, for instance), and therefore it is not explicitly established how many times a fact must be replicated to be considered reproducible.
Terminology
Replicability and repeatability are related terms broadly or loosely synonymous with reproducibility (for example, among the general public), but they are often usefully differentiated in more precise senses, as follows.
Two major steps are naturally distinguished in connection with reproducibility of experimental or observational studies:
When new data are obtained in the attempt to achieve it, the term replicability is often used, and the new study is a replication or replicate of the original one. For obtaining the same results when analyzing the data set of the original study again with the same procedures, many authors use the term reproducibility in a narrow, technical sense coming from its use in computational research.
Repeatability is related to the repetition of the experiment within the same study by the same researchers.
Reproducibility in the original, wide sense is only acknowledged if a replication performed by an independent researcher team is successful.
The terms reproducibility and replicability sometimes appear even in the scientific literature with reversed meaning, as different research fields settled on their own definitions for the same terms.
Measures of reproducibility and repeatability
In chemistry, the terms reproducibility and repeatability are used with a specific quantitative meaning. In inter-laboratory experiments, a concentration or other quantity of a chemical substance is measured repeatedly in different laboratories to assess the variability of the measurements. Then, the standard deviation of the difference between two values obtained within the same laboratory is called repeatability. The standard deviation for the difference between two measurements from different laboratories is called reproducibility.
These measures are related to the more general concept of variance components in metrology.
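As a rough illustration of how these two quantities can be estimated from inter-laboratory data, here is a minimal Python sketch. The data layout, the made-up numbers, and the one-way pooling scheme are illustrative assumptions rather than the normative ISO 5725 procedure; note also that the "difference between two values" convention quoted above is larger than these per-result standard deviations by a factor of √2.

```python
import statistics

# Hypothetical inter-laboratory study: 3 labs, 3 repeated measurements each.
measurements = {
    "lab_A": [10.1, 10.3, 10.2],
    "lab_B": [10.6, 10.5, 10.7],
    "lab_C": [9.9, 10.0, 10.1],
}
n_rep = 3  # replicates per laboratory

# Repeatability: within-laboratory spread, pooled across laboratories.
within = [statistics.variance(vals) for vals in measurements.values()]
s_r = (sum(within) / len(within)) ** 0.5

# Reproducibility: adds the between-laboratory component of the lab means.
means = [statistics.mean(vals) for vals in measurements.values()]
s_L_sq = max(statistics.variance(means) - s_r**2 / n_rep, 0.0)
s_R = (s_L_sq + s_r**2) ** 0.5

print(f"repeatability s_r = {s_r:.3f}, reproducibility s_R = {s_R:.3f}")
```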
Reproducible research
Reproducible research method
The term reproducible research refers to the idea that scientific results should be documented in such a way that their deduction is fully transparent. This requires a detailed description of the methods used to obtain the data
and making the full dataset and the code to calculate the results easily accessible.
This is the essential part of open science.
To make any research project computationally reproducible, general practice involves all data and files being clearly separated, labelled, and documented. All operations should be fully documented and automated as much as practicable, avoiding manual intervention where feasible. The workflow should be designed as a sequence of smaller steps that are combined so that the intermediate outputs from one step directly feed as inputs into the next step. Version control should be used as it lets the history of the project be easily reviewed and allows for the documenting and tracking of changes in a transparent manner.
A basic workflow for reproducible research involves data acquisition, data processing and data analysis. Data acquisition primarily consists of obtaining primary data from a primary source such as surveys, field observations, experimental research, or obtaining data from an existing source. Data processing involves the processing and review of the raw data collected in the first stage, and includes data entry, data manipulation and filtering and may be done using software. The data should be digitized and prepared for data analysis. Data may be analysed with the use of software to interpret or visualise statistics or data to produce the desired results of the research such as quantitative results including figures and tables. The use of software and automation enhances the reproducibility of research methods.
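A minimal sketch of such a staged, automated pipeline in Python follows; the file names, directory layout, and processing steps are invented for illustration, and the raw input file is assumed to already exist. Because each stage's output feeds directly into the next and one command reruns everything, the analysis can be repeated end to end without manual intervention.

```python
from pathlib import Path
import csv

RAW = Path("data/raw/survey.csv")      # stage 1: acquired primary data (assumed present)
CLEAN = Path("data/clean/survey.csv")  # stage 2: processed data
RESULT = Path("results/summary.txt")   # stage 3: analysis output

def process() -> None:
    """Stage 2: drop empty rows; the output feeds directly into analyze()."""
    CLEAN.parent.mkdir(parents=True, exist_ok=True)
    with RAW.open() as src, CLEAN.open("w", newline="") as dst:
        rows = [r for r in csv.reader(src) if any(cell.strip() for cell in r)]
        csv.writer(dst).writerows(rows)

def analyze() -> None:
    """Stage 3: compute a simple summary statistic from the clean data."""
    RESULT.parent.mkdir(parents=True, exist_ok=True)
    with CLEAN.open() as f:
        n = sum(1 for _ in f)
    RESULT.write_text(f"rows analysed: {n}\n")

if __name__ == "__main__":  # a single command reruns the whole pipeline
    process()
    analyze()
```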
There are systems that facilitate such documentation, like the R Markdown language
or the Jupyter notebook.
The Open Science Framework provides a platform and useful tools to support reproducible research.
Reproducible research in practice
Psychology has seen a renewal of internal concerns about irreproducible results (see the entry on replicability crisis for empirical results on success rates of replications). Researchers showed in a 2006 study that, of 141 authors of a publication from the American Psychological Association (APA) empirical articles, 103 (73%) did not respond with their data over a six-month period. In a follow-up study published in 2015, it was found that 246 out of 394 contacted authors of papers in APA journals did not share their data upon request (62%). In a 2012 paper, it was suggested that researchers should publish data along with their works, and a dataset was released alongside as a demonstration. In 2017, an article published in Scientific Data suggested that this may not be sufficient and that the whole analysis context should be disclosed.
In economics, concerns have been raised in relation to the credibility and reliability of published research. In other sciences, reproducibility is regarded as fundamental and is often a prerequisite to research being published; however, in economic sciences it is not seen as a priority of the greatest importance. Most peer-reviewed economic journals do not take any substantive measures to ensure that published results are reproducible; however, the top economics journals have been moving to adopt mandatory data and code archives. There are few or no incentives for researchers to share their data, and authors would have to bear the costs of compiling data into reusable forms. Economic research is often not reproducible as only a portion of journals have adequate disclosure policies for datasets and program code, and even if they do, authors frequently do not comply with them or they are not enforced by the publisher. A study of 599 articles published in 37 peer-reviewed journals revealed that while some journals have achieved significant compliance rates, a significant portion have only partially complied, or not complied at all. On an article level, the average compliance rate was 47.5%; and on a journal level, the average compliance rate was 38%, ranging from 13% to 99%.
A 2018 study published in the journal PLOS ONE found that 14.4% of a sample of public health statistics researchers had shared their data or code or both.
There have been initiatives to improve reporting and hence reproducibility in the medical literature for many years, beginning with the CONSORT initiative, which is now part of a wider initiative, the EQUATOR Network.
This group has recently turned its attention to how better reporting might reduce waste in research, especially biomedical research.
Reproducible research is key to new discoveries in pharmacology. A Phase I discovery will be followed by Phase II reproductions as a drug develops towards commercial production. In recent decades Phase II success has fallen from 28% to 18%. A 2011 study found that 65% of medical studies were inconsistent when re-tested, and only 6% were completely reproducible.
Noteworthy irreproducible results
Hideyo Noguchi became famous for correctly identifying the bacterial agent of syphilis, but also claimed that he could culture this agent in his laboratory. Nobody else has been able to produce this latter result.
In March 1989, University of Utah chemists Stanley Pons and Martin Fleischmann reported the production of excess heat that could only be explained by a nuclear process ("cold fusion"). The report was astounding given the simplicity of the equipment: it was essentially an electrolysis cell containing heavy water and a palladium cathode which rapidly absorbed the deuterium produced during electrolysis. The news media reported on the experiments widely, and it was a front-page item on many newspapers around the world (see science by press conference). Over the next several months others tried to replicate the experiment, but were unsuccessful.
Nikola Tesla claimed as early as 1899 to have used a high-frequency current to light gas-filled lamps from a distance without using wires. In 1904 he built Wardenclyffe Tower on Long Island to demonstrate means to send and receive power without connecting wires. The facility was never fully operational and was not completed due to economic problems, so no attempt to reproduce his first result was ever carried out.
Other examples in which contrary evidence has refuted the original claim:
N-rays, a hypothesized form of radiation subsequently found to be illusory
Polywater, a hypothesized polymerized form of water found to be just water with common contaminations
Stimulus-triggered acquisition of pluripotency, revealed to be the result of fraud
GFAJ-1, a bacterium that could purportedly incorporate arsenic into its DNA in place of phosphorus
MMR vaccine controversy — a study in The Lancet claiming the MMR vaccine caused autism was revealed to be fraudulent
Schön scandal — semiconductor "breakthroughs" revealed to be fraudulent
Power posing — a social psychology phenomenon that went viral after being the subject of a very popular TED talk, but was unable to be replicated in dozens of studies
See also
Metascience
Accuracy
ANOVA gauge R&R
Contingency
Corroboration
Reproducible builds
Falsifiability
Hypothesis
Measurement uncertainty
Pathological science
Pseudoscience
Replication (statistics)
Replication crisis
ReScience C (journal)
Retraction in academic publishing
Tautology
Testability
Verification and validation
References
Further reading
"Science is not irrevocably broken, [epidemiologist John Ioannidis] asserts. It just needs some improvements. "Despite the fact that I've published papers with pretty depressive titles, I'm actually an optimist," Ioannidis says. "I find no other investment of a society that is better placed than science.""
External links
Transparency and Openness Promotion Guidelines from the Center for Open Science
Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results of the National Institute of Standards and Technology
Reproducible papers with artifacts by the CTuning foundation
ReproducibleResearch.net
Measurement
Philosophy of science
Scientific method
Tests
Validity (statistics)
Discovery and invention controversies
Metascience
Statistical reliability | Reproducibility | [
"Physics",
"Mathematics"
] | 2,801 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
47,719 | https://en.wikipedia.org/wiki/Coulomb | The coulomb (symbol: C) is the unit of electric charge in the International System of Units (SI). It is defined to be equal to the electric charge delivered by a 1 ampere current in 1 second. It is used to define the elementary charge e.
Definition
The SI defines the coulomb as "the quantity of electricity carried in 1 second by a current of 1 ampere". The value of the elementary charge e is then defined to be exactly 1.602176634×10⁻¹⁹ C. Since the number of elementary charges in one coulomb is the reciprocal of that value,
one coulomb is approximately 6.241509×10¹⁸ elementary charges and is thus not an integer multiple of the elementary charge.
The ampere was previously defined in terms of the force between two current-carrying wires. The coulomb was originally defined, using that definition of the ampere, as the charge delivered by a current of one ampere in one second (1 A⋅s).
The 2019 redefinition of the ampere and other SI base units fixed the numerical value of the elementary charge when expressed in coulombs and therefore fixed the value of the coulomb when expressed as a multiple of the fundamental charge.
SI prefixes
Like other SI units, the coulomb can be modified by adding a prefix that multiplies it by a power of 10.
Conversions
The magnitude of the electrical charge of one mole of elementary charges (approximately 6.022×10²³, the Avogadro number) is known as a faraday unit of charge (closely related to the Faraday constant). One faraday equals about 96485.33 C. In terms of the Avogadro constant (NA), one coulomb is equal to approximately 1.036×10⁻⁵ mol × NA elementary charges.
Every farad of capacitance can hold one coulomb per volt across the capacitor.
One ampere hour equals 3600 C, hence 1 mA⋅h = 3.6 C.
One statcoulomb (statC), the obsolete CGS electrostatic unit of charge (esu), is approximately 3.3356×10⁻¹⁰ C, or about one-third of a nanocoulomb.
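As a quick numerical cross-check of these conversions, here is a minimal Python sketch (not part of the original article); the two constants are the exact values fixed by the 2019 SI redefinition.

```python
E = 1.602176634e-19   # elementary charge in coulombs (exact since 2019)
N_A = 6.02214076e23   # Avogadro constant in 1/mol (exact since 2019)

print(1.0 / E)        # elementary charges per coulomb: ~6.2415e18
print(E * N_A)        # one faraday in coulombs: ~96485.3
print(1e-3 * 3600)    # one milliampere hour in coulombs: 3.6
```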
In everyday terms
The charges in static electricity from rubbing materials together are typically a few microcoulombs.
The amount of charge that travels through a lightning bolt is typically around 15 C, although for large bolts this can be up to 350 C.
The amount of charge that travels through a typical alkaline AA battery from being fully charged to discharged is about 5 kC = 5000 C ≈ 1400 mA⋅h.
A typical smartphone battery can hold about 10 kC ≈ 3000 mA⋅h.
Name and history
By 1878, the British Association for the Advancement of Science had defined the volt, ohm, and farad, but not the coulomb. In 1881, the International Electrical Congress, now the International Electrotechnical Commission (IEC), approved the volt as the unit for electromotive force, the ampere as the unit for electric current, and the coulomb as the unit of electric charge.
At that time, the volt was defined as the potential difference [i.e., what is nowadays called the "voltage (difference)"] across a conductor when a current of one ampere dissipates one watt of power.
The coulomb (later "absolute coulomb" or "abcoulomb" for disambiguation) was part of the EMU system of units. The "international coulomb" based on laboratory specifications for its measurement was introduced by the IEC in 1908. The entire set of "reproducible units" was abandoned in 1948 and the "international coulomb" became the modern coulomb.
See also
Abcoulomb, a cgs unit of charge
Ampère's circuital law
Coulomb's law
Electrostatics
Elementary charge
Faraday constant, the number of coulombs per mole of elementary charges
Notes and references
SI derived units
Units of electrical charge | Coulomb | [
"Physics",
"Mathematics"
] | 743 | [
"Physical quantities",
"Electric charge",
"Quantity",
"Units of electrical charge",
"Units of measurement"
] |
47,732 | https://en.wikipedia.org/wiki/Fourier-transform%20spectroscopy | Fourier-transform spectroscopy (FTS) is a measurement technique whereby spectra are collected based on measurements of the coherence of a radiative source, using time-domain or space-domain measurements of the radiation, electromagnetic or not. It can be applied to a variety of types of spectroscopy including optical spectroscopy, infrared spectroscopy (FTIR, FT-NIRS), nuclear magnetic resonance (NMR) and magnetic resonance spectroscopic imaging (MRSI), mass spectrometry and electron spin resonance spectroscopy.
There are several methods for measuring the temporal coherence of the light (see: field-autocorrelation), including the continuous-wave and the pulsed Fourier-transform spectrometer or Fourier-transform spectrograph.
The term "Fourier-transform spectroscopy" reflects the fact that in all these techniques, a Fourier transform is required to turn the raw data into the actual spectrum, and in many of the cases in optics involving interferometers, is based on the Wiener–Khinchin theorem.
Conceptual introduction
Measuring an emission spectrum
One of the most basic tasks in spectroscopy is to characterize the spectrum of a light source: how much light is emitted at each different wavelength. The most straightforward way to measure a spectrum is to pass the light through a monochromator, an instrument that blocks all of the light except the light at a certain wavelength (the un-blocked wavelength is set by a knob on the monochromator). Then the intensity of this remaining (single-wavelength) light is measured. The measured intensity directly indicates how much light is emitted at that wavelength. By varying the monochromator's wavelength setting, the full spectrum can be measured. This simple scheme in fact describes how some spectrometers work.
Fourier-transform spectroscopy is a less intuitive way to get the same information. Rather than allowing only one wavelength at a time to pass through to the detector, this technique lets through a beam containing many different wavelengths of light at once, and measures the total beam intensity. Next, the beam is modified to contain a different combination of wavelengths, giving a second data point. This process is repeated many times. Afterwards, a computer takes all this data and works backwards to infer how much light there is at each wavelength.
To be more specific, between the light source and the detector, there is a certain configuration of mirrors that allows some wavelengths to pass through but blocks others (due to wave interference). The beam is modified for each new data point by moving one of the mirrors; this changes the set of wavelengths that can pass through.
As mentioned, computer processing is required to turn the raw data (light intensity for each mirror position) into the desired result (light intensity for each wavelength). The processing required turns out to be a common algorithm called the Fourier transform (hence the name, "Fourier-transform spectroscopy"). The raw data is sometimes called an "interferogram". Because of the existing computer equipment requirements, and the ability of light to analyze very small amounts of substance, it is often beneficial to automate many aspects of the sample preparation. The sample can be better preserved and the results are much easier to replicate. Both of these benefits are important, for instance, in testing situations that may later involve legal action, such as those involving drug specimens.
Measuring an absorption spectrum
The method of Fourier-transform spectroscopy can also be used for absorption spectroscopy. The primary example is "FTIR Spectroscopy", a common technique in chemistry.
In general, the goal of absorption spectroscopy is to measure how well a sample absorbs or transmits light at each different wavelength. Although absorption spectroscopy and emission spectroscopy are different in principle, they are closely related in practice; any technique for emission spectroscopy can also be used for absorption spectroscopy. First, the emission spectrum of a broadband lamp is measured (this is called the "background spectrum"). Second, the emission spectrum of the same lamp shining through the sample is measured (this is called the "sample spectrum"). The sample will absorb some of the light, causing the spectra to be different. The ratio of the "sample spectrum" to the "background spectrum" is directly related to the sample's absorption spectrum.
Accordingly, the technique of "Fourier-transform spectroscopy" can be used both for measuring emission spectra (for example, the emission spectrum of a star), and absorption spectra (for example, the absorption spectrum of a liquid).
Continuous-wave Michelson or Fourier-transform spectrograph
The Michelson spectrograph is similar to the instrument used in the Michelson–Morley experiment. Light from the source is split into two beams by a half-silvered mirror, one is reflected off a fixed mirror and one off a movable mirror, which introduces a time delay—the Fourier-transform spectrometer is just a Michelson interferometer with a movable mirror. The beams interfere, allowing the temporal coherence of the light to be measured at each different time delay setting, effectively converting the time domain into a spatial coordinate. By making measurements of the signal at many discrete positions of the movable mirror, the spectrum can be reconstructed using a Fourier transform of the temporal coherence of the light. Michelson spectrographs are capable of very high spectral resolution observations of very bright sources.
The Michelson or Fourier-transform spectrograph was popular for infra-red applications at a time when infra-red astronomy only had single-pixel detectors. Imaging Michelson spectrometers are a possibility, but in general have been supplanted by imaging Fabry–Pérot instruments, which are easier to construct.
Extracting the spectrum
The intensity as a function of the path length difference $p$ (also denoted as retardation) in the interferometer and the wavenumber $\tilde{\nu}$ is
$I(p, \tilde{\nu}) = I(\tilde{\nu})\,[1 + \cos(2\pi \tilde{\nu} p)],$
where $I(\tilde{\nu})$ is the spectrum to be determined. Note that it is not necessary for $I(\tilde{\nu})$ to be modulated by the sample before the interferometer. In fact, most FTIR spectrometers place the sample after the interferometer in the optical path. The total intensity at the detector is
$I(p) = \int_0^\infty I(p, \tilde{\nu})\, d\tilde{\nu} = \int_0^\infty I(\tilde{\nu})\,[1 + \cos(2\pi \tilde{\nu} p)]\, d\tilde{\nu}.$
This is just a Fourier cosine transform. The inverse gives us our desired result in terms of the measured quantity $I(p)$:
$I(\tilde{\nu}) = 4 \int_0^\infty \left[I(p) - \tfrac{1}{2} I(p = 0)\right] \cos(2\pi \tilde{\nu} p)\, dp.$
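As an illustration of this transform pair, here is a minimal numerical sketch (not tied to any particular instrument; the line positions, amplitudes, and retardation step are invented for the example). It synthesizes an interferogram for a two-line spectrum and recovers the line positions with a discrete Fourier transform:

```python
import numpy as np

n = 4096               # number of mirror positions (sample points)
dp = 1.0e-4            # retardation step in cm; max wavenumber = 1/(2*dp)
p = np.arange(n) * dp  # path-length differences

# Hypothetical spectrum: two emission lines (wavenumber in cm^-1 -> amplitude)
lines = {1250.0: 1.0, 2500.0: 0.6}
interferogram = sum(a * (1 + np.cos(2 * np.pi * nu * p)) for nu, a in lines.items())

# Remove the constant offset, then invert with a (cosine) Fourier transform.
signal = interferogram - interferogram.mean()
spectrum = np.fft.rfft(signal).real
wavenumbers = np.fft.rfftfreq(n, d=dp)  # in cm^-1

top_two = np.sort(wavenumbers[np.argsort(spectrum)[-2:]])
print(top_two)         # -> [1250. 2500.]
```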
Pulsed Fourier-transform spectrometer
A pulsed Fourier-transform spectrometer does not employ transmittance techniques. In the most general description of pulsed FT spectrometry, a sample is exposed to an energizing event which causes a periodic response. The frequency of the periodic response, as governed by the field conditions in the spectrometer, is indicative of the measured properties of the analyte.
Examples of pulsed Fourier-transform spectrometry
In magnetic spectroscopy (EPR, NMR), a microwave pulse (EPR) or a radio frequency pulse (NMR) in a strong ambient magnetic field is used as the energizing event. This turns the magnetic particles at an angle to the ambient field, resulting in gyration. The gyrating spins then induce a periodic current in a detector coil. Each spin exhibits a characteristic frequency of gyration (relative to the field strength) which reveals information about the analyte.
In Fourier-transform mass spectrometry, the energizing event is the injection of the charged sample into the strong electromagnetic field of a cyclotron. These particles travel in circles, inducing a current in a fixed coil on one point in their circle. Each traveling particle exhibits a characteristic cyclotron frequency-field ratio revealing the masses in the sample.
Free induction decay
Pulsed FT spectrometry gives the advantage of requiring a single, time-dependent measurement which can easily deconvolute a set of similar but distinct signals. The resulting composite signal is called a free induction decay, because typically the signal will decay due to inhomogeneities in sample frequency, or simply unrecoverable loss of signal due to entropic loss of the property being measured.
Nanoscale spectroscopy with pulsed sources
Pulsed sources allow for the utilization of Fourier-transform spectroscopy principles in scanning near-field optical microscopy techniques. Particularly in nano-FTIR, where the scattering from a sharp probe-tip is used to perform spectroscopy of samples with nanoscale spatial resolution, a high-power illumination from pulsed infrared lasers makes up for a relatively small scattering efficiency (often < 1%) of the probe.
Stationary forms of Fourier-transform spectrometers
In addition to the scanning forms of Fourier-transform spectrometers, there are a number of stationary or self-scanned forms. While the analysis of the interferometric output is similar to that of the typical scanning interferometer, significant differences apply, as shown in the published analyses. Some stationary forms retain the Fellgett multiplex advantage, and their use in the spectral region where detector noise limits apply is similar to the scanning forms of the FTS. In the photon-noise limited region, the application of stationary interferometers is dictated by specific consideration for the spectral region and the application.
Fellgett advantage
One of the most important advantages of Fourier-transform spectroscopy was shown by P. B. Fellgett, an early advocate of the method. The Fellgett advantage, also known as the multiplex principle, states that when obtaining a spectrum whose measurement noise is dominated by detector noise (which is independent of the power of radiation incident on the detector), a multiplex spectrometer such as a Fourier-transform spectrometer will produce a relative improvement in signal-to-noise ratio, compared to an equivalent scanning monochromator, of the order of the square root of m, where m is the number of sample points comprising the spectrum. However, if the detector is shot-noise dominated, the noise will be proportional to the square root of the power, thus for a broad boxcar spectrum (continuous broadband source) the noise is proportional to the square root of m, precisely offsetting the Fellgett advantage. For line emission sources the situation is even worse and there is a distinct "multiplex disadvantage", as the shot noise from a strong emission component will overwhelm the fainter components of the spectrum. Shot noise is the main reason Fourier-transform spectrometry was never popular for ultraviolet (UV) and visible spectra.
See also
Applied spectroscopy
Forensic chemistry
Forensic polymer engineering
Nuclear magnetic resonance
Time stretch dispersive Fourier transform
Infrared spectroscopy
Infrared spectroscopy of metal carbonyls
nano-FTIR
Fellgett's advantage
References
External links
Description of how a Fourier transform spectrometer works
The Michelson or Fourier transform spectrograph
Internet Journal of Vibrational Spectroscopy – How FTIR works
Fourier Transform Spectroscopy Topical Meeting and Tabletop Exhibit
Spectroscopy
Fourier analysis
Scientific techniques | Fourier-transform spectroscopy | [
"Physics",
"Chemistry"
] | 2,184 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
48,209 | https://en.wikipedia.org/wiki/Gas%20laws | The laws describing the behaviour of gases under fixed pressure, volume, amount of gas, and absolute temperature conditions are called gas laws. The basic gas laws were discovered by the end of the 18th century when scientists found out that relationships between pressure, volume and temperature of a sample of gas could be obtained which would hold to approximation for all gases. The combination of several empirical gas laws led to the development of the ideal gas law.
The ideal gas law was later found to be consistent with atomic and kinetic theory.
History
In 1643, the Italian physicist and mathematician Evangelista Torricelli, who for a few months had acted as Galileo Galilei's secretary, conducted a celebrated experiment in Florence. He demonstrated that a column of mercury in an inverted tube can be supported by the pressure of air outside of the tube, with the creation of a small section of vacuum above the mercury. This experiment essentially paved the way towards the invention of the barometer, as well as drawing the attention of Robert Boyle, then a "skeptical" scientist working in England. Boyle was inspired by Torricelli's experiment to investigate how the elasticity of air responds to varying pressure, and he did this through a series of experiments with a setup reminiscent of that used by Torricelli. Boyle published his results in 1662.
Later on, in 1676, the French physicist Edme Mariotte, independently arrived at the same conclusions of Boyle, while also noting some dependency of air volume on temperature. However it took another century and a half for the development of thermometry and recognition of the absolute zero temperature scale, which eventually allowed the discovery of temperature-dependent gas laws.
Boyle's law
In 1662, Robert Boyle systematically studied the relationship between the volume and pressure of a fixed amount of gas at a constant temperature. He observed that the volume of a given mass of a gas is inversely proportional to its pressure at a constant temperature.
Boyle's law, published in 1662, states that, at a constant temperature, the product of the pressure and volume of a given mass of an ideal gas in a closed system is always constant. It can be verified experimentally using a pressure gauge and a variable volume container. It can also be derived from the kinetic theory of gases: if a container, with a fixed number of molecules inside, is reduced in volume, more molecules will strike a given area of the sides of the container per unit time, causing a greater pressure.
Statement
Boyle's law states that: the absolute pressure exerted by a given mass of an ideal gas is inversely proportional to the volume it occupies, if the temperature and amount of gas remain unchanged within a closed system.
The concept can be represented with these formulae:
$V \propto \frac{1}{P}$, meaning "Volume is inversely proportional to Pressure", or
$P \propto \frac{1}{V}$, meaning "Pressure is inversely proportional to Volume", or
$PV = k_1$, or
$P_1 V_1 = P_2 V_2$,
where $P$ is the pressure, $V$ is the volume of a gas, and $k_1$ is the constant in this equation (and is not the same as the proportionality constants in the other equations).
Charles' law
Charles' law, or the law of volumes, was formulated in 1787 by Jacques Charles. It states that, for a given mass of an ideal gas at constant pressure in a closed system, the volume is directly proportional to its absolute temperature.
The statement of Charles' law is as follows:
the volume (V) of a given mass of a gas, at constant pressure (P), is directly proportional to its temperature (T).
Statement
Charles' law states that: when the pressure on a sample of a dry gas is held constant, the absolute temperature and the volume are in direct proportion.
Therefore,
$V \propto T$, or
$\frac{V}{T} = k_2$, or
$\frac{V_1}{T_1} = \frac{V_2}{T_2}$,
where $V$ is the volume of a gas, $T$ is the absolute temperature and $k_2$ is a proportionality constant (which is not the same as the proportionality constants in the other equations in this article).
Gay-Lussac's law
Gay-Lussac's law, Amontons' law or the pressure law was formulated by Joseph Louis Gay-Lussac in 1808.
Statement
Gay-Lussac's law states that: the pressure exerted by a given mass of an ideal gas held at constant volume is directly proportional to its absolute temperature.
Therefore,
$P \propto T$, or
$\frac{P}{T} = k_3$, or
$\frac{P_1}{T_1} = \frac{P_2}{T_2}$,
where $P$ is the pressure, $T$ is the absolute temperature, and $k_3$ is another proportionality constant.
Avogadro's law
Avogadro's law, Avogadro's hypothesis, Avogadro's principle or Avogadro-Ampère's hypothesis is an experimental gas law which was hypothesized by Amedeo Avogadro in 1811. It relates the volume of a gas to the amount of substance of gas present.
Statement
Avogadro's law states that: equal volumes of all gases, at the same temperature and pressure, have the same number of molecules.
This statement gives rise to the molar volume of a gas, which at STP (273.15 K, 1 atm) is about 22.4 L. The relation is given by:
$V \propto n$, or $\frac{V}{n} = k_4$, where $n$ is equal to the number of molecules of gas (or the number of moles of gas).
Combined and ideal gas laws
The combined gas law or general gas equation is obtained by combining Boyle's law, Charles's law, and Gay-Lussac's law. It shows the relationship between the pressure, volume, and temperature for a fixed mass of gas:
$\frac{PV}{T} = k$
This can also be written as:
$\frac{P_1 V_1}{T_1} = \frac{P_2 V_2}{T_2}$
With the addition of Avogadro's law, the combined gas law develops into the ideal gas law:
$PV = nRT$,
where P is the pressure, V is volume, n is the number of moles, R is the universal gas constant and T is the absolute temperature.
The proportionality constant, now named R, is the universal gas constant with a value of 8.3144598 (kPa∙L)/(mol∙K).
An equivalent formulation of this law is:
$PV = N k_\text{B} T$,
where P is the pressure, V is the volume, N is the number of gas molecules, kB is the Boltzmann constant (1.381×10⁻²³ J·K⁻¹ in SI units) and T is the absolute temperature.
These equations are exact only for an ideal gas, which neglects various intermolecular effects (see real gas). However, the ideal gas law is a good approximation for most gases under moderate pressure and temperature.
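As a numerical illustration of the ideal gas law (a minimal sketch with made-up conditions; the function name is ours):

```python
R = 8.3144598  # universal gas constant, J/(mol*K), equivalently (kPa*L)/(mol*K)

def pressure_pa(n_mol: float, volume_m3: float, temp_k: float) -> float:
    """Pressure of an ideal gas from PV = nRT, in pascals."""
    return n_mol * R * temp_k / volume_m3

# One mole in 22.4 L at 273.15 K should give roughly atmospheric pressure.
print(pressure_pa(1.0, 22.4e-3, 273.15))  # ~1.01e5 Pa (about 1 atm)
```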
This law has the following important consequences:
If temperature and pressure are kept constant, then the volume of the gas is directly proportional to the number of molecules of gas.
If the temperature and volume remain constant, then the pressure of the gas is directly proportional to the number of molecules of gas present.
If the number of gas molecules and the temperature remain constant, then the pressure is inversely proportional to the volume.
If the temperature changes and the number of gas molecules is kept constant, then either pressure or volume (or both) will change in direct proportion to the temperature.
Other gas laws
Graham's law This law states that the rate at which gas molecules diffuse is inversely proportional to the square root of the gas density at a constant temperature. Combined with Avogadro's law (i.e. since equal volumes have an equal number of molecules) this is the same as being inversely proportional to the square root of the molecular weight (a numeric sketch follows this list).
Dalton's law of partial pressures This law states that the pressure of a mixture of gases simply is the sum of the partial pressures of the individual components. Dalton's law is as follows:
$P_\text{total} = P_1 + P_2 + \dots + P_n = \sum_{i=1}^{n} P_i$,
and all component gases and the mixture are at the same temperature and volume
where $P_\text{total}$ is the total pressure of the gas mixture,
$P_i$ is the partial pressure, or pressure of the component gas at the given volume and temperature.
Amagat's law of partial volumes This law states that the volume of a mixture of gases (or the volume of the container) simply is the sum of the partial volumes of the individual components. Amagat's law is as follows:
$V_\text{total} = V_1 + V_2 + \dots + V_n = \sum_{i=1}^{n} V_i$,
and all component gases and the mixture are at the same temperature and pressure
where $V_\text{total}$ is the total volume of the gas mixture or the volume of the container,
$V_i$ is the partial volume, or volume of the component gas at the given pressure and temperature.
Henry's law This states that at constant temperature, the amount of a given gas dissolved in a given type and volume of liquid is directly proportional to the partial pressure of that gas in equilibrium with that liquid. The equation is as follows:
$p = k_\text{H}\, c$,
where $p$ is the partial pressure of the gas above the solution, $c$ is the concentration of the gas dissolved in the liquid, and $k_\text{H}$ is the Henry's law constant.
Real gas law This was formulated by Johannes Diderik van der Waals in 1873. For $n$ moles it reads $\left(P + \frac{an^2}{V^2}\right)(V - nb) = nRT$, where $a$ and $b$ are substance-specific constants correcting for intermolecular attraction and finite molecular volume.
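Here is the numeric sketch of Graham's law referred to above; the molar masses are standard values, and the function name is ours.

```python
import math

def rate_ratio(m1: float, m2: float) -> float:
    """Graham's law: effusion/diffusion rate of gas 1 relative to gas 2,
    given molar masses m1 and m2 in g/mol."""
    return math.sqrt(m2 / m1)

print(rate_ratio(2.016, 32.00))  # H2 vs O2: ~3.98x faster
```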
References
External links
History of thermodynamics | Gas laws | [
"Physics",
"Chemistry"
] | 1,663 | [
"History of thermodynamics",
"Thermodynamics",
"Gas laws"
] |
48,256 | https://en.wikipedia.org/wiki/Random%20sequence | The concept of a random sequence is essential in probability theory and statistics. The concept generally relies on the notion of a sequence of random variables and many statistical discussions begin with the words "let X1,...,Xn be independent random variables...". Yet as D. H. Lehmer stated in 1951: "A random sequence is a vague notion... in which each term is unpredictable to the uninitiated and whose digits pass a certain number of tests traditional with statisticians".
Axiomatic probability theory deliberately avoids a definition of a random sequence. Traditional probability theory does not state if a specific sequence is random, but generally proceeds to discuss the properties of random variables and stochastic sequences assuming some definition of randomness. The Bourbaki school considered the statement "let us consider a random sequence" an abuse of language.
Early history
Émile Borel was one of the first mathematicians to formally address randomness, in 1909. In 1919 Richard von Mises gave the first definition of algorithmic randomness, which was inspired by the law of large numbers, although he used the term collective rather than random sequence. Using the concept of the impossibility of a gambling system, von Mises defined an infinite sequence of zeros and ones as random if it is not biased, that is, if it has the frequency stability property (the frequency of zeros goes to 1/2) and every sub-sequence we can select from it by a "proper" method of selection is also not biased.
The sub-sequence selection criterion imposed by von Mises is important, because although 0101010101... is not biased, selecting the odd positions gives 000000..., which is not random. Von Mises never totally formalized his definition of a proper selection rule for sub-sequences, but in 1940 Alonzo Church defined it as any recursive function which, having read the first N elements of the sequence, decides if it wants to select element number N + 1. Church was a pioneer in the field of computable functions, and the definition he made relied on the Church–Turing thesis for computability. This definition is often called Mises–Church randomness.
Modern approaches
During the 20th century various technical approaches to defining random sequences were developed, and now three distinct paradigms can be identified. In the mid-1960s, A. N. Kolmogorov and D. W. Loveland independently proposed a more permissive selection rule. In their view Church's recursive function definition was too restrictive in that it read the elements in order. Instead they proposed a rule based on a partially computable process which, having read any N elements of the sequence, decides if it wants to select another element which has not been read yet. This definition is often called Kolmogorov–Loveland stochasticity. But this method was considered too weak by Alexander Shen, who showed that there is a Kolmogorov–Loveland stochastic sequence which does not conform to the general notion of randomness.
In 1966 Per Martin-Löf introduced a new notion which is now generally considered the most satisfactory notion of algorithmic randomness. His original definition involved measure theory, but it was later shown that it can be expressed in terms of Kolmogorov complexity. Kolmogorov's definition of a random string was that it is random if it has no description shorter than itself via a universal Turing machine.
Three basic paradigms for dealing with random sequences have now emerged:
The frequency / measure-theoretic approach. This approach started with the work of Richard von Mises and Alonzo Church. In the 1960s Per Martin-Löf noticed that the sets coding such frequency-based stochastic properties are a special kind of measure zero sets, and that a more general and smooth definition can be obtained by considering all effectively measure zero sets.
The complexity / compressibility approach. This paradigm was championed by A. N. Kolmogorov along with contributions from Leonid Levin and Gregory Chaitin. For finite sequences, Kolmogorov defines randomness of a binary string of length n as the entropy (or Kolmogorov complexity) normalized by the length n. In other words, if the Kolmogorov complexity of the string is close to n, it is very random; if the complexity is far below n, it is not so random. The dual concept of randomness is compressibility ‒ the more random a sequence is, the less compressible, and vice versa.
The predictability approach. This paradigm is due to Claus P. Schnorr and uses a slightly different definition of constructive martingales than martingales used in traditional probability theory. Schnorr showed how the existence of a selective betting strategy implied the existence of a selection rule for a biased sub-sequence. If one only requires a recursive martingale to succeed on a sequence instead of constructively succeed on a sequence, then one gets the concept of recursive randomness. Yongge Wang showed that recursive randomness concept is different from Schnorr's randomness concept.
In most cases, theorems relating the three paradigms (often equivalence) have been proven.
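To illustrate the complexity/compressibility paradigm numerically, the sketch below uses zlib compression as a crude, computable stand-in for Kolmogorov complexity (which is itself uncomputable); it is a heuristic demonstration only, and the string lengths are arbitrary.

```python
import zlib
import random

# A (pseudo)random byte string compresses poorly; a periodic one compresses well.
random.seed(0)
random_bytes = bytes(random.getrandbits(8) for _ in range(4096))
periodic_bytes = bytes([0b01010101]) * 4096  # "01010101..." bit pattern

for name, s in [("pseudorandom", random_bytes), ("periodic", periodic_bytes)]:
    ratio = len(zlib.compress(s, 9)) / len(s)
    print(f"{name}: compressed to {ratio:.2%} of original size")
# pseudorandom: ~100% (nearly incompressible); periodic: ~1% (highly compressible)
```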
See also
Randomness
History of randomness
Random number generator
Seven states of randomness
Statistical randomness
References
Notes
External links
Video on frequency stability. Why humans can't "guess" randomly
Randomness tests by Terry Ritter
Sequences and series
Statistical randomness | Random sequence | [
"Mathematics"
] | 1,151 | [
"Sequences and series",
"Mathematical analysis",
"Mathematical structures",
"Mathematical objects"
] |
48,336 | https://en.wikipedia.org/wiki/Electrolyte | An electrolyte is a substance that conducts electricity through the movement of ions, but not through the movement of electrons. This includes most soluble salts, acids, and bases, dissolved in a polar solvent like water. Upon dissolving, the substance separates into cations and anions, which disperse uniformly throughout the solvent. Solid-state electrolytes also exist. In medicine and sometimes in chemistry, the term electrolyte refers to the substance that is dissolved.
Electrically, such a solution is neutral. If an electric potential is applied to such a solution, the cations of the solution are drawn to the electrode that has an abundance of electrons, while the anions are drawn to the electrode that has a deficit of electrons. The movement of anions and cations in opposite directions within the solution amounts to a current. Some gases, such as hydrogen chloride (HCl), under conditions of high temperature or low pressure can also function as electrolytes. Electrolyte solutions can also result from the dissolution of some biological (e.g., DNA, polypeptides) or synthetic polymers (e.g., polystyrene sulfonate), termed "polyelectrolytes", which contain charged functional groups. A substance that dissociates into ions in solution or in the melt acquires the capacity to conduct electricity. Sodium, potassium, chloride, calcium, magnesium, and phosphate in a liquid phase are examples of electrolytes.
In medicine, electrolyte replacement is needed when a person has prolonged vomiting or diarrhea, and as a response to sweating due to strenuous athletic activity. Commercial electrolyte solutions are available, particularly for sick children (such as oral rehydration solution, Suero Oral, or Pedialyte) and athletes (sports drinks). Electrolyte monitoring is important in the treatment of anorexia and bulimia.
In science, electrolytes are one of the main components of electrochemical cells.
In clinical medicine, mentions of electrolytes usually refer metonymically to the ions, and (especially) to their concentrations (in blood, serum, urine, or other fluids). Thus, mentions of electrolyte levels usually refer to the various ion concentrations, not to the fluid volumes.
Etymology
The word electrolyte derives from Ancient Greek ήλεκτρο- (ēlectro-), prefix originally meaning amber but in modern contexts related to electricity, and λυτός (lytos), meaning "able to be untied or loosened".
History
In his 1884 dissertation, Svante Arrhenius put forth his explanation of solid crystalline salts dissociating into paired charged particles when dissolved, for which he won the 1903 Nobel Prize in Chemistry. Arrhenius's explanation was that in forming a solution, the salt dissociates into charged particles, to which Michael Faraday (1791–1867) had given the name "ions" many years earlier. Faraday's belief had been that ions were produced in the process of electrolysis. Arrhenius proposed that, even in the absence of an electric current, solutions of salts contained ions. He thus proposed that chemical reactions in solution were reactions between ions.
Shortly after Arrhenius's hypothesis of ions, Franz Hofmeister and Siegmund Lewith found that different ion types displayed different effects on such things as the solubility of proteins. A consistent ordering of these different ions on the magnitude of their effect arises consistently in many other systems as well. This has since become known as the Hofmeister series.
While the origins of these effects are not abundantly clear and have been debated throughout the past century, it has been suggested that the charge density of these ions is important and might actually have explanations originating from the work of Charles-Augustin de Coulomb over 200 years ago.
Formation
Electrolyte solutions are normally formed when salt is placed into a solvent such as water and the individual components dissociate due to the thermodynamic interactions between solvent and solute molecules, in a process called "solvation". For example, when table salt (sodium chloride), NaCl, is placed in water, the salt (a solid) dissolves into its component ions, according to the dissociation reaction:
NaCl(s) → Na+(aq) + Cl−(aq)
It is also possible for substances to react with water, producing ions. For example, carbon dioxide gas dissolves in water to produce a solution that contains hydronium, carbonate, and hydrogen carbonate ions.
Molten salts can also be electrolytes as, for example, when sodium chloride is molten, the liquid conducts electricity. In particular, ionic liquids, which are molten salts with melting points below 100 °C, are a type of highly conductive non-aqueous electrolytes and thus have found more and more applications in fuel cells and batteries.
An electrolyte in a solution may be described as "concentrated" if it has a high concentration of ions, or "dilute" if it has a low concentration. If a high proportion of the solute dissociates to form free ions, the electrolyte is strong; if most of the solute does not dissociate, the electrolyte is weak. The properties of electrolytes may be exploited using electrolysis to extract constituent elements and compounds contained within the solution.
Alkaline earth metals form hydroxides that are strong electrolytes with limited solubility in water, due to the strong attraction between their constituent ions. This limits their application to situations where high solubility is required.
In 2021, researchers found that electrolytes can "substantially facilitate electrochemical corrosion studies in less conductive media".
Physiological importance
In physiology, the primary ions of electrolytes are sodium (Na+), potassium (K+), calcium (Ca2+), magnesium (Mg2+), chloride (Cl−), hydrogen phosphate (HPO42−), and hydrogen carbonate (HCO3−). The electric charge symbols of plus (+) and minus (−) indicate that the substance is ionic in nature and has an imbalanced distribution of electrons, the result of chemical dissociation. Sodium is the main electrolyte found in extracellular fluid and potassium is the main intracellular electrolyte; both are involved in fluid balance and blood pressure control.
All known multicellular lifeforms require a subtle and complex electrolyte balance between the intracellular and extracellular environments. In particular, the maintenance of precise osmotic gradients of electrolytes is important. Such gradients affect and regulate the hydration of the body as well as blood pH, and are critical for nerve and muscle function. Various mechanisms exist in living species that keep the concentrations of different electrolytes under tight control.
Both muscle tissue and neurons are considered electric tissues of the body. Muscles and neurons are activated by electrolyte activity between the extracellular fluid or interstitial fluid, and intracellular fluid. Electrolytes may enter or leave the cell membrane through specialized protein structures embedded in the plasma membrane called "ion channels". For example, muscle contraction is dependent upon the presence of calcium (Ca2+), sodium (Na+), and potassium (K+). Without sufficient levels of these key electrolytes, muscle weakness or severe muscle contractions may occur.
Electrolyte balance is maintained by oral, or in emergencies, intravenous (IV) intake of electrolyte-containing substances, and is regulated by hormones, in general with the kidneys flushing out excess levels. In humans, electrolyte homeostasis is regulated by hormones such as antidiuretic hormones, aldosterone and parathyroid hormones. Serious electrolyte disturbances, such as dehydration and overhydration, may lead to cardiac and neurological complications and, unless they are rapidly resolved, will result in a medical emergency.
Measurement
Measurement of electrolytes is a commonly performed diagnostic procedure, performed via blood testing with ion-selective electrodes or urinalysis by medical technologists. The interpretation of these values is of little use without analysis of the clinical history and is often impossible without parallel measurements of renal function. The electrolytes measured most often are sodium and potassium. Chloride levels are rarely measured except for arterial blood gas interpretations, since they are inherently linked to sodium levels. One important test conducted on urine is the specific gravity test to determine the occurrence of an electrolyte imbalance.
Rehydration
According to a study paid for by the Gatorade Sports Science Institute, electrolyte drinks containing sodium and potassium salts replenish the body's water and electrolyte concentrations after dehydration caused by exercise, excessive alcohol consumption, diaphoresis (heavy sweating), diarrhea, vomiting, intoxication or starvation; the study says that athletes exercising in extreme conditions (for three or more hours continuously, e.g. a marathon or triathlon) who do not consume electrolytes risk dehydration (or hyponatremia).
A home-made electrolyte drink can be made by using water, sugar and salt in precise proportions. It is important to include glucose (sugar) to utilise the co-transport mechanism of sodium and glucose. Commercial preparations are also available for both human and veterinary use.
Electrolytes are commonly found in fruit juices, sports drinks, milk, nuts, and many fruits and vegetables (whole or in juice form) (e.g., potatoes, avocados).
Electrochemistry
When electrodes are placed in an electrolyte and a voltage is applied, the electrolyte will conduct electricity. Lone electrons normally cannot pass through the electrolyte; instead, a chemical reaction occurs at the cathode, providing electrons to the electrolyte. Another reaction occurs at the anode, consuming electrons from the electrolyte. As a result, a negative charge cloud develops in the electrolyte around the cathode, and a positive charge develops around the anode. The ions in the electrolyte neutralize these charges, enabling the electrons to keep flowing and the reactions to continue.
For example, in a solution of ordinary table salt (sodium chloride, NaCl) in water, the cathode reaction will be
2 H2O + 2e− → 2 OH− + H2
and hydrogen gas will bubble up; the anode reaction is
2 Cl− → Cl2 + 2e−
and chlorine gas will be liberated into solution, where it reacts with the sodium and hydroxide ions to produce sodium hypochlorite (household bleach). The positively charged sodium ions Na+ will migrate toward the cathode, neutralizing the negative charge of OH− there, and the negatively charged hydroxide ions OH− will migrate toward the anode, neutralizing the positive charge of Na+ there. Without the ions from the electrolyte, the charges around the electrodes would slow down continued electron flow; diffusion of H+ and OH− through water to the other electrode takes longer than movement of the much more prevalent salt ions.
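The amounts of hydrogen and chlorine liberated by these two half-reactions scale with the total charge passed, per Faraday's laws of electrolysis. A minimal sketch, assuming an arbitrary operating current and duration:

```python
# Faraday's-law estimate of gas produced during electrolysis of brine.
# Both half-reactions above transfer 2 electrons per molecule of gas formed.
F = 96485.0  # Faraday constant, C/mol of electrons

def moles_of_gas(current_a: float, seconds: float,
                 electrons_per_molecule: int = 2) -> float:
    """Moles of gas produced by a steady current over a given time."""
    charge = current_a * seconds  # total charge passed, in coulombs
    return charge / (electrons_per_molecule * F)

n = moles_of_gas(current_a=2.0, seconds=3600.0)  # assumed: 2 A for one hour
print(f"~{n:.3f} mol each of H2 (cathode) and Cl2 (anode)")
```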
Electrolytes dissociate in water because water molecules are dipoles and the dipoles orient in an energetically favorable manner to solvate the ions.
In other systems, the electrode reactions can involve the metals of the electrodes as well as the ions of the electrolyte.
Electrolytic conductors are used in electronic devices where the chemical reaction at a metal-electrolyte interface yields useful effects.
In batteries, two materials with different electron affinities are used as electrodes; electrons flow from one electrode to the other outside of the battery, while inside the battery the circuit is closed by the electrolyte's ions. Here, the electrode reactions convert chemical energy to electrical energy.
In some fuel cells, a solid electrolyte or proton conductor connects the plates electrically while keeping the hydrogen and oxygen fuel gases separated.
In electroplating tanks, the electrolyte simultaneously deposits metal onto the object to be plated, and electrically connects that object in the circuit.
In operation-hours gauges, two thin columns of mercury are separated by a small electrolyte-filled gap, and, as charge is passed through the device, the metal dissolves on one side and plates out on the other, causing the visible gap to slowly move along.
In electrolytic capacitors the chemical effect is used to produce an extremely thin dielectric or insulating coating, while the electrolyte layer behaves as one capacitor plate.
In some hygrometers the humidity of air is sensed by measuring the conductivity of a nearly dry electrolyte.
Hot, softened glass is an electrolytic conductor, and some glass manufacturers keep the glass molten by passing a large current through it.
Solid electrolytes
Solid electrolytes can be mostly divided into four groups described below.
Gel electrolytes
Gel electrolytes – closely resemble liquid electrolytes. In essence, they are liquids in a flexible lattice framework. Various additives are often applied to increase the conductivity of such systems.
Ceramic electrolytes
Solid ceramic electrolytes – ions migrate through the ceramic phase by means of vacancies or interstitials within the lattice. There are also glassy-ceramic electrolytes.
Polymer electrolytes
Dry polymer electrolytes differ from liquid and gel electrolytes in that the salt is dissolved directly into the solid medium. Usually it is a relatively high-dielectric-constant polymer (PEO, PMMA, PAN, polyphosphazenes, siloxanes, etc.) and a salt with low lattice energy. In order to increase the mechanical strength and conductivity of such electrolytes, composites are very often made by introducing an inert ceramic phase. There are two major classes of such electrolytes: polymer-in-ceramic, and ceramic-in-polymer.
Organic plastic electrolytes
Organic ionic plastic crystals are a type of organic salt exhibiting mesophases (i.e. a state of matter intermediate between liquid and solid), in which mobile ions are orientationally or rotationally disordered while their centers are located at ordered sites in the crystal structure. They have various forms of disorder due to one or more solid–solid phase transitions below the melting point, and therefore have plastic properties, good mechanical flexibility, and improved electrode–electrolyte interfacial contact. In particular, protic organic ionic plastic crystals (POIPCs), which are solid protic organic salts formed by proton transfer from a Brønsted acid to a Brønsted base and are in essence protic ionic liquids in the molten state, have been found to be promising solid-state proton conductors for fuel cells. Examples include 1,2,4-triazolium perfluorobutanesulfonate and imidazolium methanesulfonate.
See also
Electrochemical machining
Elektrolytdatenbank Regensburg
Ion transport number
ITIES (interface between two immiscible electrolyte solutions)
Salt bridge
Strong electrolyte
Supporting electrolyte (background electrolyte)
VTPR
References
External links
Blood tests
Urine tests
Physical chemistry
Acid–base physiology | Electrolyte | [
"Physics",
"Chemistry"
] | 3,165 | [
"Blood tests",
"Acid–base physiology",
"Applied and interdisciplinary physics",
"Electrolytes",
"Electrochemistry",
"nan",
"Chemical pathology",
"Physical chemistry"
] |
48,340 | https://en.wikipedia.org/wiki/Pesticide | Pesticides are substances that are used to control pests. They include herbicides, insecticides, nematicides, fungicides, and many others (see table). The most common of these are herbicides, which account for approximately 50% of all pesticide use globally. Most pesticides are used as plant protection products (also known as crop protection products), which in general protect plants from weeds, fungi, or insects. In general, a pesticide is a chemical or biological agent (such as a virus, bacterium, or fungus) that deters, incapacitates, kills, or otherwise discourages pests. Target pests can include insects, plant pathogens, weeds, molluscs, birds, mammals, fish, nematodes (roundworms), and microbes that destroy property, cause nuisance, or spread disease, or are disease vectors. Along with these benefits, pesticides also have drawbacks, such as potential toxicity to humans and other species.
Definition
The word pesticide derives from the Latin pestis (plague) and caedere (kill).
The Food and Agriculture Organization (FAO) has defined pesticide as:
any substance or mixture of substances intended for preventing, destroying, or controlling any pest, including vectors of human or animal disease, unwanted species of plants or animals, causing harm during or otherwise interfering with the production, processing, storage, transport, or marketing of food, agricultural commodities, wood and wood products or animal feedstuffs, or substances that may be administered to animals for the control of insects, arachnids, or other pests in or on their bodies. The term includes substances intended for use as a plant growth regulator, defoliant, desiccant, or agent for thinning fruit or preventing the premature fall of fruit. Also used as substances applied to crops either before or after harvest to protect the commodity from deterioration during storage and transport.
Classifications
Pesticides can be classified by target organism (e.g., herbicides, insecticides, fungicides, rodenticides, and pediculicides – see table).
According to the EPA, biopesticides include microbial pesticides, biochemical pesticides, and plant-incorporated protectants.
Pesticides can be classified into structural classes, with many structural classes developed for each of the target organisms listed in the table. A structural class is usually associated with a single mode of action, whereas a mode of action may encompass more than one structural class.
The pesticidal chemical (active ingredient) is mixed (formulated) with other components to form the product that is sold, and which is applied in various ways. Pesticides in gas form are fumigants.
Pesticides can be classified based upon their mode of action, which indicates the exact biological mechanism which the pesticide disrupts. The modes of action are important for resistance management, and are categorized and administered by the insecticide, herbicide, and fungicide resistance action committees.
Pesticides may be systemic or non-systemic. A systemic pesticide moves (translocates) inside the plant. Translocation may be upward in the xylem, or downward in the phloem or both. Non-systemic pesticides (contact pesticides) remain on the surface and act through direct contact with the target organism. Pesticides are more effective if they are systemic. Systemicity is a prerequisite for the pesticide to be used as a seed-treatment.
Pesticides can be classified as persistent (non-biodegradable) or non-persistent (biodegradable). A pesticide must be persistent enough to kill or control its target but must degrade fast enough not to accumulate in the environment or the food chain in order to be approved by the authorities. Persistent pesticides, including DDT, were banned many years ago, an exception being spraying in houses to combat malaria vectors.
History
From biblical times until the 1950s the pesticides used were inorganic compounds and plant extracts. The inorganic compounds were derivatives of copper, arsenic, mercury, and sulfur, among others, and the plant extracts contained pyrethrum, nicotine, and rotenone, among others. The less toxic of these are still in use in organic farming. In the 1940s the insecticide DDT and the herbicide 2,4-D were introduced. These synthetic organic compounds were widely used and were very profitable. They were followed in the 1950s and 1960s by numerous other synthetic pesticides, which led to the growth of the pesticide industry. During this period, it became increasingly evident that DDT, which had been sprayed widely in the environment to combat insect pests and disease vectors, had accumulated in the food chain. It had become a global pollutant, as summarized in the well-known book Silent Spring. Finally, DDT was banned in the 1970s in several countries, and subsequently all persistent pesticides were banned worldwide, an exception being spraying on interior walls for vector control.
Resistance to a pesticide was first seen in the 1920s with inorganic pesticides, and it was later found that the development of resistance is to be expected and that measures to delay it are important. Integrated pest management (IPM) was introduced in the 1950s. By careful analysis, and by spraying only when an economic or biological threshold of crop damage is reached, pesticide application is reduced. By the 2020s this had become the official policy of international organisations, industry, and many governments. With the introduction of high-yielding varieties in the 1960s during the green revolution, more pesticides were used. Beginning in the 1980s, genetically modified crops were introduced, which resulted in lower amounts of insecticides being used on them. Organic agriculture, which uses only non-synthetic pesticides, has grown and by 2020 represented about 1.5 per cent of the world's total agricultural land.
Pesticides have become more effective. Application rates fell from 1,000–2,500 grams of active ingredient per hectare (g/ha) in the 1950s to 40–100 g/ha in the 2000s. Despite this, the amounts used have increased. Over the two decades between the 1990s and 2010s, the amounts used increased by 20% in high-income countries and by 1,623% in low-income countries.
Development of new pesticides
The aim is to find new compounds or agents with improved properties such as a new mode of action or lower application rate. Another aim is to replace older pesticides which have been banned for reasons of toxicity or environmental harm or have become less effective due to development of resistance.
The process starts with testing (screening) against target organisms such as insects, fungi or plants. Inputs are typically random compounds, natural products, compounds designed to disrupt a biochemical target, compounds described in patents or literature, or biocontrol organisms.
Compounds that are active in the screening process, known as hits or leads, cannot be used as pesticides, except for biocontrol organisms and some potent natural products. These lead compounds need to be optimised by a series of cycles of synthesis and testing of analogs. For approval by regulatory authorities for use as pesticides, the optimized compounds must meet several requirements. In addition to being potent (low application rate), they must show low toxicity to non-target organisms, low environmental impact, and viable manufacturing cost. The cost of developing a pesticide in 2022 was estimated to be 350 million US dollars. It has become more difficult to find new pesticides. More than 100 new active ingredients were introduced in the 2000s and less than 40 in the 2010s. Biopesticides are cheaper to develop, since the authorities require less toxicological and environmental study. Since 2000 the rate of new biological product introduction has frequently exceeded that of conventional products.
More than 25% of existing chemical pesticides contain one or more chiral centres (stereogenic centres). Newer pesticides with lower application rates tend to have more complex structures, and thus more often contain chiral centres. In cases when most or all of the pesticidal activity in a new compound is found in one enantiomer (the eutomer), the registration and use of the compound as this single enantiomer is preferred. This reduces the total application rate and avoids the tedious environmental testing required when registering a racemate. However, if a viable enantioselective manufacturing route cannot be found, then the racemate is registered and used.
Uses
In addition to their main use in agriculture, pesticides have a number of other applications. Pesticides are used to control organisms that are considered to be harmful, or pernicious to their surroundings. For example, they are used to kill mosquitoes that can transmit potentially deadly diseases like West Nile virus, yellow fever, and malaria. They can also kill bees, wasps or ants that can cause allergic reactions. Insecticides can protect animals from illnesses that can be caused by parasites such as fleas. Pesticides can prevent sickness in humans that could be caused by moldy food or diseased produce. Herbicides can be used to clear roadside weeds, trees, and brush. They can also kill invasive weeds that may cause environmental damage. Herbicides are commonly applied in ponds and lakes to control algae and plants such as water grasses that can interfere with activities like swimming and fishing and cause the water to look or smell unpleasant. Uncontrolled pests such as termites and mold can damage structures such as houses. Pesticides are used in grocery stores and food storage facilities to manage rodents and insects that infest food such as grain. Pesticides are used on lawns and golf courses, partly for cosmetic reasons.
Integrated pest management, the use of multiple approaches to control pests, is becoming widespread and has been used with success in countries such as Indonesia, China, Bangladesh, the U.S., Australia, and Mexico. IPM attempts to recognize the more widespread impacts of an action on an ecosystem, so that natural balances are not upset.
Each use of a pesticide carries some associated risk. Proper pesticide use decreases these associated risks to a level deemed acceptable by pesticide regulatory agencies such as the United States Environmental Protection Agency (EPA) and the Pest Management Regulatory Agency (PMRA) of Canada.
DDT, sprayed on the walls of houses, is an organochlorine that has been used to fight malaria vectors (mosquitos) since the 1940s. The World Health Organization recommends this approach. It and other organochlorine pesticides have been banned in most countries worldwide because of their persistence in the environment and human toxicity. DDT has become less effective, as resistance was identified in Africa as early as 1955, and by 1972 nineteen species of mosquito worldwide were resistant to DDT.
Amount used
Total pesticide use in agriculture in 2021 was 3.54 million tonnes of active ingredients (Mt), a 4 percent increase with respect to 2020, an 11 percent increase in a decade, and a doubling since 1990. Pesticide use per area of cropland in 2021 was 2.26 kg per hectare (kg/ha), an increase of 4 percent with respect to 2020; use per value of agricultural production was 0.86 kg per thousand international dollars (kg/1000 I$) (+2%); and use per person was 0.45 kg per capita (kg/cap) (+3%). Between 1990 and 2021, these indicators increased by 85 percent, 3 percent, and 33 percent, respectively. Brazil was the world's largest user of pesticides in 2021, with 720 kt of pesticide applications for agricultural use, while the USA (457 kt) was the second-largest user.
Applications per cropland area in 2021 varied widely, from 10.9 kg/hectare in Brazil to 0.8 kg/ha in the Russian Federation. The level in Brazil was about twice as high as in Argentina (5.6 kg/ha) and Indonesia (5.3 kg/ha). Insecticide use in the US has declined by more than half since 1980 (0.6%/yr), mostly due to the near phase-out of organophosphates. In corn fields, the decline was even steeper, due to the switchover to transgenic Bt corn.
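The three intensity indicators quoted above are simple ratios of total use to cropland area, production value, and population. A minimal sketch of the arithmetic, using assumed illustrative denominators rather than official FAO figures:

```python
# FAO-style pesticide-use intensity indicators as plain ratios.
def intensity_indicators(total_kg: float, cropland_ha: float,
                         production_value_1000_intl_usd: float,
                         population: float) -> dict:
    """Return use per cropland area, per production value, and per capita."""
    return {
        "kg_per_ha": total_kg / cropland_ha,
        "kg_per_1000_intl_dollar": total_kg / production_value_1000_intl_usd,
        "kg_per_capita": total_kg / population,
    }

total_kg = 3.54e9  # 3.54 Mt of active ingredients, as quoted for 2021
print(intensity_indicators(
    total_kg,
    cropland_ha=1.57e9,                    # assumed global cropland area
    production_value_1000_intl_usd=4.1e9,  # assumed, in thousands of I$
    population=7.9e9,                      # assumed world population
))
```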
Benefits
Pesticides increase agricultural yields and lower costs. One study found that not using pesticides reduced crop yields by about 10%. Another study, conducted in 1999, found that a ban on pesticides in the United States may result in a rise of food prices, loss of jobs, and an increase in world hunger.
There are two levels of benefits for pesticide use, primary and secondary. Primary benefits are direct gains from the use of pesticides and secondary benefits are effects that are more long-term.
Biological
Controlling pests and plant disease vectors
Improved crop yields
Improved crop/livestock quality
Invasive species controlled
Controlling human/livestock disease vectors and nuisance organisms
Human lives saved and disease reduced. Diseases controlled include malaria, with millions of lives having been saved or enhanced with the use of DDT alone.
Animal lives saved and disease reduced
Controlling organisms that harm other human activities and structures
Drivers view unobstructed
Tree/brush/leaf hazards prevented
Wooden structures protected
Economics
In 2018 world pesticide sales were estimated to be $65 billion, of which 88% was used for agriculture. Generic products accounted for 85% of sales in 2018. One study estimated that every dollar ($1) spent on pesticides for crops yields up to four dollars ($4) in crops which would otherwise be lost to insects, fungi and weeds. In general, farmers benefit from having an increase in crop yield and from being able to grow a variety of crops throughout the year. Consumers of agricultural products also benefit from being able to afford the vast quantities of produce available year-round.
Disadvantages
On the cost side of pesticide use there can be costs to the environment and costs to human health. Pesticides safety education and pesticide applicator regulation are designed to protect the public from pesticide misuse, but do not eliminate all misuse. Reducing the use of pesticides and choosing less toxic pesticides may reduce risks placed on society and the environment from pesticide use.
Health effects
Pesticides may affect health negatively, for example by mimicking hormones, causing reproductive problems, and causing cancer. A 2007 systematic review found that "most studies on non-Hodgkin lymphoma and leukemia showed positive associations with pesticide exposure" and thus concluded that cosmetic use of pesticides should be decreased. There is substantial evidence of associations between organophosphate insecticide exposures and neurobehavioral alterations. Limited evidence also exists for other negative outcomes from pesticide exposure, including neurological effects, birth defects, and fetal death.
The American Academy of Pediatrics recommends limiting exposure of children to pesticides and using safer alternatives:
Pesticides are also found in the majority of U.S. households, with 88 million out of the 121.1 million households indicating that they used some form of pesticide in 2012. As of 2007, there were more than 1,055 active ingredients registered as pesticides, which yield over 20,000 pesticide products that are marketed in the United States.
Owing to inadequate regulation and safety precautions, 99% of pesticide-related deaths occur in developing countries that account for only 25% of pesticide usage.
One study found pesticide self-poisoning the method of choice in one third of suicides worldwide, and recommended, among other things, more restrictions on the types of pesticides that are most harmful to humans.
A 2014 epidemiological review found associations between autism and exposure to certain pesticides, but noted that the available evidence was insufficient to conclude that the relationship was causal.
Occupational exposure among agricultural workers
The World Health Organization and the UN Environment Programme estimate that 3 million agricultural workers in the developing world experience severe poisoning from pesticides each year, resulting in 18,000 deaths. According to one study, as many as 25 million workers in developing countries may suffer mild pesticide poisoning yearly. Other occupational exposures besides agricultural workers, including pet groomers, groundskeepers, and fumigators, may also put individuals at risk of health effects from pesticides.
Pesticide use is widespread in Latin America, as around US$3 billion is spent each year in the region. Records indicate an increase in the frequency of pesticide poisonings over the past two decades. The most common incidents of pesticide poisoning are thought to result from exposure to organophosphate and carbamate insecticides. At-home pesticide use, use of unregulated products, and the role of undocumented workers within the agricultural industry make characterizing true pesticide exposure a challenge. It is estimated that 50–80% of pesticide poisoning cases are unreported.
Underreporting of pesticide poisoning is especially common in areas where agricultural workers are less likely to seek care from a healthcare facility that may be monitoring or tracking the incidence of acute poisoning. The extent of unintentional pesticide poisoning may be much greater than available data suggest, particularly among developing countries. Globally, agriculture and food production remain one of the largest industries. In East Africa, the agricultural industry represents one of the largest sectors of the economy, with nearly 80% of its population relying on agriculture for income. Farmers in these communities rely on pesticide products to maintain high crop yields.
Some East Africa governments are shifting to corporate farming, and opportunities for foreign conglomerates to operate commercial farms have led to more accessible research on pesticide use and exposure among workers. In other areas where large proportions of the population rely on subsistence, small-scale farming, estimating pesticide use and exposure is more difficult.
Pesticide poisoning
Pesticides may exhibit toxic effects on humans and other non-target species, the severity of which depends on the frequency and magnitude of exposure. Toxicity also depends on the rate of absorption, distribution within the body, metabolism, and elimination of compounds from the body. Commonly used pesticides like organophosphates and carbamates act by inhibiting acetylcholinesterase activity, which prevents the breakdown of acetylcholine at the neural synapse. Excess acetylcholine can lead to symptoms like muscle cramps or tremors, confusion, dizziness and nausea. Studies show that farm workers in Ethiopia, Kenya, and Zimbabwe have decreased concentrations of plasma acetylcholinesterase, the enzyme responsible for breaking down acetylcholine acting on synapses throughout the nervous system. Other studies in Ethiopia have observed reduced respiratory function among farm workers who spray crops with pesticides. Numerous exposure pathways for farm workers increase the risk of pesticide poisoning, including dermal absorption while walking through fields and applying products, as well as inhalation exposure.
Measuring exposure to pesticides
There are multiple approaches to measuring a person's exposure to pesticides, each of which provides an estimate of an individual's internal dose. Two broad approaches include measuring biomarkers and markers of biological effect. The former involves taking direct measurements of the parent compound or its metabolites in various types of media: urine, blood, or serum. Biomarkers may include a direct measurement of the compound in the body before it has been biotransformed during metabolism. Other suitable biomarkers may include the metabolites of the parent compound after they have been biotransformed during metabolism. Toxicokinetic data can provide more detailed information on how quickly the compound is metabolized and eliminated from the body, and provide insights into the timing of exposure.
Markers of biological effect provide an estimation of exposure based on cellular activities related to the mechanism of action. For example, many studies investigating exposure to pesticides often involve the quantification of the acetylcholinesterase enzyme at the neural synapse to determine the magnitude of the inhibitory effect of organophosphate and carbamate pesticides.
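A common way to report such a marker of biological effect is percent inhibition of enzyme activity relative to the individual's own pre-exposure baseline. A minimal sketch follows; the 30% action threshold used here is an assumed illustrative cut-off, since monitoring programs set their own:

```python
# Percent inhibition of cholinesterase activity relative to a worker's
# pre-exposure baseline (a marker of organophosphate/carbamate effect).
def percent_inhibition(baseline: float, measured: float) -> float:
    """Inhibition as a percentage of the baseline enzyme activity."""
    return (baseline - measured) / baseline * 100.0

baseline, measured = 12.0, 7.8  # assumed activities, arbitrary units
inhibition = percent_inhibition(baseline, measured)
print(f"{inhibition:.0f}% inhibition")
if inhibition >= 30.0:  # assumed illustrative action threshold
    print("Exceeds threshold: remove from exposure and retest.")
```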
Another method of quantifying exposure involves measuring, at the molecular level, the amount of pesticide interacting with the site of action. These methods are more commonly used for occupational exposures where the mechanism of action is better understood, as described by WHO guidelines published in "Biological Monitoring of Chemical Exposure in the Workplace". Better understanding of how pesticides elicit their toxic effects is needed before this method of exposure assessment can be applied to occupational exposure of agricultural workers.
Alternative methods to assess exposure include questionnaires to discern from participants whether they are experiencing symptoms associated with pesticide poisoning. Self-reported symptoms may include headaches, dizziness, nausea, joint pain, or respiratory symptoms.
Challenges in assessing pesticide exposure
Multiple challenges exist in assessing exposure to pesticides in the general population, and many others that are specific to occupational exposures of agricultural workers. Beyond farm workers, estimating exposure to family members and children presents additional challenges, and may occur through "take-home" exposure from pesticide residues collected on clothing or equipment belonging to parent farm workers and inadvertently brought into the home. Children may also be exposed to pesticides prenatally from mothers who are exposed to pesticides during pregnancy. Characterizing children's exposure resulting from drift of airborne and spray application of pesticides is similarly challenging, yet well documented in developing countries. Because of critical development periods of the fetus and newborn children, these non-working populations are more vulnerable to the effects of pesticides, and may be at increased risk of developing neurocognitive effects and impaired development.
While measuring biomarkers or markers of biological effects may provide more accurate estimates of exposure, collecting these data in the field is often impractical and many methods are not sensitive enough to detect low-level concentrations. Rapid cholinesterase test kits exist to collect blood samples in the field. Conducting large scale assessments of agricultural workers in remote regions of developing countries makes the implementation of these kits a challenge. The cholinesterase assay is a useful clinical tool to assess individual exposure and acute toxicity. Considerable variability in baseline enzyme activity among individuals makes it difficult to compare field measurements of cholinesterase activity to a reference dose to determine health risk associated with exposure. Another challenge in deriving a reference dose is identifying health endpoints that are relevant to exposure. More epidemiological research is needed to identify critical health endpoints, particularly among populations who are occupationally exposed.
Prevention
Minimizing harmful exposure to pesticides can be achieved by proper use of personal protective equipment, adequate reentry times into recently sprayed areas, and effective product labeling for hazardous substances as per FIFRA regulations. Training high-risk populations, including agricultural workers, on the proper use and storage of pesticides, can reduce the incidence of acute pesticide poisoning and potential chronic health effects associated with exposure. Continued research into the human toxic health effects of pesticides serves as a basis for relevant policies and enforceable standards that are health protective to all populations.
Environmental effects
Pesticide use raises a number of environmental concerns. Over 98% of sprayed insecticides and 95% of herbicides reach a destination other than their target species, including non-target species, air, water and soil. Pesticide drift occurs when pesticides suspended in the air as particles are carried by wind to other areas, potentially contaminating them. Pesticides are one of the causes of water pollution, and some pesticides (the persistent organic pollutants, now banned) contribute to soil and flower (pollen, nectar) contamination. Furthermore, pesticide use can adversely impact neighboring agricultural activity, as pests themselves drift to and harm nearby crops that have no pesticide used on them.
In addition, pesticide use reduces invertebrate biodiversity in streams, contributes to pollinator decline, destroys habitat (especially for birds), and threatens endangered species. Pests can develop a resistance to the pesticide (pesticide resistance), necessitating a new pesticide. Alternatively a greater dose of the pesticide can be used to counteract the resistance, although this will cause a worsening of the ambient pollution problem.
The Stockholm Convention on Persistent Organic Pollutants banned all persistent pesticides, in particular DDT and other organochlorine pesticides, which were stable and lipophilic, and thus able to bioaccumulate in the body and the food chain, and which spread throughout the planet. Persistent pesticides are no longer used for agriculture, and will not be approved by the authorities. Because the half-life in soil is long (2–15 years for DDT), residues can still be detected in humans, at levels 5 to 10 times lower than those found in the 1970s.
Pesticides now have to be degradable in the environment. Such degradation of pesticides is due to both innate chemical properties of the compounds and environmental processes or conditions. For example, the presence of halogens within a chemical structure often slows down degradation in an aerobic environment. Adsorption to soil may retard pesticide movement, but also may reduce bioavailability to microbial degraders.
Pesticide contamination in the environment can be monitored through bioindicators such as bee pollinators.
Economics
In one study, the human health and environmental costs due to pesticides in the United States were estimated to be $9.6 billion, offset by about $40 billion in increased agricultural production.
Additional costs include the registration process and the cost of purchasing pesticides, which are typically borne by agrichemical companies and farmers, respectively. The registration process can take several years to complete (there are 70 types of field tests) and can cost $50–70 million for a single pesticide. At the beginning of the 21st century, the United States spent approximately $10 billion on pesticides annually.
Resistance
The use of pesticides inherently entails the risk of resistance developing. Various techniques and procedures of pesticide application can slow the development of resistance, as can some natural features of the target population and surrounding environment.
Alternatives
Alternatives to pesticides are available and include methods of cultivation, use of biological pest controls (such as pheromones and microbial pesticides), genetic engineering (mostly of crops), and methods of interfering with insect breeding. Application of composted yard waste has also been used as a way of controlling pests.
These methods are becoming increasingly popular and often are safer than traditional chemical pesticides. In addition, EPA is registering reduced-risk pesticides in increasing numbers.
Cultivation practices
Cultivation practices include polyculture (growing multiple types of plants), crop rotation, planting crops in areas where the pests that damage them do not live, timing planting according to when pests will be least problematic, and use of trap crops that attract pests away from the real crop. Trap crops have successfully controlled pests in some commercial agricultural systems while reducing pesticide usage. In other systems, trap crops can fail to reduce pest densities at a commercial scale, even when the trap crop works in controlled experiments.
Use of other organisms
Release of other organisms that fight the pest is another example of an alternative to pesticide use. These organisms can include natural predators or parasites of the pests. Biological pesticides based on entomopathogenic fungi, bacteria and viruses causing disease in the pest species can also be used.
Biological control engineering
Interfering with insects' reproduction can be accomplished by sterilizing males of the target species and releasing them, so that they mate with females but do not produce offspring. This technique was first used on the screwworm fly in 1958 and has since been used with the medfly, the tsetse fly, and the gypsy moth. This is a costly and slow approach that only works on some types of insects.
Other alternatives
Other alternatives include "laserweeding" – the use of novel agricultural robots for weed control using lasers.
Push pull strategy
Push-pull technique: intercropping with a "push" crop that repels the pest, and planting a "pull" crop on the boundary that attracts and traps it.
Effectiveness
Some evidence shows that alternatives to pesticides can be equally effective as the use of chemicals. A study of maize fields in northern Florida found that the application of composted yard waste with a high carbon-to-nitrogen ratio to agricultural fields was highly effective at reducing the population of plant-parasitic nematodes and increasing crop yield, with yield increases ranging from 10% to 212%; the observed effects were long-term, often not appearing until the third season of the study. Additional silicon nutrition protects some horticultural crops against fungal diseases almost completely, while insufficient silicon sometimes leads to severe infection even when fungicides are used.
Pesticide resistance is increasing and that may make alternatives more attractive.
Types
Biopesticides
Biopesticides are certain types of pesticides derived from such natural materials as animals, plants, bacteria, and certain minerals. For example, canola oil and baking soda have pesticidal applications and are considered biopesticides. Biopesticides fall into three major classes:
Microbial pesticides which consist of bacteria, entomopathogenic fungi or viruses (and sometimes includes the metabolites that bacteria or fungi produce). Entomopathogenic nematodes are also often classed as microbial pesticides, even though they are multi-cellular.
Biochemical pesticides or herbal pesticides are naturally occurring substances that control (or monitor in the case of pheromones) pests and microbial diseases.
Plant-incorporated protectants (PIPs) have genetic material from other species incorporated into their genetic material (i.e. GM crops). Their use is controversial, especially in many European countries.
By pest type
Pesticides that are related to the type of pests are:
Regulation
International
In many countries, pesticides must be approved for sale and use by a government agency.
Worldwide, 85% of countries have pesticide legislation for the proper storage of pesticides and 51% include provisions to ensure proper disposal of all obsolete pesticides.
Though pesticide regulations differ from country to country, pesticides, and products on which they were used are traded across international borders. To deal with inconsistencies in regulations among countries, delegates to a conference of the United Nations Food and Agriculture Organization adopted an International Code of Conduct on the Distribution and Use of Pesticides in 1985 to create voluntary standards of pesticide regulation for many countries. The Code was updated in 1998 and 2002. The FAO claims that the code has raised awareness about pesticide hazards and decreased the number of countries without restrictions on pesticide use.
Three other efforts to improve regulation of international pesticide trade are the United Nations London Guidelines for the Exchange of Information on Chemicals in International Trade and the United Nations Codex Alimentarius Commission. The former seeks to implement procedures for ensuring that prior informed consent exists between countries buying and selling pesticides, while the latter seeks to create uniform standards for maximum levels of pesticide residues among participating countries.
United States
In the United States, the Environmental Protection Agency (EPA) is responsible for regulating pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Food Quality Protection Act (FQPA).
Studies must be conducted to establish the conditions in which the material is safe to use and the effectiveness against the intended pest(s). The EPA regulates pesticides to ensure that these products do not pose adverse effects to humans or the environment, with an emphasis on the health and safety of children. Pesticides produced before November 1984 continue to be reassessed in order to meet the current scientific and regulatory standards. All registered pesticides are reviewed every 15 years to ensure they meet the proper standards. During the registration process, a label is created. The label contains directions for proper use of the material in addition to safety restrictions. Based on acute toxicity, pesticides are assigned to a Toxicity Class. Pesticides are the most thoroughly tested chemicals after drugs in the United States; those used on food require more than 100 tests to determine a range of potential impacts.
Some pesticides are considered too hazardous for sale to the general public and are designated restricted use pesticides. Only certified applicators, who have passed an exam, may purchase or supervise the application of restricted use pesticides. Records of sales and use are required to be maintained and may be audited by government agencies charged with the enforcement of pesticide regulations. These records must be made available to employees and state or territorial environmental regulatory agencies.
In addition to the EPA, the United States Department of Agriculture (USDA) and the United States Food and Drug Administration (FDA) set standards for the level of pesticide residue that is allowed on or in crops. The EPA looks at what the potential human health and environmental effects might be associated with the use of the pesticide.
In addition, the U.S. EPA uses the National Research Council's four-step process for human health risk assessment: (1) Hazard Identification, (2) Dose-Response Assessment, (3) Exposure Assessment, and (4) Risk Characterization.
In 2013 Kaua'i County (Hawai'i) passed Bill No. 2491 to add an article to Chapter 22 of the county's code relating to pesticides and GMOs. The bill strengthens protections of local communities in Kaua'i where many large pesticide companies test their products.
The first legislation providing federal authority for regulating pesticides was enacted in 1910.
Canada
EU
EU legislation has been approved banning the use of highly toxic pesticides including those that are carcinogenic, mutagenic or toxic to reproduction, those that are endocrine-disrupting, and those that are persistent, bioaccumulative and toxic (PBT) or very persistent and very bioaccumulative (vPvB) and measures have been approved to improve the general safety of pesticides across all EU member states.
In 2023 The Environment Committee of European Parliament approved a decision aiming to reduce pesticide use by 50% (the most hazardous by 65%) by the year 2030 and ensure sustainable use of pesticides (for example use them only as a last resort). The decision also includes measures for providing farmers with alternatives.
Residue
Pesticide residue refers to the pesticides that may remain on or in food after they are applied to food crops. The maximum residue limits (MRL) of pesticides in food are carefully set by the regulatory authorities to ensure, to their best judgement, no health impacts. Regulations such as pre-harvest intervals also often prevent harvest of crop or livestock products if recently treated in order to allow residue concentrations to decrease over time to safe levels before harvest. Exposure of the general population to these residues most commonly occurs through consumption of treated food sources, or being in close contact to areas treated with pesticides such as farms or lawns.
Persistent pesticides are no longer used for agriculture, and will not be approved by the authorities. Because the half life in soil is long (for DDT 2–15 years) residues can still be detected in humans at levels 5 to 10 times lower than found in the 1970s.
Residues are monitored by the authorities. In 2016, over 99% of samples of US produce had no pesticide residue or had residue levels well below the EPA tolerance levels for each pesticide.
See also
Index of pesticide articles
Environmental hazard
Pest control
Pesticide residue
Pesticide standard value
WHO Pesticide Evaluation Scheme
References
Bibliography
Davis, Frederick Rowe. "Pesticides and the perils of synecdoche in the history of science and environmental history." History of Science 57.4 (2019): 469–492.
Davis, Frederick Rowe. Banned: a history of pesticides and the science of toxicology (Yale UP, 2014).
Matthews, Graham A. A history of pesticides (CABI, 2018).
Sources
External links
Pesticides at the World Health Organization (WHO)
Pesticides at the United Nations Environment Programme (UNEP)
Pesticides at the European Commission
Pesticides at the United States Environmental Protection Agency
Chemical substances
Toxic effects of pesticides
Soil contamination
Biocides | Pesticide | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 7,327 | [
"Pesticides",
"Toxicology",
"Biocides",
"Environmental chemistry",
"Materials",
"Soil contamination",
"nan",
"Chemical substances",
"Matter"
] |
48,395 | https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes%20equations | The Navier–Stokes equations ( ) are partial differential equations which describe the motion of viscous fluid substances. They were named after French engineer and physicist Claude-Louis Navier and the Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
The Navier–Stokes equations mathematically express momentum balance for Newtonian fluids and make use of conservation of mass. They are sometimes accompanied by an equation of state relating pressure, temperature and density. They arise from applying Isaac Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term – hence describing viscous flow. The difference between them and the closely related Euler equations is that Navier–Stokes equations take viscosity into account while the Euler equations model only inviscid flow. As a result, the Navier–Stokes equations are parabolic and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable).
The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and many other problems. Coupled with Maxwell's equations, they can be used to model and study magnetohydrodynamics.
The Navier–Stokes equations are also of great interest in a purely mathematical sense. Despite their wide range of practical uses, it has not yet been proven whether smooth solutions always exist in three dimensions—i.e., whether they are infinitely differentiable (or even just bounded) at all points in the domain. This is called the Navier–Stokes existence and smoothness problem. The Clay Mathematics Institute has called this one of the seven most important open problems in mathematics and has offered a US$1 million prize for a solution or a counterexample.
Flow velocity
The solution of the equations is a flow velocity. It is a vector field—to every point in a fluid, at any moment in a time interval, it gives a vector whose direction and magnitude are those of the velocity of the fluid at that point in space and at that moment in time. It is usually studied in three spatial dimensions and one time dimension, although two (spatial) dimensional and steady-state cases are often used as models, and higher-dimensional analogues are studied in both pure and applied mathematics. Once the velocity field is calculated, other quantities of interest such as pressure or temperature may be found using dynamical equations and relations. This is different from what one normally sees in classical mechanics, where solutions are typically trajectories of position of a particle or deflection of a continuum. Studying velocity instead of position makes more sense for a fluid, although for visualization purposes one can compute various trajectories. In particular, the streamlines of a vector field, interpreted as flow velocity, are the paths along which a massless fluid particle would travel. These paths are the integral curves whose derivative at each point is equal to the vector field, and they can represent visually the behavior of the vector field at a point in time.
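As a concrete illustration of streamlines as integral curves, the sketch below traces one numerically for an assumed steady 2D velocity field (a simple rigid rotation, not a Navier–Stokes solution):

```python
# Tracing a streamline as an integral curve of a steady 2D velocity field:
# solve dx/dt = v(x) from a seed point using SciPy's ODE integrator.
import numpy as np
from scipy.integrate import solve_ivp

def velocity(t, xy):
    """Assumed example field: rigid counter-clockwise rotation."""
    x, y = xy
    return [-y, x]

sol = solve_ivp(velocity, t_span=(0.0, 2 * np.pi), y0=[1.0, 0.0],
                dense_output=True, rtol=1e-9)
path = sol.sol(np.linspace(0.0, 2 * np.pi, 200))  # points along the streamline
print(path[:, -1])  # returns close to the seed (1, 0) after one full turn
```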
General continuum equations
The Navier–Stokes momentum equation can be derived as a particular form of the Cauchy momentum equation, whose general convective form is:

$$\rho \frac{D\mathbf{u}}{Dt} = \nabla \cdot \boldsymbol{\sigma} + \rho\,\mathbf{a}$$

By setting the Cauchy stress tensor $\boldsymbol{\sigma}$ to be the sum of a viscosity term $\boldsymbol{\tau}$ (the deviatoric stress) and a pressure term $-p\,\mathbf{I}$ (volumetric stress), we arrive at:

$$\rho \frac{D\mathbf{u}}{Dt} = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho\,\mathbf{a}$$

where
$\frac{D}{Dt}$ is the material derivative, defined as $\frac{\partial}{\partial t} + \mathbf{u} \cdot \nabla$,
$\rho$ is the (mass) density,
$\mathbf{u}$ is the flow velocity,
$\nabla \cdot$ is the divergence,
$p$ is the pressure,
$t$ is time,
$\boldsymbol{\tau}$ is the deviatoric stress tensor, which has order 2,
$\mathbf{a}$ represents body accelerations acting on the continuum, for example gravity, inertial accelerations, electrostatic accelerations, and so on.
In this form, it is apparent that under the assumption of an inviscid fluid – no deviatoric stress – the Cauchy equations reduce to the Euler equations.
Assuming conservation of mass, and using the known properties of divergence and gradient, we can use the mass continuity equation, which expresses the change of mass per unit volume of a homogeneous fluid with respect to space and time (i.e., its material derivative $\frac{D\rho}{Dt}$) over any finite volume ($V$), to represent the change of velocity in fluid media:

$$\frac{D\rho}{Dt} = \frac{\partial \rho}{\partial t} + (\nabla \rho) \cdot \mathbf{u} = -\rho\,(\nabla \cdot \mathbf{u}), \qquad \text{equivalently} \qquad \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\,\mathbf{u}) = 0$$

where
$\frac{D\rho}{Dt}$ is the material derivative of mass per unit volume (density, $\rho$),
$\iiint_V \mathrm{d}V$ is the mathematical operation for the integration throughout the volume ($V$),
$\frac{\partial}{\partial t}$ is the partial derivative mathematical operator,
$\nabla \cdot \mathbf{u}$ is the divergence of the flow velocity ($\mathbf{u}$), which is a scalar field,Note 1
$\nabla \rho$ is the gradient of density ($\rho$), which is the vector derivative of a scalar field,Note 1
Note 1 - Refer to the mathematical operator del represented by the nabla ($\nabla$) symbol.
to arrive at the conservation form of the equations of motion. This is often written:

$$\frac{\partial}{\partial t}(\rho\,\mathbf{u}) + \nabla \cdot (\rho\,\mathbf{u} \otimes \mathbf{u}) = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho\,\mathbf{a}$$

where $\otimes$ is the outer product of the flow velocity ($\mathbf{u}$) with itself:

$$\mathbf{u} \otimes \mathbf{u} = \mathbf{u}\,\mathbf{u}^{\mathrm{T}}$$
The left side of the equation describes acceleration, and may be composed of time-dependent and convective components (also the effects of non-inertial coordinates if present). The right side of the equation is in effect a summation of hydrostatic effects, the divergence of deviatoric stress and body forces (such as gravity).
All non-relativistic balance equations, such as the Navier–Stokes equations, can be derived by beginning with the Cauchy equations and specifying the stress tensor through a constitutive relation. By expressing the deviatoric (shear) stress tensor in terms of viscosity and the fluid velocity gradient, and assuming constant viscosity, the above Cauchy equations will lead to the Navier–Stokes equations below.
Convective acceleration
A significant feature of the Cauchy equation and consequently all other continuum equations (including Euler and Navier–Stokes) is the presence of convective acceleration: the effect of acceleration of a flow with respect to space. While individual fluid particles indeed experience time-dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle.
Compressible flow
Remark: here, the deviatoric stress tensor is denoted $\boldsymbol{\tau}$, as it was in the general continuum equations and in the incompressible flow section.
The compressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient $\nabla\mathbf{u}$, or more simply the rate-of-strain tensor:
$$\boldsymbol{\varepsilon}(\nabla\mathbf{u}) \equiv \tfrac{1}{2}\,\nabla\mathbf{u} + \tfrac{1}{2}\,(\nabla\mathbf{u})^{\mathrm{T}}$$
the deviatoric stress is linear in this variable: $\boldsymbol{\tau}(\boldsymbol{\varepsilon}) = \mathbf{C} : \boldsymbol{\varepsilon}$, where $\mathbf{C}$ is independent of the strain rate tensor and is the fourth-order tensor representing the constant of proportionality, called the viscosity or elasticity tensor, and $:$ is the double-dot product.
the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently $\mathbf{C}$ is an isotropic tensor; furthermore, since the deviatoric stress tensor is symmetric, by Helmholtz decomposition it can be expressed in terms of two scalar Lamé parameters, the second viscosity $\lambda$ and the dynamic viscosity $\mu$, as is usual in linear elasticity:
$$\boldsymbol{\tau} = \lambda\,\operatorname{tr}(\boldsymbol{\varepsilon})\,\mathbf{I} + 2\mu\,\boldsymbol{\varepsilon}$$
where $\mathbf{I}$ is the identity tensor, and $\operatorname{tr}(\boldsymbol{\varepsilon})$ is the trace of the rate-of-strain tensor. So this decomposition can be explicitly defined as:
$$\boldsymbol{\tau} = \lambda\,\operatorname{tr}(\boldsymbol{\varepsilon})\,\mathbf{I} + \mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}}\right)$$
Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow:
$$\operatorname{tr}(\boldsymbol{\varepsilon}) = \nabla \cdot \mathbf{u}$$
Given this relation, and since the trace of the identity tensor in three dimensions is three:
$$\operatorname{tr}(\mathbf{I}) = 3$$
the trace of the stress tensor in three dimensions becomes:
$$\operatorname{tr}(\boldsymbol{\tau}) = (3\lambda + 2\mu)\,(\nabla \cdot \mathbf{u})$$
So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics:
$$\boldsymbol{\tau} = \left(\lambda + \tfrac{2}{3}\mu\right)(\nabla \cdot \mathbf{u})\,\mathbf{I} + \mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}} - \tfrac{2}{3}(\nabla \cdot \mathbf{u})\,\mathbf{I}\right)$$
Introducing the bulk viscosity $\zeta$,
$$\zeta \equiv \lambda + \tfrac{2}{3}\mu,$$
we arrive at the linear constitutive equation in the form usually employed in thermal hydraulics:
$$\boldsymbol{\tau} = \zeta\,(\nabla \cdot \mathbf{u})\,\mathbf{I} + \mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}} - \tfrac{2}{3}(\nabla \cdot \mathbf{u})\,\mathbf{I}\right)$$
which can also be arranged in the other usual form:
$$\boldsymbol{\tau} = \mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}}\right) + \left(\zeta - \tfrac{2}{3}\mu\right)(\nabla \cdot \mathbf{u})\,\mathbf{I}$$
Note that in the compressible case the pressure is no longer proportional to the isotropic stress term, since there is the additional bulk viscosity term:
$$p = -\tfrac{1}{3}\operatorname{tr}(\boldsymbol{\sigma}) + \zeta\,(\nabla \cdot \mathbf{u})$$
and the deviatoric stress tensor is still coincident with the shear stress tensor (i.e. the deviatoric stress in a Newtonian fluid has no normal stress components), and it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity:
$$\boldsymbol{\tau} - \tfrac{1}{3}\operatorname{tr}(\boldsymbol{\tau})\,\mathbf{I} = \mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}} - \tfrac{2}{3}(\nabla \cdot \mathbf{u})\,\mathbf{I}\right)$$
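As a quick symbolic sanity check of the constitutive equation above, the sketch below (using SymPy) verifies the trace identity stated earlier: the $\mu$ bracket is traceless, so $\operatorname{tr}(\boldsymbol{\tau}) = 3\zeta\,(\nabla \cdot \mathbf{u})$, consistent with $\operatorname{tr}(\boldsymbol{\tau}) = (3\lambda + 2\mu)(\nabla \cdot \mathbf{u})$ and $\zeta = \lambda + \tfrac{2}{3}\mu$:

```python
# Symbolic check (SymPy) that the constitutive equation above gives
# tr(tau) = 3*zeta*(div u): the shear bracket is traceless by construction.
import sympy as sp

x, y, z, mu, zeta = sp.symbols("x y z mu zeta")
X = (x, y, z)
u = [sp.Function(f"u{i}")(*X) for i in range(3)]

grad_u = sp.Matrix(3, 3, lambda i, j: sp.diff(u[i], X[j]))  # (grad u)_ij
div_u = grad_u.trace()                                      # div u
I = sp.eye(3)

tau = zeta * div_u * I + mu * (grad_u + grad_u.T - sp.Rational(2, 3) * div_u * I)
print(sp.simplify(tau.trace() - 3 * zeta * div_u))  # prints 0
```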
Both bulk viscosity $\zeta$ and dynamic viscosity $\mu$ need not be constant – in general, they depend on two thermodynamic variables if the fluid contains a single chemical species, say for example, pressure and temperature. Any equation that makes explicit one of these transport coefficients in the conservation variables is called an equation of state.
The most general form of the Navier–Stokes equations then becomes
$$\rho\,\frac{D\mathbf{u}}{Dt} = -\nabla p + \nabla \cdot \left\{\mu\left[\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}} - \tfrac{2}{3}(\nabla \cdot \mathbf{u})\,\mathbf{I}\right]\right\} + \nabla\left[\zeta\,(\nabla \cdot \mathbf{u})\right] + \rho\,\mathbf{a}$$
In index notation, the equation can be written as
$$\rho\,\frac{D u_i}{D t} = -\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left[\mu\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\,\delta_{ij}\,\frac{\partial u_k}{\partial x_k}\right)\right] + \frac{\partial}{\partial x_i}\left(\zeta\,\frac{\partial u_k}{\partial x_k}\right) + \rho\,a_i$$
The corresponding equation in conservation form can be obtained by considering that, given the mass continuity equation, the left side is equivalent to:
$$\rho\,\frac{D\mathbf{u}}{Dt} = \frac{\partial}{\partial t}(\rho\,\mathbf{u}) + \nabla \cdot (\rho\,\mathbf{u} \otimes \mathbf{u})$$
to give finally:
Navier–Stokes momentum equation (conservative form):
$$\frac{\partial}{\partial t}(\rho\,\mathbf{u}) + \nabla \cdot (\rho\,\mathbf{u} \otimes \mathbf{u}) = -\nabla p + \nabla \cdot \left\{\mu\left[\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}} - \tfrac{2}{3}(\nabla \cdot \mathbf{u})\,\mathbf{I}\right] + \zeta\,(\nabla \cdot \mathbf{u})\,\mathbf{I}\right\} + \rho\,\mathbf{a}$$
Apart from its dependence on pressure and temperature, the second viscosity coefficient also depends on the process; that is to say, the second viscosity coefficient is not just a material property. Example: in the case of a sound wave with a definite frequency that alternately compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called the dispersion. In some cases, the second viscosity $\zeta$ can be assumed to be constant, in which case the effect of the volume viscosity is that the mechanical pressure is not equivalent to the thermodynamic pressure, as demonstrated below:
$$\bar{p} \equiv p - \zeta\,\nabla \cdot \mathbf{u}$$
However, this difference is usually neglected most of the time (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming $\zeta = 0$. The assumption of setting $\zeta = 0$ is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for monoatomic gases both experimentally and from the kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect. With the Stokes hypothesis, the Navier–Stokes equations become
$$\rho\,\frac{D\mathbf{u}}{Dt} = -\nabla p + \nabla \cdot \left\{\mu\left[\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}} - \tfrac{2}{3}(\nabla \cdot \mathbf{u})\,\mathbf{I}\right]\right\} + \rho\,\mathbf{a}$$
If the dynamic and bulk viscosities are assumed to be uniform in space, the equations in convective form can be simplified further. By computing the divergence of the stress tensor, since the divergence of the tensor $\nabla\mathbf{u}$ is $\nabla^2\mathbf{u}$ and the divergence of the tensor $(\nabla\mathbf{u})^{\mathrm{T}}$ is $\nabla(\nabla \cdot \mathbf{u})$, one finally arrives at the compressible Navier–Stokes momentum equation:
$$\frac{D\mathbf{u}}{Dt} = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^2\mathbf{u} + \left(\tfrac{1}{3}\nu + \xi\right)\nabla(\nabla \cdot \mathbf{u}) + \mathbf{a}$$
where $\frac{D}{Dt}$ is the material derivative, $\nu = \frac{\mu}{\rho}$ is the shear kinematic viscosity and $\xi = \frac{\zeta}{\rho}$ is the bulk kinematic viscosity. The left-hand side changes in the conservation form of the Navier–Stokes momentum equation.
By bringing the operators acting on the flow velocity to the left side, one also has:
$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u} - \nu\,\nabla^2\mathbf{u} - \left(\tfrac{1}{3}\nu + \xi\right)\nabla(\nabla \cdot \mathbf{u}) = -\frac{1}{\rho}\,\nabla p + \mathbf{a}$$
The convective acceleration term can also be written as
$$\mathbf{u} \cdot \nabla \mathbf{u} = (\nabla \times \mathbf{u}) \times \mathbf{u} + \tfrac{1}{2}\,\nabla \|\mathbf{u}\|^2,$$
where the vector $(\nabla \times \mathbf{u}) \times \mathbf{u}$ is known as the Lamb vector.
For the special case of an incompressible flow, the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow resulting in a solenoidal velocity field with $\nabla \cdot \mathbf{u} = 0$.
Incompressible flow
The incompressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient $\nabla\mathbf{u}$.
the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently the deviatoric stress is an isotropic function of the strain rate; furthermore, since the deviatoric stress tensor is symmetric, it can be expressed in terms of the dynamic viscosity $\mu$:
$$\boldsymbol{\tau} = 2\mu\,\boldsymbol{\varepsilon}$$
where
$$\boldsymbol{\varepsilon} = \tfrac{1}{2}\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}}\right)$$
is the rate-of-strain tensor. So this decomposition can be made explicit as:
$$\boldsymbol{\tau} = \mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}}\right)$$
This constitutive equation is also called the Newtonian law of viscosity.
Dynamic viscosity $\mu$ need not be constant – in incompressible flows it can depend on density and on pressure. Any equation that makes explicit one of these transport coefficients in the conservative variables is called an equation of state.
The divergence of the deviatoric stress in the case of uniform viscosity is given by:
$$\nabla \cdot \boldsymbol{\tau} = 2\mu\,\nabla \cdot \boldsymbol{\varepsilon} = \mu\,\nabla \cdot \left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}}\right) = \mu\,\nabla^2\mathbf{u}$$
because $\nabla \cdot (\nabla\mathbf{u})^{\mathrm{T}} = \nabla(\nabla \cdot \mathbf{u}) = \mathbf{0}$ for an incompressible fluid.
Incompressibility rules out density and pressure waves like sound or shock waves, so this simplification is not useful if these phenomena are of interest. The incompressible flow assumption typically holds well with all fluids at low Mach numbers (say up to about Mach 0.3), such as for modelling air winds at normal temperatures. The incompressible Navier–Stokes equations are best visualized by dividing by the density:
$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u} = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^2\mathbf{u} + \mathbf{a}$$
where $\nu = \frac{\mu}{\rho}$ is called the kinematic viscosity.
By isolating the fluid velocity, one can also state:
$$\left[\frac{\partial}{\partial t} + (\mathbf{u} \cdot \nabla) - \nu\,\nabla^2\right]\mathbf{u} = -\frac{1}{\rho}\,\nabla p + \mathbf{a}$$
If the density is constant throughout the fluid domain, or, in other words, if all fluid elements have the same density $\rho_0$, then we have
$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u} = -\nabla\!\left(\frac{p}{\rho_0}\right) + \nu\,\nabla^2\mathbf{u} + \mathbf{a}$$
where $\frac{p}{\rho_0}$ is called the unit pressure head.
In incompressible flows, the pressure field satisfies the Poisson equation,
$$\nabla^2 p = -\rho\,\nabla \cdot \left(\mathbf{u} \cdot \nabla \mathbf{u}\right)$$
which is obtained by taking the divergence of the momentum equations.
It is well worth observing the meaning of each term (compare to the Cauchy momentum equation):
The higher-order term, namely the shear stress divergence , has simply reduced to the vector Laplacian term . This Laplacian term can be interpreted as the difference between the velocity at a point and the mean velocity in a small surrounding volume. This implies that – for a Newtonian fluid – viscosity operates as a diffusion of momentum, in much the same way as heat conduction. In fact, neglecting the convection term, the incompressible Navier–Stokes equations lead to a vector diffusion equation (namely the Stokes equations), but in general the convection term is present, so the incompressible Navier–Stokes equations belong to the class of convection–diffusion equations.
In the usual case of an external field being a conservative field:
by defining the hydraulic head:
one can finally condense the whole source in one term, arriving to the incompressible Navier–Stokes equation with conservative external field:
The incompressible Navier–Stokes equations with uniform density and viscosity and a conservative external field constitute the fundamental equation of hydraulics. The domain for these equations is commonly a Euclidean space of dimension 3 or less, for which an orthogonal coordinate reference frame is usually set to make explicit the system of scalar partial differential equations to be solved. In three dimensions there are three commonly used orthogonal coordinate systems: Cartesian, cylindrical, and spherical. Expressing the Navier–Stokes vector equation in Cartesian coordinates is quite straightforward and not much influenced by the number of dimensions of the Euclidean space employed; this is also the case for the first-order terms (such as the variation and convection terms) in non-Cartesian orthogonal coordinate systems. But for the higher-order terms (the two coming from the divergence of the deviatoric stress that distinguish the Navier–Stokes equations from the Euler equations) some tensor calculus is required to deduce an expression in non-Cartesian orthogonal coordinate systems.
A special case of the fundamental equation of hydraulics is Bernoulli's equation.
The incompressible Navier–Stokes equation is composite, the sum of two orthogonal equations,
where and are solenoidal and irrotational projection operators satisfying , and and are the non-conservative and conservative parts of the body force. This result follows from the Helmholtz theorem (also known as the fundamental theorem of vector calculus). The first equation is a pressureless governing equation for the velocity, while the second equation for the pressure is a functional of the velocity and is related to the pressure Poisson equation.
The explicit functional form of the projection operator in 3D is found from the Helmholtz Theorem:
with a similar structure in 2D. Thus the governing equation is an integro-differential equation similar to Coulomb and Biot–Savart law, not convenient for numerical computation.
An equivalent weak or variational form of the equation, proved to produce the same velocity solution as the Navier–Stokes equation, is given by,
for divergence-free test functions satisfying appropriate boundary conditions. Here, the projections are accomplished by the orthogonality of the solenoidal and irrotational function spaces. The discrete form of this is eminently suited to finite element computation of divergence-free flow, as we shall see in the next section. There one will be able to address the question "How does one specify pressure-driven (Poiseuille) problems with a pressureless governing equation?".
The absence of pressure forces from the governing velocity equation demonstrates that the equation is not a dynamic one, but rather a kinematic equation where the divergence-free condition serves the role of a conservation equation. This all would seem to refute the frequent statements that the incompressible pressure enforces the divergence-free condition.
Weak form of the incompressible Navier–Stokes equations
Strong form
Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density in a domain
with boundary
being and portions of the boundary where respectively a Dirichlet and a Neumann boundary condition is applied ():
is the fluid velocity, the fluid pressure, a given forcing term, the outward directed unit normal vector to , and the viscous stress tensor defined as:
Let be the dynamic viscosity of the fluid, the second-order identity tensor and the strain-rate tensor defined as:
The functions and are given Dirichlet and Neumann boundary data, while is the initial condition. The first equation is the momentum balance equation, while the second represents the mass conservation, namely the continuity equation.
Assuming constant dynamic viscosity, using the vectorial identity
and exploiting mass conservation, the divergence of the total stress tensor in the momentum equation can also be expressed as:
Moreover, note that the Neumann boundary conditions can be rearranged as:
Weak form
In order to find the weak form of the Navier–Stokes equations, firstly, consider the momentum equation
multiply it by a test function , defined in a suitable space , and integrate both members with respect to the domain :
Integrating the diffusive and pressure terms by parts and using Gauss' theorem:
Using these relations, one gets:
In the same fashion, the continuity equation is multiplied by a test function belonging to a space and integrated in the domain :
The space functions are chosen as follows:
Considering that the test function vanishes on the Dirichlet boundary and considering the Neumann condition, the integral on the boundary can be rearranged as:
Having this in mind, the weak formulation of the Navier–Stokes equations is expressed as:
Discrete velocity
With partitioning of the problem domain and defining basis functions on the partitioned domain, the discrete form of the governing equation is
It is desirable to choose basis functions that reflect the essential feature of incompressible flow – the elements must be divergence-free. While the velocity is the variable of interest, the existence of the stream function or vector potential is necessary by the Helmholtz theorem. Further, to determine fluid flow in the absence of a pressure gradient, one can specify the difference of stream function values across a 2D channel, or the line integral of the tangential component of the vector potential around the channel in 3D, the flow being given by Stokes' theorem. Discussion will be restricted to 2D in the following.
We further restrict discussion to continuous Hermite finite elements which have at least first-derivative degrees-of-freedom. With this, one can draw a large number of candidate triangular and rectangular elements from the plate-bending literature. These elements have derivatives as components of the gradient. In 2D, the gradient and curl of a scalar are clearly orthogonal, given by the expressions,
Adopting continuous plate-bending elements, interchanging the derivative degrees-of-freedom and changing the sign of the appropriate one gives many families of stream function elements.
Taking the curl of the scalar stream function elements gives divergence-free velocity elements. The requirement that the stream function elements be continuous assures that the normal component of the velocity is continuous across element interfaces, all that is necessary for vanishing divergence on these interfaces.
Boundary conditions are simple to apply. The stream function is constant on no-flow surfaces, with no-slip velocity conditions on surfaces.
Stream function differences across open channels determine the flow. No boundary conditions are necessary on open boundaries, though consistent values may be used with some problems. These are all Dirichlet conditions.
The algebraic equations to be solved are simple to set up, but of course are non-linear, requiring iteration of the linearized equations.
Similar considerations apply to three-dimensions, but extension from 2D is not immediate because of the vector nature of the potential, and there exists no simple relation between the gradient and the curl as was the case in 2D.
Pressure recovery
Recovering pressure from the velocity field is easy. The discrete weak equation for the pressure gradient is,
where the test/weight functions are irrotational. Any conforming scalar finite element may be used. However, the pressure gradient field may also be of interest. In this case, one can use scalar Hermite elements for the pressure. For the test/weight functions one would choose the irrotational vector elements obtained from the gradient of the pressure element.
Non-inertial frame of reference
The rotating frame of reference introduces some interesting pseudo-forces into the equations through the material derivative term. Consider a stationary inertial frame of reference , and a non-inertial frame of reference , which is translating with velocity and rotating with angular velocity with respect to the stationary frame. The Navier–Stokes equation observed from the non-inertial frame then becomes
Here and are measured in the non-inertial frame. The first term in the parenthesis represents Coriolis acceleration, the second term is due to centrifugal acceleration, the third is due to the linear acceleration of with respect to and the fourth term is due to the angular acceleration of with respect to .
Other equations
The Navier–Stokes equations are strictly a statement of the balance of momentum. To fully describe fluid flow, more information is needed, how much depending on the assumptions made. This additional information may include boundary data (no-slip, capillary surface, etc.), conservation of mass, balance of energy, and/or an equation of state.
Continuity equation for incompressible fluid
Regardless of the flow assumptions, a statement of the conservation of mass is generally necessary. This is achieved through the mass continuity equation, as discussed above in the "General continuum equations" within this article, as follows:
A fluid medium for which the density (ρ) is constant is called incompressible. Therefore, the rate of change of density with respect to time and the gradient of density are equal to zero. In this case the general equation of continuity, ∂ρ/∂t + ∇·(ρu) = 0, reduces to ρ(∇·u) = 0. Furthermore, since the density ρ is a non-zero constant, the equation is divisible by ρ. Therefore, the continuity equation for an incompressible fluid reduces further to ∇·u = 0. This relationship identifies that the divergence of the flow velocity vector u is equal to zero, which means that for an incompressible fluid the flow velocity field is a solenoidal vector field or a divergence-free vector field. Note that this relationship can be expanded upon using the vector Laplace operator and the vorticity ω = ∇ × u, which for an incompressible fluid gives ∇²u = −∇ × ω.
Stream function for incompressible 2D fluid
Taking the curl of the incompressible Navier–Stokes equation results in the elimination of pressure. This is especially easy to see if 2D Cartesian flow is assumed (like in the degenerate 3D case with and no dependence of anything on ), where the equations reduce to:
Differentiating the first with respect to , the second with respect to and subtracting the resulting equations will eliminate pressure and any conservative force.
For incompressible flow, defining the stream function through
results in mass continuity being unconditionally satisfied (given the stream function is continuous), and then incompressible Newtonian 2D momentum and mass conservation condense into one equation:
where is the 2D biharmonic operator and is the kinematic viscosity, . We can also express this compactly using the Jacobian determinant:
This single equation together with appropriate boundary conditions describes 2D fluid flow, taking only kinematic viscosity as a parameter. Note that the equation for creeping flow results when the left side is assumed zero.
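The unconditional satisfaction of mass continuity by a stream function can be verified numerically. In the sketch below, the stream function ψ = sin(x)cos(y) and the grid are arbitrary illustrative choices; the divergence of the derived velocity field vanishes to within truncation error:

```python
import numpy as np

# Illustrative stream function on a periodic box.
n = 128
x = np.linspace(0, 2 * np.pi, n)
y = np.linspace(0, 2 * np.pi, n)
X, Y = np.meshgrid(x, y, indexing="ij")
psi = np.sin(X) * np.cos(Y)

# Velocity from the stream function: u = dpsi/dy, v = -dpsi/dx.
u = np.gradient(psi, y, axis=1)
v = -np.gradient(psi, x, axis=0)

# Mass continuity du/dx + dv/dy vanishes up to finite-difference error.
div = np.gradient(u, x, axis=0) + np.gradient(v, y, axis=1)
print(np.abs(div).max())  # small, and shrinking as the grid is refined
```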
In axisymmetric flow another stream function formulation, called the Stokes stream function, can be used to describe the velocity components of an incompressible flow with one scalar function.
The incompressible Navier–Stokes equation is a differential algebraic equation, having the inconvenient feature that there is no explicit mechanism for advancing the pressure in time. Consequently, much effort has been expended to eliminate the pressure from all or part of the computational process. The stream function formulation eliminates the pressure but only in two dimensions and at the expense of introducing higher derivatives and elimination of the velocity, which is the primary variable of interest.
Properties
Nonlinearity
The Navier–Stokes equations are nonlinear partial differential equations in the general case and so remain in almost every real situation. In some cases, such as one-dimensional flow and Stokes flow (or creeping flow), the equations can be simplified to linear equations. The nonlinearity makes most problems difficult or impossible to solve and is the main contributor to the turbulence that the equations model.
The nonlinearity is due to convective acceleration, which is an acceleration associated with the change in velocity over position. Hence, any convective flow, whether turbulent or not, will involve nonlinearity. An example of convective but laminar (nonturbulent) flow would be the passage of a viscous fluid (for example, oil) through a small converging nozzle. Such flows, whether exactly solvable or not, can often be thoroughly studied and understood.
Turbulence
Turbulence is the time-dependent chaotic behaviour seen in many fluid flows. It is generally believed that it is due to the inertia of the fluid as a whole: the culmination of time-dependent and convective acceleration; hence flows where inertial effects are small tend to be laminar (the Reynolds number quantifies how much the flow is affected by inertia). It is believed, though not known with certainty, that the Navier–Stokes equations describe turbulence properly.
The numerical solution of the Navier–Stokes equations for turbulent flow is extremely difficult, and due to the significantly different mixing-length scales that are involved in turbulent flow, the stable solution of this requires such a fine mesh resolution that the computational time becomes infeasible for calculation or direct numerical simulation. Attempts to solve turbulent flow using a laminar solver typically result in a time-unsteady solution, which fails to converge appropriately. To counter this, time-averaged equations such as the Reynolds-averaged Navier–Stokes equations (RANS), supplemented with turbulence models, are used in practical computational fluid dynamics (CFD) applications when modeling turbulent flows. Some models include the Spalart–Allmaras, k–ω, k–ε, and SST models, which add a variety of additional equations to bring closure to the RANS equations. Large eddy simulation (LES) can also be used to solve these equations numerically. This approach is computationally more expensive (in time and in computer memory) than RANS, but produces better results because it explicitly resolves the larger turbulent scales.
Applicability
Together with supplemental equations (for example, conservation of mass) and well-formulated boundary conditions, the Navier–Stokes equations seem to model fluid motion accurately; even turbulent flows seem (on average) to agree with real world observations.
The Navier–Stokes equations assume that the fluid being studied is a continuum (it is infinitely divisible and not composed of particles such as atoms or molecules), and is not moving at relativistic velocities. At very small scales or under extreme conditions, real fluids made out of discrete molecules will produce results different from the continuous fluids modeled by the Navier–Stokes equations. For example, capillarity of internal layers in fluids appears for flow with high gradients. For large Knudsen number of the problem, the Boltzmann equation may be a suitable replacement.
Failing that, one may have to resort to molecular dynamics or various hybrid methods.
Another limitation is simply the complicated nature of the equations. Time-tested formulations exist for common fluid families, but the application of the Navier–Stokes equations to less common families tends to result in very complicated formulations and often to open research problems. For this reason, these equations are usually written for Newtonian fluids where the viscosity model is linear; truly general models for the flow of other kinds of fluids (such as blood) do not exist.
Application to specific problems
The Navier–Stokes equations, even when written explicitly for specific fluids, are rather generic in nature and their proper application to specific problems can be very diverse. This is partly because there is an enormous variety of problems that may be modeled, ranging from as simple as the distribution of static pressure to as complicated as multiphase flow driven by surface tension.
Generally, application to specific problems begins with some flow assumptions and initial/boundary condition formulation, this may be followed by scale analysis to further simplify the problem.
Parallel flow
Assume steady, parallel, one-dimensional, non-convective pressure-driven flow between parallel plates, the resulting scaled (dimensionless) boundary value problem is:
The boundary condition is the no slip condition. This problem is easily solved for the flow field:
From this point onward, more quantities of interest can be easily obtained, such as viscous drag force or net flow rate.
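For definiteness, the parabolic plane-Poiseuille profile between plates at y = 0 and y = 1 and the quantities just mentioned can be evaluated directly; the parameter values in this sketch are illustrative assumptions:

```python
import numpy as np

# Steady parallel flow: mu * u''(y) = dp/dx with no-slip at y = 0 and y = 1.
mu, dpdx = 1.0, -1.0                        # illustrative values
y = np.linspace(0.0, 1.0, 201)
u = (-dpdx / (2.0 * mu)) * y * (1.0 - y)    # exact parabolic solution

# Net flow rate per unit width (trapezoidal sum); exact value is -dpdx/(12*mu).
Q = np.sum(0.5 * (u[1:] + u[:-1]) * np.diff(y))
# Viscous wall shear stress at the lower plate; exact value is -dpdx/2.
tau_wall = mu * (u[1] - u[0]) / (y[1] - y[0])
print(Q, tau_wall)
```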
Radial flow
Difficulties may arise when the problem becomes slightly more complicated. A seemingly modest twist on the parallel flow above would be the radial flow between parallel plates; this involves convection and thus non-linearity. The velocity field may be represented by a function that must satisfy:
This ordinary differential equation is what is obtained when the Navier–Stokes equations are written and the flow assumptions applied (additionally, the pressure gradient is solved for). The nonlinear term makes this a very difficult problem to solve analytically (a lengthy implicit solution may be found which involves elliptic integrals and roots of cubic polynomials). Issues with the actual existence of solutions arise for (approximately; this is not ), the parameter being the Reynolds number with appropriately chosen scales. This is an example of flow assumptions losing their applicability, and an example of the difficulty in "high" Reynolds number flows.
Convection
A type of natural convection that can be described by the Navier–Stokes equation is the Rayleigh–Bénard convection. It is one of the most commonly studied convection phenomena because of its analytical and experimental accessibility.
Exact solutions of the Navier–Stokes equations
Some exact solutions to the Navier–Stokes equations exist. Examples of degenerate cases (with the non-linear terms in the Navier–Stokes equations equal to zero) are Poiseuille flow, Couette flow and the oscillatory Stokes boundary layer. But also, more interesting examples, solutions to the full non-linear equations, exist, such as Jeffery–Hamel flow, Von Kármán swirling flow, stagnation point flow, Landau–Squire jet, and the Taylor–Green vortex (Landau & Lifshitz (1987), pp. 75–88). Time-dependent self-similar solutions of the three-dimensional incompressible Navier–Stokes equations in Cartesian coordinates can be given with the help of Kummer's functions with quadratic arguments. For the compressible Navier–Stokes equations the time-dependent self-similar solutions are instead Whittaker functions, again with quadratic arguments, when the polytropic equation of state is used as a closing condition. Note that the existence of these exact solutions does not imply they are stable: turbulence may develop at higher Reynolds numbers.
Under additional assumptions, the component parts can be separated.
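One of the exact solutions named above, the two-dimensional Taylor–Green vortex with unit density, u = cos(x)sin(y)e^(−2νt), v = −sin(x)cos(y)e^(−2νt), p = −(1/4)(cos 2x + cos 2y)e^(−4νt), can be verified symbolically; the short script below is an illustrative check using the sympy library:

```python
import sympy as sp

x, y, t, nu = sp.symbols("x y t nu")
u = sp.cos(x) * sp.sin(y) * sp.exp(-2 * nu * t)
v = -sp.sin(x) * sp.cos(y) * sp.exp(-2 * nu * t)
p = -sp.Rational(1, 4) * (sp.cos(2 * x) + sp.cos(2 * y)) * sp.exp(-4 * nu * t)

# Incompressibility: du/dx + dv/dy = 0.
print(sp.simplify(sp.diff(u, x) + sp.diff(v, y)))            # -> 0

# Residuals of the x- and y-momentum equations (unit density).
res_u = (sp.diff(u, t) + u * sp.diff(u, x) + v * sp.diff(u, y)
         + sp.diff(p, x) - nu * (sp.diff(u, x, 2) + sp.diff(u, y, 2)))
res_v = (sp.diff(v, t) + u * sp.diff(v, x) + v * sp.diff(v, y)
         + sp.diff(p, y) - nu * (sp.diff(v, x, 2) + sp.diff(v, y, 2)))
print(sp.simplify(res_u), sp.simplify(res_v))                # -> 0 0
```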
A three-dimensional steady-state vortex solution
A steady-state example with no singularities comes from considering the flow along the lines of a Hopf fibration. Let be a constant radius of the inner coil. One set of solutions is given by:
for arbitrary constants and . This is a solution in a non-viscous gas (compressible fluid) whose density, velocities and pressure go to zero far from the origin. (Note this is not a solution to the Clay Millennium problem because that refers to incompressible fluids, where the density is a constant, and neither does it deal with the uniqueness of the Navier–Stokes equations with respect to any turbulence properties.) It is also worth pointing out that the components of the velocity vector are exactly those from the Pythagorean quadruple parametrization. Other choices of density and pressure are possible with the same velocity field:
Viscous three-dimensional periodic solutions
Two examples of periodic fully three-dimensional viscous solutions are described in the literature.
These solutions are defined on a three-dimensional torus and are characterized by positive and negative helicity respectively.
The solution with positive helicity is given by:
where is the wave number and the velocity components are normalized so that the average kinetic energy per unit of mass is at .
The pressure field is obtained from the velocity field as (where and are reference values for the pressure and density fields respectively).
Since both the solutions belong to the class of Beltrami flow, the vorticity field is parallel to the velocity and, for the case with positive helicity, is given by .
These solutions can be regarded as a generalization in three dimensions of the classic two-dimensional Taylor–Green vortex.
Wyld diagrams
Wyld diagrams are bookkeeping graphs that correspond to the Navier–Stokes equations via a perturbation expansion of the fundamental continuum mechanics. Similar to the Feynman diagrams in quantum field theory, these diagrams are an extension of Keldysh's technique for nonequilibrium processes in fluid dynamics. In other words, these diagrams assign graphs to the (often) turbulent phenomena in turbulent fluids by allowing correlated and interacting fluid particles to obey stochastic processes associated to pseudo-random functions in probability distributions.
Representations in 3D
Note that the formulas in this section make use of the single-line notation for partial derivatives, where, e.g. means the partial derivative of with respect to , and means the second-order partial derivative of with respect to .
A 2022 paper provides a less costly, dynamical and recurrent solution of the Navier-Stokes equation for 3D turbulent fluid flows. On suitably short time scales, the dynamics of turbulence is deterministic.
Cartesian coordinates
From the general form of the Navier–Stokes, with the velocity vector expanded as , sometimes respectively named , , , we may write the vector equation explicitly,
Note that gravity has been accounted for as a body force, and the values of , , will depend on the orientation of gravity with respect to the chosen set of coordinates.
The continuity equation reads:
When the flow is incompressible, does not change for any fluid particle, and its material derivative vanishes: . The continuity equation is reduced to:
Thus, for the incompressible version of the Navier–Stokes equation the second part of the viscous terms fall away (see Incompressible flow).
This system of four equations comprises the most commonly used and studied form. Though comparatively more compact than other representations, this is still a nonlinear system of partial differential equations for which solutions are difficult to obtain.
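Although this system is difficult to solve in general, simple numerical schemes for it can be sketched compactly. The following is a minimal illustration, not a production solver, of Chorin's fractional-step projection method on a doubly periodic domain, using FFT-based spectral derivatives and explicit Euler time stepping; every parameter choice here is an assumption made for the example:

```python
import numpy as np

n, nu, dt, steps = 64, 0.1, 0.005, 200
h = 2 * np.pi / n
k = 2 * np.pi * np.fft.fftfreq(n, d=h)       # integer wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2                            # symbol of the Laplacian
k2_safe = k2.copy()
k2_safe[0, 0] = 1.0                           # guard the mean (zero) mode

def ddx(f, kk):
    # Spectral derivative of f along the axis encoded by the wavenumber grid kk.
    return np.real(np.fft.ifft2(1j * kk * np.fft.fft2(f)))

X, Y = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
u = np.cos(X) * np.sin(Y)                     # Taylor-Green initial condition
v = -np.sin(X) * np.cos(Y)

for _ in range(steps):
    # 1. Provisional velocity from convection + diffusion, pressure omitted.
    lap_u = np.real(np.fft.ifft2(-k2 * np.fft.fft2(u)))
    lap_v = np.real(np.fft.ifft2(-k2 * np.fft.fft2(v)))
    u_star = u + dt * (-u * ddx(u, kx) - v * ddx(u, ky) + nu * lap_u)
    v_star = v + dt * (-u * ddx(v, kx) - v * ddx(v, ky) + nu * lap_v)
    # 2. Pressure Poisson equation lap(p) = div(u*)/dt, solved spectrally.
    div_hat = 1j * kx * np.fft.fft2(u_star) + 1j * ky * np.fft.fft2(v_star)
    p_hat = -div_hat / (k2_safe * dt)
    p_hat[0, 0] = 0.0                         # pressure fixed up to a constant
    # 3. Projection u = u* - dt*grad(p) restores the divergence-free condition.
    u = u_star - dt * np.real(np.fft.ifft2(1j * kx * p_hat))
    v = v_star - dt * np.real(np.fft.ifft2(1j * ky * p_hat))
```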
Cylindrical coordinates
A change of variables on the Cartesian equations will yield the following momentum equations for , , and
The gravity components will generally not be constants, however for most applications either the coordinates are chosen so that the gravity components are constant or else it is assumed that gravity is counteracted by a pressure field (for example, flow in horizontal pipe is treated normally without gravity and without a vertical pressure gradient). The continuity equation is:
This cylindrical representation of the incompressible Navier–Stokes equations is the second most commonly seen (the first being Cartesian above). Cylindrical coordinates are chosen to take advantage of symmetry, so that a velocity component can disappear. A very common case is axisymmetric flow with the assumption of no tangential velocity (), and the remaining quantities are independent of :
Spherical coordinates
In spherical coordinates, the , , and momentum equations are (note the convention used: is polar angle, or colatitude, ):
Mass continuity will read:
These equations could be (slightly) compacted by, for example, factoring from the viscous terms. However, doing so would undesirably alter the structure of the Laplacian and other quantities.
Navier–Stokes equations use in games
The Navier–Stokes equations are used extensively in video games in order to model a wide variety of natural phenomena. Simulations of small-scale gaseous fluids, such as fire and smoke, are often based on the seminal paper "Real-Time Fluid Dynamics for Games" by Jos Stam, which elaborates one of the methods proposed in Stam's earlier, more famous paper "Stable Fluids" from 1999. Stam proposes stable fluid simulation using a Navier–Stokes solution method from 1968, coupled with an unconditionally stable semi-Lagrangian advection scheme, as first proposed in 1992.
More recent implementations based upon this work run on the game systems graphics processing unit (GPU) as opposed to the central processing unit (CPU) and achieve a much higher degree of performance.
Many improvements have been proposed to Stam's original work, which suffers inherently from high numerical dissipation in both velocity and mass.
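The heart of such schemes is the semi-Lagrangian advection step: each grid point is traced backwards along the velocity field and the advected quantity is sampled at the departure point by interpolation, which is what makes the method unconditionally stable. The sketch below is an illustrative reconstruction of that step, assuming a periodic grid and bilinear interpolation; it is not Stam's original code:

```python
import numpy as np

def advect(q, u, v, dt, h):
    """Semi-Lagrangian advection of the field q by the velocity (u, v):
    trace each grid node backwards over one time step and sample q there
    by bilinear interpolation (periodic domain assumed for simplicity)."""
    n, m = q.shape
    i, j = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    # Departure points of the backward characteristics, in grid units.
    x = (i - dt * u / h) % n
    y = (j - dt * v / h) % m
    i0, j0 = np.floor(x).astype(int), np.floor(y).astype(int)
    i1, j1 = (i0 + 1) % n, (j0 + 1) % m
    s, t = x - i0, y - j0
    # Bilinear interpolation of q at the departure points.
    return ((1 - s) * (1 - t) * q[i0, j0] + s * (1 - t) * q[i1, j0]
            + (1 - s) * t * q[i0, j1] + s * t * q[i1, j1])

# Tiny demo: transport a square blob across a 64 x 64 periodic grid.
n = 64
q = np.zeros((n, n)); q[8:16, 8:16] = 1.0
u = np.ones((n, n)); v = np.ones((n, n))
for _ in range(10):
    q = advect(q, u, v, dt=0.5, h=1.0)
```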
An introduction to interactive fluid simulation can be found in the 2007 ACM SIGGRAPH course, Fluid Simulation for Computer Animation.
See also
Citations
General references
V. Girault and P. A. Raviart. Finite Element Methods for Navier–Stokes Equations: Theory and Algorithms. Springer Series in Computational Mathematics. Springer-Verlag, 1986.
Smits, Alexander J. (2014), A Physical Introduction to Fluid Mechanics, Wiley,
Temam, Roger (1984): Navier–Stokes Equations: Theory and Numerical Analysis, ACM Chelsea Publishing,
Milne-Thomson, L.M. C.B.E (1962), Theoretical Hydrodynamics, Macmillan & Co Ltd.
Tartar, L (2006), An Introduction to Navier Stokes Equation and Oceanography, Springer ISBN 3-540-35743-2
Birkhoff, Garrett (1960), Hydrodynamics, Princeton University Press
Campos, D.(Editor) (2017) Handbook on Navier-Stokes Equations Theory and Applied Analysis, Nova Science Publisher ISBN 978-1-53610-292-5
Doering, C.R. and Gibbon, J.D. (1995) Applied Analysis of the Navier–Stokes Equations, Cambridge University Press, ISBN 0-521-44557-1
Basset, A.B. (1888) Hydrodynamics Volume I and II, Cambridge: Delighton, Bell and Co
Fox, R.W., McDonald, A.T. and Pritchard, P.J. (2004) Introduction to Fluid Mechanics, John Wiley and Sons, ISBN 0-471-2023-2
Foias, C., Manley, O., Rosa, R. and Temam, R. (2004) Navier–Stokes Equations and Turbulence, Cambridge University Press, ISBN 0-521-36032-3
Lions, P-L. (1998) Mathematical Topics in Fluid Mechanics Volume 1 and 2, Clarendon Press, ISBN 0-19-851488-3
Deville, M.O. and Gatski, T. B. (2012) Mathematical Modeling for Complex Fluids and Flows, Springer, ISBN 978-3-642-25294-5
Kochin, N.E. Kibel, I.A. and Roze, N.V. (1964) Theoretical Hydromechanics, John Wiley & Sons, Ltd.
Lamb, H. (1879) Hydrodynamics, Cambridge University Press
External links
Simplified derivation of the Navier–Stokes equations
Three-dimensional unsteady form of the Navier–Stokes equations Glenn Research Center, NASA
Aerodynamics
Computational fluid dynamics
Concepts in physics
Equations of fluid dynamics
Functions of space and time
Partial differential equations
Transport phenomena | Navier–Stokes equations | [
"Physics",
"Chemistry",
"Engineering"
] | 8,555 | [
"Transport phenomena",
"Physical phenomena",
"Equations of fluid dynamics",
"Equations of physics",
"Computational fluid dynamics",
"Functions of space and time",
"Chemical engineering",
"Computational physics",
"Aerodynamics",
"nan",
"Aerospace engineering",
"Spacetime",
"Fluid dynamics"
] |
48,404 | https://en.wikipedia.org/wiki/Ring%20%28mathematics%29 | In mathematics, rings are algebraic structures that generalize fields: multiplication need not be commutative and multiplicative inverses need not exist. Informally, a ring is a set equipped with two binary operations satisfying properties analogous to those of addition and multiplication of integers. Ring elements may be numbers such as integers or complex numbers, but they may also be non-numerical objects such as polynomials, square matrices, functions, and power series.
Formally, a ring is a set endowed with two binary operations called addition and multiplication such that the ring is an abelian group with respect to the addition operator, and the multiplication operator is associative, is distributive over the addition operation, and has a multiplicative identity element. (Some authors define rings without requiring a multiplicative identity and instead call the structure defined above a ring with identity. See .)
Whether a ring is commutative has profound implications on its behavior. Commutative algebra, the theory of commutative rings, is a major branch of ring theory. Its development has been greatly influenced by problems and ideas of algebraic number theory and algebraic geometry. The simplest commutative rings are those that admit division by non-zero elements; such rings are called fields.
Examples of commutative rings include the set of integers with their standard addition and multiplication, the set of polynomials with their addition and multiplication, the coordinate ring of an affine algebraic variety, and the ring of integers of a number field. Examples of noncommutative rings include the ring of real square matrices with , group rings in representation theory, operator algebras in functional analysis, rings of differential operators, and cohomology rings in topology.
The conceptualization of rings spanned the 1870s to the 1920s, with key contributions by Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. They later proved useful in other branches of mathematics such as geometry and analysis.
Definition
A ring is a set equipped with two binary operations + (addition) and ⋅ (multiplication) satisfying the following three sets of axioms, called the ring axioms:
is an abelian group under addition, meaning that:
for all in (that is, is associative).
for all in (that is, is commutative).
There is an element in such that for all in (that is, is the additive identity).
For each in there exists in such that (that is, is the additive inverse of ).
is a monoid under multiplication, meaning that:
for all in (that is, is associative).
There is an element in such that and for all in (that is, is the multiplicative identity).
Multiplication is distributive with respect to addition, meaning that:
for all in (left distributivity).
for all in (right distributivity).
In notation, the multiplication symbol is often omitted, in which case is written as .
Variations on the definition
In the terminology of this article, a ring is defined to have a multiplicative identity, while a structure with the same axiomatic definition but without the requirement for a multiplicative identity is instead called a "rng" (IPA: /rʊŋ/) with a missing "i". For example, the set of even integers with the usual + and ⋅ is a rng, but not a ring. As explained below, many authors apply the term "ring" without requiring a multiplicative identity.
Although ring addition is commutative, ring multiplication is not required to be commutative: need not necessarily equal . Rings that also satisfy commutativity for multiplication (such as the ring of integers) are called commutative rings. Books on commutative algebra or algebraic geometry often adopt the convention that ring means commutative ring, to simplify terminology.
In a ring, multiplicative inverses are not required to exist. A nonzero commutative ring in which every nonzero element has a multiplicative inverse is called a field.
The additive group of a ring is the underlying set equipped with only the operation of addition. Although the definition requires that the additive group be abelian, this can be inferred from the other ring axioms. The proof makes use of the "1", and does not work in a rng. (For a rng, omitting the axiom of commutativity of addition leaves it inferable from the remaining rng assumptions only for elements that are products: ab + cd = cd + ab.)
There are a few authors who use the term "ring" to refer to structures in which there is no requirement for multiplication to be associative. For these authors, every algebra is a "ring".
Illustration
The most familiar example of a ring is the set of all integers consisting of the numbers ..., −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, ...
The axioms of a ring were elaborated as a generalization of familiar properties of addition and multiplication of integers.
Some properties
Some basic properties of a ring follow immediately from the axioms:
The additive identity is unique.
The additive inverse of each element is unique.
The multiplicative identity is unique.
For any element in a ring , one has (zero is an absorbing element with respect to multiplication) and .
If 0 = 1 in a ring R (or more generally, if 0 is a unit element), then R has only one element, and is called the zero ring.
If a ring contains the zero ring as a subring, then itself is the zero ring.
The binomial formula holds for any elements x and y satisfying xy = yx.
Example: Integers modulo 4
Equip the set Z/4Z = {0, 1, 2, 3} with the following operations:
The sum x + y in Z/4Z is the remainder when the integer x + y is divided by 4 (as x + y is always smaller than 8, this remainder is either x + y or x + y − 4). For example, 2 + 3 = 1 and 3 + 3 = 2.
The product x ⋅ y in Z/4Z is the remainder when the integer x ⋅ y is divided by 4. For example, 2 ⋅ 3 = 2 and 3 ⋅ 3 = 1.
Then Z/4Z is a ring: each axiom follows from the corresponding axiom for Z. If x is an integer, the remainder of x when divided by 4 may be considered as an element of Z/4Z, and this element is often denoted by "x mod 4", which is consistent with the notation for 0, 1, 2, 3. The additive inverse of any x in Z/4Z is 4 − x (taken mod 4). For example, the additive inverse of 3 is 1.
has a subring , and if is prime, then has no subrings.
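Because Z/4Z is finite, the ring axioms can be checked exhaustively by machine; the short script below is an illustrative verification (it also exhibits 2 as a zero divisor, since 2 ⋅ 2 = 0 in Z/4Z):

```python
from itertools import product

R = range(4)                         # elements 0, 1, 2, 3 of Z/4Z
add = lambda a, b: (a + b) % 4       # addition modulo 4
mul = lambda a, b: (a * b) % 4       # multiplication modulo 4

for a, b, c in product(R, repeat=3):
    assert add(add(a, b), c) == add(a, add(b, c))          # + associative
    assert add(a, b) == add(b, a)                          # + commutative
    assert mul(mul(a, b), c) == mul(a, mul(b, c))          # * associative
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # left distributive
    assert mul(add(a, b), c) == add(mul(a, c), mul(b, c))  # right distributive

assert all(add(a, 0) == a and mul(a, 1) == a for a in R)   # identities 0 and 1
assert all(any(add(a, b) == 0 for b in R) for a in R)      # additive inverses
print("Z/4Z is a ring; note the zero divisor:", mul(2, 2))  # prints 0
```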
Example: 2-by-2 matrices
The set of 2-by-2 square matrices with entries in a field is
With the operations of matrix addition and matrix multiplication, satisfies the above ring axioms. The element is the multiplicative identity of the ring. If and , then , while ; this example shows that the ring is noncommutative.
More generally, for any ring , commutative or not, and any nonnegative integer , the square matrices of dimension with entries in form a ring; see Matrix ring.
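The noncommutativity can be exhibited with explicit matrices. The two matrices below (the standard matrix units e_12 and e_21, chosen here purely for illustration) satisfy AB ≠ BA:

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])
print(A @ B)   # [[1, 0], [0, 0]]
print(B @ A)   # [[0, 0], [0, 1]] -- different, so the ring is noncommutative
```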
History
Dedekind
The study of rings originated from the theory of polynomial rings and the theory of algebraic integers. In 1871, Richard Dedekind defined the concept of the ring of integers of a number field. In this context, he introduced the terms "ideal" (inspired by Ernst Kummer's notion of ideal number) and "module" and studied their properties. Dedekind did not use the term "ring" and did not define the concept of a ring in a general setting.
Hilbert
The term "Zahlring" (number ring) was coined by David Hilbert in 1892 and published in 1897. In 19th century German, the word "Ring" could mean "association", which is still used today in English in a limited sense (for example, spy ring), so if that were the etymology then it would be similar to the way "group" entered mathematics by being a non-technical word for "collection of related things". According to Harvey Cohn, Hilbert used the term for a ring that had the property of "circling directly back" to an element of itself (in the sense of an equivalence). Specifically, in a ring of algebraic integers, all high powers of an algebraic integer can be written as an integral combination of a fixed set of lower powers, and thus the powers "cycle back". For instance, if then:
and so on; in general, a^n is going to be an integral linear combination of 1, a, and a^2.
Fraenkel and Noether
The first axiomatic definition of a ring was given by Adolf Fraenkel in 1915, but his axioms were stricter than those in the modern definition. For instance, he required every non-zero-divisor to have a multiplicative inverse. In 1921, Emmy Noether gave a modern axiomatic definition of commutative rings (with and without 1) and developed the foundations of commutative ring theory in her paper Idealtheorie in Ringbereichen.
Multiplicative identity and the term "ring"
Fraenkel's axioms for a "ring" included that of a multiplicative identity, whereas Noether's did not.
Most or all books on algebra up to around 1960 followed Noether's convention of not requiring a 1 for a "ring". Starting in the 1960s, it became increasingly common to see books including the existence of 1 in the definition of "ring", especially in advanced books by notable authors such as Artin, Bourbaki, Eisenbud, and Lang. There are also books published as late as 2022 that use the term without the requirement for a 1. Likewise, the Encyclopedia of Mathematics does not require unit elements in rings. In a research article, the authors often specify which definition of ring they use in the beginning of that article.
Gardner and Wiegandt assert that, when dealing with several objects in the category of rings (as opposed to working with a fixed ring), if one requires all rings to have a 1, then some consequences include the lack of existence of infinite direct sums of rings, and that proper direct summands of rings are not subrings. They conclude that "in many, maybe most, branches of ring theory the requirement of the existence of a unity element is not sensible, and therefore unacceptable." Poonen makes the counterargument that the natural notion for rings would be the direct product rather than the direct sum. However, his main argument is that rings without a multiplicative identity are not totally associative, in the sense that they do not contain the product of any finite sequence of ring elements, including the empty sequence.
Authors who follow either convention for the use of the term "ring" may use one of the following terms to refer to objects satisfying the other convention:
to include a requirement for a multiplicative identity: "unital ring", "unitary ring", "unit ring", "ring with unity", "ring with identity", "ring with a unit", or "ring with 1".
to omit a requirement for a multiplicative identity: "rng" or "pseudo-ring", although the latter may be confusing because it also has other meanings.
Basic examples
Commutative rings
The prototypical example is the ring of integers with the two operations of addition and multiplication.
The rational, real and complex numbers are commutative rings of a type called fields.
A unital associative algebra over a commutative ring is itself a ring as well as an -module. Some examples:
The algebra of polynomials with coefficients in .
The algebra of formal power series with coefficients in .
The set of all continuous real-valued functions defined on the real line forms a commutative -algebra. The operations are pointwise addition and multiplication of functions.
Let be a set, and let be a ring. Then the set of all functions from to forms a ring, which is commutative if is commutative.
The ring of quadratic integers, the integral closure of in a quadratic extension of It is a subring of the ring of all algebraic integers.
The ring of profinite integers the (infinite) product of the rings of -adic integers over all prime numbers .
The Hecke ring, the ring generated by Hecke operators.
If is a set, then the power set of becomes a ring if we define addition to be the symmetric difference of sets and multiplication to be intersection. This is an example of a Boolean ring; a small computational illustration follows this list.
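The Boolean ring structure on a power set can be checked directly; in the sketch below, the base set X = {1, 2, 3} and the exhaustive checks are illustrative choices:

```python
from itertools import chain, combinations

X = {1, 2, 3}
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(X, r) for r in range(len(X) + 1))]

add = lambda a, b: a ^ b        # addition: symmetric difference
mul = lambda a, b: a & b        # multiplication: intersection

for a in subsets:
    assert mul(a, a) == a                 # idempotency, A . A = A
    assert add(a, a) == frozenset()       # characteristic 2, A + A = empty set
    assert mul(a, frozenset(X)) == a      # X itself is the multiplicative identity
print("The power set of", X, "is a Boolean ring.")
```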
Noncommutative rings
For any ring R and any natural number n, the set of all square n-by-n matrices with entries from R forms a ring with matrix addition and matrix multiplication as operations. For n = 1, this matrix ring is isomorphic to R itself. For n ≥ 2 (and R not the zero ring), this matrix ring is noncommutative.
If is an abelian group, then the endomorphisms of form a ring, the endomorphism ring of . The operations in this ring are addition and composition of endomorphisms. More generally, if is a left module over a ring , then the set of all -linear maps forms a ring, also called the endomorphism ring and denoted by .
The endomorphism ring of an elliptic curve. It is a commutative ring if the elliptic curve is defined over a field of characteristic zero.
If is a group and is a ring, the group ring of over is a free module over having as basis. Multiplication is defined by the rules that the elements of commute with the elements of and multiply together as they do in the group .
The ring of differential operators (depending on the context). In fact, many rings that appear in analysis are noncommutative. For example, most Banach algebras are noncommutative.
Non-rings
The set of natural numbers with the usual operations is not a ring, since it is not even a group under addition (not all the elements are invertible with respect to addition – for instance, there is no natural number which can be added to 3 to get 0 as a result). There is a natural way to enlarge it to a ring, by including negative numbers to produce the ring of integers. The natural numbers (including 0) form an algebraic structure known as a semiring (which has all of the axioms of a ring excluding that of an additive inverse).
Let be the set of all continuous functions on the real line that vanish outside a bounded interval that depends on the function, with addition as usual but with multiplication defined as convolution: Then is a rng, but not a ring: the Dirac delta function has the property of a multiplicative identity, but it is not a function and hence is not an element of .
Basic concepts
Products and powers
For each nonnegative integer n, given a sequence (a_1, ..., a_n) of n elements of R, one can define the product P_n = a_1 ⋯ a_n recursively: let P_0 = 1 and let P_m = P_{m−1} a_m for 1 ≤ m ≤ n.
As a special case, one can define nonnegative integer powers of an element a of a ring: a^0 = 1 and a^n = a^{n−1} a for n ≥ 1. Then a^{m+n} = a^m a^n for all m, n ≥ 0.
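These recursive definitions translate directly into code; the sketch below (the function names and the explicit identity argument are illustrative choices) computes finite products and powers and checks the law a^(m+n) = a^m a^n in the ring of integers:

```python
def ring_product(elements, one):
    """Product of a finite sequence of ring elements, defined recursively;
    the empty product is the multiplicative identity."""
    result = one                 # P_0 = 1
    for a in elements:           # P_m = P_{m-1} * a_m
        result = result * a
    return result

def ring_power(a, n, one):
    """Nonnegative integer power: a^0 = 1 and a^n = a^(n-1) * a."""
    return ring_product([a] * n, one)

assert ring_power(3, 4, 1) == 81
assert ring_power(3, 2 + 3, 1) == ring_power(3, 2, 1) * ring_power(3, 3, 1)
```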
Elements in a ring
A left zero divisor of a ring is an element in the ring such that there exists a nonzero element of such that . A right zero divisor is defined similarly.
A nilpotent element is an element such that for some . One example of a nilpotent element is a nilpotent matrix. A nilpotent element in a nonzero ring is necessarily a zero divisor.
An idempotent is an element such that . One example of an idempotent element is a projection in linear algebra.
A unit is an element having a multiplicative inverse; in this case the inverse is unique, and is denoted by . The set of units of a ring is a group under ring multiplication; this group is denoted by or or . For example, if is the ring of all square matrices of size over a field, then consists of the set of all invertible matrices of size , and is called the general linear group.
Subring
A subset of is called a subring if any one of the following equivalent conditions holds:
the addition and multiplication of restrict to give operations making a ring with the same multiplicative identity as .
1 ∈ S; and for all x, y in S, the elements x ⋅ y, x + y, and −x are in S.
can be equipped with operations making it a ring such that the inclusion map is a ring homomorphism.
For example, the ring of integers Z is a subring of the field of real numbers and also a subring of the ring of polynomials Z[X] (in both cases, Z contains 1, which is the multiplicative identity of the larger rings). On the other hand, the subset of even integers 2Z does not contain the identity element 1 and thus does not qualify as a subring of Z; one could call 2Z a subrng, however.
An intersection of subrings is a subring. Given a subset of , the smallest subring of containing is the intersection of all subrings of containing , and it is called the subring generated by .
For a ring R, the smallest subring of R is called the characteristic subring of R. It can be generated through addition of copies of 1 and −1. It is possible that n · 1 = 1 + 1 + ... + 1 (n times) can be zero. If n is the smallest positive integer such that this occurs, then n is called the characteristic of R. In some rings, n · 1 is never zero for any positive integer n, and those rings are said to have characteristic zero.
Given a ring , let denote the set of all elements in such that commutes with every element in : for any in . Then is a subring of , called the center of . More generally, given a subset of , let be the set of all elements in that commute with every element in . Then is a subring of , called the centralizer (or commutant) of . The center is the centralizer of the entire ring . Elements or subsets of the center are said to be central in ; they (each individually) generate a subring of the center.
Ideal
Let be a ring. A left ideal of is a nonempty subset of such that for any in and in , the elements and are in . If denotes the -span of , that is, the set of finite sums
then is a left ideal if . Similarly, a right ideal is a subset such that . A subset is said to be a two-sided ideal or simply ideal if it is both a left ideal and right ideal. A one-sided or two-sided ideal is then an additive subgroup of . If is a subset of , then is a left ideal, called the left ideal generated by ; it is the smallest left ideal containing . Similarly, one can consider the right ideal or the two-sided ideal generated by a subset of .
If is in , then and are left ideals and right ideals, respectively; they are called the principal left ideals and right ideals generated by . The principal ideal is written as . For example, the set of all positive and negative multiples of 2 along with 0 forms an ideal of the integers, and this ideal is generated by the integer 2. In fact, every ideal of the ring of integers is principal.
Like a group, a ring is said to be simple if it is nonzero and it has no proper nonzero two-sided ideals. A commutative simple ring is precisely a field.
Rings are often studied with special conditions set upon their ideals. For example, a ring in which there is no strictly increasing infinite chain of left ideals is called a left Noetherian ring. A ring in which there is no strictly decreasing infinite chain of left ideals is called a left Artinian ring. It is a somewhat surprising fact that a left Artinian ring is left Noetherian (the Hopkins–Levitzki theorem). The integers, however, form a Noetherian ring which is not Artinian.
For commutative rings, the ideals generalize the classical notion of divisibility and decomposition of an integer into prime numbers in algebra. A proper ideal of is called a prime ideal if for any elements we have that implies either or Equivalently, is prime if for any ideals , we have that implies either or . This latter formulation illustrates the idea of ideals as generalizations of elements.
Homomorphism
A homomorphism from a ring to a ring is a function from to that preserves the ring operations; namely, such that, for all , in the following identities hold:
If one is working with rngs, then the third condition is dropped.
A ring homomorphism is said to be an isomorphism if there exists an inverse homomorphism to (that is, a ring homomorphism that is an inverse function), or equivalently if it is bijective.
Examples:
The function that maps each integer x to its remainder modulo 4 (a number in {0, 1, 2, 3}) is a homomorphism from the ring Z to the quotient ring Z/4Z ("quotient ring" is defined below).
If is a unit element in a ring , then is a ring homomorphism, called an inner automorphism of .
Let R be a commutative ring of prime characteristic p. Then the map x ↦ x^p is a ring endomorphism of R called the Frobenius homomorphism.
The Galois group of a field extension is the set of all automorphisms of whose restrictions to are the identity.
For any ring , there are a unique ring homomorphism and a unique ring homomorphism .
An epimorphism (that is, right-cancelable morphism) of rings need not be surjective. For example, the unique ring homomorphism Z → Q is an epimorphism.
An algebra homomorphism from a -algebra to the endomorphism algebra of a vector space over is called a representation of the algebra.
Given a ring homomorphism , the set of all elements mapped to 0 by is called the kernel of . The kernel is a two-sided ideal of . The image of , on the other hand, is not always an ideal, but it is always a subring of .
To give a ring homomorphism from a commutative ring to a ring with image contained in the center of is the same as to give a structure of an algebra over to (which in particular gives a structure of an -module).
Quotient ring
The notion of quotient ring is analogous to the notion of a quotient group. Given a ring and a two-sided ideal of , view as subgroup of ; then the quotient ring is the set of cosets of together with the operations
for all in . The ring is also called a factor ring.
As with a quotient group, there is a canonical homomorphism , given by . It is surjective and satisfies the following universal property:
If is a ring homomorphism such that , then there is a unique homomorphism such that
For any ring homomorphism , invoking the universal property with produces a homomorphism that gives an isomorphism from to the image of .
Module
The concept of a module over a ring generalizes the concept of a vector space (over a field) by generalizing from multiplication of vectors with elements of a field (scalar multiplication) to multiplication with elements of a ring. More precisely, given a ring , an -module is an abelian group equipped with an operation (associating an element of to every pair of an element of and an element of ) that satisfies certain axioms. This operation is commonly denoted by juxtaposition and called multiplication. The axioms of modules are the following: for all , in and all , in ,
is an abelian group under addition.
When the ring is noncommutative these axioms define left modules; right modules are defined similarly by writing instead of . This is not only a change of notation, as the last axiom of right modules (that is ) becomes , if left multiplication (by ring elements) is used for a right module.
Basic examples of modules are ideals, including the ring itself.
Although similarly defined, the theory of modules is much more complicated than that of vector space, mainly, because, unlike vector spaces, modules are not characterized (up to an isomorphism) by a single invariant (the dimension of a vector space). In particular, not all modules have a basis.
The axioms of modules imply that , where the first minus denotes the additive inverse in the ring and the second minus the additive inverse in the module. Using this and denoting repeated addition by a multiplication by a positive integer allows identifying abelian groups with modules over the ring of integers.
Any ring homomorphism induces a structure of a module: if is a ring homomorphism, then is a left module over by the multiplication: . If is commutative or if is contained in the center of , the ring is called a -algebra. In particular, every ring is an algebra over the integers.
Constructions
Direct product
Let and be rings. Then the product can be equipped with the following natural ring structure:
for all in and in . The ring with the above operations of addition and multiplication and the multiplicative identity is called the direct product of with . The same construction also works for an arbitrary family of rings: if are rings indexed by a set , then is a ring with componentwise addition and multiplication.
Let R be a commutative ring and I_1, ..., I_n be ideals such that I_i + I_j = R whenever i ≠ j. Then the Chinese remainder theorem says there is a canonical ring isomorphism: R/(I_1 ∩ ⋯ ∩ I_n) ≅ (R/I_1) × ⋯ × (R/I_n), given by x mod (I_1 ∩ ⋯ ∩ I_n) ↦ (x mod I_1, ..., x mod I_n).
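In the ring of integers with pairwise coprime moduli, the inverse of this isomorphism can be computed explicitly; the following sketch (the function name and the closing example are illustrative) reconstructs an integer modulo the product of the moduli from its residues:

```python
from math import gcd, prod

def crt(residues, moduli):
    """Given x mod m_1, ..., x mod m_k for pairwise coprime moduli,
    return x mod (m_1 * ... * m_k) -- the inverse of the canonical map
    Z/(m_1...m_k) -> Z/m_1 x ... x Z/m_k of the Chinese remainder theorem."""
    assert all(gcd(a, b) == 1 for i, a in enumerate(moduli)
               for b in moduli[i + 1:])      # comaximality in Z
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)         # pow(., -1, m): modular inverse
    return x % M

assert crt([2, 3, 2], [3, 5, 7]) == 23       # the classical textbook instance
```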
A "finite" direct product may also be viewed as a direct sum of ideals. Namely, let be rings, the inclusions with the images (in particular are rings though not subrings). Then are ideals of and
as a direct sum of abelian groups (because for abelian groups finite products are the same as direct sums). Clearly the direct sum of such ideals also defines a product of rings that is isomorphic to . Equivalently, the above can be done through central idempotents. Assume that has the above decomposition. Then we can write
By the conditions on one has that are central idempotents and , (orthogonal). Again, one can reverse the construction. Namely, if one is given a partition of 1 in orthogonal central idempotents, then let which are two-sided ideals. If each is not a sum of orthogonal central idempotents, then their direct sum is isomorphic to .
An important application of an infinite direct product is the construction of a projective limit of rings (see below). Another application is a restricted product of a family of rings (cf. adele ring).
Polynomial ring
Given a symbol (called a variable) and a commutative ring , the set of polynomials
forms a commutative ring with the usual addition and multiplication, containing as a subring. It is called the polynomial ring over . More generally, the set of all polynomials in variables forms a commutative ring, containing as subrings.
If is an integral domain, then is also an integral domain; its field of fractions is the field of rational functions. If is a Noetherian ring, then is a Noetherian ring. If is a unique factorization domain, then is a unique factorization domain. Finally, is a field if and only if is a principal ideal domain.
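Arithmetic in a polynomial ring is easy to realize concretely when a polynomial is represented by its coefficient sequence; the sketch below multiplies two integer polynomials by the convolution rule and is an illustrative implementation, not a library API:

```python
def poly_mul(f, g):
    """Multiply polynomials over a commutative ring, given as coefficient
    lists [a0, a1, ...]; the coefficient of x^k in the product is the
    convolution sum over all i + j = k, exactly as in R[x]."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = h[i + j] + a * b
    return h

# (1 + x) * (1 - x) = 1 - x^2 over the integers.
assert poly_mul([1, 1], [1, -1]) == [1, 0, -1]
```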
Let be commutative rings. Given an element of , one can consider the ring homomorphism
(that is, the substitution). If and , then . Because of this, the polynomial is often also denoted by . The image of the map is denoted by ; it is the same thing as the subring of generated by and .
Example: denotes the image of the homomorphism
In other words, it is the subalgebra of generated by and .
Example: let be a polynomial in one variable, that is, an element in a polynomial ring . Then is an element in and is divisible by in that ring. The result of substituting zero to in is , the derivative of at .
The substitution is a special case of the universal property of a polynomial ring. The property states: given a ring homomorphism and an element in there exists a unique ring homomorphism such that and restricts to . For example, choosing a basis, a symmetric algebra satisfies the universal property and so is a polynomial ring.
To give an example, let be the ring of all functions from to itself; the addition and the multiplication are those of functions. Let be the identity function. Each in defines a constant function, giving rise to the homomorphism . The universal property says that this map extends uniquely to
( maps to ) where is the polynomial function defined by . The resulting map is injective if and only if is infinite.
Given a non-constant monic polynomial in , there exists a ring containing such that is a product of linear factors in .
Let be an algebraically closed field. The Hilbert's Nullstellensatz (theorem of zeros) states that there is a natural one-to-one correspondence between the set of all prime ideals in and the set of closed subvarieties of . In particular, many local problems in algebraic geometry may be attacked through the study of the generators of an ideal in a polynomial ring. (cf. Gröbner basis.)
There are some other related constructions. A formal power series ring consists of formal power series
together with multiplication and addition that mimic those for convergent series. It contains as a subring. A formal power series ring does not have the universal property of a polynomial ring; a series may not converge after a substitution. The important advantage of a formal power series ring over a polynomial ring is that it is local (in fact, complete).
Matrix ring and endomorphism ring
Let be a ring (not necessarily commutative). The set of all square matrices of size with entries in forms a ring with the entry-wise addition and the usual matrix multiplication. It is called the matrix ring and is denoted by . Given a right -module , the set of all -linear maps from to itself forms a ring with addition that is of function and multiplication that is of composition of functions; it is called the endomorphism ring of and is denoted by .
As in linear algebra, a matrix ring may be canonically interpreted as an endomorphism ring: This is a special case of the following fact: If is an -linear map, then may be written as a matrix with entries in , resulting in the ring isomorphism:
Any ring homomorphism R → S induces a ring homomorphism M_n(R) → M_n(S).
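As a sanity check on the ring structure of M_n(R), the following sketch (assuming NumPy, which the article does not mention) shows entry-wise addition and the usual matrix product, and that the product is not commutative.

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])   # elements of M_2(Z)
B = np.array([[1, 0], [1, 1]])
print(A + B)                     # entry-wise addition
print(A @ B)                     # [[2, 1], [1, 1]]
print(B @ A)                     # [[1, 1], [1, 2]] -- AB != BA in general
```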
Schur's lemma says that if is a simple right -module, then is a division ring. If is a direct sum of -copies of simple -modules then
The Artin–Wedderburn theorem states any semisimple ring (cf. below) is of this form.
A ring R and the matrix ring M_n(R) over it are Morita equivalent: the category of right modules of R is equivalent to the category of right modules over M_n(R). In particular, two-sided ideals in R correspond one-to-one to two-sided ideals in M_n(R).
Limits and colimits of rings
Let R_1 ⊆ R_2 ⊆ ... be a sequence of rings such that R_i is a subring of R_{i+1} for all i. Then the union (or filtered colimit) of the R_i is the ring defined as follows: it is the disjoint union of all the R_i modulo the equivalence relation x ~ y if and only if x = y in R_i for sufficiently large i.
Examples of colimits:
A polynomial ring in infinitely many variables:
The algebraic closure of finite fields of the same characteristic
The field of formal Laurent series over a field : (it is the field of fractions of the formal power series ring )
The function field of an algebraic variety over a field is where the limit runs over all the coordinate rings of nonempty open subsets (more succinctly it is the stalk of the structure sheaf at the generic point.)
Any commutative ring is the colimit of finitely generated subrings.
A projective limit (or a filtered limit) of rings is defined as follows. Suppose we are given a family of rings , running over positive integers, say, and ring homomorphisms , such that are all the identities and is whenever . Then is the subring of consisting of such that maps to under , .
For an example of a projective limit, see the section on completion below.
Localization
The localization generalizes the construction of the field of fractions of an integral domain to an arbitrary ring and modules. Given a (not necessarily commutative) ring R and a subset S of R, there exists a ring R[S^-1] together with the ring homomorphism R → R[S^-1] that "inverts" S; that is, the homomorphism maps elements in S to unit elements in R[S^-1] and, moreover, any ring homomorphism from R that "inverts" S uniquely factors through R[S^-1]. The ring R[S^-1] is called the localization of R with respect to S. For example, if R is a commutative ring and f an element in R, then the localization R[f^-1] consists of elements of the form r/f^n, r in R, n ≥ 0 (to be precise, R[f^-1] = R[t]/(tf − 1)).
The localization is frequently applied to a commutative ring R with respect to the complement of a prime ideal (or a union of prime ideals) p in R. In that case one often writes R_p for R[(R − p)^-1]; R_p is then a local ring with the maximal ideal pR_p. This is the reason for the terminology "localization". The field of fractions of an integral domain R is the localization of R at the prime ideal zero. If p is a prime ideal of a commutative ring R, then the field of fractions of R/p is the same as the residue field of the local ring R_p and is denoted by k(p).
If is a left -module, then the localization of with respect to is given by a change of rings
The most important properties of localization are the following: when is a commutative ring and a multiplicatively closed subset
is a bijection between the set of all prime ideals in disjoint from and the set of all prime ideals in
running over elements in with partial ordering given by divisibility.
The localization is exact: is exact over whenever is exact over .
Conversely, if is exact for any maximal ideal then is exact.
A remark: localization is no help in proving a global existence. One instance of this is that if two modules are isomorphic at all prime ideals, it does not follow that they are isomorphic. (One way to explain this is that the localization allows one to view a module as a sheaf over prime ideals and a sheaf is inherently a local notion.)
In category theory, a localization of a category amounts to making some morphisms isomorphisms. An element in a commutative ring may be thought of as an endomorphism of any -module. Thus, categorically, a localization of with respect to a subset of is a functor from the category of -modules to itself that sends elements of viewed as endomorphisms to automorphisms and is universal with respect to this property. (Of course, then maps to and -modules map to -modules.)
Completion
Let be a commutative ring, and let be an ideal of .
The completion of R at I is the projective limit of the quotients R/I^n; it is a commutative ring. The canonical homomorphisms from R to the quotients R/I^n induce a homomorphism from R to the completion. The latter homomorphism is injective if R is a Noetherian integral domain and I is a proper ideal, or if R is a Noetherian local ring with maximal ideal I, by Krull's intersection theorem. The construction is especially useful when I is a maximal ideal.
The basic example is the completion of Z at the principal ideal (p) generated by a prime number p; it is called the ring of p-adic integers and is denoted Z_p. The completion can in this case be constructed also from the p-adic absolute value on Q. The p-adic absolute value on Q is a map x ↦ |x| from Q to R given by |n|_p = p^(−v_p(n)), where v_p(n) denotes the exponent of p in the prime factorization of a nonzero integer n into prime numbers (we also put |0|_p = 0 and |m/n|_p = |m|_p / |n|_p). It defines a distance function on Q and the completion of Q as a metric space is denoted by Q_p. It is again a field since the field operations extend to the completion. The subring of Q_p consisting of elements x with |x|_p ≤ 1 is isomorphic to Z_p.
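The p-adic absolute value on the integers can be computed directly from the definition above; this small function is only a sketch for integers.

```python
def p_adic_abs(n: int, p: int) -> float:
    """|n|_p = p**(-v) where v is the exponent of p in n (and |0|_p = 0)."""
    if n == 0:
        return 0.0
    v = 0
    while n % p == 0:    # count how many times p divides n
        n //= p
        v += 1
    return p ** (-v)

print(p_adic_abs(12, 2))   # 0.25, since 12 = 2**2 * 3
print(p_adic_abs(12, 3))   # 0.333..., since 3 divides 12 exactly once
```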
Similarly, the formal power series ring R[[t]] is the completion of R[t] at the ideal (t) (see also Hensel's lemma).
A complete ring has a much simpler structure than a commutative ring. This is due to the Cohen structure theorem, which says, roughly, that a complete local ring tends to look like a formal power series ring or a quotient of it. On the other hand, the interaction between the integral closure and completion has been among the most important aspects that distinguish modern commutative ring theory from the classical one developed by the likes of Noether. Pathological examples found by Nagata led to the reexamination of the roles of Noetherian rings and motivated, among other things, the definition of excellent ring.
Rings with generators and relations
The most general way to construct a ring is by specifying generators and relations. Let be a free ring (that is, free algebra over the integers) with the set of symbols, that is, consists of polynomials with integral coefficients in noncommuting variables that are elements of . A free ring satisfies the universal property: any function from the set to a ring factors through so that is the unique ring homomorphism. Just as in the group case, every ring can be represented as a quotient of a free ring.
Now, we can impose relations among symbols in by taking a quotient. Explicitly, if is a subset of , then the quotient ring of by the ideal generated by is called the ring with generators and relations . If we used a ring, say, as a base ring instead of then the resulting ring will be over . For example, if then the resulting ring will be the usual polynomial ring with coefficients in in variables that are elements of (It is also the same thing as the symmetric algebra over with symbols .)
In the category-theoretic terms, the formation is the left adjoint functor of the forgetful functor from the category of rings to Set (and it is often called the free ring functor.)
Let , be algebras over a commutative ring . Then the tensor product of -modules is an -algebra with multiplication characterized by
Special kinds of rings
Domains
A nonzero ring with no nonzero zero-divisors is called a domain. A commutative domain is called an integral domain. The most important integral domains are principal ideal domains, PIDs for short, and fields. A principal ideal domain is an integral domain in which every ideal is principal. An important class of integral domains that contain a PID is a unique factorization domain (UFD), an integral domain in which every nonunit element is a product of prime elements (an element is prime if it generates a prime ideal.) The fundamental question in algebraic number theory is on the extent to which the ring of (generalized) integers in a number field, where an "ideal" admits prime factorization, fails to be a PID.
Among theorems concerning a PID, the most important one is the structure theorem for finitely generated modules over a principal ideal domain. The theorem may be illustrated by the following application to linear algebra. Let V be a finite-dimensional vector space over a field k and f: V → V a linear map with minimal polynomial q. Then, since k[t] is a unique factorization domain, q factors into powers of distinct irreducible polynomials (that is, prime elements): q = p_1^{e_1} ... p_s^{e_s}.
Letting t ⋅ v = f(v), we make V a k[t]-module. The structure theorem then says V is a direct sum of cyclic modules, each of which is isomorphic to a module of the form k[t]/(p_i^{k_j}). Now, if p_i(t) = t − λ_i, then such a cyclic module has a basis in which the restriction of f is represented by a Jordan matrix. Thus, if, say, k is algebraically closed, then all p_i's are of the form t − λ_i and the above decomposition corresponds to the Jordan canonical form of f.
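The correspondence with the Jordan canonical form can be checked by machine; SymPy's jordan_form is used here as an illustrative assumption.

```python
from sympy import Matrix

M = Matrix([[5, 4], [-1, 1]])   # characteristic polynomial (t - 3)**2
P, J = M.jordan_form()
print(J)   # Matrix([[3, 1], [0, 3]]): a single 2x2 Jordan block, i.e.
           # V is one cyclic k[t]-module of the form k[t]/((t - 3)**2)
```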
In algebraic geometry, UFDs arise because of smoothness. More precisely, a point in a variety (over a perfect field) is smooth if the local ring at the point is a regular local ring. A regular local ring is a UFD.
The following is a chain of class inclusions that describes the relationship between rings, domains and fields:
Division ring
A division ring is a ring such that every non-zero element is a unit. A commutative division ring is a field. A prominent example of a division ring that is not a field is the ring of quaternions. Any centralizer in a division ring is also a division ring. In particular, the center of a division ring is a field. It turns out that every finite domain (in particular, every finite division ring) is a field and thus commutative (Wedderburn's little theorem).
Every module over a division ring is a free module (has a basis); consequently, much of linear algebra can be carried out over a division ring instead of a field.
The study of conjugacy classes figures prominently in the classical theory of division rings; see, for example, the Cartan–Brauer–Hua theorem.
A cyclic algebra, introduced by L. E. Dickson, is a generalization of a quaternion algebra.
Semisimple rings
A semisimple module is a direct sum of simple modules. A semisimple ring is a ring that is semisimple as a left module (or right module) over itself.
Examples
A division ring is semisimple (and simple).
For any division ring D and positive integer n, the matrix ring M_n(D) is semisimple (and simple).
For a field k and finite group G, the group ring k[G] is semisimple if and only if the characteristic of k does not divide the order of G (Maschke's theorem).
Clifford algebras are semisimple.
The Weyl algebra over a field is a simple ring, but it is not semisimple. The same holds for a ring of differential operators in many variables.
Properties
Any module over a semisimple ring is semisimple. (Proof: A free module over a semisimple ring is semisimple and any module is a quotient of a free module.)
For a ring , the following are equivalent:
is semisimple.
is artinian and semiprimitive.
is a finite direct product where each is a positive integer, and each is a division ring (Artin–Wedderburn theorem).
Semisimplicity is closely related to separability. A unital associative algebra A over a field k is said to be separable if the base extension A ⊗ F is semisimple for every field extension F of k. If A happens to be a field, then this is equivalent to the usual definition in field theory (cf. separable extension.)
Central simple algebra and Brauer group
For a field , a -algebra is central if its center is and is simple if it is a simple ring. Since the center of a simple -algebra is a field, any simple -algebra is a central simple algebra over its center. In this section, a central simple algebra is assumed to have finite dimension. Also, we mostly fix the base field; thus, an algebra refers to a -algebra. The matrix ring of size over a ring will be denoted by .
The Skolem–Noether theorem states any automorphism of a central simple algebra is inner.
Two central simple algebras A and B are said to be similar if there are integers n and m such that M_n(A) ≈ M_m(B). Similarity is an equivalence relation. The similarity classes [A] with the multiplication [A][B] = [A ⊗ B] form an abelian group called the Brauer group of k, denoted by Br(k). By the Artin–Wedderburn theorem, a central simple algebra is the matrix ring of a division ring; thus, each similarity class is represented by a unique division ring.
For example, Br(k) is trivial if k is a finite field or an algebraically closed field (more generally, a quasi-algebraically closed field; cf. Tsen's theorem). Br(R) has order 2 (a special case of the theorem of Frobenius). Finally, if k is a nonarchimedean local field (for example, Q_p), then Br(k) = Q/Z through the invariant map.
Now, if F is a field extension of k, then the base extension A ↦ A ⊗ F induces Br(k) → Br(F). Its kernel is denoted by Br(F/k). It consists of [A] such that A ⊗ F is a matrix ring over F (that is, A is split by F). If the extension is finite and Galois, then Br(F/k) is canonically isomorphic to H^2(Gal(F/k), F*).
Azumaya algebras generalize the notion of central simple algebras to a commutative local ring.
Valuation ring
If K is a field, a valuation v is a group homomorphism from the multiplicative group K* to a totally ordered abelian group G such that, for any f, g in K with f + g nonzero, v(f + g) ≥ min(v(f), v(g)). The valuation ring of v is the subring of K consisting of zero and all nonzero f such that v(f) ≥ 0.
Examples:
The field of formal Laurent series k((t)) over a field k comes with the valuation v such that v(f) is the least degree of a nonzero term in f; the valuation ring of v is the formal power series ring k[[t]] (a toy computation of this valuation follows the list).
More generally, given a field and a totally ordered abelian group , let be the set of all functions from to whose supports (the sets of points at which the functions are nonzero) are well ordered. It is a field with the multiplication given by convolution: It also comes with the valuation such that is the least element in the support of . The subring consisting of elements with finite support is called the group ring of (which makes sense even if is not commutative). If is the ring of integers, then we recover the previous example (by identifying with the series whose th coefficient is .)
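Returning to the first example, a Laurent series can be modeled as a finitely supported map from exponents to coefficients, and the valuation read off as the least exponent carrying a nonzero coefficient; the dict representation is an assumption of this sketch.

```python
def v(series: dict) -> int:
    """Valuation of a nonzero formal Laurent series {exponent: coefficient}."""
    return min(e for e, c in series.items() if c != 0)

f = {-2: 3, 0: 1, 5: 7}   # 3*t**-2 + 1 + 7*t**5
print(v(f))               # -2, so f lies outside the valuation ring k[[t]]
g = {0: 1, 3: -4}         # 1 - 4*t**3
print(v(g))               # 0, so g lies in k[[t]]
```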
Rings with extra structure
A ring may be viewed as an abelian group (by using the addition operation), with extra structure: namely, ring multiplication. In the same way, there are other mathematical objects which may be considered as rings with extra structure. For example:
An associative algebra is a ring that is also a vector space over a field k such that the scalar multiplication is compatible with the ring multiplication. For instance, the set of n-by-n matrices over the real field has dimension n^2 as a real vector space.
A ring is a topological ring if its set of elements is given a topology which makes the addition map () and the multiplication map to be both continuous as maps between topological spaces (where inherits the product topology or any other product in the category). For example, -by- matrices over the real numbers could be given either the Euclidean topology, or the Zariski topology, and in either case one would obtain a topological ring.
A λ-ring is a commutative ring R together with operations λ^n: R → R that are like n-th exterior powers: λ^n(x + y) = Σ λ^i(x) λ^{n−i}(y), the sum running over i from 0 to n.
For example, Z is a λ-ring with λ^n(x) = C(x, n), the binomial coefficients. The notion plays a central role in the algebraic approach to the Riemann–Roch theorem. (A one-line check of the binomial-coefficient operations follows this list.)
A totally ordered ring is a ring with a total ordering that is compatible with ring operations.
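The promised check of the λ-ring structure on Z: the operations are just binomial coefficients.

```python
import math

def lam(n: int, x: int) -> int:
    """lambda^n(x) = C(x, n) in the lambda-ring Z (nonnegative x here)."""
    return math.comb(x, n)

print(lam(2, 5))   # 10 = C(5, 2)
print(lam(0, 7))   # 1, as lambda^0 is constantly 1
```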
Some examples of the ubiquity of rings
Many different kinds of mathematical objects can be fruitfully analyzed in terms of some associated ring.
Cohomology ring of a topological space
To any topological space X one can associate its integral cohomology ring H*(X; Z),
a graded ring. There are also homology groups of a space, and indeed these were defined first, as a useful tool for distinguishing between certain pairs of topological spaces, like the spheres and tori, for which the methods of point-set topology are not well-suited. Cohomology groups were later defined in terms of homology groups in a way which is roughly analogous to the dual of a vector space. To know each individual integral homology group is essentially the same as knowing each individual integral cohomology group, because of the universal coefficient theorem. However, the advantage of the cohomology groups is that there is a natural product, which is analogous to the observation that one can multiply pointwise a k-multilinear form and an l-multilinear form to get a (k + l)-multilinear form.
The ring structure in cohomology provides the foundation for characteristic classes of fiber bundles, intersection theory on manifolds and algebraic varieties, Schubert calculus and much more.
Burnside ring of a group
To any group is associated its Burnside ring which uses a ring to describe the various ways the group can act on a finite set. The Burnside ring's additive group is the free abelian group whose basis is the set of transitive actions of the group and whose addition is the disjoint union of the action. Expressing an action in terms of the basis is decomposing an action into its transitive constituents. The multiplication is easily expressed in terms of the representation ring: the multiplication in the Burnside ring is formed by writing the tensor product of two permutation modules as a permutation module. The ring structure allows a formal way of subtracting one action from another. Since the Burnside ring is contained as a finite index subring of the representation ring, one can pass easily from one to the other by extending the coefficients from integers to the rational numbers.
Representation ring of a group ring
To any group ring or Hopf algebra is associated its representation ring or "Green ring". The representation ring's additive group is the free abelian group whose basis are the indecomposable modules and whose addition corresponds to the direct sum. Expressing a module in terms of the basis is finding an indecomposable decomposition of the module. The multiplication is the tensor product. When the algebra is semisimple, the representation ring is just the character ring from character theory, which is more or less the Grothendieck group given a ring structure.
Function field of an irreducible algebraic variety
To any irreducible algebraic variety is associated its function field. The points of an algebraic variety correspond to valuation rings contained in the function field and containing the coordinate ring. The study of algebraic geometry makes heavy use of commutative algebra to study geometric concepts in terms of ring-theoretic properties. Birational geometry studies maps between the subrings of the function field.
Face ring of a simplicial complex
Every simplicial complex has an associated face ring, also called its Stanley–Reisner ring. This ring reflects many of the combinatorial properties of the simplicial complex, so it is of particular interest in algebraic combinatorics. In particular, the algebraic geometry of the Stanley–Reisner ring was used to characterize the numbers of faces in each dimension of simplicial polytopes.
Category-theoretic description
Every ring can be thought of as a monoid in Ab, the category of abelian groups (thought of as a monoidal category under the tensor product of -modules). The monoid action of a ring on an abelian group is simply an -module. Essentially, an -module is a generalization of the notion of a vector space – where rather than a vector space over a field, one has a "vector space over a ring".
Let be an abelian group and let be its endomorphism ring (see above). Note that, essentially, is the set of all morphisms of , where if is in , and is in , the following rules may be used to compute and :
where as in is addition in , and function composition is denoted from right to left. Therefore, associated to any abelian group, is a ring. Conversely, given any ring, , is an abelian group. Furthermore, for every in , right (or left) multiplication by gives rise to a morphism of , by right (or left) distributivity. Let . Consider those endomorphisms of , that "factor through" right (or left) multiplication of . In other words, let be the set of all morphisms of , having the property that . It was seen that every in gives rise to a morphism of : right multiplication by . It is in fact true that this association of any element of , to a morphism of , as a function from to , is an isomorphism of rings. In this sense, therefore, any ring can be viewed as the endomorphism ring of some abelian -group (by -group, it is meant a group with being its set of operators). In essence, the most general form of a ring, is the endomorphism group of some abelian -group.
Any ring can be seen as a preadditive category with a single object. It is therefore natural to consider arbitrary preadditive categories to be generalizations of rings. And indeed, many definitions and theorems originally given for rings can be translated to this more general context. Additive functors between preadditive categories generalize the concept of ring homomorphism, and ideals in additive categories can be defined as sets of morphisms closed under addition and under composition with arbitrary morphisms.
Generalization
Algebraists have defined structures more general than rings by weakening or dropping some of ring axioms.
Rng
A rng is the same as a ring, except that the existence of a multiplicative identity is not assumed.
Nonassociative ring
A nonassociative ring is an algebraic structure that satisfies all of the ring axioms except the associative property and the existence of a multiplicative identity. A notable example is a Lie algebra. There exists some structure theory for such algebras that generalizes the analogous results for Lie algebras and associative algebras.
Semiring
A semiring (sometimes rig) is obtained by weakening the assumption that (R, +) is an abelian group to the assumption that (R, +) is a commutative monoid, and adding the axiom that 0 ⋅ a = a ⋅ 0 = 0 for all a in R (since it no longer follows from the other axioms).
Examples:
the non-negative integers with ordinary addition and multiplication;
the tropical semiring.
Other ring-like objects
Ring object in a category
Let be a category with finite products. Let pt denote a terminal object of (an empty product). A ring object in is an object equipped with morphisms (addition), (multiplication), (additive identity), (additive inverse), and (multiplicative identity) satisfying the usual ring axioms. Equivalently, a ring object is an object equipped with a factorization of its functor of points through the category of rings:
Ring scheme
In algebraic geometry, a ring scheme over a base scheme S is a ring object in the category of S-schemes. One example is the ring scheme W_n over Spec Z, which for any commutative ring A returns the ring W_n(A) of p-isotypic Witt vectors of length n over A.
Ring spectrum
In algebraic topology, a ring spectrum is a spectrum together with a multiplication and a unit map from the sphere spectrum , such that the ring axiom diagrams commute up to homotopy. In practice, it is common to define a ring spectrum as a monoid object in a good category of spectra such as the category of symmetric spectra.
See also
Algebra over a commutative ring
Categorical ring
Category of rings
Glossary of ring theory
Non-associative algebra
Ring of sets
Semiring
Spectrum of a ring
Simplicial commutative ring
Special types of rings:
Boolean ring
Dedekind ring
Differential ring
Exponential ring
Finite ring
Lie ring
Local ring
Noetherian and artinian rings
Ordered ring
Poisson ring
Reduced ring
Regular ring
Ring of periods
SBI ring
Valuation ring and discrete valuation ring
Notes
Citations
References
General references
Special references
Primary sources
Historical references
Bronshtein, I. N. and Semendyayev, K. A. (2004) Handbook of Mathematics, 4th ed. New York: Springer-Verlag .
History of ring theory at the MacTutor Archive
Faith, Carl (1999) Rings and things and a fine array of twentieth century associative algebra. Mathematical Surveys and Monographs, 65. American Mathematical Society .
Itô, K. editor (1986) "Rings." §368 in Encyclopedic Dictionary of Mathematics, 2nd ed., Vol. 2. Cambridge, MA: MIT Press.
Algebraic structures
Ring theory | Ring (mathematics) | [
"Mathematics"
] | 11,457 | [
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
48,636 | https://en.wikipedia.org/wiki/Partition%20of%20unity | In mathematics, a partition of unity of a topological space X is a set R of continuous functions from X to the unit interval [0, 1] such that for every point x in X:
there is a neighbourhood of x where all but a finite number of the functions of R are 0, and
the sum of all the function values at x is 1, i.e., Σ_{ρ ∈ R} ρ(x) = 1.
Partitions of unity are useful because they often allow one to extend local constructions to the whole space. They are also important in the interpolation of data, in signal processing, and the theory of spline functions.
Existence
The existence of partitions of unity assumes two distinct forms:
Given any open cover (U_i), i in I, of a space, there exists a partition (ρ_i) indexed over the same set I such that supp ρ_i ⊆ U_i. Such a partition is said to be subordinate to the open cover (U_i).
If the space is locally compact, given any open cover (U_i), i in I, of a space, there exists a partition (ρ_j) indexed over a possibly distinct index set J such that each ρ_j has compact support and for each j in J, supp ρ_j ⊆ U_i for some i in I.
Thus one chooses either to have the supports indexed by the open cover, or compact supports. If the space is compact, then there exist partitions satisfying both requirements.
A finite open cover always has a continuous partition of unity subordinated to it, provided the space is locally compact and Hausdorff.
Paracompactness of the space is a necessary condition to guarantee the existence of a partition of unity subordinate to any open cover. Depending on the category to which the space belongs, it may also be a sufficient condition. The construction uses mollifiers (bump functions), which exist in continuous and smooth manifolds, but not in analytic manifolds. Thus for an open cover of an analytic manifold, an analytic partition of unity subordinate to that open cover generally does not exist. See analytic continuation.
If (ρ_i) and (τ_j) are partitions of unity for spaces X and Y, respectively, then the set of all pairwise products ρ_i ⊗ τ_j is a partition of unity for the cartesian product space X × Y. The tensor product of functions acts as (ρ_i ⊗ τ_j)(x, y) = ρ_i(x) τ_j(y).
Example
We can construct a partition of unity on the circle S^1 by looking at a chart on the complement of a point p of S^1 sending S^1 − {p} to R with center q. Now, let Φ be a bump function on R defined by Φ(x) = exp(−1/(1 − x^2)) for |x| < 1 and Φ(x) = 0 otherwise; then both this function and 1 − Φ can be extended uniquely onto S^1 by setting Φ(p) = 0. Then the set {Φ, 1 − Φ} forms a partition of unity over S^1.
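The bump function used above is easy to evaluate; the following sketch just verifies that it is positive inside (−1, 1) and vanishes outside, which is what allows the extension to the whole circle.

```python
import math

def bump(x: float) -> float:
    """exp(-1/(1 - x**2)) inside (-1, 1), zero outside."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

print(bump(0.0))    # about 0.3679 (= e**-1)
print(bump(0.99))   # tiny but positive
print(bump(1.0))    # 0.0 -- flat to all orders at the boundary
```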
Variant definitions
Sometimes a less restrictive definition is used: the sum of all the function values at a particular point is only required to be positive, rather than 1, for each point in the space. However, given such a set of functions (ψ_i) one can obtain a partition of unity in the strict sense by dividing by the sum; the partition becomes (σ_i) where σ_i(x) = ψ_i(x) / Σ_j ψ_j(x), which is well defined since at each point only a finite number of terms are nonzero. Even further, some authors drop the requirement that the supports be locally finite, requiring only that Σ_i ψ_i(x) < ∞ for all x.
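The normalization trick in the paragraph above is mechanical; here is a sketch for a finite family of functions with pointwise-positive sum.

```python
def normalize(phis):
    """Turn nonnegative functions with positive pointwise sum into a
    partition of unity in the strict sense."""
    def sigma(i):
        return lambda x: phis[i](x) / sum(phi(x) for phi in phis)
    return [sigma(i) for i in range(len(phis))]

phis = [lambda x: 1.0, lambda x: x * x]   # pointwise sum 1 + x**2 > 0
sigmas = normalize(phis)
print(sum(s(2.0) for s in sigmas))        # 1.0, at every point
```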
In the field of operator algebras, a partition of unity is composed of projections p_i with Σ_i p_i = 1. In the case of C*-algebras, it can be shown that the entries are pairwise-orthogonal: p_i p_j = 0 for i ≠ j.
Note that it is not the case in a general *-algebra that the entries of a partition of unity are pairwise-orthogonal.
If a is a normal element of a unital C*-algebra A, and a has finite spectrum {λ_1, ..., λ_n}, then the projections p_i in the spectral decomposition: a = Σ_i λ_i p_i
form a partition of unity.
In the field of compact quantum groups, the rows and columns of the fundamental representation of a quantum permutation group form partitions of unity.
Applications
A partition of unity can be used to define the integral (with respect to a volume form) of a function defined over a manifold: one first defines the integral of a function whose support is contained in a single coordinate patch of the manifold; then one uses a partition of unity to define the integral of an arbitrary function; finally one shows that the definition is independent of the chosen partition of unity.
A partition of unity can be used to show the existence of a Riemannian metric on an arbitrary manifold.
Method of steepest descent employs a partition of unity to construct asymptotics of integrals.
Linkwitz–Riley filter is an example of practical implementation of partition of unity to separate input signal into two output signals containing only high- or low-frequency components.
The Bernstein polynomials of a fixed degree m are a family of m + 1 linearly independent single-variable polynomials that are a partition of unity for the unit interval [0, 1] (a numerical check follows this list).
The weak Hilbert Nullstellensatz asserts that if f_1, ..., f_n are polynomials with no common vanishing points in C^n, then there are polynomials g_1, ..., g_n with f_1 g_1 + ... + f_n g_n = 1. That is, the products f_i g_i form a polynomial partition of unity subordinate to the Zariski-open cover by the complements of the zero sets of the f_i.
Partitions of unity are used to establish global smooth approximations for Sobolev functions in bounded domains.
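Here is the numerical check promised in the Bernstein-polynomial item; the binomial theorem guarantees the sum is exactly 1.

```python
import math

def bernstein(k: int, m: int, x: float) -> float:
    """B_{k,m}(x) = C(m, k) * x**k * (1 - x)**(m - k)."""
    return math.comb(m, k) * x**k * (1 - x)**(m - k)

m, x = 4, 0.3
print(sum(bernstein(k, m, x) for k in range(m + 1)))   # 1.0
```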
See also
Gluing axiom
Fine sheaf
References
External links
General information on partition of unity at MathWorld
Differential topology
Topology | Partition of unity | [
"Physics",
"Mathematics"
] | 981 | [
"Topology",
"Space",
"Differential topology",
"Geometry",
"Spacetime"
] |
49,090 | https://en.wikipedia.org/wiki/Ohm%27s%20law | Ohm's law states that the electric current through a conductor between two points is directly proportional to the voltage across the two points. Introducing the constant of proportionality, the resistance, one arrives at the three mathematical equations used to describe this relationship: V = IR, I = V/R, and R = V/I,
where I is the current through the conductor, V is the voltage measured across the conductor and R is the resistance of the conductor. More specifically, Ohm's law states that the R in this relation is constant, independent of the current. If the resistance is not constant, the previous equation cannot be called Ohm's law, but it can still be used as a definition of static/DC resistance. Ohm's law is an empirical relation which accurately describes the conductivity of the vast majority of electrically conductive materials over many orders of magnitude of current. However, some materials do not obey Ohm's law; these are called non-ohmic.
The law was named after the German physicist Georg Ohm, who, in a treatise published in 1827, described measurements of applied voltage and current through simple electrical circuits containing various lengths of wire. Ohm explained his experimental results by a slightly more complex equation than the modern form above (see below).
In physics, the term Ohm's law is also used to refer to various generalizations of the law; for example the vector form of the law used in electromagnetics and material science: J = σE,
where J is the current density at a given location in a resistive material, E is the electric field at that location, and σ (sigma) is a material-dependent parameter called the conductivity, defined as the inverse of resistivity ρ (rho). This reformulation of Ohm's law is due to Gustav Kirchhoff.
History
In January 1781, before Georg Ohm's work, Henry Cavendish experimented with Leyden jars and glass tubes of varying diameter and length filled with salt solution. He measured the current by noting how strong a shock he felt as he completed the circuit with his body. Cavendish wrote that the "velocity" (current) varied directly as the "degree of electrification" (voltage). He did not communicate his results to other scientists at the time, and his results were unknown until James Clerk Maxwell published them in 1879.
Francis Ronalds delineated "intensity" (voltage) and "quantity" (current) for the dry pile—a high voltage source—in 1814 using a gold-leaf electrometer. He found for a dry pile that the relationship between the two parameters was not proportional under certain meteorological conditions.
Ohm did his work on resistance in the years 1825 and 1826, and published his results in 1827 as the book Die galvanische Kette, mathematisch bearbeitet ("The galvanic circuit investigated mathematically"). He drew considerable inspiration from Joseph Fourier's work on heat conduction in the theoretical explanation of his work. For experiments, he initially used voltaic piles, but later used a thermocouple as this provided a more stable voltage source in terms of internal resistance and constant voltage. He used a galvanometer to measure current, and knew that the voltage between the thermocouple terminals was proportional to the junction temperature. He then added test wires of varying length, diameter, and material to complete the circuit. He found that his data could be modeled through the equation
where x was the reading from the galvanometer, ℓ was the length of the test conductor, a depended on the thermocouple junction temperature, and b was a constant of the entire setup. From this, Ohm determined his law of proportionality and published his results.
In modern notation we would write, I = ℰ/(r + R),
where ℰ is the open-circuit emf of the thermocouple, r is the internal resistance of the thermocouple and R is the resistance of the test wire. In terms of the length ℓ of the wire this becomes, I = ℰ/(r + ℛℓ),
where ℛ is the resistance of the test wire per unit length. Thus, Ohm's coefficients are, a = ℰ/ℛ and b = r/ℛ.
Ohm's law was probably the most important of the early quantitative descriptions of the physics of electricity. We consider it almost obvious today. When Ohm first published his work, this was not the case; critics reacted to his treatment of the subject with hostility. They called his work a "web of naked fancies" and the Minister of Education proclaimed that "a professor who preached such heresies was unworthy to teach science." The prevailing scientific philosophy in Germany at the time asserted that experiments need not be performed to develop an understanding of nature because nature is so well ordered, and that scientific truths may be deduced through reasoning alone. Also, Ohm's brother Martin, a mathematician, was battling the German educational system. These factors hindered the acceptance of Ohm's work, and his work did not become widely accepted until the 1840s. However, Ohm received recognition for his contributions to science well before he died.
In the 1850s, Ohm's law was widely known and considered proved. Alternatives such as "Barlow's law", were discredited, in terms of real applications to telegraph system design, as discussed by Samuel F. B. Morse in 1855.
The electron was discovered in 1897 by J. J. Thomson, and it was quickly realized that it was the particle (charge carrier) that carried electric currents in electric circuits. In 1900, the first (classical) model of electrical conduction, the Drude model, was proposed by Paul Drude, which finally gave a scientific explanation for Ohm's law. In this model, a solid conductor consists of a stationary lattice of atoms (ions), with conduction electrons moving randomly in it. A voltage across a conductor causes an electric field, which accelerates the electrons in the direction of the electric field, causing a drift of electrons which is the electric current. However the electrons collide with atoms which causes them to scatter and randomizes their motion, thus converting kinetic energy to heat (thermal energy). Using statistical distributions, it can be shown that the average drift velocity of the electrons, and thus the current, is proportional to the electric field, and thus the voltage, over a wide range of voltages.
The development of quantum mechanics in the 1920s modified this picture somewhat, but in modern theories the average drift velocity of electrons can still be shown to be proportional to the electric field, thus deriving Ohm's law. In 1927 Arnold Sommerfeld applied the quantum Fermi-Dirac distribution of electron energies to the Drude model, resulting in the free electron model. A year later, Felix Bloch showed that electrons move in waves (Bloch electrons) through a solid crystal lattice, so scattering off the lattice atoms as postulated in the Drude model is not a major process; the electrons scatter off impurity atoms and defects in the material. The final successor, the modern quantum band theory of solids, showed that the electrons in a solid cannot take on any energy as assumed in the Drude model but are restricted to energy bands, with gaps between them of energies that electrons are forbidden to have. The size of the band gap is a characteristic of a particular substance which has a great deal to do with its electrical resistivity, explaining why some substances are electrical conductors, some semiconductors, and some insulators.
While the old term for electrical conductance, the mho (the inverse of the resistance unit ohm), is still used, a new name, the siemens, was adopted in 1971, honoring Ernst Werner von Siemens. The siemens is preferred in formal papers.
In the 1920s, it was discovered that the current through a practical resistor actually has statistical fluctuations, which depend on temperature, even when voltage and resistance are exactly constant; this fluctuation, now known as Johnson–Nyquist noise, is due to the discrete nature of charge. This thermal effect implies that measurements of current and voltage that are taken over sufficiently short periods of time will yield ratios of V/I that fluctuate from the value of R implied by the time average or ensemble average of the measured current; Ohm's law remains correct for the average current, in the case of ordinary resistive materials.
Ohm's work long preceded Maxwell's equations and any understanding of frequency-dependent effects in AC circuits. Modern developments in electromagnetic theory and circuit theory do not contradict Ohm's law when they are evaluated within the appropriate limits.
Scope
Ohm's law is an empirical law, a generalization from many experiments that have shown that current is approximately proportional to electric field for most materials. It is less fundamental than Maxwell's equations and is not always obeyed. Any given material will break down under a strong-enough electric field, and some materials of interest in electrical engineering are "non-ohmic" under weak fields.
Ohm's law has been observed on a wide range of length scales. In the early 20th century, it was thought that Ohm's law would fail at the atomic scale, but experiments have not borne out this expectation. As of 2012, researchers have demonstrated that Ohm's law works for silicon wires as small as four atoms wide and one atom high.
Microscopic origins
The dependence of the current density on the applied electric field is essentially quantum mechanical in nature; (see Classical and quantum conductivity.) A qualitative description leading to Ohm's law can be based upon classical mechanics using the Drude model developed by Paul Drude in 1900.
The Drude model treats electrons (or other charge carriers) like pinballs bouncing among the ions that make up the structure of the material. Electrons will be accelerated in the opposite direction to the electric field by the average electric field at their location. With each collision, though, the electron is deflected in a random direction with a velocity that is much larger than the velocity gained by the electric field. The net result is that electrons take a zigzag path due to the collisions, but generally drift in a direction opposing the electric field.
The drift velocity then determines the electric current density and its relationship to E and is independent of the collisions. Drude calculated the average drift velocity from p = −eEτ where p is the average momentum, −e is the charge of the electron and τ is the average time between the collisions. Since both the momentum and the current density are proportional to the drift velocity, the current density becomes proportional to the applied electric field; this leads to Ohm's law.
Hydraulic analogy
A hydraulic analogy is sometimes used to describe Ohm's law. Water pressure, measured in pascals (or psi), is the analog of voltage because establishing a water pressure difference between two points along a (horizontal) pipe causes water to flow. The water volume flow rate, as in liters per second, is the analog of current, as in coulombs per second. Finally, flow restrictors—such as apertures placed in pipes between points where the water pressure is measured—are the analog of resistors. We say that the rate of water flow through an aperture restrictor is proportional to the difference in water pressure across the restrictor. Similarly, the rate of flow of electrical charge, that is, the electric current, through an electrical resistor is proportional to the difference in voltage measured across the resistor. More generally, the hydraulic head may be taken as the analog of voltage, and Ohm's law is then analogous to Darcy's law which relates hydraulic head to the volume flow rate via the hydraulic conductivity.
Flow and pressure variables can be calculated in a fluid flow network with the use of the hydraulic ohm analogy. The method can be applied to both steady and transient flow situations. In the linear laminar flow region, Poiseuille's law describes the hydraulic resistance of a pipe, but in the turbulent flow region the pressure–flow relations become nonlinear.
The hydraulic analogy to Ohm's law has been used, for example, to approximate blood flow through the circulatory system.
Circuit analysis
In circuit analysis, three equivalent expressions of Ohm's law are used interchangeably: I = V/R, V = IR, and R = V/I.
Each equation is quoted by some sources as the defining relationship of Ohm's law,
or all three are quoted, or derived from a proportional form,
or even just the two that do not correspond to Ohm's original statement may sometimes be given.
The interchangeability of the equation may be represented by a triangle, where V (voltage) is placed on the top section, the I (current) is placed to the left section, and the R (resistance) is placed to the right. The divider between the top and bottom sections indicates division (hence the division bar).
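The triangle is equivalent to this tiny solver: give any two of V, I, R and the third follows; the function name is illustrative, not a standard API.

```python
def ohm(V=None, I=None, R=None):
    """Solve V = I * R for whichever quantity is omitted."""
    if V is None:
        return I * R
    if I is None:
        return V / R
    return V / I

print(ohm(I=2.0, R=5.0))    # V = 10.0 volts
print(ohm(V=10.0, I=2.0))   # R = 5.0 ohms
```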
Resistive circuits
Resistors are circuit elements that impede the passage of electric charge in agreement with Ohm's law, and are designed to have a specific resistance value R. In schematic diagrams, a resistor is shown as a long rectangle or zig-zag symbol. An element (resistor or conductor) that behaves according to Ohm's law over some operating range is referred to as an ohmic device (or an ohmic resistor) because Ohm's law and a single value for the resistance suffice to describe the behavior of the device over that range.
Ohm's law holds for circuits containing only resistive elements (no capacitances or inductances) for all forms of driving voltage or current, regardless of whether the driving voltage or current is constant (DC) or time-varying such as AC. At any instant of time Ohm's law is valid for such circuits.
Resistors which are in series or in parallel may be grouped together into a single "equivalent resistance" in order to apply Ohm's law in analyzing the circuit.
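A sketch of the standard equivalent-resistance formulas referred to above; the component values are arbitrary.

```python
def series(*rs):
    return sum(rs)                          # resistances add in series

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)   # conductances add in parallel

print(series(100.0, 220.0))    # 320.0 ohms
print(parallel(100.0, 100.0))  # 50.0 ohms
```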
Reactive circuits with time-varying signals
When reactive elements such as capacitors, inductors, or transmission lines are involved in a circuit to which AC or time-varying voltage or current is applied, the relationship between voltage and current becomes the solution to a differential equation, so Ohm's law (as defined above) does not directly apply since that form contains only resistances having value R, not complex impedances which may contain capacitance (C) or inductance (L).
Equations for time-invariant AC circuits take the same form as Ohm's law. However, the variables are generalized to complex numbers and the current and voltage waveforms are complex exponentials.
In this approach, a voltage or current waveform takes the form Ae^(st), where t is time, s is a complex parameter, and A is a complex scalar. In any linear time-invariant system, all of the currents and voltages can be expressed with the same s parameter as the input to the system, allowing the time-varying complex exponential term to be canceled out and the system described algebraically in terms of the complex scalars in the current and voltage waveforms.
The complex generalization of resistance is impedance, usually denoted Z; it can be shown that for an inductor, Z = sL,
and for a capacitor, Z = 1/(sC).
We can now write, V = IZ,
where V and I are the complex scalars in the voltage and current respectively and Z is the complex impedance.
This form of Ohm's law, with Z taking the place of R, generalizes the simpler form. When Z is complex, only the real part is responsible for dissipating heat.
In a general AC circuit, Z varies strongly with the frequency parameter s, and so also will the relationship between voltage and current.
For the common case of a steady sinusoid, the s parameter is taken to be jω, corresponding to a complex sinusoid Ae^(jωt). The real parts of such complex current and voltage waveforms describe the actual sinusoidal currents and voltages in a circuit, which can be in different phases due to the different complex scalars.
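For a steady sinusoid, V = IZ can be evaluated directly with complex numbers; the series R–L circuit and the 230 V, 50 Hz source below are illustrative assumptions.

```python
import cmath

R, L = 100.0, 0.5              # ohms, henries (assumed values)
omega = 2 * cmath.pi * 50      # angular frequency for 50 Hz
Z = R + 1j * omega * L         # impedances of R and L add in series
I = 230.0 / Z                  # complex current phasor for a 230 V source
print(abs(I))                  # current magnitude in amperes
print(cmath.phase(I))          # phase in radians (negative: current lags)
```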
Linear approximations
Ohm's law is one of the basic equations used in the analysis of electrical circuits. It applies to both metal conductors and circuit components (resistors) specifically made for this behaviour. Both are ubiquitous in electrical engineering. Materials and components that obey Ohm's law are described as "ohmic" which means they produce the same value for resistance (R = V/I) regardless of the value of V or I which is applied and whether the applied voltage or current is DC (direct current) of either positive or negative polarity or AC (alternating current).
In a true ohmic device, the same value of resistance will be calculated from R = V/I regardless of the value of the applied voltage V. That is, the ratio of V/I is constant, and when current is plotted as a function of voltage the curve is linear (a straight line). If voltage is forced to some value V, then that voltage V divided by measured current I will equal R. Or if the current is forced to some value I, then the measured voltage V divided by that current I is also R. Since the plot of I versus V is a straight line, then it is also true that for any set of two different voltages V1 and V2 applied across a given device of resistance R, producing currents I1 = V1/R and I2 = V2/R, that the ratio (V1 − V2)/(I1 − I2) is also a constant equal to R. The operator "delta" (Δ) is used to represent a difference in a quantity, so we can write ΔV = V1 − V2 and ΔI = I1 − I2. Summarizing, for any truly ohmic device having resistance R, V/I = ΔV/ΔI = R for any applied voltage or current or for the difference between any set of applied voltages or currents.
There are, however, components of electrical circuits which do not obey Ohm's law; that is, their relationship between current and voltage (their I–V curve) is nonlinear (or non-ohmic). An example is the p–n junction diode (curve at right). As seen in the figure, the current does not increase linearly with applied voltage for a diode. One can determine a value of current (I) for a given value of applied voltage (V) from the curve, but not from Ohm's law, since the value of "resistance" is not constant as a function of applied voltage. Further, the current only increases significantly if the applied voltage is positive, not negative. The ratio V/I for some point along the nonlinear curve is sometimes called the static, or chordal, or DC, resistance, but as seen in the figure the value of total V over total I varies depending on the particular point along the nonlinear curve which is chosen. This means the "DC resistance" V/I at some point on the curve is not the same as what would be determined by applying an AC signal having peak amplitude ΔV volts or ΔI amps centered at that same point along the curve and measuring ΔV/ΔI. However, in some diode applications, the AC signal applied to the device is small and it is possible to analyze the circuit in terms of the dynamic, small-signal, or incremental resistance, defined as one over the slope of the V–I curve at the average value (DC operating point) of the voltage (that is, one over the derivative of current with respect to voltage). For sufficiently small signals, the dynamic resistance allows the Ohm's law small signal resistance to be calculated as approximately one over the slope of a line drawn tangentially to the V–I curve at the DC operating point.
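The dynamic resistance can be estimated as one over the numerical slope of the I–V curve; the Shockley diode model and the parameter values below are illustrative assumptions, not data from the article.

```python
import math

I_s, V_T = 1e-12, 0.02585      # saturation current (A), thermal voltage (V)

def diode_current(v: float) -> float:
    return I_s * (math.exp(v / V_T) - 1.0)   # Shockley model

v0, dv = 0.65, 1e-6            # DC operating point and a small step
slope = (diode_current(v0 + dv) - diode_current(v0 - dv)) / (2 * dv)
print(1.0 / slope)             # dynamic resistance dV/dI at v0, in ohms
```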
Temperature effects
Ohm's law has sometimes been stated as, "for a conductor in a given state, the electromotive force is proportional to the current produced." That is, that the resistance, the ratio of the applied electromotive force (or voltage) to the current, "does not vary with the current strength." The qualifier "in a given state" is usually interpreted as meaning "at a constant temperature," since the resistivity of materials is usually temperature dependent. Because the conduction of current is related to Joule heating of the conducting body, according to Joule's first law, the temperature of a conducting body may change when it carries a current. The dependence of resistance on temperature therefore makes resistance depend upon the current in a typical experimental setup, making the law in this form difficult to directly verify. Maxwell and others worked out several methods to test the law experimentally in 1876, controlling for heating effects. Usually, the measurements of a sample resistance are carried out at low currents to prevent Joule heating. However, even a small current causes heating (cooling) at the first (second) sample contact due to the Peltier effect. The temperatures at the sample contacts become different, and their difference is linear in the current. The voltage drop across the circuit additionally includes the Seebeck thermoelectromotive force, which is again linear in the current. As a result, there exists a thermal correction to the sample resistance even at negligibly small current. The magnitude of the correction can be comparable with the sample resistance.
Relation to heat conduction
Ohm's principle predicts the flow of electrical charge (i.e. current) in electrical conductors when subjected to the influence of voltage differences; Jean-Baptiste-Joseph Fourier's principle predicts the flow of heat in heat conductors when subjected to the influence of temperature differences.
The same equation describes both phenomena, the equation's variables taking on different meanings in the two cases. Specifically, solving a heat conduction (Fourier) problem with temperature (the driving "force") and flux of heat (the rate of flow of the driven "quantity", i.e. heat energy) variables also solves an analogous electrical conduction (Ohm) problem having electric potential (the driving "force") and electric current (the rate of flow of the driven "quantity", i.e. charge) variables.
The basis of Fourier's work was his clear conception and definition of thermal conductivity. He assumed that, all else being the same, the flux of heat is strictly proportional to the gradient of temperature. Although undoubtedly true for small temperature gradients, strictly proportional behavior will be lost when real materials (e.g. ones having a thermal conductivity that is a function of temperature) are subjected to large temperature gradients.
A similar assumption is made in the statement of Ohm's law: other things being alike, the strength of the current at each point is proportional to the gradient of electric potential. The accuracy of the assumption that flow is proportional to the gradient is more readily tested, using modern measurement methods, for the electrical case than for the heat case.
Other versions
Ohm's law, in the form above, is an extremely useful equation in the field of electrical/electronic engineering because it describes how voltage, current and resistance are interrelated on a "macroscopic" level, that is, commonly, as circuit elements in an electrical circuit. Physicists who study the electrical properties of matter at the microscopic level use a closely related and more general vector equation, sometimes also referred to as Ohm's law, having variables that are closely related to the V, I, and R scalar variables of Ohm's law, but which are each functions of position within the conductor. Physicists often use this continuum form of Ohm's law: E = ρJ,
where E is the electric field vector with units of volts per meter (analogous to V of Ohm's law which has units of volts), J is the current density vector with units of amperes per unit area (analogous to I of Ohm's law which has units of amperes), and ρ ("rho") is the resistivity with units of ohm·meters (analogous to R of Ohm's law which has units of ohms). The above equation is also written as J = σE, where σ ("sigma") is the conductivity which is the reciprocal of ρ.
The voltage between two points is defined as: V = ∫ E · dℓ,
with dℓ the element of path along the integration of electric field vector E. If the applied E field is uniform and oriented along the length of the conductor as shown in the figure, then defining the voltage V in the usual convention of being opposite in direction to the field (see figure), and with the understanding that the voltage V is measured differentially across the length of the conductor allowing us to drop the Δ symbol, the above vector equation reduces to the scalar equation: E = V/ℓ.
Since the field is uniform in the direction of wire length, for a conductor having uniformly consistent resistivity ρ, the current density will also be uniform in any cross-sectional area and oriented in the direction of wire length, so we may write: J = I/a.
Substituting the above 2 results (for E and J respectively) into the continuum form shown at the beginning of this section: V/ℓ = (I/a)ρ, that is, V = I(ρℓ/a).
The electrical resistance of a uniform conductor is given in terms of resistivity by: R = ρℓ/a,
where ℓ is the length of the conductor in SI units of meters, a is the cross-sectional area (for a round wire a = πr² if r is radius) in units of meters squared, and ρ is the resistivity in units of ohm·meters.
After substitution of R from the above equation into the equation preceding it, the continuum form of Ohm's law for a uniform field (and uniform current density) oriented along the length of the conductor reduces to the more familiar form: V = IR.
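Numerically, R = ρℓ/a is a one-liner; the copper resistivity used below (about 1.68e-8 ohm·m at 20 °C) is a typical handbook value quoted as an assumption, not data from the article.

```python
import math

rho = 1.68e-8               # ohm*m, copper (assumed handbook value)
length = 10.0               # meters
radius = 0.001              # 1 mm wire radius
area = math.pi * radius**2  # cross-sectional area in m**2
print(rho * length / area)  # about 0.053 ohms
```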
A perfect crystal lattice, with low enough thermal motion and no deviations from periodic structure, would have no resistivity, but a real metal has crystallographic defects, impurities, multiple isotopes, and thermal motion of the atoms. Electrons scatter from all of these, resulting in resistance to their flow.
The more complex generalized forms of Ohm's law are important to condensed matter physics, which studies the properties of matter and, in particular, its electronic structure. In broad terms, they fall under the topic of constitutive equations and the theory of transport coefficients.
Magnetic effects
If an external B-field is present and the conductor is not at rest but moving at velocity v, then an extra term must be added to account for the current induced by the Lorentz force on the charge carriers: J = σ(E + v × B).
In the rest frame of the moving conductor this term drops out because v′ = 0. There is no contradiction because the electric field in the rest frame differs from the E-field in the lab frame: E′ = E + v × B.
Electric and magnetic fields are relative, see Lorentz transformation.
If the current is alternating because the applied voltage or E-field varies in time, then reactance must be added to resistance to account for self-inductance, see electrical impedance. The reactance may be strong if the frequency is high or the conductor is coiled.
Conductive fluids
In a conductive fluid, such as a plasma, there is a similar effect. Consider a fluid moving with the velocity in a magnetic field . The relative motion induces an electric field which exerts electric force on the charged particles giving rise to an electric current . The equation of motion for the electron gas, with a number density , is written as
where e, m and v_e are the charge, mass and velocity of the electrons, respectively. Also, ν is the frequency of collisions of the electrons with ions which have a velocity field u. Since the electron has a very small mass compared with that of ions, we can ignore the left hand side of the above equation to write
where we have used the definition of the current density, and also put σ = ne²/(mν), which is the electrical conductivity. This equation can also be equivalently written as E + v × B = ρJ,
where ρ = 1/σ is the electrical resistivity. It is also common to write η instead of ρ, which can be confusing since it is the same notation used for the magnetic diffusivity defined as η = 1/(μ₀σ).
See also
Fick's law of diffusion
Hopkinson's law ("Ohm's law for magnetics")
Maximum power transfer theorem
Norton's theorem
Electric power
Sheet resistance
Superposition theorem
Thermal noise
Thévenin's theorem
Uses
LED-Resistor circuit
References
Further reading
Ohm's Law chapter from Lessons In Electric Circuits Vol 1 DC book and series.
John C. Shedd and Mayo D. Hershey, "The History of Ohm's Law", Popular Science, December 1913, pp. 599–614, Bonnier Corporation, gives the history of Ohm's investigations, prior work, Ohm's false equation in the first paper, illustration of Ohm's experimental apparatus.
Explores the conceptual change underlying Ohm's experimental work.
Kenneth L. Caneva, "Ohm, Georg Simon." Complete Dictionary of Scientific Biography. 2008
s:Scientific Memoirs/2/The Galvanic Circuit investigated Mathematically, a translation of Ohm's original paper.
External links
Ohms Law Calculator
Electronic engineering
Circuit theorems
Empirical laws
Eponymous laws of physics
Electrical resistance and conductance
Voltage
Law | Ohm's law | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 5,935 | [
"Equations of physics",
"Physical quantities",
"Electrical systems",
"Computer engineering",
"Quantity",
"Physical systems",
"Electrical engineering",
"Electronic engineering",
"Circuit theorems",
"Voltage",
"Wikipedia categories named after physical quantities",
"Electrical resistance and con... |
49,172 | https://en.wikipedia.org/wiki/Interval%20%28mathematics%29 | In mathematics, a real interval is the set of all real numbers lying between two fixed endpoints with no "gaps". Each endpoint is either a real number or positive or negative infinity, indicating the interval extends without a bound. A real interval can contain neither endpoint, either endpoint, or both endpoints, excluding any endpoint which is infinite.
For example, the set of real numbers consisting of 0, 1, and all numbers in between is an interval, denoted [0, 1] and called the unit interval; the set of all positive real numbers is an interval, denoted (0, ∞); the set of all real numbers is an interval, denoted (−∞, ∞); and any single real number a is an interval, denoted [a, a].
Intervals are ubiquitous in mathematical analysis. For example, they occur implicitly in the epsilon-delta definition of continuity; the intermediate value theorem asserts that the image of an interval by a continuous function is an interval; integrals of real functions are defined over an interval; etc.
Interval arithmetic consists of computing with intervals instead of real numbers for providing a guaranteed enclosure of the result of a numerical computation, even in the presence of uncertainties of input data and rounding errors.
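As a sketch of the idea (not a full interval-arithmetic library), the following Python functions compute guaranteed enclosures for the sum and product of two intervals represented as (lower, upper) pairs; the directed rounding a rigorous implementation would also need is omitted here.

```python
def interval_add(x, y):
    """Enclosure of {a + b : a in x, b in y} for intervals x = (xl, xu), y = (yl, yu)."""
    return (x[0] + y[0], x[1] + y[1])

def interval_mul(x, y):
    """Enclosure of {a * b : a in x, b in y}; the extremes occur at endpoint products."""
    products = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(products), max(products))

# [1, 2] + [-1, 3] = [0, 5]; [1, 2] * [-1, 3] = [-2, 6]
print(interval_add((1, 2), (-1, 3)))
print(interval_mul((1, 2), (-1, 3)))
```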
Intervals are likewise defined on an arbitrary totally ordered set, such as integers or rational numbers. The notation of integer intervals is considered in the special section below.
Definitions and terminology
An interval is a subset of the real numbers that contains all real numbers lying between any two numbers of the subset.
The endpoints of an interval are its supremum, and its infimum, if they exist as real numbers. If the infimum does not exist, one often says that the corresponding endpoint is −∞. Similarly, if the supremum does not exist, one says that the corresponding endpoint is +∞.
Intervals are completely determined by their endpoints and whether each endpoint belongs to the interval. This is a consequence of the least-upper-bound property of the real numbers. This characterization is used to specify intervals by means of interval notation, which is described below.
An open interval does not include any endpoint, and is indicated with parentheses. For example, (0, 1) is the interval of all real numbers greater than 0 and less than 1. (This interval can also be denoted by ]0, 1[, see below). The open interval (0, +∞) consists of real numbers greater than 0, i.e., positive real numbers. The open intervals are thus one of the forms
(a, b), (−∞, b), (a, +∞), (−∞, +∞),
where a and b are real numbers such that a ≤ b. When a = b in the first case, the resulting interval is the empty set (a, a) = ∅, which is a degenerate interval (see below). The open intervals are those intervals that are open sets for the usual topology on the real numbers.
A closed interval is an interval that includes all its endpoints and is denoted with square brackets. For example, [0, 1] means greater than or equal to 0 and less than or equal to 1. Closed intervals have one of the following forms, in which a and b are real numbers such that a ≤ b:
[a, b], (−∞, b], [a, +∞), (−∞, +∞).
The closed intervals are those intervals that are closed sets for the usual topology on the real numbers. The empty set and ℝ are the only intervals that are both open and closed.
A half-open interval has two endpoints and includes only one of them. It is said to be left-open or right-open depending on whether the excluded endpoint is on the left or on the right. These intervals are denoted by mixing notations for open and closed intervals. For example, (0, 1] means greater than 0 and less than or equal to 1, while [0, 1) means greater than or equal to 0 and less than 1. The half-open intervals have the form
(a, b] or [a, b).
Every closed interval is a closed set of the real line, but an interval that is a closed set need not be a closed interval. For example, intervals (−∞, b] and [a, +∞) are also closed sets in the real line. Intervals [a, b) and (a, b] are neither an open set nor a closed set. If one allows an endpoint in the closed side to be an infinity (such as (0, +∞]), the result will not be an interval, since it is not even a subset of the real numbers. Instead, the result can be seen as an interval in the extended real line, which occurs in measure theory, for example.
In summary, a set of the real numbers is an interval, if and only if it is an open interval, a closed interval, or a half-open interval.
A degenerate interval is any set consisting of a single real number (i.e., an interval of the form [a, a]). Some authors include the empty set in this definition. A real interval that is neither empty nor degenerate is said to be proper, and has infinitely many elements.
An interval is said to be left-bounded or right-bounded, if there is some real number that is, respectively, smaller than or larger than all its elements. An interval is said to be bounded, if it is both left- and right-bounded; and is said to be unbounded otherwise. Intervals that are bounded at only one end are said to be half-bounded. The empty set is bounded, and the set of all reals is the only interval that is unbounded at both ends. Bounded intervals are also commonly known as finite intervals.
Bounded intervals are bounded sets, in the sense that their diameter (which is equal to the absolute difference between the endpoints) is finite. The diameter may be called the length, width, measure, range, or size of the interval. The size of unbounded intervals is usually defined as +∞, and the size of the empty interval may be defined as 0 (or left undefined).
The centre (midpoint) of a bounded interval with endpoints a and b is (a + b)/2, and its radius is the half-length |a − b|/2. These concepts are undefined for empty or unbounded intervals.
An interval is said to be left-open if and only if it contains no minimum (an element that is smaller than all other elements); right-open if it contains no maximum; and open if it contains neither. The interval [0, 1), for example, is left-closed and right-open. The empty set and the set of all reals are both open and closed intervals, while the set of non-negative reals, [0, +∞), is a closed interval that is right-open but not left-open. The open intervals are open sets of the real line in its standard topology, and form a base of the open sets.
An interval is said to be left-closed if it has a minimum element or is left-unbounded, right-closed if it has a maximum or is right-unbounded; it is simply closed if it is both left-closed and right-closed. So, the closed intervals coincide with the closed sets in that topology.
The interior of an interval I is the largest open interval that is contained in I; it is also the set of points in I which are not endpoints of I. The closure of I is the smallest closed interval that contains I; it is also the set I augmented with its finite endpoints.
For any set X of real numbers, the interval enclosure or interval span of X is the unique interval that contains X, and does not properly contain any other interval that also contains X.
An interval I is a subinterval of interval J if I is a subset of J. An interval I is a proper subinterval of J if I is a proper subset of J.
However, there is conflicting terminology for the terms segment and interval, which have been employed in the literature in two essentially opposite ways, resulting in ambiguity when these terms are used. The Encyclopedia of Mathematics defines interval (without a qualifier) to exclude both endpoints (i.e., open interval) and segment to include both endpoints (i.e., closed interval), while Rudin's Principles of Mathematical Analysis calls sets of the form [a, b] intervals and sets of the form (a, b) segments throughout. These terms tend to appear in older works; modern texts increasingly favor the term interval (qualified by open, closed, or half-open), regardless of whether endpoints are included.
Notations for intervals
The interval of numbers between a and b, including a and b, is often denoted [a, b]. The two numbers are called the endpoints of the interval. In countries where numbers are written with a decimal comma, a semicolon may be used as a separator to avoid ambiguity.
Including or excluding endpoints
To indicate that one of the endpoints is to be excluded from the set, the corresponding square bracket can be either replaced with a parenthesis, or reversed. Both notations are described in International standard ISO 31-11. Thus, in set builder notation,
(a, b) = ]a, b[ = {x ∈ ℝ : a < x < b},
[a, b) = [a, b[ = {x ∈ ℝ : a ≤ x < b},
(a, b] = ]a, b] = {x ∈ ℝ : a < x ≤ b},
[a, b] = {x ∈ ℝ : a ≤ x ≤ b}.
Each interval (a, a), [a, a), and (a, a] represents the empty set, whereas [a, a] denotes the singleton set {a}. When a > b, all four notations are usually taken to represent the empty set.
Both notations may overlap with other uses of parentheses and brackets in mathematics. For instance, the notation (a, b) is often used to denote an ordered pair in set theory, the coordinates of a point or vector in analytic geometry and linear algebra, or (sometimes) a complex number in algebra. That is why Bourbaki introduced the notation ]a, b[ to denote the open interval. The notation [a, b] too is occasionally used for ordered pairs, especially in computer science.
Some authors such as Yves Tillé use ]a, b[ to denote the complement of the interval (a, b); namely, the set of all real numbers that are either less than or equal to a, or greater than or equal to b.
Infinite endpoints
In some contexts, an interval may be defined as a subset of the extended real numbers, the set of all real numbers augmented with −∞ and +∞.
In this interpretation, the notations [−∞, b], (−∞, b], [a, +∞], and [a, +∞) are all meaningful and distinct. In particular, (−∞, +∞) denotes the set of all ordinary real numbers, while [−∞, +∞] denotes the extended reals.
Even in the context of the ordinary reals, one may use an infinite endpoint to indicate that there is no bound in that direction. For example, (0, +∞) is the set of positive real numbers, also written as ℝ₊. The context affects some of the above definitions and terminology. For instance, the interval (−∞, +∞) = ℝ is closed in the realm of ordinary reals, but not in the realm of the extended reals.
Integer intervals
When a and b are integers, the notation ⟦a, b⟧, or [a .. b] or {a .. b} or just a .. b, is sometimes used to indicate the interval of all integers between a and b included. The notation [a .. b] is used in some programming languages; in Pascal, for example, it is used to formally define a subrange type, most frequently used to specify lower and upper bounds of valid indices of an array.
Another way to interpret integer intervals are as sets defined by enumeration, using ellipsis notation.
An integer interval that has a finite lower or upper endpoint always includes that endpoint. Therefore, the exclusion of endpoints can be explicitly denoted by writing a .. b − 1, a + 1 .. b, or a + 1 .. b − 1. Alternate-bracket notations like [a .. b) or [a .. b[ are rarely used for integer intervals.
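For illustration, Python's built-in range realizes the half-open integer interval [a .. b): range(a, b) enumerates the integers a, a+1, …, b−1, so the closed interval ⟦a, b⟧ corresponds to range(a, b + 1). A minimal sketch:

```python
a, b = 3, 7

closed = list(range(a, b + 1))   # [[a, b]] = {3, 4, 5, 6, 7}
right_open = list(range(a, b))   # [a .. b - 1] = {3, 4, 5, 6}

print(closed)      # [3, 4, 5, 6, 7]
print(right_open)  # [3, 4, 5, 6]
```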
Properties
The intervals are precisely the connected subsets of ℝ. It follows that the image of an interval by any continuous function from ℝ to ℝ is also an interval. This is one formulation of the intermediate value theorem.
The intervals are also the convex subsets of ℝ. The interval enclosure of a subset X ⊆ ℝ is also the convex hull of X.
The closure of an interval is the union of the interval and the set of its finite endpoints, and hence is also an interval. (The latter also follows from the fact that the closure of every connected subset of a topological space is a connected subset.)
The intersection of any collection of intervals is always an interval. The union of two intervals is an interval if and only if they have a non-empty intersection or an open end-point of one interval is a closed end-point of the other, for example (a, b) ∪ [b, c] = (a, c].
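As a small illustration of the intersection property, the following sketch intersects two closed bounded intervals, represented as (lower, upper) pairs, returning None for an empty intersection; handling of open endpoints and infinities is omitted for brevity.

```python
def intersect(x, y):
    """Intersection of closed intervals x = [xl, xu] and y = [yl, yu], or None if empty."""
    lo, hi = max(x[0], y[0]), min(x[1], y[1])
    return (lo, hi) if lo <= hi else None

print(intersect((0, 5), (3, 8)))   # (3, 5)
print(intersect((0, 1), (2, 3)))   # None: disjoint intervals
```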
If ℝ is viewed as a metric space, its open balls are the open bounded intervals (c − r, c + r), and its closed balls are the closed bounded intervals [c − r, c + r]. In particular, the metric and order topologies in the real line coincide, which is the standard topology of the real line.
Any element x of an interval I defines a partition of I into three disjoint intervals I₁, I₂, I₃: respectively, the elements of I that are less than x, the singleton {x}, and the elements that are greater than x. The parts I₁ and I₃ are both non-empty (and have non-empty interiors), if and only if x is in the interior of I. This is an interval version of the trichotomy principle.
Dyadic intervals
A dyadic interval is a bounded real interval whose endpoints are j/2ⁿ and (j + 1)/2ⁿ, where j and n are integers. Depending on the context, either endpoint may or may not be included in the interval.
Dyadic intervals have the following properties:
The length of a dyadic interval is always an integer power of two.
Each dyadic interval is contained in exactly one dyadic interval of twice the length.
Each dyadic interval is spanned by two dyadic intervals of half the length.
If two open dyadic intervals overlap, then one of them is a subset of the other.
The dyadic intervals consequently have a structure that reflects that of an infinite binary tree.
Dyadic intervals are relevant to several areas of numerical analysis, including adaptive mesh refinement, multigrid methods and wavelet analysis. Another way to represent such a structure is p-adic analysis (for p = 2).
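The binary-tree structure can be made explicit: at a given depth n, every real x lies in exactly one half-open dyadic interval [j/2ⁿ, (j + 1)/2ⁿ). A minimal sketch locating that interval:

```python
import math

def dyadic_interval(x, n):
    """Return (j, n) such that x lies in [j / 2**n, (j + 1) / 2**n)."""
    j = math.floor(x * 2 ** n)
    return j, n

# 0.3 lies in [0.25, 0.5) at depth 2 and in [0.296875, 0.3046875) at depth 7.
for n in (2, 7):
    j, _ = dyadic_interval(0.3, n)
    print(f"depth {n}: [{j / 2**n}, {(j + 1) / 2**n})")
```

Note how each deeper interval is contained in the one above it, tracing a path down the infinite binary tree the text describes.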
Generalizations
Balls
An open finite interval (a, b) is a 1-dimensional open ball with a center at (a + b)/2 and a radius of (b − a)/2. The closed finite interval [a, b] is the corresponding closed ball, and the interval's two endpoints {a, b} form a 0-dimensional sphere. Generalized to n-dimensional Euclidean space, a ball is the set of points whose distance from the center is less than the radius. In the 2-dimensional case, a ball is called a disk.
If a half-space is taken as a kind of degenerate ball (without a well-defined center or radius), a half-space can be taken as analogous to a half-bounded interval, with its boundary plane as the (degenerate) sphere corresponding to the finite endpoint.
Multi-dimensional intervals
A finite interval is (the interior of) a 1-dimensional hyperrectangle. Generalized to real coordinate space ℝⁿ, an axis-aligned hyperrectangle (or box) is the Cartesian product of n finite intervals. For n = 2 this is a rectangle; for n = 3 this is a rectangular cuboid (also called a "box").
Allowing for a mix of open, closed, and infinite endpoints, the Cartesian product of any n intervals, I = I₁ × I₂ × ⋯ × Iₙ, is sometimes called an n-dimensional interval.
A facet of such an interval I is the result of replacing any non-degenerate interval factor Iₖ by a degenerate interval consisting of a finite endpoint of Iₖ. The faces of I comprise I itself and all faces of its facets. The corners of I are the faces that consist of a single point of I.
Convex polytopes
Any finite interval can be constructed as the intersection of half-bounded intervals (with an empty intersection taken to mean the whole real line), and the intersection of any number of half-bounded intervals is a (possibly empty) interval. Generalized to n-dimensional affine space, an intersection of half-spaces (of arbitrary orientation) is (the interior of) a convex polytope, or in the 2-dimensional case a convex polygon.
Domains
An open interval is a connected open set of real numbers. Generalized to topological spaces in general, a non-empty connected open set is called a domain.
Complex intervals
Intervals of complex numbers can be defined as regions of the complex plane, either rectangular or circular.
Intervals in posets and preordered sets
Definitions
The concept of intervals can be defined in arbitrary partially ordered sets or, more generally, in arbitrary preordered sets. For a preordered set (X, ≲) and two elements a, b ∈ X, one similarly defines the intervals
[a, b] = {x ∈ X : a ≲ x ≲ b}, [a, b) = {x ∈ X : a ≲ x < b}, (a, b] = {x ∈ X : a < x ≲ b}, (a, b) = {x ∈ X : a < x < b},
where x < y means x ≲ y and not y ≲ x. Actually, the intervals with single or no endpoints are the same as the intervals with two endpoints in the larger preordered set
X ∪ {−∞, +∞},
defined by adding new smallest and greatest elements (even if there were ones), which are subsets of X. In the case of X = ℝ, one may take the larger set to be the extended real line.
Convex sets and convex components in order theory
A subset A of the preordered set (X, ≲) is (order-)convex if for every x, y ∈ A and every z ∈ X, x ≲ z ≲ y implies z ∈ A. Unlike in the case of the real line, a convex set of a preordered set need not be an interval. For example, in the totally ordered set ℚ of rational numbers, the set
{x ∈ ℚ : x² < 2}
is convex, but not an interval of ℚ, since there is no square root of two in ℚ.
Let X be a preordered set and let A ⊆ X. The convex sets of X contained in A form a poset under inclusion. A maximal element of this poset is called a convex component of A. By the Zorn lemma, any convex set of X contained in A is contained in some convex component of A, but such components need not be unique. In a totally ordered set, such a component is always unique. That is, the convex components of a subset of a totally ordered set form a partition.
Properties
A generalization of the characterizations of the real intervals follows. For a non-empty subset I of a linear continuum L, the following conditions are equivalent.
The set I is an interval.
The set I is order-convex.
The set I is a connected subset when L is endowed with the order topology.
For a subset A of a lattice L, the following conditions are equivalent.
The set A is a sublattice and an (order-)convex set.
There is an ideal I and a filter F such that A = I ∩ F.
Applications
In general topology
Every Tychonoff space is embeddable into a product space of copies of the closed unit interval [0, 1]. Actually, every Tychonoff space that has a base of cardinality κ is embeddable into the product of κ copies of the interval.
The concepts of convex sets and convex components are used in a proof that every totally ordered set endowed with the order topology is completely normal or moreover, monotonically normal.
Topological algebra
Intervals can be associated with points of the plane, and hence regions of intervals can be associated with regions of the plane. Generally, an interval in mathematics corresponds to an ordered pair (x, y) taken from the direct product ℝ × ℝ of real numbers with itself, where it is often assumed that x ≤ y. For purposes of mathematical structure, this restriction is discarded, and "reversed intervals" where y < x are allowed. Then, the collection of all intervals [x, y] can be identified with the topological ring formed by the direct sum of ℝ with itself, where addition and multiplication are defined component-wise.
The direct sum algebra has two ideals, { [x, 0] : x ∈ ℝ } and { [0, y] : y ∈ ℝ }. The identity element of this algebra is the condensed interval [1, 1]. If interval [x, y] is not in one of the ideals, then it has multiplicative inverse [1/x, 1/y]. Endowed with the usual topology, the algebra of intervals forms a topological ring. The group of units of this ring consists of four quadrants determined by the axes, or ideals in this case. The identity component of this group is quadrant I.
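A small sketch of this component-wise ring, with the multiplicative inverse available off the two ideals; pairs here may be "reversed" exactly as the text allows.

```python
def ring_add(p, q):
    """Component-wise sum of intervals p = [x1, y1], q = [x2, y2]."""
    return (p[0] + q[0], p[1] + q[1])

def ring_mul(p, q):
    """Component-wise product, the ring multiplication described above."""
    return (p[0] * q[0], p[1] * q[1])

def ring_inv(p):
    """Multiplicative inverse [1/x, 1/y], defined when p avoids both ideals."""
    return (1 / p[0], 1 / p[1])

one = (1, 1)   # the identity element, the condensed interval [1, 1]
p = (2, 5)
print(ring_mul(p, ring_inv(p)) == one)   # True: p * p^{-1} = [1, 1]
```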
Every interval can be considered a symmetric interval around its midpoint. In a reconfiguration published in 1956 by M. Warmus, the axis of "balanced intervals" [x, −x] is used along with the axis of intervals [x, x] that reduce to a point. Instead of the direct sum ℝ ⊕ ℝ, the ring of intervals has been identified with the hyperbolic numbers by M. Warmus and D. H. Lehmer through the identification
z = (x + y)/2 + j (x − y)/2,
where j² = 1.
This linear mapping of the plane, which amounts to a ring isomorphism, provides the plane with a multiplicative structure having some analogies to ordinary complex arithmetic, such as polar decomposition.
See also
Arc (geometry)
Inequality
Interval graph
Interval finite element
Interval (statistics)
Line segment
Partition of an interval
Unit interval
References
Bibliography
T. Sunaga, "Theory of interval algebra and its application to numerical analysis" , In: Research Association of Applied Geometry (RAAG) Memoirs, Ggujutsu Bunken Fukuy-kai. Tokyo, Japan, 1958, Vol. 2, pp. 29–46 (547-564); reprinted in Japan Journal on Industrial and Applied Mathematics, 2009, Vol. 26, No. 2-3, pp. 126–143.
External links
A Lucid Interval by Brian Hayes: An American Scientist article provides an introduction.
Interval computations website
Interval computations research centers
Interval Notation by George Beck, Wolfram Demonstrations Project.
Sets of real numbers
Order theory
Topology | Interval (mathematics) | [
"Physics",
"Mathematics"
] | 4,075 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Order theory"
] |
49,295 | https://en.wikipedia.org/wiki/Fine-structure%20constant | In physics, the fine-structure constant, also known as the Sommerfeld constant, commonly denoted by α (the Greek letter alpha), is a fundamental physical constant that quantifies the strength of the electromagnetic interaction between elementary charged particles.
It is a dimensionless quantity (dimensionless physical constant), independent of the system of units used, which is related to the strength of the coupling of an elementary charge e with the electromagnetic field, by the formula 4πε₀ħcα = e². Its numerical value is approximately 0.0072973525643 ≈ 1/137.035999177, with a relative uncertainty of 1.6 × 10⁻¹⁰.
The constant was named by Arnold Sommerfeld, who introduced it in 1916 when extending the Bohr model of the atom. It quantified the gap in the fine structure of the spectral lines of the hydrogen atom, which had been measured precisely by Michelson and Morley in 1887.
Why the constant should have this value is not understood, but there are a number of ways to measure its value.
Definition
In terms of other physical constants, α may be defined as:
α = e²/(4πε₀ħc) = e²/(2ε₀hc),
where
e is the elementary charge (1.602176634 × 10⁻¹⁹ C);
h is the Planck constant (6.62607015 × 10⁻³⁴ J⋅Hz⁻¹);
ħ = h/2π is the reduced Planck constant (≈ 1.054571817 × 10⁻³⁴ J⋅s);
c is the speed of light (299792458 m/s);
ε₀ is the electric constant (≈ 8.8541878128 × 10⁻¹² F⋅m⁻¹).
Since the 2019 revision of the SI, the only quantity in this list that does not have an exact value in SI units is the electric constant (vacuum permittivity).
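Since every constant in the definition is fixed (exactly or to high precision) by the SI, α can be computed directly. A minimal sketch, using the exact SI values of e, h and c and the CODATA value of ε₀ quoted above:

```python
import math

e = 1.602176634e-19        # elementary charge, C (exact in the SI)
h = 6.62607015e-34         # Planck constant, J Hz^-1 (exact in the SI)
c = 299792458              # speed of light, m s^-1 (exact in the SI)
eps0 = 8.8541878128e-12    # vacuum permittivity, F m^-1 (CODATA value, not exact)

hbar = h / (2 * math.pi)   # reduced Planck constant
alpha = e ** 2 / (4 * math.pi * eps0 * hbar * c)

print(f"alpha   = {alpha:.12f}")     # ~0.007297352569
print(f"1/alpha = {1 / alpha:.6f}")  # ~137.035999
```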
Alternative systems of units
The electrostatic CGS system implicitly sets 4πε₀ = 1, as commonly found in older physics literature, where the expression of the fine-structure constant becomes
α = e²/(ħc).
A nondimensionalised system commonly used in high energy physics sets ε₀ = c = ħ = 1, where the expression for the fine-structure constant becomes
α = e²/4π.
As such, the fine-structure constant is chiefly a quantity determining (or determined by) the elementary charge: e = √(4πα) ≈ 0.30282212 in terms of such a natural unit of charge.
In the system of atomic units, which sets e = mₑ = ħ = 4πε₀ = 1, the expression for the fine-structure constant becomes
α = 1/c.
Measurement
The CODATA recommended value of α is
α = 0.0072973525643(11).
This has a relative standard uncertainty of 1.6 × 10⁻¹⁰.
This value for α gives µ₀ = 4π × 0.99999999987(16) × 10⁻⁷ H⋅m⁻¹, 0.8 times the standard uncertainty away from its old defined value, with the mean differing from the old value by only 0.13 parts per billion.
Historically the value of the reciprocal of the fine-structure constant is often given. The CODATA recommended value is
1/α = 137.035999177(21).
While the value of α can be determined from estimates of the constants that appear in any of its definitions, the theory of quantum electrodynamics (QED) provides a way to measure α directly using the quantum Hall effect or the anomalous magnetic moment of the electron. Other methods include the A.C. Josephson effect and photon recoil in atom interferometry.
There is general agreement for the value of α, as measured by these different methods. The preferred methods in 2019 are measurements of electron anomalous magnetic moments and of photon recoil in atom interferometry. The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant (the magnetic moment of the electron is also referred to as the electron g-factor). One of the most precise values of α obtained experimentally (as of 2023) is based on a measurement of g using a one-electron so-called "quantum cyclotron" apparatus, together with a calculation via the theory of QED that involved tenth-order Feynman diagrams:
1/α = 137.035999166(15).
This measurement of α has a relative standard uncertainty of 1.1 × 10⁻¹⁰. This value and uncertainty are about the same as the latest experimental results.
Further refinement of the experimental value was published by the end of 2020, giving the value
1/α = 137.035999206(11),
with a relative accuracy of 8.1 × 10⁻¹¹, which has a significant discrepancy from the previous experimental value.
Physical interpretations
The fine-structure constant, α, has several physical interpretations.
When perturbation theory is applied to quantum electrodynamics, the resulting perturbative expansions for physical results are expressed as sets of power series in . Because is much less than one, higher powers of are soon unimportant, making the perturbation theory practical in this case. On the other hand, the large value of the corresponding factors in quantum chromodynamics makes calculations involving the strong nuclear force extremely difficult.
Variation with energy scale
In quantum electrodynamics, the more thorough quantum field theory underlying the electromagnetic coupling, the renormalization group dictates how the strength of the electromagnetic interaction grows logarithmically as the relevant energy scale increases. The value of the fine-structure constant is linked to the observed value of this coupling associated with the energy scale of the electron mass: the electron's mass gives a lower bound for this energy scale, because it (and the positron) is the lightest charged object whose quantum loops can contribute to the running. Therefore, is the asymptotic value of the fine-structure constant at zero energy.
At higher energies, such as the scale of the Z boson, about 90 GeV, one instead measures an effective α ≈ 1/127.
As the energy scale increases, the strength of the electromagnetic interaction in the Standard Model approaches that of the other two fundamental interactions, a feature important for grand unification theories. If quantum electrodynamics were an exact theory, the fine-structure constant would actually diverge at an energy known as the Landau pole – this fact undermines the consistency of quantum electrodynamics beyond perturbative expansions.
History
Based on the precise measurement of the hydrogen atom spectrum by Michelson and Morley in 1887,
Arnold Sommerfeld extended the Bohr model to include elliptical orbits and relativistic dependence of mass on velocity. He introduced a term for the fine-structure constant in 1916.
The first physical interpretation of the fine-structure constant was as the ratio of the velocity of the electron in the first circular orbit of the relativistic Bohr atom to the speed of light in the vacuum.
Equivalently, it was the quotient between the minimum angular momentum allowed by relativity for a closed orbit, and the minimum angular momentum allowed for it by quantum mechanics. It appears naturally in Sommerfeld's analysis, and determines the size of the splitting or fine-structure of the hydrogenic spectral lines. This constant was not seen as significant until Paul Dirac's linear relativistic wave equation in 1928, which gave the exact fine structure formula.
With the development of quantum electrodynamics (QED) the significance of has broadened from a spectroscopic phenomenon to a general coupling constant for the electromagnetic field, determining the strength of the interaction between electrons and photons. The term is engraved on the tombstone of one of the pioneers of QED, Julian Schwinger, referring to his calculation of the anomalous magnetic dipole moment.
History of measurements
The CODATA values in the above table are computed by averaging other measurements; they are not independent experiments.
Potential variation over time
Physicists have pondered whether the fine-structure constant is in fact constant, or whether its value differs by location and over time. A varying has been proposed as a way of solving problems in cosmology and astrophysics. String theory and other proposals for going beyond the Standard Model of particle physics have led to theoretical interest in whether the accepted physical constants (not just ) actually vary.
In the experiments below, Δα represents the change in α over time, which can be computed by Δα = α_prev − α_now. If the fine-structure constant really is a constant, then any experiment should show that
Δα/α = 0,
or as close to zero as experiment can measure. Any value far away from zero would indicate that α does change over time. So far, most experimental data is consistent with α being constant.
Past rate of change
The first experimenters to test whether the fine-structure constant might actually vary examined the spectral lines of distant astronomical objects and the products of radioactive decay in the Oklo natural nuclear fission reactor. Their findings were consistent with no variation in the fine-structure constant between these two vastly separated locations and times.
Improved technology at the dawn of the 21st century made it possible to probe the value of α at much larger distances and to a much greater accuracy. In 1999, a team led by John K. Webb of the University of New South Wales claimed the first detection of a variation in α.
Using the Keck telescopes and a data set of 128 quasars at redshifts 0.5 < z < 3, Webb et al. found that their spectra were consistent with a slight increase in α over the last 10–12 billion years. Specifically, they found that
Δα/α = (−5.7 ± 1.0) × 10⁻⁶.
In other words, they measured the value to be somewhere between −4.7 × 10⁻⁶ and −6.7 × 10⁻⁶. This is a very small value, but the error bars do not actually include zero. This result either indicates that α is not constant or that there is experimental error unaccounted for.
In 2004, a smaller study of 23 absorption systems by Chand et al., using the Very Large Telescope, found no measurable variation:
Δα/α = (−0.6 ± 0.6) × 10⁻⁶.
However, in 2007 simple flaws were identified in the analysis method of Chand et al., discrediting those results.
King et al. have used Markov chain Monte Carlo methods to investigate the algorithm used by the UNSW group to determine Δα/α from the quasar spectra, and have found that the algorithm appears to produce correct uncertainties and maximum likelihood estimates for Δα/α for particular models. This suggests that the statistical uncertainties and best estimate for Δα/α stated by Webb et al. and Murphy et al. are robust.
Lamoreaux and Torgerson analyzed data from the Oklo natural nuclear fission reactor in 2004, and concluded that has changed in the past 2 billion years by 45 parts per billion. They claimed that this finding was "probably accurate to within 20%". Accuracy is dependent on estimates of impurities and temperature in the natural reactor. These conclusions have yet to be verified.
In 2007, Khatri and Wandelt of the University of Illinois at Urbana-Champaign realized that the 21 cm hyperfine transition in neutral hydrogen of the early universe leaves a unique absorption line imprint in the cosmic microwave background radiation.
They proposed using this effect to measure the value of α during the epoch before the formation of the first stars. In principle, this technique provides enough information to measure a variation of 1 part in 10⁹ (4 orders of magnitude better than the current quasar constraints). However, the constraint which can be placed on α is strongly dependent upon effective integration time, going as t^(−1/2). The European LOFAR radio telescope would only be able to constrain Δα/α to about 0.3%. The collecting area required to constrain Δα/α to the current level of quasar constraints is on the order of 100 square kilometers, which is economically impracticable at present.
Present rate of change
In 2008, Rosenband et al.
used the frequency ratio of Al⁺ and Hg⁺ in single-ion optical atomic clocks to place a very stringent constraint on the present-time temporal variation of α, namely α̇/α = (−1.6 ± 2.3) × 10⁻¹⁷ per year. A present day null constraint on the time variation of alpha does not necessarily rule out time variation in the past. Indeed, some theories
that predict a variable fine-structure constant also predict that the value of the fine-structure constant should become practically fixed in its value once the universe enters its current dark energy-dominated epoch.
Spatial variation – Australian dipole
Researchers from Australia have said they had identified a variation of the fine-structure constant across the observable universe.
These results have not been replicated by other researchers. In September and October 2010, after released research by Webb et al., physicists C. Orzel and S.M. Carroll separately suggested various approaches of how Webb's observations may be wrong. Orzel argues
that the study may contain wrong data due to subtle differences in the two telescopes, while Carroll suggested
a totally different approach; he looks at the fine-structure constant as a scalar field and claims that if the telescopes are correct and the fine-structure constant varies smoothly over the universe, then the scalar field must have a very small mass. However, previous research has shown that the mass is not likely to be extremely small. Both of these scientists' early criticisms point to the fact that different techniques are needed to confirm or contradict the results, a conclusion Webb, et al., previously stated in their study.
Other research finds no meaningful variation in the fine structure constant.
Anthropic explanation
The anthropic principle is an argument about the reason the fine-structure constant has the value it does: stable matter, and therefore life and intelligent beings, could not exist if its value were very different. One example is that, if modern grand unified theories are correct, then α needs to be between around 1/180 and 1/85 for proton decay to be slow enough for life to be possible.
Numerological explanations
As a dimensionless constant which does not seem to be directly related to any mathematical constant, the fine-structure constant has long fascinated physicists.
Arthur Eddington argued that the value could be "obtained by pure deduction" and he related it to the Eddington number, his estimate of the number of protons in the universe.
This led him in 1929 to conjecture that the reciprocal of the fine-structure constant was not approximately 137 but precisely the integer 137.
By the 1940s experimental values for 1/α deviated sufficiently from 137 to refute Eddington's arguments.
Physicist Wolfgang Pauli commented on the appearance of certain numbers in physics, including the fine-structure constant, whose reciprocal he also noted approximates the prime number 137. This constant so intrigued him that he collaborated with psychoanalyst Carl Jung in a quest to understand its significance. Similarly, Max Born believed that if the value of α differed, the universe would degenerate, and thus that α = 1/137 is a law of nature.
Richard Feynman, one of the originators and early developers of the theory of quantum electrodynamics (QED), referred to the fine-structure constant as "one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man".
Conversely, statistician I. J. Good argued that a numerological explanation would only be acceptable if it could be based on a good theory that is not yet known but "exists" in the sense of a Platonic Ideal.
Attempts to find a mathematical basis for this dimensionless constant have continued up to the present time. However, no numerological explanation has ever been accepted by the physics community.
In the late 20th century, multiple physicists, including Stephen Hawking in his 1988 book A Brief History of Time, began exploring the idea of a multiverse, and the fine-structure constant was one of several universal constants that suggested the idea of a fine-tuned universe.
See also
Dimensionless physical constant
Hyperfine structure
Footnotes
References
External links
(adapted from the Encyclopædia Britannica, 15th ed. by NIST)
Physicists Nail Down the ‘Magic Number’ That Shapes the Universe (Natalie Wolchover, Quanta magazine, December 2, 2020). The value of this constant is given here as 1/137.035999206 (note the difference in the last three digits). It was determined by a team of four physicists led by Saïda Guellati-Khélifa at the Kastler Brossel Laboratory in Paris.
Dimensionless constants
Electromagnetism
Fundamental constants
Arnold Sommerfeld | Fine-structure constant | [
"Physics"
] | 3,076 | [
"Dimensionless constants",
"Electromagnetism",
"Physical phenomena",
"Physical quantities",
"Physical constants",
"Fundamental interactions",
"Fundamental constants"
] |
49,324 | https://en.wikipedia.org/wiki/Unit%20interval | In mathematics, the unit interval is the closed interval [0, 1], that is, the set of all real numbers that are greater than or equal to 0 and less than or equal to 1. It is often denoted I (capital letter I). In addition to its role in real analysis, the unit interval is used to study homotopy theory in the field of topology.
In the literature, the term "unit interval" is sometimes applied to the other shapes that an interval from 0 to 1 could take: (0, 1], [0, 1), and (0, 1). However, the notation I is most commonly reserved for the closed interval [0, 1].
Properties
The unit interval is a complete metric space, homeomorphic to the extended real number line. As a topological space, it is compact, contractible, path connected and locally path connected. The Hilbert cube is obtained by taking a topological product of countably many copies of the unit interval.
In mathematical analysis, the unit interval is a one-dimensional analytical manifold whose boundary consists of the two points 0 and 1. Its standard orientation goes from 0 to 1.
The unit interval is a totally ordered set and a complete lattice (every subset of the unit interval has a supremum and an infimum).
Cardinality
The size or cardinality of a set is the number of elements it contains.
The unit interval is a subset of the real numbers ℝ. However, it has the same size as the whole set: the cardinality of the continuum. Since the real numbers can be used to represent points along an infinitely long line, this implies that a line segment of length 1, which is a part of that line, has the same number of points as the whole line. Moreover, it has the same number of points as a square of area 1, as a cube of volume 1, and even as an unbounded n-dimensional Euclidean space (see Space filling curve).
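One explicit bijection witnessing this equality of cardinalities maps the open interval (0, 1) onto the whole real line (extending to the closed interval [0, 1] requires a standard back-and-forth argument, omitted here):

```latex
f \colon (0,1) \to \mathbb{R}, \qquad
f(x) = \tan\!\left(\pi\left(x - \tfrac{1}{2}\right)\right), \qquad
f^{-1}(y) = \tfrac{1}{2} + \tfrac{1}{\pi}\arctan(y).
```

Since π(x − 1/2) ranges over (−π/2, π/2) as x ranges over (0, 1), and tan is a bijection from that interval onto ℝ, the map f is a bijection.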
The number of elements (either real numbers or points) in all the above-mentioned sets is uncountable, as it is strictly greater than the number of natural numbers.
Orientation
The unit interval is a curve. The open interval (0,1) is a subset of the positive real numbers and inherits an orientation from them. The orientation is reversed when the interval is entered from 1, such as in the integral used to define natural logarithm for x in the interval, thus yielding negative values for logarithm of such x. In fact, this integral is evaluated as a signed area yielding negative area over the unit interval due to reversed orientation there.
Generalizations
The interval [−1, 1], with length two, demarcated by the positive and negative units, occurs frequently, such as in the range of the trigonometric functions sine and cosine and the hyperbolic function tanh. This interval may be used for the domain of inverse functions. For instance, when θ is restricted to [−π/2, π/2] then sin θ is in this interval and arcsine is defined there.
Sometimes, the term "unit interval" is used to refer to objects that play a role in various branches of mathematics analogous to the role that [0, 1] plays in homotopy theory. For example, in the theory of quivers, the (analogue of the) unit interval is the graph whose vertex set is {0, 1} and which contains a single edge e whose source is 0 and whose target is 1. One can then define a notion of homotopy between quiver homomorphisms analogous to the notion of homotopy between continuous maps.
Fuzzy logic
In logic, the unit interval [0, 1] can be interpreted as a generalization of the Boolean domain {0, 1}, in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x; conjunction (AND) is replaced with multiplication (xy); and disjunction (OR) is defined, per De Morgan's laws, as 1 − (1 − x)(1 − y).
Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true.
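A minimal sketch of these product-based fuzzy connectives on the unit interval (one common choice; Gödel and Łukasiewicz semantics define AND and OR differently):

```python
def f_not(x):
    """Fuzzy negation: 1 - x."""
    return 1 - x

def f_and(x, y):
    """Product conjunction: x * y."""
    return x * y

def f_or(x, y):
    """De Morgan dual of the product: 1 - (1 - x)(1 - y) = x + y - x*y."""
    return x + y - x * y

x, y = 0.7, 0.4
print(f_not(x))     # 0.3 (to within floating-point rounding)
print(f_and(x, y))  # 0.28
print(f_or(x, y))   # 0.82
```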
See also
Interval notation
Unit square, cube, circle, hyperbola and sphere
Unit impulse
Unit vector
References
Robert G. Bartle, 1964, The Elements of Real Analysis, John Wiley & Sons.
Sets of real numbers
1 (number)
Topology | Unit interval | [
"Physics",
"Mathematics"
] | 895 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
49,503 | https://en.wikipedia.org/wiki/Inductively%20coupled%20plasma%20mass%20spectrometry | Inductively coupled plasma mass spectrometry (ICP-MS) is a type of mass spectrometry that uses an inductively coupled plasma to ionize the sample. It atomizes the sample and creates atomic and small polyatomic ions, which are then detected. It is known and used for its ability to detect metals and several non-metals in liquid samples at very low concentrations. It can detect different isotopes of the same element, which makes it a versatile tool in isotopic labeling.
Compared to atomic absorption spectroscopy, ICP-MS has greater speed, precision, and sensitivity. However, compared with other types of mass spectrometry, such as thermal ionization mass spectrometry (TIMS) and glow discharge mass spectrometry (GD-MS), ICP-MS introduces many interfering species: argon from the plasma, component gases of air that leak through the cone orifices, and contamination from glassware and the cones.
Components
Inductively coupled plasma
An inductively coupled plasma is a plasma that is energized (ionized) by inductively heating the gas with an electromagnetic coil, and contains a sufficient concentration of ions and electrons to make the gas electrically conductive. Not all of the gas needs to be ionized for the gas to have the characteristics of a plasma; as little as 1% ionization creates a plasma. The plasmas used in spectrochemical analysis are essentially electrically neutral, with each positive charge on an ion balanced by a free electron. In these plasmas the positive ions are almost all singly charged and there are few negative ions, so there are nearly equal numbers of ions and electrons in each unit volume of plasma.
The ICPs have two operation modes, called capacitive (E) mode with low plasma density and inductive (H) mode with high plasma density; the transition from E to H heating mode occurs with external inputs. Inductively coupled plasma mass spectrometry is operated in the H mode.
What makes inductively coupled plasma mass spectrometry (ICP-MS) unique among forms of inorganic mass spectrometry is its ability to sample the analyte continuously, without interruption. This is in contrast to other forms of inorganic mass spectrometry, such as glow discharge mass spectrometry (GDMS) and thermal ionization mass spectrometry (TIMS), which require a two-stage process: insert sample(s) into a vacuum chamber, seal the vacuum chamber, pump down the vacuum, and energize the sample, thereby sending ions into the mass analyzer. With ICP-MS the sample to be analyzed sits at atmospheric pressure. Through the effective use of differential pumping (multiple vacuum stages separated by differential apertures, or holes), the ions created in the argon plasma are, with the aid of various electrostatic focusing techniques, transmitted through the mass analyzer to the detector(s) and counted. Not only does this enable the analyst to radically increase sample throughput (amount of samples over time), but it has also made it possible to do what is called "time resolved acquisition". Hyphenated techniques like liquid chromatography ICP-MS (LC-ICP-MS), laser ablation ICP-MS (LA-ICP-MS) and flow injection ICP-MS (FIA-ICP-MS) have benefited from this relatively new technology. It has stimulated the development of new tools for research in geochemistry, forensic chemistry, biochemistry and oceanography. Additionally, increases in sample throughput from dozens of samples a day to hundreds of samples a day have revolutionized environmental analysis, reducing costs. Fundamentally, all of this is possible because, while the sample resides at atmospheric pressure, the analyzer and detector are at 1/10,000,000 of that pressure during normal operation.
An inductively coupled plasma (ICP) for spectrometry is sustained in a torch that consists of three concentric tubes, usually made of quartz, although the inner tube (injector) can be sapphire if hydrofluoric acid is being used. The end of this torch is placed inside an induction coil supplied with a radio-frequency electric current. A flow of argon gas (usually 13 to 18 liters per minute) is introduced between the two outermost tubes of the torch and an electric spark is applied for a short time to introduce free electrons into the gas stream. These electrons interact with the radio-frequency magnetic field of the induction coil and are accelerated first in one direction, then the other, as the field changes at high frequency (usually 27.12 million cycles per second). The accelerated electrons collide with argon atoms, and sometimes a collision causes an argon atom to part with one of its electrons. The released electron is in turn accelerated by the rapidly changing magnetic field. The process continues until the rate of release of new electrons in collisions is balanced by the rate of recombination of electrons with argon ions (atoms that have lost an electron). This produces a ‘fireball’ that consists mostly of argon atoms with a rather small fraction of free electrons and argon ions. The temperature of the plasma is very high, of the order of 10,000 K. The plasma also produces ultraviolet light, so for safety it should not be viewed directly.
The ICP can be retained in the quartz torch because the flow of gas between the two outermost tubes keeps the plasma away from the walls of the torch. A second flow of argon (around 1 liter per minute) is usually introduced between the central tube and the intermediate tube to keep the plasma away from the end of the central tube. A third flow (again usually around 1 liter per minute) of gas is introduced into the central tube of the torch. This gas flow passes through the centre of the plasma, where it forms a channel that is cooler than the surrounding plasma but still much hotter than a chemical flame. Samples to be analyzed are introduced into this central channel, usually as a mist of liquid formed by passing the liquid sample into a nebulizer.
To maximise plasma temperature (and hence ionisation efficiency) and stability, the sample should be introduced through the central tube with as little liquid (solvent load) as possible, and with consistent droplet sizes. A nebuliser can be used for liquid samples, followed by a spray chamber to remove larger droplets, or a desolvating nebuliser can be used to evaporate most of the solvent before it reaches the torch. Solid samples can also be introduced using laser ablation. The sample enters the central channel of the ICP, evaporates, molecules break apart, and then the constituent atoms ionise. At the temperatures prevailing in the plasma a significant proportion of the atoms of many chemical elements are ionized, each atom losing its most loosely bound electron to form a singly charged ion. The plasma temperature is selected to maximise ionisation efficiency for elements with a high first ionisation energy, while minimising second ionisation (double charging) for elements that have a low second ionisation energy.
Mass spectrometry
For coupling to mass spectrometry, the ions from the plasma are extracted through a series of cones into a mass spectrometer, usually a quadrupole. The ions are separated on the basis of their mass-to-charge ratio and a detector receives an ion signal proportional to the concentration.
The concentration of a sample can be determined through calibration with certified reference material such as single or multi-element reference standards. ICP-MS also lends itself to quantitative determinations through isotope dilution, a single point method based on an isotopically enriched standard. In order to increase reproducibility and compensate for errors by sensitivity variation, an internal standard can be added.
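Isotope dilution quantifies an element from a measured isotope ratio rather than from a calibration curve. As a sketch, for two isotopes a and b with ratio R = N(a)/N(b) in the sample (R_x), in the enriched spike (R_s) and in the measured mixture (R_m), the mixing balance R_m(N_bx + N_bs) = R_x·N_bx + R_s·N_bs gives N_bx = N_bs·(R_s − R_m)/(R_m − R_x) for the amount of isotope b contributed by the sample. All numeric values below are illustrative assumptions.

```python
def isotope_dilution(n_b_spike, r_sample, r_spike, r_mixed):
    """Amount of reference isotope b in the sample, solved from the mixing balance
    r_mixed * (N_bx + N_bs) = r_sample * N_bx + r_spike * N_bs."""
    return n_b_spike * (r_spike - r_mixed) / (r_mixed - r_sample)

# Illustrative numbers only: the spike adds 1.0 nmol of isotope b.
n_bx = isotope_dilution(n_b_spike=1.0, r_sample=2.5, r_spike=0.01, r_mixed=0.8)
print(f"isotope b from sample: {n_bx:.3f} nmol")
```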
Other mass analyzers coupled to ICP systems include double focusing magnetic-electrostatic sector systems with both single and multiple collector, as well as time of flight systems (both axial and orthogonal accelerators have been used).
Applications
One of the largest volume uses for ICP-MS is in the medical and forensic field, specifically, toxicology. A physician may order a metal assay for a number of reasons, such as suspicion of heavy metal poisoning, metabolic concerns, and even hepatological issues. Depending on the specific parameters unique to each patient's diagnostic plan, samples collected for analysis can range from whole blood, urine, plasma, serum, to even packed red blood cells. Another primary use for this instrument lies in the environmental field. Such applications include water testing for municipalities or private individuals all the way to soil, water and other material analysis for industrial purposes.
In recent years, industrial and biological monitoring has presented another major need for metal analysis via ICP-MS. Individuals working in factories where exposure to metals is likely and unavoidable, such as a battery factory, are required by their employer to have their blood or urine analyzed for metal toxicity on a regular basis. This monitoring has become a mandatory practice implemented by the U.S. Occupational Safety and Health Administration, in an effort to protect workers from their work environment and ensure proper rotation of work duties (i.e. rotating employees from a high exposure position to a low exposure position).
ICP-MS is also used widely in the geochemistry field for radiometric dating, in which it is used to analyze relative abundance of different isotopes, in particular uranium and lead. ICP-MS is more suitable for this application than the previously used thermal ionization mass spectrometry, as species with high ionization energy such as osmium and tungsten can be easily ionized. For high precision ratio work, multiple collector instruments are normally used to reduce the effect noise on the calculated ratios.
In the field of flow cytometry, a new technique uses ICP-MS to replace the traditional fluorochromes. Briefly, instead of labelling antibodies (or other biological probes) with fluorochromes, each antibody is labelled with a distinct combinations of lanthanides. When the sample of interest is analysed by ICP-MS in a specialised flow cytometer, each antibody can be identified and quantitated by virtue of a distinct ICP "footprint". In theory, hundreds of different biological probes can thus be analysed in an individual cell, at a rate of ca. 1,000 cells per second. Because elements are easily distinguished in ICP-MS, the problem of compensation in multiplex flow cytometry is effectively eliminated.
Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) is a powerful technique for the elemental analysis of a wide variety of materials encountered in forensic casework. LA-ICP-MS has already been successfully applied in forensics to metals, glasses, soils, car paints, bones and teeth, printing inks, trace elements, fingerprints, and paper. Among these, forensic glass analysis stands out as an application for which this technique has great utility for providing highly discriminating evidence.
Hit-and-runs, burglaries, assaults, drive-by shootings and bombings may leave glass fragments that could be used as evidence of association under glass transfer conditions. LA-ICP-MS is considered one of the best techniques for the analysis of glass, owing to the short time needed for sample preparation and analysis and the small sample size of less than 250 nanograms. In addition, there is no need for the complex procedures and handling of dangerous materials used for digestion of samples. This allows major, minor and trace elements to be detected with a high level of precision and accuracy. A set of physical and optical properties is used to characterize a glass sample, including color, thickness, density and refractive index (RI); in addition, if necessary, elemental analysis can be conducted in order to enhance the value of an association.
Pharmaceutical industry
In the pharmaceutical industry, ICP-MS is used for detecting inorganic impurities in pharmaceuticals and their ingredients. New and reduced maximum permitted exposure levels of heavy metals from dietary supplements, introduced in USP (United States Pharmacopeia) «〈232〉Elemental Impurities—Limits» and USP «〈232〉Elemental Impurities—Procedures», will increase the need for ICP-MS technology, where, previously, other analytic methods have been sufficient.
Cosmetics, such as lipstick, recovered from a crime scene may provide valuable forensic information. Lipstick smears left on cigarette butts, glassware, clothing, bedding; napkins, paper, etc. may be valuable evidence. Lipstick recovered from clothing or skin may also indicate physical contact between individuals. Forensic analysis of recovered lipstick smear evidence can provide valuable information on the recent activities of a victim or suspect. Trace elemental analysis of lipstick smears could be used to complement existing visual comparative procedures to determine the lipstick brand and color.
Single particle inductively coupled plasma mass spectrometry (SP ICP-MS) was designed for particle suspensions in 2000 by Claude Degueldre. He first tested this new methodology at the Forel Institute of the University of Geneva and presented this new analytical approach at the 'Colloid 2002' symposium during the spring 2002 meeting of the EMRS, and in the proceedings in 2003. This study presents the theory of SP ICP-MS and the results of tests carried out on clay particles (montmorillonite) as well as other suspensions of colloids. This method was then tested on thorium dioxide nanoparticles by Degueldre & Favarger (2004), zirconium dioxide by Degueldre et al. (2004) and gold nanoparticles, which are used as a substrate in nanopharmacy, published by Degueldre et al. (2006). Subsequently, the study of uranium dioxide nano- and micro-particles gave rise to a detailed publication (Degueldre et al., 2006). Since 2010, interest in SP ICP-MS has exploded.
Previous forensic techniques employed for the organic analysis of lipsticks by compositional comparison include thin layer chromatography (TLC), gas chromatography (GC), and high-performance liquid chromatography (HPLC). These methods provide useful information regarding the identification of lipsticks. However, they all require long sample preparation times and destroy the sample. Nondestructive techniques for the forensic analysis of lipstick smears include UV fluorescence observation combined with purge-and-trap gas chromatography, microspectrophotometry and scanning electron microscopy-energy dispersive spectroscopy (SEM-EDS), and Raman spectroscopy.
Metal speciation
A growing trend in the world of elemental analysis has revolved around speciation, or determination of the oxidation state of certain metals such as chromium and arsenic. The toxicity of these elements varies with oxidation state, so new regulations from food authorities require speciation of some elements. One of the primary techniques to achieve this is to separate the chemical species with high-performance liquid chromatography (HPLC) or field flow fractionation (FFF) and then measure the concentrations with ICP-MS.
Quantification of proteins and biomolecules
There is an increasing trend of using ICP-MS as a tool in speciation analysis, which normally involves a front end chromatograph separation and an elemental selective detector, such as AAS and ICP-MS. For example, ICP-MS may be combined with size exclusion chromatography and preparative native PAGE for identifying and quantifying metalloproteins in biofluids. Also the phosphorylation status of proteins can be analyzed.
In 2007, a new type of protein tagging reagents called metal-coded affinity tags (MeCAT) were introduced to label proteins quantitatively with metals, especially lanthanides. The MeCAT labelling allows relative and absolute quantification of all kind of proteins or other biomolecules like peptides. MeCAT comprises a site-specific biomolecule tagging group with at least a strong chelate group which binds metals. The MeCAT labelled proteins can be accurately quantified by ICP-MS down to low attomol amount of analyte which is at least 2–3 orders of magnitude more sensitive than other mass spectrometry based quantification methods. By introducing several MeCAT labels to a biomolecule and further optimization of LC-ICP-MS detection limits in the zeptomol range are within the realm of possibility. By using different lanthanides MeCAT multiplexing can be used for pharmacokinetics of proteins and peptides or the analysis of the differential expression of proteins (proteomics) e.g. in biological fluids. Breakable PAGE SDS-PAGE (DPAGE, dissolvable PAGE), two-dimensional gel electrophoresis or chromatography is used for separation of MeCAT labelled proteins. Flow-injection ICP-MS analysis of protein bands or spots from DPAGE SDS-PAGE gels can be easily performed by dissolving the DPAGE gel after electrophoresis and staining of the gel. MeCAT labelled proteins are identified and relatively quantified on peptide level by MALDI-MS or ESI-MS.
Elemental analysis
The ICP-MS allows determination of elements with atomic mass ranges 7 to 250 (Li to U), and sometimes higher. Some masses cannot be measured in practice, such as mass 40, owing to the abundance of argon in the sample. Other interference regions include mass 80 (due to the argon dimer) and mass 56 (due to ArO), the latter of which greatly hinders Fe detection unless the instrument is fitted with a reaction chamber. Such interferences can be reduced by using a high-resolution ICP-MS (HR-ICP-MS), which uses two or more slits to constrict the beam and distinguish between nearby peaks. This comes at the cost of sensitivity: for example, distinguishing iron from argon-based interferences requires a resolving power of about 10,000, which may reduce the iron sensitivity by around 99%.
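As a rough illustration (not from the source), the resolving power needed to split two overlapping peaks can be estimated as R = m/Δm; the isotope masses below are approximate and the function name is ours:

```python
# Illustrative estimate of the resolving power R = m / delta_m needed to
# separate an analyte peak from an isobaric interference in HR-ICP-MS.
# Masses are approximate, in unified atomic mass units (u).

def required_resolving_power(m_analyte: float, m_interference: float) -> float:
    """R = m / |delta_m| for two nearby peaks."""
    return m_analyte / abs(m_analyte - m_interference)

m_fe56 = 55.93494                  # 56Fe
m_ar40o16 = 39.96238 + 15.99491    # 40Ar16O polyatomic interference

print(f"R for 56Fe vs 40Ar16O: {required_resolving_power(m_fe56, m_ar40o16):.0f}")
# ~2500 is the minimum separation requirement; commercial sector instruments
# typically offer fixed resolution settings (e.g. ~300, ~4000, ~10000), and
# the higher-resolution slits cut ion transmission sharply.
```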
A single collector ICP-MS may use a multiplier in pulse counting mode to amplify very low signals, an attenuation grid or a multiplier in analogue mode to detect medium signals, and a Faraday cup/bucket to detect larger signals. A multi-collector ICP-MS may have more than one of any of these, typically Faraday buckets which are more cost-effective than other collectors. With this combination, a dynamic range of 12 orders of magnitude, from 1 part per quadrillion (ppq) to 100 parts per million (ppm) is possible.
ICP-MS is a common method for the determination of cadmium in biological samples.
Unlike atomic absorption spectroscopy, which can only measure a single element at a time, ICP-MS has the capability to scan for all elements simultaneously. This allows rapid sample processing. A simultaneous ICP-MS that can record the entire analytical spectrum from lithium to uranium in every analysis won the Silver Award at the 2010 Pittcon Editors' Awards. An ICP-MS may use multiple scan modes, each one striking a different balance between speed and precision. Using the magnet alone to scan is slow, due to hysteresis, but precise. Electrostatic plates can be used in addition to the magnet to increase the speed, and with multiple collectors this can allow a scan of every element from lithium-6 to uranium oxide (mass 256) in less than a quarter of a second. For low detection limits, interfering species and high precision, the counting time can increase substantially. The rapid scanning, large dynamic range and large mass range of ICP-MS are ideally suited to measuring multiple unknown concentrations and isotope ratios in samples that have had minimal preparation (an advantage over TIMS). The analysis of seawater, urine, and digested whole rock samples are examples of industry applications. These properties also lend themselves well to laser-ablated rock samples, where the scanning rate is fast enough to enable a real-time plot of any number of isotopes. This also allows easy spatial mapping of mineral grains.
Hardware
In terms of input and output, an ICP-MS instrument consumes prepared sample material and translates it into mass-spectral data. The actual analytical procedure takes some time; after that time the instrument can be switched to work on the next sample. A series of such sample measurements requires the plasma to remain ignited, and a number of technical parameters must be stable for the results to be interpreted with adequate accuracy and precision. Maintaining the plasma requires a constant supply of carrier gas (usually pure argon) and increases the power consumption of the instrument. When these additional running costs are not considered justified, the plasma and most of the auxiliary systems can be turned off. In such a standby mode, only the pumps keep working to maintain the proper vacuum in the mass spectrometer.
The components of an ICP-MS instrument are designed to allow reproducible and stable operation.
Sample introduction
The first step in analysis is the introduction of the sample. This has been achieved in ICP-MS through a variety of means.
The most common method is the use of analytical nebulizers. A nebulizer converts liquids into an aerosol, and that aerosol can then be swept into the plasma to create the ions. Nebulizers work best with simple liquid samples (i.e. solutions). However, there have been instances of their use with more complex materials like a slurry. Many varieties of nebulizers have been coupled to ICP-MS, including pneumatic, cross-flow, Babington, ultrasonic, and desolvating types. The aerosol generated is often treated to limit it to only the smallest droplets, commonly by means of a Peltier-cooled double-pass or cyclonic spray chamber. Use of autosamplers makes this easier and faster, especially for routine work and large numbers of samples. A desolvating nebulizer (DSN) may also be used; this uses a long heated capillary, coated with a fluoropolymer membrane, to remove most of the solvent and reduce the load on the plasma. Matrix removal introduction systems are sometimes used for samples, such as seawater, where the species of interest are at trace levels and are surrounded by much more abundant contaminants.
Laser ablation is another method. Though less common in the past, it has become popular as a means of sample introduction, thanks to increased ICP-MS scanning speeds. In this method, a pulsed UV laser is focused on the sample and creates a plume of ablated material, which can be swept into the plasma. This allows geochemists to spatially map the isotope composition in cross-sections of rock samples, a tool which is lost if the rock is digested and introduced as a liquid sample. Lasers for this task are built to have highly controllable power outputs and uniform radial power distributions, to produce craters which are flat bottomed and of a chosen diameter and depth.
For both laser ablation and desolvating nebulizers, a small flow of nitrogen may also be introduced into the argon flow. Nitrogen, being a diatomic molecule, has more vibrational modes and is more efficient at receiving energy from the RF coil around the torch.
Other methods of sample introduction are also utilized. Electrothermal vaporization (ETV) and in torch vaporization (ITV) use hot surfaces (graphite or metal, generally) to vaporize samples for introduction. These can use very small amounts of liquids, solids, or slurries. Other methods like vapor generation are also known.
Plasma torch
The plasma used in an ICP-MS is made by partially ionizing argon gas (Ar → Ar+ + e−). The energy required for this reaction is obtained by pulsing an alternating electric current in a load coil that surrounds the plasma torch, through which argon gas flows.
After the sample is injected, the plasma's extreme temperature causes the sample to separate into individual atoms (atomization). Next, the plasma ionizes these atoms (M → M+ + e−) so that they can be detected by the mass spectrometer.
An inductively coupled plasma (ICP) for spectrometry is sustained in a torch that consists of three concentric tubes, usually made of quartz. The two major designs are the Fassel and Greenfield torches. The end of this torch is placed inside an induction coil supplied with a radio-frequency electric current. A flow of argon gas (usually 14 to 18 liters per minute) is introduced between the two outermost tubes of the torch and an electrical spark is applied for a short time to introduce free electrons into the gas stream. These electrons interact with the radio-frequency magnetic field of the induction coil and are accelerated first in one direction, then the other, as the field changes at high frequency (usually 27.12 MHz or 40 MHz). The accelerated electrons collide with argon atoms, and sometimes a collision causes an argon atom to part with one of its electrons. The released electron is in turn accelerated by the rapidly changing magnetic field. The process continues until the rate of release of new electrons in collisions is balanced by the rate of recombination of electrons with argon ions (atoms that have lost an electron). This produces a ‘fireball’ that consists mostly of argon atoms with a rather small fraction of free electrons and argon ions.
Advantage of argon
Making the plasma from argon, instead of other gases, has several advantages. First, argon is abundant (in the atmosphere, as a result of the radioactive decay of potassium) and therefore cheaper than other noble gases. Argon also has a higher first ionization potential than all other elements except He, F, and Ne. Because of this high ionization energy, the reaction (Ar+ + e− → Ar) is more energetically favorable than the reaction (M+ + e− → M). This ensures that the sample remains ionized (as M+) so that the mass spectrometer can detect it.
Argon can be purchased for use with the ICP-MS either as a refrigerated liquid or as a gas. Whichever form is purchased should have a guaranteed purity of at least 99.9% argon. It is important to determine which type of argon is best suited to the specific situation. Liquid argon is typically cheaper and can be stored in greater quantity than the gas form, which is more expensive and takes up more tank space. If the instrument is in an environment where it gets infrequent use, buying argon in the gas state will be most appropriate, as it will be more than enough for smaller run times and gas in the cylinder will remain stable for longer periods of time, whereas liquid argon suffers losses to the environment due to venting of the tank when stored over extended time frames. However, if the ICP-MS is used routinely and runs for eight or more hours each day, several days a week, liquid argon will be the most suitable. If multiple ICP-MS instruments are to run for long periods of time, it will most likely be beneficial for the laboratory to install a bulk or micro-bulk argon tank maintained by a gas supply company, eliminating the need to change out tanks frequently and minimizing both the argon left over in each used tank and the downtime for tank changeover.
Helium can be used either in place of, or mixed with, argon for plasma generation. Helium's higher first ionisation energy allows greater ionisation and therefore higher sensitivity for hard-to-ionise elements. The use of pure helium also avoids argon-based interferences such as ArO. However, many of the interferences can be mitigated by use of a collision cell, and the greater cost of helium has prevented its use in commercial ICP-MS.
Transfer of ions into vacuum
The carrier gas is sent through the central channel and into the very hot plasma, which is sustained by the radio-frequency field. The high temperature of the plasma is sufficient to cause a very large portion of the sample to form ions. This fraction of ionization can approach 100% for some elements (e.g. sodium), but it depends on the ionization potential. A fraction of the formed ions passes through a ~1 mm hole (sampler cone) and then a ~0.4 mm hole (skimmer cone), whose purpose is to admit the ions into the vacuum required by the mass spectrometer.
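The degree of ionization can be estimated with the Saha equation. The following sketch is illustrative only: the plasma temperature and electron density below are assumed, typical-order values, not instrument specifications.

```python
import math

# Saha-equation estimate of the ionization fraction of sodium in an argon ICP.
T = 7500.0        # plasma temperature, K (assumed)
n_e = 1.0e21      # electron number density, m^-3 (assumed)

k_B = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837e-31     # electron mass, kg
eV = 1.602176634e-19    # J per eV

E_ion = 5.139 * eV      # first ionization energy of Na
g_ratio = 1.0 / 2.0     # statistical weights g(Na+)/g(Na), approximate

# Saha: n_i*n_e/n_0 = 2*(g_i/g_0) * (2*pi*m_e*k_B*T/h^2)^(3/2) * exp(-E_ion/(k_B*T))
saha = 2 * g_ratio * (2 * math.pi * m_e * k_B * T / h**2) ** 1.5 \
       * math.exp(-E_ion / (k_B * T))
ratio = saha / n_e               # n_i / n_0
alpha = ratio / (1 + ratio)      # ionization fraction
print(f"Na ionization fraction at {T:.0f} K: {alpha:.4f}")   # ~0.998
```

Under these assumed conditions sodium is more than 99% ionized, consistent with the statement above; elements with higher ionization potentials give markedly smaller fractions.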
The vacuum is created and maintained by a series of pumps. The first stage is usually based on a roughing pump, most commonly a standard rotary vane pump. This removes most of the gas and typically reaches a pressure of around 133 Pa. Later stages have their vacuum generated by more powerful vacuum systems, most often turbomolecular pumps. Older instruments may have used oil diffusion pumps for high vacuum regions.
Ion optics
Before mass separation, a beam of positive ions has to be extracted from the plasma and focused into the mass analyzer. It is important to separate the ions from UV photons, energetic neutrals and from any solid particles that may have been carried into the instrument from the ICP. Traditionally, ICP-MS instruments have used transmitting ion lens arrangements for this purpose. Examples include the Einzel lens, the Barrel lens, Agilent's Omega Lens and Perkin-Elmer's Shadow Stop. Another approach is to use ion guides (quadrupoles, hexapoles, or octopoles) to guide the ions into the mass analyzer along a path away from the trajectory of photons or neutral particles. Yet another approach is the Varian-patented 90-degree reflecting parabolic "Ion Mirror" optics used by Analytik Jena in its ICP-MS instruments, which are claimed to provide more efficient ion transport into the mass analyzer, resulting in better sensitivity and reduced background. The Analytik Jena ICP-MS PQMS is claimed to be among the most sensitive instruments on the market.
A sector ICP-MS commonly has four sections: an extraction/acceleration region, steering lenses, an electrostatic sector and a magnetic sector. The first region takes ions from the plasma and accelerates them using a high voltage. The second may use a combination of parallel plates, rings, quadrupoles, hexapoles and octopoles to steer, shape and focus the beam so that the resulting peaks are symmetrical, flat-topped and have high transmission. The electrostatic sector may be before or after the magnetic sector depending on the particular instrument, and reduces the spread in kinetic energy caused by the plasma. This spread is particularly large for ICP-MS, being larger than in glow discharge and much larger than in TIMS. The geometry of the instrument is chosen so that the combined focal point of the electrostatic and magnetic sectors is at the collector, an arrangement known as double focusing.
If the mass of interest has a low sensitivity and is just below a much larger peak, the low mass tail from this larger peak can intrude onto the mass of interest. A Retardation Filter might be used to reduce this tail. This sits near the collector, and applies a voltage equal but opposite to the accelerating voltage; any ions that have lost energy while flying around the instrument will be decelerated to rest by the filter.
Collision reaction cell and iCRC
The collision/reaction cell is used to remove interfering ions through ion/neutral reactions. Collision/reaction cells are known under several names. The dynamic reaction cell is located before the quadrupole in the ICP-MS device. The chamber has a quadrupole and can be filled with reaction (or collision) gases (ammonia, methane, oxygen or hydrogen), with one gas type at a time or a mixture of two of them, which reacts with the introduced sample, eliminating some of the interference.
The integrated Collisional Reaction Cell (iCRC) used by Analytik Jena ICP-MS is a mini collision cell installed in front of the parabolic ion mirror optics that removes interfering ions by injecting a collisional gas (He), a reactive gas (H2), or a mixture of the two, directly into the plasma as it flows through the skimmer cone and/or the sampler cone. The iCRC removes interfering ions using a collisional kinetic energy discrimination (KED) phenomenon and chemical reactions with interfering ions, similarly to traditionally used larger collision cells.
Routine maintenance
As with any piece of instrumentation or equipment, there are many aspects of maintenance that need to be encompassed by daily, weekly and annual procedures. The frequency of maintenance is typically determined by the sample volume and cumulative run time that the instrument is subjected to.
One of the first things that should be carried out before the calibration of the ICP-MS is a sensitivity check and optimization. This ensures that the operator is aware of any possible issues with the instrument and if so, may address them before beginning a calibration. Typical indicators of sensitivity are Rhodium levels, Cerium/Oxide ratios and DI water blanks. One common standard practice is to measure a standard tuning solution provided by the ICP manufacturer every time the plasma torch is started. Then the instrument is auto-calibrated for optimum sensitivity and the operator obtains a report providing certain parameters such as sensitivity, mass resolution and estimated amount of oxidized species and double-positive charged species.
One of the most frequent forms of routine maintenance is replacing sample and waste tubing on the peristaltic pump, as these tubes can wear fairly quickly, producing holes and clogs in the sample line and thereby skewed results. Other parts that need regular cleaning and/or replacement are sample tips, nebulizer tips, sample cones, skimmer cones, injector tubes, torches and lenses. It may also be necessary to change the oil in the interface roughing pump as well as the vacuum backing pump, depending on the workload put on the instrument.
Sample preparation
For most clinical methods using ICP-MS, there is a relatively simple and quick sample preparation process. The main component of the sample is an internal standard, which also serves as the diluent. This internal standard consists primarily of deionized water, with nitric or hydrochloric acid, and indium and/or gallium. The addition of volatile acids allows the sample to decompose into gaseous components in the plasma, which minimizes the tendency of concentrated salts and solvent loads to clog the cones and contaminate the instrument. Depending on the sample type, usually 5 mL of the internal standard is added to a test tube along with 10–500 microliters of sample. This mixture is then vortexed for several seconds or until mixed well and then loaded onto the autosampler tray. For other applications that may involve very viscous samples or samples that have particulate matter, a process known as sample digestion may have to be carried out before the sample can be pipetted and analyzed. This adds an extra first step to the above process and therefore makes the preparation lengthier.
References
External links
Scientific techniques
Mass spectrometry
Laboratory equipment
Analytical chemistry | Inductively coupled plasma mass spectrometry | [
"Physics",
"Chemistry"
] | 7,365 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"nan",
"Matter"
] |
49,569 | https://en.wikipedia.org/wiki/Bayes%27%20theorem | Bayes' theorem (alternatively Bayes' law or Bayes' rule, after Thomas Bayes) gives a mathematical rule for inverting conditional probabilities, allowing one to find the probability of a cause given its effect. For example, if the risk of developing health problems is known to increase with age, Bayes' theorem allows the risk to someone of a known age to be assessed more accurately by conditioning it relative to their age, rather than assuming that the person is typical of the population as a whole. Based on Bayes' law, both the prevalence of a disease in a given population and the error rate of an infectious disease test must be taken into account to evaluate the meaning of a positive test result and avoid the base-rate fallacy.
One of Bayes' theorem's many applications is Bayesian inference, an approach to statistical inference, where it is used to invert the probability of observations given a model configuration (i.e., the likelihood function) to obtain the probability of the model configuration given the observations (i.e., the posterior probability).
History
Bayes' theorem is named after Thomas Bayes, a minister, statistician, and philosopher. Bayes used conditional probability to provide an algorithm (his Proposition 9) that uses evidence to calculate limits on an unknown parameter. His work was published in 1763 as An Essay Towards Solving a Problem in the Doctrine of Chances. Bayes studied how to compute a distribution for the probability parameter of a binomial distribution (in modern terminology). After Bayes's death, his family gave his papers to a friend, the minister, philosopher, and mathematician Richard Price.
Price significantly edited the unpublished manuscript for two years before sending it to a friend who read it aloud at the Royal Society on 23 December 1763. Price edited Bayes's major work "An Essay Towards Solving a Problem in the Doctrine of Chances" (1763), which appeared in Philosophical Transactions, and contains Bayes' theorem. Price wrote an introduction to the paper that provides some of the philosophical basis of Bayesian statistics and chose one of the two solutions Bayes offered. In 1765, Price was elected a Fellow of the Royal Society in recognition of his work on Bayes's legacy. On 27 April, a letter sent to his friend Benjamin Franklin was read out at the Royal Society, and later published, in which Price applies this work to population and computing 'life-annuities'.
Independently of Bayes, Pierre-Simon Laplace used conditional probability to formulate the relation of an updated posterior probability from a prior probability, given evidence. He reproduced and extended Bayes's results in 1774, apparently unaware of Bayes's work, and summarized his results in Théorie analytique des probabilités (1812). The Bayesian interpretation of probability was developed mainly by Laplace.
About 200 years later, Sir Harold Jeffreys put Bayes's algorithm and Laplace's formulation on an axiomatic basis, writing in a 1973 book that Bayes' theorem "is to the theory of probability what the Pythagorean theorem is to geometry".
Stephen Stigler used a Bayesian argument to conclude that Bayes' theorem was discovered by Nicholas Saunderson, a blind English mathematician, some time before Bayes, but that is disputed. Martyn Hooper and Sharon McGrayne have argued that Richard Price's contribution was substantial.
Statement of theorem
Bayes' theorem is stated mathematically as the following equation:

P(A|B) = P(B|A) P(A) / P(B)

where A and B are events and P(B) ≠ 0.
P(A|B) is a conditional probability: the probability of event A occurring given that B is true. It is also called the posterior probability of A given B.
P(B|A) is also a conditional probability: the probability of event B occurring given that A is true. It can also be interpreted as the likelihood of A given a fixed B because P(B|A) = L(A|B).
P(A) and P(B) are the probabilities of observing A and B respectively without any given conditions; they are known as the prior probability and the marginal probability.
Proof
For events
Bayes' theorem may be derived from the definition of conditional probability:

P(A|B) = P(A ∩ B) / P(B), if P(B) ≠ 0,

where P(A ∩ B) is the probability of both A and B being true. Similarly,

P(B|A) = P(A ∩ B) / P(A), if P(A) ≠ 0.

Solving for P(A ∩ B) and substituting into the above expression for P(A|B) yields Bayes' theorem:

P(A|B) = P(B|A) P(A) / P(B).
For continuous random variables
For two continuous random variables X and Y, Bayes' theorem may be analogously derived from the definition of conditional density:

fX|Y=y(x) = fX,Y(x, y) / fY(y) and fY|X=x(y) = fX,Y(x, y) / fX(x).

Therefore,

fX|Y=y(x) = fY|X=x(y) fX(x) / fY(y).
General case
Let P(Y ∈ dy | X = x) be the conditional distribution of Y given X = x and let PX be the distribution of X. The joint distribution is then PX,Y(dx, dy) = P(Y ∈ dy | X = x) PX(dx). The conditional distribution of X given Y = y is then determined by

P(X ∈ A | Y = y) = E(1A(X) | Y = y).
Existence and uniqueness of the needed conditional expectation is a consequence of the Radon–Nikodym theorem. This was formulated by Kolmogorov in 1933. Kolmogorov underlines the importance of conditional probability, writing, "I wish to call attention to ... the theory of conditional probabilities and conditional expectations". Bayes' theorem determines the posterior distribution from the prior distribution. Uniqueness requires continuity assumptions. Bayes' theorem can be generalized to include improper prior distributions such as the uniform distribution on the real line. Modern Markov chain Monte Carlo methods have boosted the importance of Bayes' theorem, including in cases with improper priors.
Examples
Recreational mathematics
Bayes' rule and computing conditional probabilities provide a method to solve a number of popular puzzles, such as the Three Prisoners problem, the Monty Hall problem, the Two Child problem, and the Two Envelopes problem.
Drug testing
Suppose, a particular test for whether someone has been using cannabis is 90% sensitive, meaning the true positive rate (TPR) = 0.90. Therefore, it leads to 90% true positive results (correct identification of drug use) for cannabis users.
The test is also 80% specific, meaning true negative rate (TNR) = 0.80. Therefore, the test correctly identifies 80% of non-use for non-users, but also generates 20% false positives, or false positive rate (FPR) = 0.20, for non-users.
Assuming 0.05 prevalence, meaning 5% of people use cannabis, what is the probability that a random person who tests positive is really a cannabis user?
The Positive predictive value (PPV) of a test is the proportion of persons who are actually positive out of all those testing positive, and can be calculated from a sample as:
PPV = True positive / Tested positive
If sensitivity, specificity, and prevalence are known, PPV can be calculated using Bayes' theorem. Let P(User | Positive) mean "the probability that someone is a cannabis user given that they test positive", which is what PPV means. We can write:

P(User | Positive) = P(Positive | User) P(User) / P(Positive)
= (0.90 × 0.05) / (0.90 × 0.05 + 0.20 × 0.95) = 0.045 / 0.235 ≈ 19%.
The denominator is a direct application of the Law of Total Probability. In this case, it says that the probability that someone tests positive is the probability that a user tests positive times the probability of being a user, plus the probability that a non-user tests positive, times the probability of being a non-user. This is true because the classifications user and non-user form a partition of a set, namely the set of people who take the drug test. This combined with the definition of conditional probability results in the above statement.
In other words, if someone tests positive, the probability that they are a cannabis user is only 19%—because in this group, only 5% of people are users, and most positives are false positives coming from the remaining 95%.
If 1,000 people were tested:
950 are non-users and 190 of them give false positive (0.20 × 950)
50 of them are users and 45 of them give true positive (0.90 × 50)
The 1,000 people thus have 235 positive tests, of which only 45 are genuine, about 19%.
Sensitivity or specificity
The importance of specificity can be seen by showing that even if sensitivity is raised to 100% and specificity remains at 80%, the probability that someone who tests positive is a cannabis user rises only from 19% to 21%, but if the sensitivity is held at 90% and the specificity is increased to 95%, the probability rises to 49%.
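Both calculations can be reproduced in a few lines of code. The following sketch (the function name and structure are ours, not from any standard library) applies Bayes' theorem directly:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

print(f"PPV = {ppv(0.90, 0.80, 0.05):.3f}")              # ~0.19
# Raising specificity helps far more than raising sensitivity here:
print(f"sens=1.00, spec=0.80: {ppv(1.00, 0.80, 0.05):.3f}")  # ~0.21
print(f"sens=0.90, spec=0.95: {ppv(0.90, 0.95, 0.05):.3f}")  # ~0.49
```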
Cancer rate
If all patients with pancreatic cancer have a certain symptom, it does not follow that anyone who has that symptom has a 100% chance of getting pancreatic cancer. Assuming the incidence rate of pancreatic cancer is 1/100000, while 10/99999 healthy individuals have the same symptoms worldwide, the probability of having pancreatic cancer given the symptoms is 9.1%, and the other 90.9% could be "false positives" (that is, falsely said to have cancer; "positive" is a confusing term when, as here, the test gives bad news).
Based on incidence rate, per 100,000 people, 1 has pancreatic cancer (and, by assumption, the symptoms), while of the 99,999 without cancer, 10 have the same symptoms and 99,989 do not.
These numbers can then be used to calculate the probability of having cancer given the symptoms:

P(Cancer | Symptoms) = 1 / (1 + 10) ≈ 9.1%.
Defective item rate
A factory produces items using three machines—A, B, and C—which account for 20%, 30%, and 50% of its output, respectively. Of the items produced by machine A, 5% are defective, while 3% of B's items and 1% of C's are defective. If a randomly selected item is defective, what is the probability it was produced by machine C?
Once again, the answer can be reached without using the formula by applying the conditions to a hypothetical number of cases. For example, if the factory produces 1,000 items, 200 will be produced by A, 300 by B, and 500 by C. Machine A will produce 5% × 200 = 10 defective items, B 3% × 300 = 9, and C 1% × 500 = 5, for a total of 24. Thus 24/1000 (2.4%) of the total output will be defective and the likelihood that a randomly selected defective item was produced by machine C is 5/24 (~20.83%).
This problem can also be solved using Bayes' theorem: Let Xi denote the event that a randomly chosen item was made by the i th machine (for i = A,B,C). Let Y denote the event that a randomly chosen item is defective. Then, we are given the following information:

P(XA) = 0.2, P(XB) = 0.3, P(XC) = 0.5.

If the item was made by the first machine, then the probability that it is defective is 0.05; that is, P(Y | XA) = 0.05. Overall, we have

P(Y | XA) = 0.05, P(Y | XB) = 0.03, P(Y | XC) = 0.01.

To answer the original question, we first find P(Y). That can be done in the following way:

P(Y) = Σi P(Y | Xi) P(Xi) = (0.05)(0.2) + (0.03)(0.3) + (0.01)(0.5) = 0.024.

Hence, 2.4% of the total output is defective.

We are given that Y has occurred and we want to calculate the conditional probability of XC. By Bayes' theorem,

P(XC | Y) = P(Y | XC) P(XC) / P(Y) = (0.01)(0.50) / 0.024 = 5/24.
Given that the item is defective, the probability that it was made by machine C is 5/24. C produces half of the total output but a much smaller fraction of the defective items. Hence the knowledge that the item selected was defective enables us to replace the prior probability P(XC) = 1/2 by the smaller posterior probability P(XC | Y) = 5/24.
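A short sketch (variable names ours) expressing the same computation with the law of total probability and Bayes' theorem:

```python
# Bayes' theorem over a partition: which machine produced a defective item?
priors = {"A": 0.20, "B": 0.30, "C": 0.50}        # share of total output
p_defect = {"A": 0.05, "B": 0.03, "C": 0.01}      # defect rate per machine

# Law of total probability: P(defective)
p_y = sum(priors[m] * p_defect[m] for m in priors)        # 0.024

posteriors = {m: priors[m] * p_defect[m] / p_y for m in priors}
print(f"P(defective) = {p_y:.3f}")
print(f"P(C | defective) = {posteriors['C']:.4f}")        # 5/24 ~ 0.2083
```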
Interpretations
The interpretation of Bayes' rule depends on the interpretation of probability ascribed to the terms. The two predominant interpretations are described below.
Bayesian interpretation
In the Bayesian (or epistemological) interpretation, probability measures a "degree of belief". Bayes' theorem links the degree of belief in a proposition before and after accounting for evidence. For example, suppose it is believed with 50% certainty that a coin is twice as likely to land heads than tails. If the coin is flipped a number of times and the outcomes observed, that degree of belief will probably rise or fall, but might remain the same, depending on the results. For proposition A and evidence B,
P (A), the prior, is the initial degree of belief in A.
P (A | B), the posterior, is the degree of belief after incorporating news that B is true.
the quotient P(B|A)/P(B) represents the support B provides for A.
For more on the application of Bayes' theorem under the Bayesian interpretation of probability, see Bayesian inference.
Frequentist interpretation
In the frequentist interpretation, probability measures a "proportion of outcomes". For example, suppose an experiment is performed many times. P(A) is the proportion of outcomes with property A (the prior) and P(B) is the proportion with property B. P(B | A) is the proportion of outcomes with property B out of outcomes with property A, and P(A | B) is the proportion of those with A out of those with B (the posterior).
The role of Bayes' theorem can be shown with tree diagrams. The two diagrams partition the same outcomes by A and B in opposite orders, to obtain the inverse probabilities. Bayes' theorem links the different partitionings.
Example
An entomologist spots what might, due to the pattern on its back, be a rare subspecies of beetle. A full 98% of the members of the rare subspecies have the pattern, so P(Pattern | Rare) = 98%. Only 5% of members of the common subspecies have the pattern. The rare subspecies is 0.1% of the total population. How likely is the beetle having the pattern to be rare: what is P(Rare | Pattern)?
From the extended form of Bayes' theorem (since any beetle is either rare or common),

P(Rare | Pattern) = P(Pattern | Rare) P(Rare) / [P(Pattern | Rare) P(Rare) + P(Pattern | Common) P(Common)]
= (0.98 × 0.001) / (0.98 × 0.001 + 0.05 × 0.999) ≈ 1.9%.
Forms
Events
Simple form
For events A and B, provided that P(B) ≠ 0,

P(A|B) = P(B|A) P(A) / P(B).

In many applications, for instance in Bayesian inference, the event B is fixed in the discussion and we wish to consider the effect of its having been observed on our belief in various possible events A. In such situations the denominator of the last expression, the probability of the given evidence B, is fixed; what we want to vary is A. Bayes' theorem shows that the posterior probabilities are proportional to the numerator, so the last equation becomes:

P(A|B) ∝ P(A) · P(B|A).
In words, the posterior is proportional to the prior times the likelihood.
If events A1, A2, ..., are mutually exclusive and exhaustive, i.e., one of them is certain to occur but no two can occur together, we can determine the proportionality constant by using the fact that their probabilities must add up to one. For instance, for a given event A, the event A itself and its complement ¬A are exclusive and exhaustive. Denoting the constant of proportionality by c, we have:

P(A|B) = c · P(A) · P(B|A) and P(¬A|B) = c · P(¬A) · P(B|¬A).

Adding these two formulas we deduce that:

1 = c · (P(B|A) P(A) + P(B|¬A) P(¬A)),

or

c = 1 / (P(B|A) P(A) + P(B|¬A) P(¬A)) = 1 / P(B).
Alternative form
Another form of Bayes' theorem for two competing statements or hypotheses is:

P(A|B) = P(B|A) P(A) / [P(B|A) P(A) + P(B|¬A) P(¬A)].
For an epistemological interpretation:
For proposition A and evidence or background B,
P(A) is the prior probability, the initial degree of belief in A.
P(¬A) is the corresponding initial degree of belief in not-A, that A is false, where P(¬A) = 1 − P(A).
P(B|A) is the conditional probability or likelihood, the degree of belief in B given that A is true.
P(B|¬A) is the conditional probability or likelihood, the degree of belief in B given that A is false.
P(A|B) is the posterior probability, the probability of A after taking into account B.
Extended form
Often, for some partition {Aj} of the sample space, the event space is given in terms of P(Aj) and P(B | Aj). It is then useful to compute P(B) using the law of total probability:

P(B) = Σj P(B | Aj) P(Aj),

Or (using the multiplication rule for conditional probability),

P(B) = Σj P(B ∩ Aj).

In the special case where A is a binary variable:

P(B) = P(B|A) P(A) + P(B|¬A) P(¬A).
Random variables
Consider a sample space Ω generated by two random variables X and Y with known probability distributions. In principle, Bayes' theorem applies to the events A = {X = x} and B = {Y = y}.
Terms become 0 at points where either variable has finite probability density. To remain useful, Bayes' theorem can be formulated in terms of the relevant densities (see Derivation).
Simple form
If X is continuous and Y is discrete,

fX|Y=y(x) = P(Y=y | X=x) fX(x) / P(Y=y),

where each f is a density function.

If X is discrete and Y is continuous,

P(X=x | Y=y) = fY|X=x(y) P(X=x) / fY(y).

If both X and Y are continuous,

fX|Y=y(x) = fY|X=x(y) fX(x) / fY(y).
Extended form
A continuous event space is often conceptualized in terms of the numerator terms. It is then useful to eliminate the denominator using the law of total probability. For fY(y), this becomes an integral:

fY(y) = ∫ fY|X=ξ(y) fX(ξ) dξ.
Bayes' rule in odds form
Bayes' theorem in odds form is:

O(A1 : A2 | B) = O(A1 : A2) · Λ(A1 : A2 | B),

where

Λ(A1 : A2 | B) = P(B | A1) / P(B | A2)

is called the Bayes factor or likelihood ratio. The odds between two events is simply the ratio of the probabilities of the two events. Thus:

O(A1 : A2) = P(A1) / P(A2) and O(A1 : A2 | B) = P(A1 | B) / P(A2 | B).
Thus the rule says that the posterior odds are the prior odds times the Bayes factor; in other words, the posterior is proportional to the prior times the likelihood.
In the special case that A1 = A and A2 = ¬A, one writes O(A) = O(A : ¬A) = P(A) / (1 − P(A)), and uses a similar abbreviation for the Bayes factor and for the conditional odds. The odds on A is by definition the odds for and against A. Bayes' rule can then be written in the abbreviated form

O(A|B) = O(A) · Λ(A|B),
or, in words, the posterior odds on equals the prior odds on times the likelihood ratio for given information . In short, posterior odds equals prior odds times likelihood ratio.
For example, if a medical test has a sensitivity of 90% and a specificity of 91%, then the positive Bayes factor is 90% / (100% − 91%) = 10. Now, if the prevalence of this disease is 9.09%, and if we take that as the prior probability, then the prior odds is about 1:10. So after receiving a positive test result, the posterior odds of having the disease becomes 1:1, which means that the posterior probability of having the disease is 50%. If a second test is performed in serial testing, and that also turns out to be positive, then the posterior odds of having the disease becomes 10:1, which means a posterior probability of about 90.91%. The negative Bayes factor can be calculated to be 91%/(100% − 90%) = 9.1, so if the second test turns out to be negative, then the posterior odds of having the disease is 1:9.1, which means a posterior probability of about 9.9%.
The example above can also be understood with more solid numbers: assume the patient taking the test is from a group of 1,000 people, 91 of whom have the disease (prevalence of 9.1%). If all 1,000 take the test, 82 of those with the disease will get a true positive result (sensitivity of 90.1%), 9 of those with the disease will get a false negative result (false negative rate of 9.9%), 827 of those without the disease will get a true negative result (specificity of 91.0%), and 82 of those without the disease will get a false positive result (false positive rate of 9.0%). Before taking any test, the patient's odds for having the disease is 91:909. After receiving a positive result, the patient's odds for having the disease is

91:909 × (90.1% / 9.0%) = 82:82 = 1:1,

which is consistent with the fact that there are 82 true positives and 82 false positives in the group of 1,000.
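The odds-form bookkeeping above can be checked numerically; in this sketch (names and structure ours) the prior odds are multiplied by the appropriate Bayes factors:

```python
# Odds form of Bayes' rule for the serial-testing example above.
sens, spec = 0.90, 0.91
bf_pos = sens / (1 - spec)          # positive Bayes factor = 10
bf_neg = (1 - sens) / spec          # = 1/9.1, the factor for a negative result

prior_odds = 91 / 909               # prevalence ~9.1%, i.e. about 1:10

after_pos = prior_odds * bf_pos         # ~1:1   -> P ~ 50%
after_two_pos = after_pos * bf_pos      # ~10:1  -> P ~ 90.9%
after_pos_neg = after_pos * bf_neg      # ~1:9.1 -> P ~ 9.9%

def to_prob(odds: float) -> float:
    """Convert odds o:1 to a probability o/(1+o)."""
    return odds / (1 + odds)

print(f"one positive:  P = {to_prob(after_pos):.3f}")
print(f"two positives: P = {to_prob(after_two_pos):.3f}")
print(f"pos then neg:  P = {to_prob(after_pos_neg):.3f}")
```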
Correspondence to other mathematical frameworks
Propositional logic
Where the conditional probability is defined, it can be seen to capture the implication . The probabilistic calculus then mirrors or even generalizes various logical inference rules. Beyond, for example, assigning binary truth values, here one assigns probability values to statements. The assertion is captured by the assertion , i.e. that the conditional probability take the extremal probability value . Likewise, the assertion of a negation of an implication is captured by the assignment of . So, for example, if , then (if it is defined) , which entails , the implication introduction in logic.
Similarly, as the product of two probabilities equaling necessitates that both factors are also , one finds that Bayes' theorem
entails , which now also includes modus ponens.
For positive values , if it equals , then the two conditional probabilities are equal as well, and vice versa. Note that this mirrors the generally valid .
On the other hand, reasoning about either of the probabilities equalling classically entails the following contrapositive form of the above: .
Bayes' theorem with negated gives
.
Ruling out the extremal case (i.e. ), one has and in particular
.
Ruling out also the extremal case , one finds they attain the maximum simultaneously:
which (at least when having ruled out explosive antecedents) captures the classical contraposition principle
.
Subjective logic
Bayes' theorem represents a special case of deriving inverted conditional opinions in subjective logic expressed as:
where denotes the operator for inverting conditional opinions. The argument denotes a pair of binomial conditional opinions given by source , and the argument denotes the prior probability (aka. the base rate) of . The pair of derivative inverted conditional opinions is denoted . The conditional opinion generalizes the probabilistic conditional , i.e. in addition to assigning a probability the source can assign any subjective opinion to the conditional statement . A binomial subjective opinion is the belief in the truth of statement with degrees of epistemic uncertainty, as expressed by source . Every subjective opinion has a corresponding projected probability . The application of Bayes' theorem to projected probabilities of opinions is a homomorphism, meaning that Bayes' theorem can be expressed in terms of projected probabilities of opinions:
Hence, the subjective Bayes' theorem represents a generalization of Bayes' theorem.
Generalizations
Bayes theorem for 3 events
A version of Bayes' theorem for 3 events results from the addition of a third event C, with P(C) > 0, on which all probabilities are conditioned:

P(A | B ∩ C) = P(B | A ∩ C) P(A | C) / P(B | C).
Derivation
Using the chain rule,

P(A ∩ B ∩ C) = P(A | B ∩ C) P(B | C) P(C).

And, on the other hand,

P(A ∩ B ∩ C) = P(B | A ∩ C) P(A | C) P(C).

The desired result is obtained by identifying both expressions and solving for P(A | B ∩ C).
Use in genetics
In genetics, Bayes' rule can be used to estimate the probability that someone has a specific genotype. Many people seek to assess their chances of being affected by a genetic disease or their likelihood of being a carrier for a recessive gene of interest. A Bayesian analysis can be done based on family history or genetic testing to predict whether someone will develop a disease or pass one on to their children. Genetic testing and prediction is common among couples who plan to have children but are concerned that they may both be carriers for a disease, especially in communities with low genetic variance.
Using pedigree to calculate probabilities
Example of a Bayesian analysis table for a female's risk for a disease based on the knowledge that the disease is present in her siblings but not in her parents or any of her four children. Based solely on the status of the subject's siblings and parents, she is equally likely to be a carrier as to be a non-carrier (this likelihood is denoted by the Prior Hypothesis). The probability that the subject's four sons would all be unaffected is 1/16 (1/2 · 1/2 · 1/2 · 1/2) if she is a carrier and about 1 if she is a non-carrier (this is the Conditional Probability). The Joint Probability reconciles these two predictions by multiplying them together. The last line (the Posterior Probability) is calculated by dividing the Joint Probability for each hypothesis by the sum of both joint probabilities.
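A minimal sketch of that table's arithmetic (hypothesis labels ours), assuming each son of a carrier is unaffected with probability 1/2:

```python
# Posterior probability that the woman is a carrier, given four unaffected sons.
prior = {"carrier": 0.5, "non_carrier": 0.5}
likelihood = {"carrier": (1 / 2) ** 4,   # P(4 unaffected sons | carrier) = 1/16
              "non_carrier": 1.0}        # P(4 unaffected sons | non-carrier) ~ 1

joint = {h: prior[h] * likelihood[h] for h in prior}
total = sum(joint.values())
posterior = {h: joint[h] / total for h in joint}
print(posterior)   # carrier: 1/17 ~ 0.059, non_carrier: 16/17 ~ 0.941
```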
Using genetic test results
Parental genetic testing can detect around 90% of known disease alleles in parents that can lead to carrier or affected status in their children. Cystic fibrosis is a heritable disease caused by an autosomal recessive mutation on the CFTR gene, located on the q arm of chromosome 7.
Here is a Bayesian analysis of a female patient with a family history of cystic fibrosis (CF) who has tested negative for CF, demonstrating how the method was used to determine her risk of having a child born with CF: because the patient is unaffected, she is either homozygous for the wild-type allele, or heterozygous. To establish prior probabilities, a Punnett square is used, based on the knowledge that neither parent was affected by the disease but both could have been carriers:
Given that the patient is unaffected, there are only three possibilities. Within these three, there are two scenarios in which the patient carries the mutant allele. Thus the prior probabilities are 2/3 (carrier) and 1/3 (non-carrier).
Next, the patient undergoes genetic testing and tests negative for cystic fibrosis. This test has a 90% detection rate, so the conditional probabilities of a negative test are 1/10 and 1. Finally, the joint and posterior probabilities are calculated as before.
After carrying out the same analysis on the patient's male partner (with a negative test result), the chance that their child is affected is the product of the parents' respective posterior probabilities for being carriers times the chance that two carriers will produce an affected offspring (1/4), as shown in the sketch below.
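The same bookkeeping in code, a sketch using exact fractions (the function name and structure are ours):

```python
from fractions import Fraction as F

def carrier_posterior_after_negative_test(detection_rate=F(9, 10)):
    """Posterior P(carrier) for an unaffected child of two carriers who tests negative."""
    prior_carrier, prior_wt = F(2, 3), F(1, 3)    # from the Punnett square
    p_neg_carrier = 1 - detection_rate            # the test misses 10% of carriers
    p_neg_wt = F(1)                               # non-carriers always test negative
    joint_carrier = prior_carrier * p_neg_carrier
    joint_wt = prior_wt * p_neg_wt
    return joint_carrier / (joint_carrier + joint_wt)

p_mother = carrier_posterior_after_negative_test()   # 1/6
p_father = carrier_posterior_after_negative_test()   # 1/6
p_affected_child = p_mother * p_father * F(1, 4)     # carrier x carrier -> 1/4 affected
print(p_mother, p_affected_child)                    # 1/6, 1/144
```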
Genetic testing done in parallel with other risk factor identification
Bayesian analysis can be done using phenotypic information associated with a genetic condition. When combined with genetic testing, this analysis becomes much more complicated. Cystic fibrosis, for example, can be identified in a fetus with an ultrasound looking for an echogenic bowel, one that appears brighter than normal on a scan. This is not a foolproof test, as an echogenic bowel can be present in a perfectly healthy fetus. Parental genetic testing is very influential in this case, where a phenotypic facet can be overly influential in probability calculation. In the case of a fetus with an echogenic bowel, with a mother who has been tested and is known to be a CF carrier, the posterior probability that the fetus has the disease is very high (0.64). But once the father has tested negative for CF, the posterior probability drops significantly (to 0.16).
Risk factor calculation is a powerful tool in genetic counseling and reproductive planning but cannot be treated as the only important factor. As above, incomplete testing can yield falsely high probability of carrier status, and testing can be financially inaccessible or unfeasible when a parent is not present.
See also
Bayesian epistemology
Inductive probability
Quantum Bayesianism
Why Most Published Research Findings Are False, a 2005 essay in metascience by John Ioannidis
Regular conditional probability
Bayesian persuasion
Notes
References
Bibliography
Further reading
External links
Bayesian statistics
Probability theorems
Theorems in statistics | Bayes' theorem | [
"Mathematics"
] | 5,420 | [
"Mathematical problems",
"Theorems in probability theory",
"Mathematical theorems",
"Theorems in statistics"
] |
7,185,031 | https://en.wikipedia.org/wiki/Trestolone | Trestolone, also known as 7α-methyl-19-nortestosterone (MENT), is an experimental androgen/anabolic steroid (AAS) and progestogen medication which has been under development for potential use as a form of hormonal birth control for men and in androgen replacement therapy for low testosterone levels in men but has never been marketed for medical use. It is given as an implant that is placed into fat. As trestolone acetate, an androgen ester and prodrug of trestolone, the medication can also be given by injection into muscle.
Trestolone is an AAS, and hence is an agonist of the androgen receptor, the biological target of androgens like testosterone. It is also a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. Due to its androgenic and progestogenic activity, trestolone has antigonadotropic effects. These effects result in reversible suppression of sperm production and are responsible for the contraceptive effects of trestolone in men.
Trestolone was first described in 1963. Subsequently, it was not studied again until 1990. Development of trestolone for potential clinical use started by 1993 and continued thereafter. No additional development appears to have been conducted since 2013. The medication was developed by the Population Council, a non-profit, non-governmental organization dedicated to reproductive health.
Medical uses
Trestolone is an experimental medication and is not currently approved for medical use. It has been under development for potential use as a male hormonal contraceptive and in androgen replacement therapy for low testosterone levels. The medication has been studied and developed for use as a subcutaneous implant. An androgen ester and prodrug of trestolone, trestolone acetate, has also been developed, for use via intramuscular injection.
Side effects
Trestolone may cause sexual dysfunction (e.g., decreased sex drive, reduced erectile function) and decreased bone mineral density due to estrogen deficiency.
Pharmacology
Pharmacodynamics
As an AAS, trestolone is an agonist of the androgen receptor (AR), similarly to androgens like testosterone and dihydrotestosterone (DHT). Trestolone is not a substrate for 5α-reductase and hence is not potentiated or inactivated in so-called "androgenic" tissues like the skin, hair follicles, and prostate gland. As such, it has a high ratio of anabolic to androgenic activity, similarly to other nandrolone derivatives. Trestolone is a substrate for aromatase and hence produces the estrogen 7α-methylestradiol as a metabolite. However, trestolone has only weak estrogenic activity and an amount that would appear to be insufficient for replacement purposes, as evidenced by decreased bone mineral density in men treated with it for hypogonadism. Trestolone also has potent progestogenic activity. Both the androgenic and progestogenic activity of trestolone are thought to be involved in its antigonadotropic activity.
Mechanism of action
Spermatozoa are produced in the testes of males in a process called spermatogenesis. In order to render a man infertile, a hormone-based male contraceptive method must stop spermatogenesis by interrupting the release of gonadotropins from the pituitary gland. Even in low concentrations, trestolone is a potent inhibitor of the release of the gonadotropins, luteinizing hormone (LH) and follicle stimulating hormone (FSH). In order for spermatogenesis to occur in the testes, both FSH and testosterone must be present. By inhibiting release of FSH, trestolone creates an endocrine environment in which conditions for spermatogenesis are not ideal. Manufacture of sperm is further impaired by the suppression of LH, which in turn drastically curtails the production of testosterone. Sufficient regular doses of trestolone cause severe oligozoospermia or azoospermia, and therefore infertility, in most men. Trestolone-induced infertility has been found to be quickly reversible upon discontinuation.
When LH release is inhibited, the amount of testosterone made in the testes declines dramatically. As a result of trestolone's gonadotropin-suppressing qualities, levels of serum testosterone fall sharply in men treated with sufficient amounts of the medication. Testosterone is the main hormone responsible for the maintenance of male secondary sex characteristics. Normally, an inadequate testosterone level causes undesirable effects such as fatigue, loss of skeletal muscle mass, reduced libido, and weight gain. However, the androgenic and anabolic properties of trestolone largely ameliorate this: essentially, trestolone replaces testosterone's role as the primary male hormone in the body.
Pharmacokinetics
The pharmacokinetic properties of trestolone, such as poor oral bioavailability and short elimination half-life, make it unsuitable for oral administration or long-term intramuscular injection. As such, trestolone must be administered parenterally via a different and more practical route such as subcutaneous implant, transdermal patch, or topical gel. Trestolone acetate, a prodrug of trestolone, can be administered via intramuscular injection.
Chemistry
Trestolone, also known as 7α-methyl-19-nortestosterone (MENT) or as 7α-methylestr-4-en-17β-ol-3-one, is a synthetic estrane steroid and a derivative of nandrolone (19-nortestosterone). It is a modification of nandrolone with a methyl group at the C7α position. Closely related AAS include 7α-methyl-19-norandrostenedione (MENT dione, trestione) (an androgen prohormone of trestolone) and dimethandrolone (7α,11β-dimethyl-19-nortestosterone) (the C11β methylated derivative of trestolone), as well as mibolerone (7α,17α-dimethyl-19-nortestosterone) and dimethyltrienolone (7α,17α-dimethyl-δ9,11-19-nortestosterone). The progestin tibolone (7α-methyl-17α-ethynyl-δ5(10)-19-nortestosterone) is also closely related to trestolone.
History
Trestolone was first described in 1963. However, it was not subsequently studied again until 1990. Development of trestolone for potential use in male hormonal contraception and androgen replacement therapy was started by 1993, and continued thereafter. No additional development appears to have been conducted since 2013. Trestolone was developed by the Population Council, a non-profit, non-governmental organization dedicated to reproductive health.
Society and culture
Generic names
Trestolone is the generic name of the drug and its INN (International Nonproprietary Name). It is also commonly known as 7α-methyl-19-nortestosterone (MENT).
References
Abandoned drugs
Secondary alcohols
Anabolic–androgenic steroids
Antigonadotropins
Contraception for males
Estranes
Experimental methods of birth control
Hormonal contraception
Ketones
Progestogens
Synthetic estrogens | Trestolone | [
"Chemistry"
] | 1,603 | [
"Ketones",
"Functional groups",
"Drug safety",
"Abandoned drugs"
] |
7,185,428 | https://en.wikipedia.org/wiki/Random%20dynamical%20system | In the mathematical field of dynamical systems, a random dynamical system is a dynamical system in which the equations of motion have an element of randomness to them. Random dynamical systems are characterized by a state space S, a set of maps from S into itself that can be thought of as the set of all possible equations of motion, and a probability distribution Q on the set that represents the random choice of map. Motion in a random dynamical system can be informally thought of as a state evolving according to a succession of maps randomly chosen according to the distribution Q.
An example of a random dynamical system is a stochastic differential equation; in this case the distribution Q is typically determined by noise terms. It consists of a base flow, the "noise", and a cocycle dynamical system on the "physical" phase space. Another example is a discrete-state random dynamical system; some elementary contradistinctions between the Markov chain and random dynamical system descriptions of a stochastic dynamics are discussed below.
Motivation 1: Solutions to a stochastic differential equation
Let f : Rd → Rd be a d-dimensional vector field, and let ε > 0. Suppose that the solution X(t, ω; x0) to the stochastic differential equation

dX = f(X) dt + ε dW, X(0) = x0,

exists for all positive time and some (small) interval of negative time dependent upon ω, where W : R × Ω → Rd denotes a d-dimensional Wiener process (Brownian motion). Implicitly, this statement uses the classical Wiener probability space (Ω, ℱ, ℙ).

In this context, the Wiener process is the coordinate process.

Now define a flow map or (solution operator) φ : R × Ω × Rd → Rd by

φ(t, ω, x0) := X(t, ω; x0)

(whenever the right hand side is well-defined). Then φ (or, more precisely, the pair (Rd, φ)) is a (local, left-sided) random dynamical system. The process of generating a "flow" from the solution to a stochastic differential equation leads us to study suitably defined "flows" on their own. These "flows" are random dynamical systems.
Motivation 2: Connection to Markov Chain
An i.i.d. random dynamical system in a discrete space is described by a triplet (S, Γ, Q).

S is the state space, S = {s1, s2, ..., sn}.
Γ is a family of maps of S into itself. Each such map has a matrix representation, called a deterministic transition matrix. It is a binary matrix with exactly one entry 1 in each row and 0s otherwise.
Q is a probability measure on the σ-field of Γ.

The discrete random dynamical system comes as follows:

The system is in some state x0 in S; a map α1 in Γ is chosen according to the probability measure Q and the system moves to the state x1 = α1(x0) in step 1.
Independently of previous maps, another map α2 is chosen according to the probability measure Q and the system moves to the state x2 = α2(x1).
The procedure repeats.

The random variable Xn is constructed by means of composition of independent random maps, Xn = αn ∘ αn−1 ∘ ... ∘ α1(X0). Clearly, Xn is a Markov chain.
Conversely, can a given Markov chain be represented by compositions of i.i.d. random transformations, and how? Yes, it can, but not uniquely. The proof of existence is similar to the Birkhoff–von Neumann theorem for doubly stochastic matrices.
Here is an example that illustrates the existence and non-uniqueness.
Example: If the state space is S = {1, 2} and the set of transformations is expressed in terms of deterministic transition matrices, then a given Markov transition matrix can be represented by a decomposition produced by the min-max algorithm; in the meantime another, different decomposition is possible as well, so the representation is not unique. A numerical illustration of the forward direction (random maps inducing a Markov chain) is sketched below.
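As an illustrative sketch (the maps and the measure Q below are made-up examples, not taken from the source), one can simulate such a system and recover the induced Markov transition matrix empirically:

```python
import random

# An i.i.d. random dynamical system on S = {0, 1}: at each step draw a map
# from Gamma according to Q and apply it. The resulting process is a Markov chain.
identity = (0, 1)      # deterministic maps given as tuples: state s -> map[s]
swap = (1, 0)
to_zero = (0, 0)
gamma = [identity, swap, to_zero]
q = [0.5, 0.3, 0.2]    # probability measure Q on Gamma (assumed values)

def step(state: int) -> int:
    alpha = random.choices(gamma, weights=q)[0]
    return alpha[state]

# Estimate the induced Markov transition matrix empirically.
counts = [[0, 0], [0, 0]]
for s in (0, 1):
    for _ in range(100_000):
        counts[s][step(s)] += 1
print([[c / sum(row) for c in row] for row in counts])
# Expected: P(0->0) = 0.5 + 0.2 = 0.7, P(0->1) = 0.3,
#           P(1->0) = 0.3 + 0.2 = 0.5, P(1->1) = 0.5.
```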
Formal definition
Formally, a random dynamical system consists of a base flow, the "noise", and a cocycle dynamical system on the "physical" phase space. In detail.
Let (Ω, ℱ, ℙ) be a probability space, the noise space. Define the base flow ϑ as follows: for each "time" s ∈ R, let ϑs : Ω → Ω be a measure-preserving measurable function:

ℙ(E) = ℙ(ϑs−1(E)) for all E ∈ ℱ and s ∈ R;

Suppose also that

ϑ0 = idΩ, the identity function on Ω;
for all s, t ∈ R, ϑs ∘ ϑt = ϑs+t.

That is, ϑs, s ∈ R, forms a group of measure-preserving transformations of the noise (Ω, ℱ, ℙ). For one-sided random dynamical systems, one would consider only positive indices s; for discrete-time random dynamical systems, one would consider only integer-valued s; in these cases, the maps ϑs would only form a commutative monoid instead of a group.
While true in most applications, it is not usually part of the formal definition of a random dynamical system to require that the measure-preserving dynamical system is ergodic.
Now let (X, d) be a complete separable metric space, the phase space. Let φ : R × Ω × X → X be a (ℬ(R) ⊗ ℱ ⊗ ℬ(X), ℬ(X))-measurable function such that

for all ω ∈ Ω, φ(0, ω) = idX, the identity function on X;
for (almost) all ω ∈ Ω, (t, x) ↦ φ(t, ω, x) is continuous;
φ satisfies the (crude) cocycle property: for almost all ω ∈ Ω,

φ(t, ϑs(ω)) ∘ φ(s, ω) = φ(t + s, ω).
In the case of random dynamical systems driven by a Wiener process W : R × Ω → Rd, the base flow ϑs : Ω → Ω would be given by

W(t, ϑs(ω)) = W(t + s, ω) − W(s, ω).

This can be read as saying that ϑs "starts the noise at time s instead of time 0". Thus, the cocycle property can be read as saying that evolving the initial condition with some noise for s seconds and then through t further seconds with the same noise (as started from the s seconds mark) gives the same result as evolving through (t + s) seconds with that same noise.
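For a discrete-time system the cocycle property can be checked directly. In this sketch (maps and noise chosen arbitrarily for illustration) the noise ω is a sequence of map indices and the base flow is the shift:

```python
import random

# Numerical check of the cocycle property for a discrete-time random dynamical
# system: phi(t+s, omega) = phi(t, theta_s omega) o phi(s, omega).
maps = [lambda x: x + 1, lambda x: 2 * x, lambda x: x - 3]

def phi(t, omega, x):
    """Apply the maps indexed by omega[0], ..., omega[t-1] in order."""
    for i in omega[:t]:
        x = maps[i](x)
    return x

def theta(s, omega):
    """Base flow: shift the noise sequence by s steps."""
    return omega[s:]

random.seed(0)
omega = [random.randrange(3) for _ in range(20)]
s, t, x0 = 4, 7, 1.0
lhs = phi(t + s, omega, x0)
rhs = phi(t, theta(s, omega), phi(s, omega, x0))
print(lhs == rhs)   # True: both sides compose the same maps in the same order
```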
Attractors for random dynamical systems
The notion of an attractor for a random dynamical system is not as straightforward to define as in the deterministic case. For technical reasons, it is necessary to "rewind time", as in the definition of a pullback attractor. Moreover, the attractor is dependent upon the realisation of the noise.
See also
Chaos theory
Diffusion process
Stochastic control
References
Stochastic differential equations
Stochastic processes | Random dynamical system | [
"Mathematics"
] | 1,114 | [
"Random dynamical systems",
"Dynamical systems"
] |
7,185,509 | https://en.wikipedia.org/wiki/Category%20of%20finite-dimensional%20Hilbert%20spaces | In mathematics, the category FdHilb has all finite-dimensional Hilbert spaces for objects and the linear transformations between them as morphisms. Whereas the theory described by the normal category of Hilbert spaces, Hilb, is ordinary quantum mechanics, the corresponding theory on finite dimensional Hilbert spaces is called fdQM.
Properties
This category
is monoidal,
possesses finite biproducts, and
is dagger compact.
According to a theorem of Selinger, the category of finite-dimensional Hilbert spaces is complete for dagger compact categories: an equation in the language of dagger compact categories holds in all dagger compact categories if and only if it holds in FdHilb. Many ideas from Hilbert spaces, such as the no-cloning theorem, hold in general for dagger compact categories. See that article for additional details.
References
Monoidal categories
Dagger categories
Hilbert spaces | Category of finite-dimensional Hilbert spaces | [
"Physics",
"Mathematics"
] | 146 | [
"Mathematical structures",
"Category theory stubs",
"Hilbert spaces",
"Monoidal categories",
"Quantum mechanics",
"Category theory",
"Categories in category theory",
"Dagger categories"
] |
7,190,735 | https://en.wikipedia.org/wiki/Verdier%20duality | In mathematics, Verdier duality is a cohomological duality in algebraic topology that generalizes Poincaré duality for manifolds. Verdier duality was introduced in 1965 by as an analog for locally compact topological spaces of
Alexander Grothendieck's theory of
Poincaré duality in étale cohomology
for schemes in algebraic geometry. It is thus (together with the said étale theory and for example Grothendieck's coherent duality) one instance of Grothendieck's six operations formalism.
Verdier duality generalises the classical Poincaré duality of manifolds in two directions: it applies to continuous maps from one space to another (reducing to the classical case for the unique map from a manifold to a one-point space), and it applies to spaces that fail to be manifolds due to the presence of singularities. It is commonly encountered when studying constructible or perverse sheaves.
Verdier duality
Verdier duality states that (subject to suitable finiteness conditions discussed below)
certain derived image functors for sheaves are actually adjoint functors. There are two versions.
Global Verdier duality states that for a continuous map f : X → Y of locally compact Hausdorff spaces, the derived functor Rf! of the direct image with compact (or proper) supports has a right adjoint f! in the derived category of sheaves, in other words, for (complexes of) sheaves (of abelian groups) F on X and G on Y we have
[Rf!F, G] ≅ [F, f!G]
Local Verdier duality states that
RHom(Rf!F, G) ≅ Rf∗ RHom(F, f!G)
in the derived category of sheaves on Y.
It is important to note that the distinction between the global and local versions is that the former relates morphisms between
complexes of sheaves in the derived categories, whereas the latter relates internal Hom-complexes and so can be evaluated locally. Taking global sections of both sides in the local statement gives the global Verdier duality.
These results hold subject to the compactly supported direct image functor having finite cohomological dimension.
This is the case if there is a bound n such that the compactly supported cohomology
H_c^r(f⁻¹(y); Z)
vanishes for all fibres f⁻¹(y) (where y ∈ Y) and r > n. This holds if all the fibres f⁻¹(y) are at most n-dimensional manifolds or more generally at most n-dimensional CW-complexes.
The discussion above is about derived categories of sheaves of abelian groups. It is instead possible to consider a ring A and (derived categories of) sheaves of A-modules; the case above corresponds to A = Z.
The dualizing complex DX on X is defined to be
DX = p!(A),
where p is the map from X to a point. Part of what makes Verdier duality interesting in the singular setting is that when X is not a manifold (a graph or singular algebraic variety for example) then the dualizing complex is not quasi-isomorphic to a sheaf concentrated in a single degree. From this perspective the derived category is necessary in the study of singular spaces.
where p is the map from to a point. Part of what makes Verdier duality interesting in the singular setting is that when is not a manifold (a graph or singular algebraic variety for example) then the dualizing complex is not quasi-isomorphic to a sheaf concentrated in a single degree. From this perspective the derived category is necessary in the study of singular spaces.
If X is a finite-dimensional locally compact space, and D^b(X) the bounded derived category of sheaves of abelian groups over X, then the Verdier dual is a contravariant functor
D : D^b(X) → D^b(X)
defined by
D(F) = RHom(F, DX).
It has the following properties:
Relation to classical Poincaré duality
Poincaré duality can be derived as a special case of Verdier duality. Here one explicitly calculates cohomology of a space using the machinery of sheaf cohomology.
Suppose X is a compact orientable n-dimensional manifold, k is a field and kX is the constant sheaf on X with coefficients in k. Let f = p be the constant map to a point. Global Verdier duality then states
[Rp!kX, k] ≅ [kX, p!k].
To understand how Poincaré duality is obtained from this statement, it is perhaps easiest to understand both sides piece by piece. Let
kX → I0 → I1 → I2 → ⋯
be an injective resolution of the constant sheaf. Then by standard facts on right derived functors
Rp!kX = p!I•
is a complex whose cohomology is the compactly supported cohomology of X. Since morphisms between complexes of sheaves (or vector spaces) themselves form a complex we find that
Hom•(p!I•, k) = ⋯ → Hom(p!I1, k) → Hom(p!I0, k) → 0
where the last non-zero term is in degree 0 and the ones to the left are in negative degree. Morphisms in the derived category are obtained from the homotopy category of chain complexes of sheaves by taking the zeroth cohomology of the complex, i.e.
[Rp!kX, k] ≅ H^0(Hom•(p!I•, k)) ≅ H_c^0(X; k)∨.
For the other side of the Verdier duality statement above, we have to take for granted the fact that when X is a compact orientable n-dimensional manifold
p!k ≅ kX[n],
which is the dualizing complex for a manifold. Now we can re-express the right hand side as
[kX, kX[n]] ≅ H^n(X; k).
We finally have obtained the statement that
H_c^0(X; k)∨ ≅ H^n(X; k).
By repeating this argument with the sheaf kX replaced with the same sheaf placed in degree i we get the classical Poincaré duality
H_c^i(X; k)∨ ≅ H^(n−i)(X; k).
See also
Poincaré duality
Six operations
Coherent duality
Derived category
References
Exposés I and II contain the corresponding theory in the étale situation
Topology
Homological algebra
Sheaf theory
Duality theories | Verdier duality | [
"Physics",
"Mathematics"
] | 1,007 | [
"Mathematical structures",
"Sheaf theory",
"Topology",
"Space",
"Duality theories",
"Geometry",
"Category theory",
"Fields of abstract algebra",
"Spacetime",
"Homological algebra"
] |
7,190,885 | https://en.wikipedia.org/wiki/Four-dimensionalism | In philosophy, four-dimensionalism (also known as the doctrine of temporal parts) is the ontological position that an object's persistence through time is like its extension through space. Thus, an object that exists in time has temporal parts in the various subregions of the total region of time it occupies, just like an object that exists in a region of space has at least one part in every subregion of that space.
Four-dimensionalists typically argue for treating time as analogous to space, usually leading them to endorse the doctrine of eternalism. This is a philosophical approach to the ontological nature of time, according to which all points in time are equally "real", as opposed to the presentist idea that only the present is real. As some eternalists argue by analogy, just as all spatially distant objects and events are as real as those close to us, temporally distant objects and events are as real as those currently present to us.
Perdurantism—or perdurance theory—is a closely related philosophical theory of persistence and identity, according to which an individual has distinct temporal parts throughout its existence, and the persisting object is the sum or set of all of its temporal parts. This sum or set is colloquially referred to as a "space-time worm", which has earned the perdurantist view the moniker of "the worm view". While all perdurantists are plausibly considered four-dimensionalists, at least one variety of four-dimensionalism does not count as perdurantist in nature. This variety, known as exdurantism or the "stage view", is closely akin to the perdurantist position. Exdurantists also countenance a view of persisting objects that have temporal parts that succeed one another through time. However, instead of identifying the persisting object as the entire set or sum of its temporal parts, the exdurantist argues that any object under discussion is a single stage (time-slice, temporal part, etc.), and that the other stages or parts that comprise the persisting object are related to that part by a "temporal counterpart" relation.
Though they have often been conflated, eternalism is a theory of what time is like and what times exist, while perdurantism is a theory about persisting objects and their identity conditions over time. Eternalism and perdurantism tend to be discussed together because many philosophers argue for a combination of eternalism and perdurantism. Sider (1997) uses the term four-dimensionalism to refer to perdurantism, but Michael Rea uses the term "four-dimensionalism" to mean the view that presentism is false as opposed to "perdurantism", the view that endurantism is false and persisting objects have temporal parts.
Four-dimensionalism about material objects
Four-dimensionalism is a name for different positions. One of these uses four-dimensionalism as a position of material objects with respect to dimensions. Four-dimensionalism is the view that in addition to spatial parts, objects have temporal parts.
According to this view, four-dimensionalism cannot be used as a synonym for perdurantism. Perdurantists have to hold a four-dimensional view of material objects: it is impossible that perdurantists, who believe that objects persist by having different temporal parts at different times, do not believe in temporal parts. However, the reverse is not true. Four-dimensionalism is compatible with either perdurantism or exdurantism.
A-series and B-series
J.M.E. McTaggart in The Unreality of Time identified two descriptions of time, which he called the A-series and the B-series. The A-series identifies positions in time as past, present, or future, and thus assumes that the "present" has some objective reality, as in both presentism and the growing block universe. The B-series defines a given event as earlier or later than another event, but does not assume an objective present, as in four-dimensionalism. Much of the contemporary literature in the metaphysics of time has been taken to spring forth from this distinction, and thus takes McTaggart's work as its starting point.
Contrast with three-dimensionalism
Unlike the four dimensionalist, the three dimensionalist considers time to be a unique dimension that is not analogous to the three spatial dimensions: length, width and height. Whereas the four dimensionalist proposes that objects are extended across time, the three dimensionalist adheres to the belief that all objects are wholly present at any moment at which they exist. While the three dimensionalist agrees that the parts of an object can be differentiated based on their spatial dimensions, they do not believe an object can be differentiated into temporal parts across time. For example, in the three dimensionalist account, "Descartes in 1635" is the same object as "Descartes in 1620", and both are identical to Descartes, himself. However, the four dimensionalist considers these to be distinct temporal parts.
Prominent arguments in favor of four-dimensionalism
Several lines of argumentation have been advanced in favor of four-dimensionalism:
Firstly, four-dimensional accounts of time are argued to better explain paradoxes of change over time (often referred to as the paradox of the Ship of Theseus) than three-dimensional theories. A contemporary account of this paradox is introduced in Ney (2014), but the original problem has its roots in Greek antiquity. A typical Ship of Theseus paradox involves taking some changeable object with multiple material parts, for example a ship, then sequentially removing and replacing its parts until none of the original components are left. At each stage of the replacement, the ship is presumably identical with the original, since the replacement of a single part need not destroy the ship and create an entirely new one. But, it is also plausible that an object with none of the same material parts as another is not identical with the original object. So, how can an object survive the replacement of any of its parts, and in fact all of its parts? The four-dimensionalist can argue that the persisting object is a single space-time worm which has all the replacement stages as temporal parts, or in the case of the stage view that each succeeding stage bears a temporal counterpart relation to the original stage under discussion.
Secondly, problems of temporary intrinsics are argued to be best explained by four-dimensional views of time that involve temporal parts. As presented by David Lewis, the problem of temporary intrinsics involves properties of an object that are both had by that object regardless of how anything else in the world is (and thus intrinsic), and subject to change over time (thus temporary). Shape is argued to be one such property. So, if an object is capable of having a particular shape, and also changing its shape at another time, there must be some way for the same object to be, say, both round and square. Lewis argues that separate temporal parts having the incompatible properties best explains an object being able to change its shape in this way, because other accounts of three-dimensional time eliminate intrinsic properties by indexing them to times and making them relational instead of intrinsic.
See also
Extended modal realism
Four-dimensional space
Multiple occupancy view
Rietdijk–Putnam argument advocating this position
Spacetime
World line
Light cone
References
Sources
Armstrong, David M. (1980) "Identity Through Time", pages 67,8 in Peter van Inwagen (editor), Time and Cause, D. Reidel.
Hughes, C. (1986) "Is a Thing Just the Sum of Its Parts?", Proceedings of the Aristotelian Society 85: 213-33.
Heller, Mark (1984). "Temporal Parts of Four Dimensional Objects", Philosophical Studies 46: 323-34. Reprinted in Rea 1997: 12.-330. Heller, Mark (1990) The Ontology of Physical Objects: Four-dimensional Hunks of Matter, Cambridge University Press.
Heller, Mark (1992) "Things Change", Philosophy and Phenomenological Research 52: 695-304
Heller, Mark (1993) "Varieties of Four Dimensionalism", Australasian Journal of Philosophy 71: 47-59.
Lewis, David (1983). "Survival and Identity", in Philosophical Papers, Volume 1, 55-7. Oxford University Press. With postscripts. Originally published in Amelie O. Rorty, editor (1976) The Identities of Persons University of California Press, pages 17-40.
Lewis, David (1986a). On the Plurality of Worlds. Oxford: Basil Blackwell.
Lewis, David (1986b). Philosophical Papers, Volume 2. Oxford: Oxford University Press.
McTaggart John Ellis (1908) The Unreality of time, originally published in Mind: A Quarterly Review of Psychology and Philosophy 17: 456-473.
(1976) "Survival and identity", pages 17-40 in editor, The identities of persons. Berkeley: University of California Press. Google books
(2004) "A defense of presentism", pages 47-82 in editor, Oxford Studies in Metaphysics, Volume 1, Oxford University Press. Google books
(2005) Review of Four-dimensionalism: an ontology of persistence and time by Theodore Sider, Ars Disputandi 5
(1985) "Can amoebae divide without multiplying?", Australasian Journal of Philosophy 63(3): 299–319.
External links
Rea, M. C., "Four Dimensionalism" in The Oxford Handbook for Metaphysics. Oxford Univ. Press. Describes presentism and four-dimensionalism.
"Time" in the Internet Encyclopedia of Philosophy''
Theories of time
Philosophy of physics
Spacetime | Four-dimensionalism | [
"Physics",
"Mathematics"
] | 2,016 | [
"Philosophy of physics",
"Applied and interdisciplinary physics",
"Vector spaces",
"Space (mathematics)",
"Theory of relativity",
"Spacetime"
] |
7,191,704 | https://en.wikipedia.org/wiki/Cryo | Cryo- is from the Ancient Greek κρύος (krúos, “ice, icy cold, chill, frost”). Uses of the prefix Cryo- include:
Physics and geology
Cryogenics, the study of the production and behaviour of materials at very low temperatures and the study of producing extremely low temperatures
Cryoelectronics, the study of superconductivity under cryogenic conditions and its applications
Cryosphere, those portions of Earth's surface where water ice naturally occurs
Cryotron, a switch that uses superconductivity
Cryovolcano, a theoretical type of volcano that erupts volatiles instead of molten rock
Biology and medicine
Cryobiology, the branch of biology that studies the effects of low temperatures on living things
Cryonics, the low-temperature preservation of people who cannot be sustained by contemporary medicine
Cryoprecipitate, a blood-derived protein product used to treat some bleeding disorders
Cryotherapy, medical treatment using cold
Cryoablation, tissue removal using cold
Cryosurgery, surgery using cold
Cryo-electron microscopy (cryoEM), a technique that fires beams of electrons at proteins that have been frozen in solution, to deduce the biomolecules’ structure
Other uses
Cryo Interactive, a video game company
Cryos, a planet in the video game Darkspore
See also
Kryo, a brand of CPUs by Qualcomm
External links
Cryogenics
Cryobiology
Cryonics
Superconductivity | Cryo | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 315 | [
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Physical quantities",
"Superconductivity",
"Cryobiology",
"Cryogenics",
"Materials science",
"Condensed matter physics",
"Biochemistry",
"Electrical resistance and conductance"
] |
2,180,754 | https://en.wikipedia.org/wiki/Donaldson%27s%20theorem | In mathematics, and especially differential topology and gauge theory, Donaldson's theorem states that a definite intersection form of a compact, oriented, smooth manifold of dimension 4 is diagonalizable. If the intersection form is positive (negative) definite, it can be diagonalized to the identity matrix (negative identity matrix) over the . The original version of the theorem required the manifold to be simply connected, but it was later improved to apply to 4-manifolds with any fundamental group.
History
The theorem was proved by Simon Donaldson. It was a contribution cited for his Fields Medal in 1986.
Idea of proof
Donaldson's proof utilizes the moduli space M of solutions to the anti-self-duality equations on a principal SU(2)-bundle P over the four-manifold X. By the Atiyah–Singer index theorem, the dimension of the moduli space is given by
dim M = 8k − 3(1 − b1(X) + b+(X)),
where k = c2(P) is a Chern class, b1(X) is the first Betti number of X, and b+(X) is the dimension of the positive-definite subspace of H^2(X; R) with respect to the intersection form. When X is simply-connected with definite intersection form, possibly after changing orientation, one always has b1(X) = 0 and b+(X) = 0. Thus taking any principal SU(2)-bundle with k = 1, one obtains a moduli space M of dimension five.
This moduli space is non-compact and generically smooth, with singularities occurring only at the points corresponding to reducible connections, of which there are only finitely many. Results of Clifford Taubes and Karen Uhlenbeck show that whilst M is non-compact, its structure at infinity can be readily described. Namely, there is an open subset of M, say Mε, such that for sufficiently small choices of parameter ε, there is a diffeomorphism
Mε → X × (0, ε).
The work of Taubes and Uhlenbeck essentially concerns constructing sequences of ASD connections on the four-manifold X with curvature becoming infinitely concentrated at any given single point x ∈ X. For each such point, in the limit one obtains a unique singular ASD connection, which becomes a well-defined smooth ASD connection at that point using Uhlenbeck's removable singularity theorem.
Donaldson observed that the singular points in the interior of M corresponding to reducible connections could also be described: they looked like cones over the complex projective plane CP2. Furthermore, we can count the number of such singular points. Let E be the C2-bundle over X associated to P by the standard representation of SU(2). Then, reducible connections modulo gauge are in a 1-1 correspondence with splittings E = L ⊕ L⁻¹ where L is a complex line bundle over X. Whenever E = L ⊕ L⁻¹ we may compute:
1 = k = c2(E) = −c1(L)² = −Q(c1(L), c1(L)),
where Q is the intersection form on the second cohomology of X. Since line bundles over X are classified by their first Chern class α = c1(L) ∈ H^2(X; Z), we get that reducible connections modulo gauge are in a 1-1 correspondence with pairs ±α such that Q(α, α) = −1. Let the number of such pairs be n(Q). An elementary argument that applies to any negative definite quadratic form over the integers tells us that n(Q) ≤ rank(Q), with equality if and only if Q is diagonalizable.
It is thus possible to compactify the moduli space as follows: First, cut off each cone at a reducible singularity and glue in a copy of CP2. Secondly, glue in a copy of X itself at infinity. The resulting space is a cobordism between X and a disjoint union of n(Q) copies of CP2 (of unknown orientations). The signature σ of a four-manifold is a cobordism invariant. Thus, because Q is definite:
rank(Q) = |σ(X)| ≤ n(Q) ≤ rank(Q),
from which one concludes the intersection form of X is diagonalizable.
Extensions
Michael Freedman had previously shown that any unimodular symmetric bilinear form is realized as the intersection form of some simply-connected, closed, oriented topological four-manifold. Combining this result with the Serre classification theorem and Donaldson's theorem, several interesting results can be seen:
1) Any non-diagonalizable definite intersection form gives rise to a four-dimensional topological manifold with no differentiable structure (so it cannot be smoothed).
2) Two smooth simply-connected 4-manifolds are homeomorphic if and only if their intersection forms have the same rank, signature, and parity.
See also
Unimodular lattice
Donaldson theory
Yang–Mills equations
Rokhlin's theorem
Notes
References
Differential topology
Theorems in topology
Quadratic forms | Donaldson's theorem | [
"Mathematics"
] | 841 | [
"Mathematical theorems",
"Quadratic forms",
"Theorems in topology",
"Topology",
"Differential topology",
"Mathematical problems",
"Number theory"
] |
2,181,360 | https://en.wikipedia.org/wiki/Tarski%27s%20axioms | Tarski's axioms are an axiom system for Euclidean geometry, specifically for that portion of Euclidean geometry that is formulable in first-order logic with identity (i.e. is formulable as an elementary theory). As such, it does not require an underlying set theory. The only primitive objects of the system are "points" and the only primitive predicates are "betweenness" (expressing the fact that a point lies on a line segment between two other points) and "congruence" (expressing the fact that the distance between two points equals the distance between two other points). The system contains infinitely many axioms.
The axiom system is due to Alfred Tarski who first presented it in 1926. Other modern axiomizations of Euclidean geometry are Hilbert's axioms (1899) and Birkhoff's axioms (1932).
Using his axiom system, Tarski was able to show that the first-order theory of Euclidean geometry is consistent, complete and decidable: every sentence in its language is either provable or disprovable from the axioms, and we have an algorithm which decides for any given sentence whether it is provable or not.
Overview
Early in his career Tarski taught geometry and researched set theory. His coworker Steven Givant (1999) explained Tarski's take-off point:
From Enriques, Tarski learned of the work of Mario Pieri, an Italian geometer who was strongly influenced by Peano. Tarski preferred Pieri's system [of his Point and Sphere memoir], where the logical structure and the complexity of the axioms were more transparent.
Givant then says that "with typical thoroughness" Tarski devised his system:
What was different about Tarski's approach to geometry? First of all, the axiom system was much simpler than any of the axiom systems that existed up to that time. In fact the length of all of Tarski's axioms together is not much more than just one of Pieri's 24 axioms. It was the first system of Euclidean geometry that was simple enough for all axioms to be expressed in terms of the primitive notions only, without the help of defined notions. Of even greater importance, for the first time a clear distinction was made between full geometry and its elementary — that is, its first order — part.
Like other modern axiomatizations of Euclidean geometry, Tarski's employs a formal system consisting of symbol strings, called sentences, whose construction respects formal syntactical rules, and rules of proof that determine the allowed manipulations of the sentences. Unlike some other modern axiomatizations, such as Birkhoff's and Hilbert's, Tarski's axiomatization has no primitive objects other than points, so a variable or constant cannot refer to a line or an angle. Because points are the only primitive objects, and because Tarski's system is a first-order theory, it is not even possible to define lines as sets of points. The only primitive relations (predicates) are "betweenness" and "congruence" among points.
Tarski's axiomatization is shorter than its rivals, in a sense Tarski and Givant (1999) make explicit. It is more concise than Pieri's because Pieri had only two primitive notions while Tarski introduced three: point, betweenness, and congruence. Such economy of primitive and defined notions means that Tarski's system is not very convenient for doing Euclidean geometry. Rather, Tarski designed his system to facilitate its analysis via the tools of mathematical logic, i.e., to facilitate deriving its metamathematical properties. Tarski's system has the unusual property that all sentences can be written in universal-existential form, a special case of the prenex normal form. This form has all universal quantifiers preceding any existential quantifiers, so that all sentences can be recast in the form ∀u∀v … ∃a∃b …. This fact allowed Tarski to prove that Euclidean geometry is decidable: there exists an algorithm which can determine the truth or falsity of any sentence. Tarski's axiomatization is also complete. This does not contradict Gödel's first incompleteness theorem, because Tarski's theory lacks the expressive power needed to interpret Robinson arithmetic Q.
The axioms
Alfred Tarski worked on the axiomatization and metamathematics of Euclidean geometry intermittently from 1926 until his death in 1983, with Tarski (1959) heralding his mature interest in the subject. The work of Tarski and his students on Euclidean geometry culminated in the monograph Schwabhäuser, Szmielew, and Tarski (1983), which set out the 10 axioms and one axiom schema shown below, the associated metamathematics, and a fair bit of the subject. Gupta (1965) made important contributions, and Tarski and Givant (1999) discuss the history.
Fundamental relations
These axioms are a more elegant version of a set Tarski devised in the 1920s as part of his investigation of the metamathematical properties of Euclidean plane geometry. This objective required reformulating that geometry as a first-order theory. Tarski did so by positing a universe of points, with lower case letters denoting variables ranging over that universe. Equality is provided by the underlying logic (see First-order logic#Equality and its axioms). Tarski then posited two primitive relations:
Betweenness, a triadic relation. The atomic sentence Bxyz denotes that the point y is "between" the points x and z, in other words, that y is a point on the line segment xz. (This relation is interpreted inclusively, so that Bxyz is trivially true whenever x=y or y=z).
Congruence (or "equidistance"), a tetradic relation. The atomic sentence Cwxyz or commonly wx ≡ yz can be interpreted as wx is congruent to yz, in other words, that the length of the line segment wx is equal to the length of the line segment yz.
Betweenness captures the affine aspect (such as the parallelism of lines) of Euclidean geometry; congruence, its metric aspect (such as angles and distances). The background logic includes identity, a binary relation denoted by =.
The axioms below are grouped by the types of relation they invoke, then sorted, first by the number of existential quantifiers, then by the number of atomic sentences. The axioms should be read as universal closures; hence any free variables should be taken as tacitly universally quantified.
Congruence axioms
Reflexivity of Congruence
xy ≡ yx
Identity of Congruence
xy ≡ zz → x = y
Transitivity of Congruence
(xy ≡ zu ∧ xy ≡ vw) → zu ≡ vw
Commentary
While the congruence relation is, formally, a 4-way relation among points, it may also be thought of, informally, as a binary relation between two line segments xy and zu. The "Reflexivity" and "Transitivity" axioms above, combined, prove both:
that this binary relation is in fact an equivalence relation
it is reflexive: xy ≡ xy.
it is symmetric: xy ≡ zu → zu ≡ xy.
it is transitive: (xy ≡ zu ∧ zu ≡ vw) → xy ≡ vw.
and that the order in which the points of a line segment are specified is irrelevant.
xy ≡ zu → yx ≡ zu.
xy ≡ zu → xy ≡ uz.
xy ≡ zu → yx ≡ uz.
The "transitivity" axiom asserts that congruence is Euclidean, in that it respects the first of Euclid's "common notions".
The "Identity of Congruence" axiom states, intuitively, that if xy is congruent with a segment that begins and ends at the same point, x and y are the same point. This is closely related to the notion of reflexivity for binary relations.
Betweenness axioms
Identity of Betweenness
Bxyx → x = y
The only point on the line segment xx is x itself.
Axiom of Pasch
(Bxuz ∧ Byvz) → ∃a (Buay ∧ Bvax)
Axiom schema of Continuity
Let φ(x) and ψ(y) be first-order formulae containing no free instances of either a or b. Let there also be no free instances of x in ψ(y) or of y in φ(x). Then all instances of the following schema are axioms:
∃a ∀x ∀y [(φ(x) ∧ ψ(y)) → Baxy] → ∃b ∀x ∀y [(φ(x) ∧ ψ(y)) → Bxby]
Let r be a ray with endpoint a. Let the first order formulae φ and ψ define subsets X and Y of r, such that every point in Y is to the right of every point of X (with respect to a). Then there exists a point b in r lying between X and Y. This is essentially the Dedekind cut construction, carried out in a way that avoids quantification over sets.
Note that the formulae φ(x) and ψ(y) may contain parameters, i.e. free variables different from a, b, x, y. And indeed, each instance of the axiom scheme that does not contain parameters can be proven from the other axioms.
Lower Dimension
∃a ∃b ∃c [¬Babc ∧ ¬Bbca ∧ ¬Bcab]
There exist three noncollinear points. Without this axiom, the theory could be modeled by the one-dimensional real line, a single point, or even the empty set.
Congruence and betweenness
Upper Dimension
(xu ≡ xv ∧ yu ≡ yv ∧ zu ≡ zv ∧ u ≠ v) → (Bxyz ∨ Byzx ∨ Bzxy)
Three points equidistant from two distinct points form a line. Without this axiom, the theory could be modeled by three-dimensional or higher-dimensional space.
Axiom of Euclid
Three variants of this axiom can be given, labeled A, B and C below. They are equivalent to each other given the remaining Tarski's axioms, and indeed equivalent to Euclid's parallel postulate.
A:
Let a line segment join the midpoint of two sides of a given triangle. That line segment will be half as long as the third side. This is equivalent to the interior angles of any triangle summing to two right angles.
B:
Bxyz ∨ Byzx ∨ Bzxy ∨ ∃a (xa ≡ ya ∧ xa ≡ za)
Given any triangle, there exists a circle that includes all of its vertices.
C:
(Bxuv ∧ Byuz ∧ x ≠ u) → ∃a ∃b (Bxya ∧ Bxzb ∧ Bavb)
Given any angle and any point v in its interior, there exists a line segment including v, with an endpoint on each side of the angle.
Each variant has an advantage over the others:
A dispenses with existential quantifiers;
B has the fewest variables and atomic sentences;
C requires but one primitive notion, betweenness. This variant is the usual one given in the literature.
Five Segment
(x ≠ y ∧ Bxyz ∧ Bx'y'z' ∧ xy ≡ x'y' ∧ yz ≡ y'z' ∧ xu ≡ x'u' ∧ yu ≡ y'u') → zu ≡ z'u'
Begin with two triangles, xuz and x'u'z'. Draw the line segments yu and y'u', connecting a vertex of each triangle to a point on the side opposite to the vertex. The result is two divided triangles, each made up of five segments. If four segments of one triangle are each congruent to a segment in the other triangle, then the fifth segments in both triangles must be congruent.
This is equivalent to the side-angle-side rule for determining that two triangles are congruent; if the angles uxz and u'x'z' are congruent (there exist congruent triangles xuz and x'u'z'), and the two pairs of incident sides are congruent (xu ≡ x'u' and xz ≡ x'z'), then the remaining pair of sides is also congruent (uz ≡ u'z').
Segment Construction
∃z [Bxyz ∧ yz ≡ ab]
For any point y, it is possible to draw in any direction (determined by x) a line congruent to any segment ab.
Discussion
According to Tarski and Givant (1999: 192-93), none of the above axioms are fundamentally new. The first four axioms establish some elementary properties of the two primitive relations. For instance, Reflexivity and Transitivity of Congruence establish that congruence is an equivalence relation over line segments. The Identity of Congruence and of Betweenness govern the trivial case when those relations are applied to nondistinct points. The theorem xy≡zz ↔ x=y ↔ Bxyx extends these Identity axioms.
A number of other properties of Betweenness are derivable as theorems including:
Reflexivity: Bxxy ;
Symmetry: Bxyz → Bzyx ;
Transitivity: (Bxyw ∧ Byzw) → Bxyz ;
Connectivity: (Bxyw ∧ Bxzw) → (Bxyz ∨ Bxzy).
The last two properties totally order the points making up a line segment.
The Upper and Lower Dimension axioms together require that any model of these axioms have dimension 2, i.e. that we are axiomatizing the Euclidean plane. Suitable changes in these axioms yield axiom sets for Euclidean geometry for dimensions 0, 1, and greater than 2 (Tarski and Givant 1999: Axioms 8(1), 8(n), 9(0), 9(1), 9(n) ). Note that solid geometry requires no new axioms, unlike the case with Hilbert's axioms. Moreover, Lower Dimension for n dimensions is simply the negation of Upper Dimension for n - 1 dimensions.
When the number of dimensions is greater than 1, Betweenness can be defined in terms of congruence (Tarski and Givant, 1999). First define the relation "≤", where xy ≤ zu is interpreted as "the length of line segment xy is less than or equal to the length of line segment zu"; this relation can be expressed in terms of congruence alone.
In the case of two dimensions, the intuition is as follows: For any line segment xy, consider the possible range of lengths of xv, where v is any point on the perpendicular bisector of xy. It is apparent that while there is no upper bound to the length of xv, there is a lower bound, which occurs when v is the midpoint of xy. So if xy is shorter than or equal to zu, then the range of possible lengths of xv will be a superset of the range of possible lengths of zw, where w is any point on the perpendicular bisector of zu.
Betweenness can then be defined by using the intuition that the shortest distance between any two points is a straight line, which again can be expressed in terms of congruence and "≤".
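For intuition, both primitive relations can be evaluated numerically in the standard model R². The sketch below uses metric computations (with a floating-point tolerance of our own choosing), so it illustrates the intended interpretation rather than the first-order definitions themselves:

```python
import math

EPS = 1e-9  # tolerance absorbing floating-point error

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def congruent(w, x, y, z):
    """Cwxyz: segment wx is congruent to segment yz."""
    return abs(dist(w, x) - dist(y, z)) < EPS

def between(x, y, z):
    """Bxyz: y lies on segment xz (inclusively), i.e. the triangle
    inequality dist(x,y) + dist(y,z) >= dist(x,z) is tight."""
    return abs(dist(x, y) + dist(y, z) - dist(x, z)) < EPS

# Identity of Betweenness, spot-checked: Bxyx holds only when x = y.
assert between((0, 0), (1, 0), (2, 0))
assert not between((0, 0), (1, 1), (2, 0))
# Reflexivity of Congruence: xy is congruent to yx.
assert congruent((0, 0), (3, 4), (3, 4), (0, 0))
```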
The Axiom Schema of Continuity assures that the ordering of points on a line is complete (with respect to first-order definable properties). As was pointed out by Tarski, this first-order axiom schema may be replaced by a more powerful second-order Axiom of Continuity if one allows for variables to refer to arbitrary sets of points. The resulting second-order system is equivalent to Hilbert's set of axioms. (Tarski and Givant 1999)
The Axioms of Pasch and Euclid are well known. The Segment Construction axiom makes measurement and the Cartesian coordinate system possible—simply assign the length 1 to some arbitrary non-empty line segment. Indeed, it is shown in (Schwabhäuser 1983) that by specifying two distinguished points on a line, called 0 and 1, we can define an addition, multiplication and ordering, turning the set of points on that line into a real-closed ordered field. We can then introduce coordinates from this field, showing that every model of Tarski's axioms is isomorphic to the two-dimensional plane over some real-closed ordered field.
The standard geometric notions of parallelism and intersection of lines (where lines are represented by two distinct points on them), right angles, congruence of angles, similarity of triangles, tangency of lines and circles (represented by a center point and a radius) can all be defined in Tarski's system.
Let wff stand for a well-formed formula (or syntactically correct first-order formula) in Tarski's system. Tarski and Givant (1999: 175) proved that Tarski's system is:
Consistent: There is no wff such that it and its negation can both be proven from the axioms;
Complete: Every wff or its negation is a theorem provable from the axioms;
Decidable: There exists an algorithm that decides for every wff whether it is provable or disprovable from the axioms. This follows from Tarski's:
Decision procedure for the real closed field, which he found by quantifier elimination (the Tarski–Seidenberg theorem);
Axioms admitting the above-mentioned representation as a two-dimensional plane over a real closed field.
This has the consequence that every statement of (second-order, general) Euclidean geometry which can be formulated as a first-order sentence in Tarski's system is true if and only if it is provable in Tarski's system, and this provability can be automatically checked with Tarski's algorithm. This, for instance, applies to all theorems in Euclid's Elements, Book I. An example of a theorem of Euclidean geometry which cannot be so formulated is the Archimedean property: for any two positive-length line segments S1 and S2 there exists a natural number n such that nS1 is longer than S2. (This is a consequence of the fact that there are real-closed fields that contain infinitesimals.) Other notions that cannot be expressed in Tarski's system are constructibility with straightedge and compass and statements that talk about "all polygons" etc.
Gupta (1965) proved Tarski's axioms independent, except for Pasch and Reflexivity of Congruence.
Negating the Axiom of Euclid yields hyperbolic geometry, while eliminating it outright yields absolute geometry. Full (as opposed to elementary) Euclidean geometry requires giving up a first order axiomatization: replace φ(x) and ψ(y) in the axiom schema of Continuity with x ∈ A and y ∈ B, where A and B are universally quantified variables ranging over sets of points.
Comparison with Hilbert's system
Hilbert's axioms for plane geometry number 16, and include Transitivity of Congruence and a variant of the Axiom of Pasch. The only notion from intuitive geometry invoked in the remarks to Tarski's axioms is triangle. (Versions B and C of the Axiom of Euclid refer to "circle" and "angle," respectively.) Hilbert's axioms also require "ray," "angle," and the notion of a triangle "including" an angle. In addition to betweenness and congruence, Hilbert's axioms require a primitive binary relation "on," linking a point and a line.
Hilbert uses two axioms of Continuity, and they require second-order logic. By contrast, Tarski's Axiom schema of Continuity consists of infinitely many first-order axioms. Such a schema is indispensable; Euclidean geometry in Tarski's (or equivalent) language cannot be finitely axiomatized as a first-order theory.
Hilbert's system is therefore considerably stronger: every model is isomorphic to the real plane (using the standard notions of points and lines). By contrast, Tarski's system has many non-isomorphic models: for every real-closed field F, the plane F2 provides one such model (where betweenness and congruence are defined in the obvious way).
The first four groups of axioms of Hilbert's axioms for plane geometry are bi-interpretable with Tarski's axioms minus continuity.
See also
Euclidean geometry
Euclidean space
Notes
References
Elementary geometry
Foundations of geometry
Mathematical axioms | Tarski's axioms | [
"Mathematics"
] | 4,042 | [
"Mathematical logic",
"Mathematical axioms",
"Elementary mathematics",
"Elementary geometry",
"Foundations of geometry"
] |
2,184,047 | https://en.wikipedia.org/wiki/Beryllium%20fluoride | Beryllium fluoride is the inorganic compound with the formula BeF2. This white solid is the principal precursor for the manufacture of beryllium metal. Its structure resembles that of quartz, but BeF2 is highly soluble in water.
Properties
Beryllium fluoride has distinctive optical properties. In the form of fluoroberyllate glass, it has the lowest refractive index of any solid at room temperature, 1.275. Its dispersive power is the lowest for a solid, 0.0093, and the nonlinear coefficient is also the lowest, 2 × 10⁻¹⁴.
Structure and bonding
The structure of solid BeF2 resembles that of cristobalite. Be2+ centers are four-coordinate and tetrahedral and the fluoride centers are two-coordinate. The Be–F bond lengths are about 1.54 Å. Analogous to SiO2, BeF2 can also adopt a number of related structures. An analogy also exists between BeF2 and AlF3: both adopt extended structures at mild temperatures.
Gaseous and liquid BeF2
Gaseous beryllium fluoride adopts a linear structure, with a Be–F distance of 143 pm. BeF2 reaches a vapor pressure of 10 Pa at 686 °C, 100 Pa at 767 °C, 1 kPa at 869 °C, 10 kPa at 999 °C, and 100 kPa at 1172 °C. Molecular BeF2 in the gaseous state is isoelectronic with carbon dioxide.
As a liquid, beryllium fluoride has a tetrahedral structure. The density of liquid BeF2 decreases near its freezing point, as Be2+ and F− ions begin to coordinate more strongly with one another, leading to the expansion of voids between formula units.
Production
The processing of beryllium ores generates impure Be(OH)2. This material reacts with ammonium bifluoride to give ammonium tetrafluoroberyllate:
Be(OH)2 + 2 (NH4)HF2 → (NH4)2BeF4 + 2 H2O
Tetrafluoroberyllate is a robust ion, which allows its purification by precipitation of various impurities as their hydroxides. Heating purified (NH4)2BeF4 gives the desired product:
(NH4)2BeF4 → 2 NH3 + 2 HF + BeF2
In general, the reactivity of BeF2 with fluoride is quite analogous to the reactions of SiO2 with oxides.
Applications
Reduction of BeF2 at 1300 °C with magnesium in a graphite crucible provides the most practical route to metallic beryllium:
BeF2 + Mg → Be + MgF2
Beryllium chloride is not a useful precursor because of its volatility.
Niche uses
Beryllium fluoride is used in biochemistry, particularly protein crystallography as a mimic of phosphate. Thus, ADP and beryllium fluoride together tend to bind to ATP sites and inhibit protein action, making it possible to crystallise proteins in the bound state.
Beryllium fluoride forms a basic constituent of the preferred fluoride salt mixture used in liquid-fluoride nuclear reactors. Typically beryllium fluoride is mixed with lithium fluoride to form a base solvent (FLiBe), into which fluorides of uranium and thorium are introduced. Beryllium fluoride is exceptionally chemically stable, and LiF/BeF2 mixtures (FLiBe) have low melting points (360–459 °C) and the best neutronic properties of fluoride salt combinations appropriate for reactor use. MSRE used two different mixtures in the two cooling circuits.
Safety
Beryllium compounds are highly toxic. The increased toxicity of beryllium in the presence of fluoride was noted as early as 1949. The LD50 in mice is about 100 mg/kg by ingestion and 1.8 mg/kg by intravenous injection.
References
External links
IARC Monograph "Beryllium and Beryllium Compounds"
National Pollutant Inventory: Beryllium and compounds fact sheet
National Pollutant Inventory: Fluoride and compounds fact sheet
Hazards of Beryllium fluoride
MSDS from which the LD50 figures are taken
Beryllium compounds
Fluorides
Alkaline earth metal halides | Beryllium fluoride | [
"Chemistry"
] | 906 | [
"Highly-toxic chemical substances",
"Harmful chemical substances",
"Fluorides",
"Salts"
] |
2,184,383 | https://en.wikipedia.org/wiki/Boiling-point%20elevation | Boiling-point elevation is the phenomenon whereby the boiling point of a liquid (a solvent) will be higher when another compound is added, meaning that a solution has a higher boiling point than a pure solvent. This happens whenever a non-volatile solute, such as a salt, is added to a pure solvent, such as water. The boiling point can be measured accurately using an ebullioscope.
Explanation
The boiling point elevation is a colligative property, which means that boiling point elevation is dependent on the number of dissolved particles but not their identity. It is an effect of the dilution of the solvent in the presence of a solute. It is a phenomenon that happens for all solutes in all solutions, even in ideal solutions, and does not depend on any specific solute–solvent interactions. The boiling point elevation happens both when the solute is an electrolyte, such as various salts, and a nonelectrolyte. In thermodynamic terms, the origin of the boiling point elevation is entropic and can be explained in terms of the vapor pressure or chemical potential of the solvent. In both cases, the explanation depends on the fact that many solutes are only present in the liquid phase and do not enter into the gas phase (except at extremely high temperatures).
The effect of the solute on the solvent's vapor pressure is described by Raoult's law, while the accompanying free-energy change and chemical potential follow from the Gibbs free energy. Most solutes remain in the liquid phase and do not enter the gas phase, except at very high temperatures.
In terms of vapor pressure, a liquid boils when its vapor pressure equals the surrounding pressure. A nonvolatile solute lowers the solvent’s vapor pressure, meaning a higher temperature is needed for the vapor pressure to equalize the surrounding pressure, causing the boiling point to elevate.
In terms of chemical potential, at the boiling point, the liquid and gas phases have the same chemical potential. Adding a nonvolatile solute lowers the solvent’s chemical potential in the liquid phase, but the gas phase remains unaffected. This shifts the equilibrium between phases to a higher temperature, elevating the boiling point.
Relationship to freezing-point depression
Freezing-point depression is analogous to boiling point elevation, though the magnitude of freezing-point depression is higher for the same solvent and solute concentration. These phenomena extend the liquid range of a solvent in the presence of a solute.
Equations for calculating boiling-point elevation
The extent of boiling-point elevation can be calculated by applying the Clausius–Clapeyron relation and Raoult's law together with the assumption of the non-volatility of the solute. The result is that in dilute ideal solutions, the extent of boiling-point elevation is directly proportional to the molal concentration (amount of substance per mass of solvent) of the solution according to the equation:
ΔTb = Kb · bc
where the boiling point elevation ΔTb is defined as Tb (solution) − Tb (pure solvent).
Kb, the ebullioscopic constant, is dependent on the properties of the solvent. It can be calculated as Kb = RTb²M/ΔHv, where R is the gas constant, Tb is the boiling temperature of the pure solvent [in K], M is the molar mass of the solvent, and ΔHv is the heat of vaporization per mole of the solvent.
bc is the colligative molality, calculated by taking dissociation into account since the boiling point elevation is a colligative property, dependent on the number of particles in solution. This is most easily done by using the van 't Hoff factor i as bc = bsolute · i, where bsolute is the molality of the solution. The factor i accounts for the number of individual particles (typically ions) formed by a compound in solution. Examples:
i = 1 for sugar in water
i = 1.9 for sodium chloride in water, due to the near full dissociation of NaCl into Na+ and Cl− (often simplified as 2)
i = 2.3 for calcium chloride in water, due to nearly full dissociation of CaCl2 into Ca2+ and 2Cl− (often simplified as 3)
Non-integer i factors result from ion pairs in solution, which lower the effective number of particles in the solution.
Equation after including the van 't Hoff factor
ΔTb = Kb · bsolute · i
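A worked example of the two formulas above (the water constants used here, ΔHv ≈ 40.65 kJ/mol and hence Kb ≈ 0.51 K·kg/mol, are assumed inputs, not values given in this article):

```python
# Ebullioscopic constant of water from Kb = R*Tb^2*M / dHv
R = 8.314        # J/(mol*K), gas constant
Tb = 373.15      # K, boiling point of pure water
M = 0.01802      # kg/mol, molar mass of water
dHv = 40650.0    # J/mol, heat of vaporization of water (assumed value)
Kb = R * Tb**2 * M / dHv
print(f"Kb(water) = {Kb:.3f} K*kg/mol")   # ~0.513, near the tabulated 0.512

# dTb = Kb * b * i for 58.44 g (1 mol) of NaCl in 1 kg of water, i ~ 2
b, i = 1.0, 2.0   # molality in mol/kg and van 't Hoff factor
dTb = Kb * b * i
print(f"elevation = {dTb:.2f} K -> boils near {100 + dTb:.2f} degrees C")
```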
The above formula loses precision at high concentrations, due to the nonideality of the solution. If the solute is volatile, one of the key assumptions used in deriving the formula fails, since the equation was derived for solutions of non-volatile solutes in a volatile solvent. In the case of volatile solutes, it is more apt to speak of a mixture of volatile compounds, and the effect of the solute on the boiling point must be determined from the phase diagram of the mixture. In such cases, the mixture can sometimes have a lower boiling point than either of the pure components; a mixture with a minimum boiling point is a type of azeotrope.
Ebullioscopic constants
Values of the ebullioscopic constants Kb for selected solvents:
Uses
Together with the formula above, the boiling-point elevation can be used to measure the degree of dissociation or the molar mass of the solute. This kind of measurement is called ebullioscopy (Latin-Greek "boiling-viewing"). However, superheating can affect the precision of the measurement and is difficult to avoid entirely, so ΔTb is hard to measure precisely, although superheating can be partially overcome by the use of a Beckmann thermometer. In reality, cryoscopy is used more often because the freezing point is often easier to measure with precision.
See also
Colligative properties
Freezing point depression
Dühring's rule
List of boiling and freezing information of solvents
References
Amount of substance
Chemical properties
Physical chemistry | Boiling-point elevation | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,251 | [
"Scalar physical quantities",
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Chemical quantities",
"Amount of substance",
"nan",
"Wikipedia categories named after physical quantities",
"Physical chemistry"
] |
13,339,992 | https://en.wikipedia.org/wiki/ARC-ECRIS | ARC-ECRIS is an Electron Cyclotron Resonance Ion Source (ECRIS) based on arc-shaped coils unlike the conventional ECRIS which bases on a multipole magnet (usually a hexapole magnet) inside a solenoid magnet.
Arc-shaped coils were first used in the 1960s in fusion experiments, for example at the Lawrence Livermore National Laboratory (MFTF, Baseball II, ...) and later in Japan (GAMMA10, ...).
In 2006 the JYFL ion source group designed, constructed and tested a similar plasma trap to produce highly charged heavy ion beams. The first tests were promising and showed that a stable plasma can be confined in an arc-coil magnetic field structure (see references).
References
External links
YouTube video of a conventional ECRIS plasma (hexapolar magnetic field)
Ion source | ARC-ECRIS | [
"Physics"
] | 179 | [
"Spectrum (physical sciences)",
"Ion source",
"Mass spectrometry",
"Particle physics",
"Particle physics stubs"
] |
13,341,622 | https://en.wikipedia.org/wiki/Unparticle%20physics | In theoretical physics, unparticle physics is a speculative theory that conjectures a form of matter that cannot be explained in terms of particles using the Standard Model of particle physics, because its components are scale invariant.
Howard Georgi proposed this theory in two 2007 papers, "Unparticle Physics"
and "Another Odd Thing About Unparticle Physics". His papers were followed by further work by other researchers into the properties and phenomenology of unparticle physics and its potential impact on particle physics, astrophysics, cosmology, CP violation, lepton flavour violation, muon decay, neutrino oscillations, and supersymmetry.
Background
All particles exist in states that may be characterized by a certain energy, momentum and mass. In most of the Standard Model of particle physics, particles of the same type cannot exist in another state with all these properties scaled up or down by a common factor – electrons, for example, always have the same mass regardless of their energy or momentum. But this is not always the case: massless particles, such as photons, can exist with their properties scaled equally. This immunity to scaling is called "scale invariance".
The idea of unparticles comes from conjecturing that there may be "stuff" that does not necessarily have zero mass but is still scale-invariant, with the same physics regardless of a change of length (or equivalently energy). This stuff is unlike particles, and described as unparticle. The unparticle stuff is equivalent to particles with a continuous spectrum of mass.
Such unparticle stuff has not been observed, which suggests that if it exists, it must couple with normal matter weakly at observable energies. Since the Large Hadron Collider (LHC) team announced it would begin probing a higher energy frontier in 2009, some theoretical physicists have begun to consider the properties of unparticle stuff and how it may appear in LHC experiments. One of the great hopes for the LHC is that it might come up with discoveries that help update or replace the best current description of the particles that make up matter and the forces that glue them together.
Properties
Unparticles would have properties in common with neutrinos, which have almost zero mass and are therefore nearly scale invariant. Neutrinos barely interact with matter – most of the time physicists can infer their presence only by calculating the "missing" energy and momentum after an interaction. By looking at the same interaction many times, a probability distribution is built up that tells more specifically how many and what sort of neutrinos are involved. They couple very weakly to ordinary matter at low energies, and the effect of the coupling increases as the energy increases.
A similar technique could be used to search for evidence of unparticles. According to scale invariance, a distribution containing unparticles would become apparent because it would resemble a distribution for a fractional number of massless particles.
This scale invariant sector would interact very weakly with the rest of the Standard Model, making it possible to observe evidence for unparticle stuff, if it exists. The unparticle theory is a high-energy theory that contains both Standard Model fields and Banks–Zaks fields, which have scale-invariant behavior at an infrared point. The two fields can interact through the interactions of ordinary particles if the energy of the interaction is sufficiently high.
These particle interactions would appear to have "missing" energy and momentum that would not be detected by the experimental apparatus. Certain distinct distributions of missing energy would signify the production of unparticle stuff. If such signatures are not observed, bounds on the model can be set and refined.
Experimental indications
Unparticle physics has been proposed as an explanation for anomalies in superconducting cuprate materials, where the charge measured by ARPES appears to exceed predictions from Luttinger's theorem for the quantity of electrons.
References
External links
Particle physics
Theoretical physics
Fringe physics | Unparticle physics | [
"Physics"
] | 827 | [
"Theoretical physics",
"Particle physics"
] |
13,342,572 | https://en.wikipedia.org/wiki/Single-input%20single-output%20system | In control engineering, a single-input and single-output (SISO) system is a simple single-variable control system with one input and one output. In radio, it is the use of only one antenna both in the transmitter and receiver.
Details
SISO systems are typically less complex than multiple-input multiple-output (MIMO) systems. It is usually also easier to make order-of-magnitude or trend predictions "on the fly" or "back of the envelope". MIMO systems have too many interactions to be traced through quickly, thoroughly, and effectively by inspection.
Frequency domain techniques for analysis and controller design dominate SISO control system theory. The Bode plot, Nyquist stability criterion, Nichols plot, and root locus are the usual tools for SISO system analysis. Controllers can be designed through polynomial design or root locus methods, to name just two of the more popular approaches. Often SISO controllers will be PI, PID, or lead-lag.
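A minimal numerical sketch of the frequency-domain viewpoint (the plant H(s) = 1/(s² + 2s + 1) is an arbitrary example of ours; the printed values are the data behind a Bode plot):

```python
import numpy as np

num = [1.0]              # numerator coefficients of H(s)
den = [1.0, 2.0, 1.0]    # denominator: s^2 + 2s + 1

w = np.logspace(-2, 2, 5)                              # frequencies, rad/s
H = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)  # H(jw)
mag_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))
for wi, m, p in zip(w, mag_db, phase_deg):
    print(f"w={wi:8.2f} rad/s  |H|={m:8.2f} dB  phase={p:8.2f} deg")
```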
See also
Control theory
References
Control engineering
Transfer functions | Single-input single-output system | [
"Engineering"
] | 215 | [
"Control engineering"
] |
13,345,478 | https://en.wikipedia.org/wiki/Schilder%27s%20theorem | In mathematics, Schilder's theorem is a generalization of the Laplace method from integrals on to functional Wiener integration. The theorem is used in the large deviations theory of stochastic processes. Roughly speaking, out of Schilder's theorem one gets an estimate for the probability that a (scaled-down) sample path of Brownian motion will stray far from the mean path (which is constant with value 0). This statement is made precise using rate functions. Schilder's theorem is generalized by the Freidlin–Wentzell theorem for Itō diffusions.
Statement of the theorem
Let C0 = C0([0, T]; Rd) be the Banach space of continuous functions f : [0, T] → Rd such that f(0) = 0, equipped with the supremum norm ||⋅||∞, and let H ⊂ C0 be the subspace of absolutely continuous functions whose derivative is in L2 (the so-called Cameron–Martin space). Define the rate function
I(ω) = (1/2) ∫0T |ω̇(t)|² dt
on H and let F, G : C0 → R be two given functions, such that S := F + I (the "action") has a unique minimum ω0 ∈ H.
Then under some differentiability and growth assumptions on F and G which are detailed in Schilder 1966, one has
where E denotes expectation with respect to the Wiener measure W on C0, and Hess S(ω0) is the Hessian of the action S at the minimum ω0; the pairing with Hess S(ω0) is meant in the sense of an inner product.
Application to large deviations on the Wiener measure
Let B be a standard Brownian motion in d-dimensional Euclidean space Rd starting at the origin, 0 ∈ Rd; let W denote the law of B, i.e. classical Wiener measure. For ε > 0, let Wε denote the law of the rescaled process √εB. Then, on the Banach space C0 = C0([0, T]; Rd) of continuous functions f : [0, T] → Rd such that f(0) = 0, equipped with the supremum norm ||⋅||∞, the probability measures Wε satisfy the large deviations principle with good rate function I : C0 → R ∪ {+∞} given by
I(ω) = (1/2) ∫0T |ω̇(t)|² dt
if ω is absolutely continuous, and I(ω) = +∞ otherwise. In other words, for every open set G ⊆ C0 and every closed set F ⊆ C0,
lim infε↓0 ε log Wε(G) ≥ −infω∈G I(ω)
and
lim supε↓0 ε log Wε(F) ≤ −infω∈F I(ω).
Example
Taking ε = 1/c², one can use Schilder's theorem to obtain estimates for the probability that a standard Brownian motion B strays further than c from its starting point over the time interval [0, T], i.e. the probability
W(C0 ∖ Bc(0; ||⋅||∞)) ≡ P[||B||∞ > c],
as c tends to infinity. Here Bc(0; ||⋅||∞) denotes the open ball of radius c about the zero function in C0, taken with respect to the supremum norm. First note that
||B||∞ > c if and only if √εB ∈ A := {ω ∈ C0 : ||ω||∞ > 1}, where ε = 1/c².
Since the rate function is continuous on A, Schilder's theorem yields
lim(c→∞) (1/c²) log P[||B||∞ > c] = −inf{(1/2) ∫0T |ω̇(t)|² dt : ω ∈ A} = −1/(2T),
making use of the fact that the infimum over paths in the collection A is attained by the straight-line path that reaches unit norm at time T, travelling with constant speed 1/T. This result can be heuristically interpreted as saying that, for large c and/or large T,
P[||B||∞ > c] ≈ exp(−c²/(2T)).
In fact, the above probability can be estimated more precisely: for a standard Brownian motion B in Rd, and any c > 0 and T > 0, we have:
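A Monte Carlo sketch of the heuristic above for d = 1 (the step count, sample size, and values of c are arbitrary choices; only the limiting rate −1/(2T) comes from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 500, 20_000
dt = T / n_steps
increments = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
sup_abs = np.abs(np.cumsum(increments, axis=1)).max(axis=1)

for c in (1.5, 2.0, 2.5):
    p = (sup_abs > c).mean()
    print(f"c={c}: log P / c^2 = {np.log(p) / c**2:+.3f}  "
          f"(Schilder rate: {-1 / (2 * T):+.3f})")
# The empirical ratio log P / c^2 approaches -1/(2T) as c grows.
```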
References
(See theorem 5.2)
Asymptotic analysis
Theorems regarding stochastic processes
Large deviations theory | Schilder's theorem | [
"Mathematics"
] | 649 | [
"Theorems about stochastic processes",
"Theorems in probability theory",
"Mathematical analysis",
"Asymptotic analysis"
] |
13,345,571 | https://en.wikipedia.org/wiki/Sammon%20mapping | Sammon mapping or Sammon projection is an algorithm that maps a high-dimensional space to a space of lower dimensionality (see multidimensional scaling) by trying to preserve the structure of inter-point distances in high-dimensional space in the lower-dimension projection.
It is particularly suited for use in exploratory data analysis.
The method was proposed by John W. Sammon in 1969.
It is considered a non-linear approach as the mapping cannot be represented as a linear combination of the original variables as possible in techniques such as principal component analysis, which also makes it more difficult to use for classification applications.
Denote the distance between the ith and jth objects in the original space by d*ij, and the distance between their projections by dij.
Sammon's mapping aims to minimize the following error function, which is often referred to as Sammon's stress or Sammon's error:
E = (1 / Σi<j d*ij) Σi<j (d*ij − dij)² / d*ij.
The minimization can be performed either by gradient descent, as proposed initially, or by other means, usually involving iterative methods.
The number of iterations needs to be experimentally determined and convergent solutions are not always guaranteed.
Many implementations prefer to use the first principal components as a starting configuration.
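A compact sketch of the gradient-descent minimization (learning rate, iteration count, and random initialisation are arbitrary choices of ours; as noted above, implementations often initialise with the first principal components instead):

```python
import numpy as np

def sammon(X, n_iter=500, lr=0.1, dim=2, seed=0):
    """Project X (n x d, with pairwise-distinct rows) to n x dim
    by gradient descent on Sammon's stress."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Dstar = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # d*_ij
    np.fill_diagonal(Dstar, 1.0)          # avoid division by zero on i = j
    scale = 1.0 / Dstar[np.triu_indices(n, 1)].sum()
    Y = rng.normal(scale=1e-2, size=(n, dim))
    for _ in range(n_iter):
        diff = Y[:, None] - Y[None, :]    # y_i - y_j, shape (n, n, dim)
        D = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(D, 1.0)
        coeff = (D - Dstar) / (Dstar * D)  # dE/dY_i via the chain rule
        np.fill_diagonal(coeff, 0.0)
        grad = 2.0 * scale * (coeff[..., None] * diff).sum(axis=1)
        Y -= lr * grad
    return Y

# Example: embed 50 random 3-d points in the plane
Y = sammon(np.random.default_rng(1).normal(size=(50, 3)))
```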
The Sammon mapping has been one of the most successful nonlinear metric multidimensional scaling methods since its advent in 1969, but effort has been focused on algorithm improvement rather than on the form of the stress function.
The performance of the Sammon mapping has been improved by extending its stress function using left Bregman divergence
and right Bregman divergence.
See also
Prefrontal cortex basal ganglia working memory
State–action–reward–state–action
Constructing skill trees
References
External links
HiSee – an open-source visualizer for high dimensional data
A C# based program with code on CodeProject.
Matlab code and method introduction
Functions and mappings
Dimension reduction | Sammon mapping | [
"Mathematics"
] | 380 | [
"Mathematical analysis",
"Mathematical relations",
"Mathematical objects",
"Functions and mappings"
] |